Linux Firewalls
An example of a protocol that runs over UDP is the Domain Name System (DNS). DNS implements a distributed database used to translate between IP addresses and hostnames. Additional services include Start Of Authority (SOA) records that indicate what portions of the name space a server is responsible for giving definitive answers for, Mail eXchange (MX) records that provide referrals to mail servers for domain names, additional information about hosts and networks, and a wide range of other less important and optional content.
DNS runs primarily over UDP, and requests are answered with (1) a known response, (2) a referral to another server, or (3) no answer at all. The third outcome is possible because UDP makes no delivery guarantees. The distributed database provides a tree-structured name space that is the basis for the hostnames commonly used in the Internet.
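Since a DNS request is just a single UDP datagram in the standard wire format (RFC 1035), building one takes only a few lines. This sketch constructs a minimal A-record query; the query ID and the name 'all.net' are illustrative values:

```python
import struct

def build_dns_query(qname, query_id=0x1234):
    """Build a minimal DNS A-record query in RFC 1035 wire format.
    The query ID here is an arbitrary illustrative value."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question section: each label is length-prefixed, ending in a zero byte
    question = b""
    for label in qname.split("."):
        question += bytes([len(label)]) + label.encode("ascii")
    question += b"\x00"
    question += struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

# This datagram could be handed to socket.sendto() on UDP port 53
pkt = build_dns_query("all.net")
```

Because the whole transaction fits in one datagram like this, anyone who can write a UDP packet can also write a plausible-looking reply, which is the root of the forgery problems described below.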
The example shown here begins at the InterNIC, the theoretical root of the domain name space. There are 13 root servers in the Internet, and these servers provide referrals that cross Top Level Domain (TLD) name spaces. For example, '.com', '.net', '.edu', and '.org' are all TLDs.
Since UDP is the protocol used in DNS lookups, DNS replies are very easy to forge. All an attacker has to do is generate a valid datagram with the source IP address of a DNS server in response to a request, and it will be trusted as accurate. The first response to arrive is typically trusted, so a simple program that provides a rapid but wrong response is often effective at disrupting DNS service.
The normal DNS lookup process starts with a query to the local DNS server. If the local server doesn't know the answer, it can either respond that it does not know, or it can look the answer up on your behalf and return it once it is found. If it simply says it doesn't know, your resolver is then supposed to move up the DNS tree and ask the next higher DNS server. If the local server does the lookup itself, the process is the same except that the server does the work through its resolver instead of through yours.
For example, if the DNS server for st.com doesn't know the answer to an inquiry about all.net, the request goes to the .com TLD servers and then to the root servers. The root servers refer the process to the .net TLD servers, which refer it to the authoritative server for all.net, which supplies the answer. Any or all of these servers may cache previous responses to save time on subsequent lookups.
Because the DNS is a tree structure, it is possible that within st.com there could be a domain named all.net.st.com. If this is the case, then looking up all.net from within st.com will typically yield all.net.st.com instead of all.net! In the example below, a forged DNS response makes all.net inaccessible from st.com and leads internal users to the internal site whenever all.net is requested.
The normal behavior of a White Glove CD is actually an interesting example of this. If you configure your system for networking and try to go to 'www.google.com', you will get the well-known search engine. But if you simply enter 'google', you will find yourself connected to google.all.net. This is because in the '/etc/resolv.conf' file, the resolution process defaults to the all.net domain. The all.net domain servers in turn use a default response for any request that is otherwise unknown, which brings you to the all.net web site. In fact, if you try any domain that doesn't otherwise resolve, you will be directed to the all.net web site.
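The resolver's search-list behavior that produces this effect can be sketched as follows. This is a simplified model (real resolvers also apply an 'ndots' heuristic), with the search domains standing in for the contents of a hypothetical '/etc/resolv.conf':

```python
def candidates(name, search_domains):
    """Names a resolver will try, in order, given a search list.
    Simplified sketch: a trailing dot marks a fully qualified name."""
    if name.endswith("."):
        return [name]
    # an unqualified name gets each search domain appended first,
    # then the name is tried as given (as an absolute name)
    return [f"{name}.{d}" for d in search_domains] + [name + "."]

tries = candidates("all.net", ["st.com"])
```

With a search domain of st.com, 'all.net' is first tried as all.net.st.com, which is exactly how an internal name can shadow an external one.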
The caching behavior of DNS servers also creates a major potential problem in that a forged response can induce servers at high levels to provide wrong responses over an extended period of time. This is done by a process called cache poisoning, in which the cache of servers is poisoned by false responses, typically generated along with the requests. As an example, the attacker might forge both the requests and the responses so that the details of the port numbers are available. Start by inducing a request from the root servers to your own DNS server, and from that observe the source port number the root server uses. Then make a request for all.net and generate a series of responses using destination port numbers starting one higher than the port number observed on the request to your DNS server. It is very likely that you can generate hundreds of responses before the request ever hits all.net's DNS server and beat all.net to the punch. By using a high Time To Live (TTL) field in the DNS response, you can cause the root servers to give wrong answers for a long time - perhaps weeks.
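The port-guessing step of this attack can be sketched as below; the ephemeral port bounds and the guess count are illustrative assumptions, not measured values:

```python
def forged_response_ports(observed_port, count=200):
    """Destination ports to try in a burst of forged replies:
    sequential guesses starting one above the source port observed
    from the victim server.  The ephemeral range bounds (1024-65535)
    are an illustrative assumption about the server's allocator."""
    lo, hi = 1024, 65535
    ports, p = [], observed_port
    for _ in range(count):
        p = p + 1 if p < hi else lo  # wrap around at the top of the range
        ports.append(p)
    return ports
```

Servers that allocate source ports sequentially make this guess highly reliable, which is why later resolvers randomized their source ports.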
Another attack can slow down services, map out a network, and possibly crash a DNS server simply by making a lot of requests. In this case, a flood of DNS requests is made using a large number of domain names. These names cause the server to do lots of lookups, multiplying your requests by a factor of 2 to 3, thus flooding the service. In addition, these requests cause locally cached DNS entries to be flushed from the cache or, in some cases, overrun local memory and crash the computer. In one case the DNS service becomes very slow, while in the other it stops entirely. The same process generates a mapping of all of the domain names and their IP addresses. For example, you can go through all of the IP addresses in a range requesting the corresponding hostnames, thus getting an initial picture of what the distant network looks like.
Many of these DNS problems cannot be solved by firewalls; however, there are some that can. One way is to limit the number of DNS servers available to your users by requiring that DNS traffic only be allowed to known good DNS servers that themselves look up other domain names and cache the results. This prevents forgers from fooling your systems, unless they are also fooling other systems that are trusted by many other Internet users. Suppose we have a list of known good caching DNS servers. We then tell our users to configure for only those servers and provide firewall rules like this:
wg:root /root> ipchains -I input -s 0.0.0.0/0 53 -p udp -j DENY
wg:root /root> ipchains -I input -d 0.0.0.0/0 53 -p udp -j DENY
wg:root /root> for i in 5.6.7.8 9.0.1.2 3.4.5.6; do
wg:root /root: ipchains -I input -s 1.2.3.0/24 1024:65535 -d $i 53 -p udp -i eth1 -j ACCEPT
wg:root /root: ipchains -I output -s 1.2.3.0/24 1024:65535 -d $i 53 -p udp -i eth0 -j ACCEPT
wg:root /root: ipchains -I input -s $i 53 -d 1.2.3.0/24 1024:65535 -p udp -i eth0 -j ACCEPT
wg:root /root: ipchains -I output -s $i 53 -d 1.2.3.0/24 1024:65535 -p udp -i eth1 -j ACCEPT
wg:root /root: done
We start by inserting rules so that no UDP traffic using port 53 can flow into or out of either interface. Then we insert rules ahead of these two that go through the list of valid DNS servers and allow input from and output to each of them on each interface, so that packets from high ports on internal computers can flow to port 53 on these DNS servers and traffic from port 53 on those DNS servers can flow back to high ports on internal machines, all UDP only. This generates 4 rules per external DNS server, or in this case 12 rules.
The resulting ordering is important, and that is why rule insertion was used in each case. Since these are quite specific rules and we don't want some other rule that comes between or before them to allow an access that they deny, we must insert them at the beginning of the rule sequence. On the other hand, a subsequent rule that is never reached might allow all traffic from or to some other server. The DENY rules will prevent any DNS traffic from flowing even though the subsequent rule would otherwise allow it. Each rule is inserted ahead of the previous ones, so the last rule inserted comes first on the list.
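The effect of repeated '-I' insertions can be modeled in a few lines; the rule strings here are schematic placeholders, not ipchains syntax:

```python
def insert(chain, rule):
    """Model ipchains -I: each insert prepends, so the rule inserted
    last is tested first (first match wins)."""
    chain.insert(0, rule)

chain = []
# The two blanket DENY rules go in first...
insert(chain, ("udp dport 53", "DENY"))
insert(chain, ("udp sport 53", "DENY"))
# ...then the per-server ACCEPT rules, which therefore end up
# ahead of the DENYs in the final evaluation order.
for server in ["5.6.7.8", "9.0.1.2", "3.4.5.6"]:
    insert(chain, (f"udp to {server}:53", "ACCEPT"))
```

After these inserts the chain is evaluated ACCEPTs first, DENYs last, which is exactly the ordering the rules above depend on.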
To test these rules, we set up two 'tcpdump' sessions on the firewall computer and try sending DNS requests outbound from internal computers toward the identified IP addresses. This is very easily done with 'netcat' as follows:
wg:root /root> ifconfig eth0 1.2.3.4
wg:root /root> for di in 1.1.1.1 5.6.7.8 9.0.1.2 3.4.5.6; do
wg:root /root: for dp in 52 53 54; do
wg:root /root: for si in 1.2.3.4 1.2.10.1; do
wg:root /root: for sp in 1023 1024 1025; do
wg:root /root: nc -u -nzvv -s $si -p $sp $di $dp
wg:root /root: done;done;done;done
In this case we are sending a series of 72 different DNS attempts, 66 of which should fail and only 6 of which should succeed (those from the valid IP address and each of the high ports to the valid DNS servers on port 53). We can see which ones get through the firewall and which do not by watching the two screens on the firewall box. Some packets that will not do DNS service may get through the firewall, but that is not what we are trying to block with these rules. Verify that the proper datagrams and only those datagrams flow through the firewall. If there is additional traffic on the firewall, use additional parameters to 'tcpdump' so as to only see the appropriate traffic.
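As a sanity check before running the loop, the same test matrix can be enumerated offline against a model of the intended policy; the 'allowed' predicate below mirrors the ACCEPT rules, not actual kernel behavior:

```python
VALID_SERVERS = {"5.6.7.8", "9.0.1.2", "3.4.5.6"}

def allowed(src_ip, src_port, dst_ip, dst_port):
    """Model of the intended policy: internal 1.2.3.0/24 sources on
    high ports, to port 53 on a known DNS server (UDP assumed)."""
    return (src_ip.startswith("1.2.3.") and src_port >= 1024
            and dst_ip in VALID_SERVERS and dst_port == 53)

# the same matrix as the nested 'nc' loops above
attempts = [(di, dp, si, sp)
            for di in ["1.1.1.1", "5.6.7.8", "9.0.1.2", "3.4.5.6"]
            for dp in [52, 53, 54]
            for si in ["1.2.3.4", "1.2.10.1"]
            for sp in [1023, 1024, 1025]]
passes = sum(allowed(si, sp, di, dp) for di, dp, si, sp in attempts)
```

Enumerating the matrix confirms the arithmetic: 4 x 3 x 2 x 3 = 72 attempts, of which exactly 6 match the policy.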
Also note that this will not prevent some of the DNS attacks listed above, and is thus only of limited value in solving the problems of DNS.
In addition to the basic lack of sequencing and authentication in UDP, individual services that run over UDP have some serious historical problems associated with them.
daytime server: This shows the local time and related information about the remote computer that can sometimes be exploited to find out information helpful in an attack. To disable, prevent access to port 13.
boot server: This is a server that provides for remote system bootup. It can be used to find out a lot about a system, including details of the operating environment (for example, by requesting it to boot your computer), and the presence of this service means that by forging a boot server and beating the normal boot server to responses, you may be able to load your own operating system onto the computers in the network under attack. To disable, prevent access to ports 67 and 68.
Network Time Protocol: This protocol is used to synchronize time across the Internet, typically to within less than a second. Interestingly, vulnerabilities in some servers provide the means for it to be exploited to gain remote access. To disable, prevent access to port 123.
NetBIOS service (PC Support): These are the ports that carry Microsoft networking services. Since all such services run through these ports, you must either block the whole set of protocols or allow them all to pass if all you have is normal port filtering. To disable, prevent access to ports 137, 138, and 139.
tftp: The Trivial File Transfer Protocol is used for moving files from place to place. It has no authentication, and where it is running it commonly reveals things like the system password file. To disable, prevent access to ports 69 and 1758.
NFS: The Network File System is very convenient for allowing a file server to hold files accessed by users throughout a network; however, it is also exploitable in most implementations. It allows an attacker to create, modify, or delete files, and to introduce viruses or Trojan Horses into the server and the computers that get files from it. To disable, prevent access to port 2049.
NIS: Network Information Services provides shared access to a common password file so that centralized password control can be used. It can also be exploited to gain remote access without a password, or to gain lists of user IDs and passwords or password hashes that can be used with a password guessing program to find passwords. To disable, prevent access to the RPC portmapper on port 111, which NIS uses to locate its services.
User defined: A user can define any service they want on any port higher than 1024 and grant remote access with all of their privileges. To disable, prevent access to all other ports.
Students in classes should pick one of these services as an example and create an appropriate set of firewall rules to protect against attacks such as those described for it. Then implement those rules and create and run a limited test program that tests these rules to make certain they operate as desired.
Many of the most desired and vulnerable services operate over TCP. Some of them are listed here with brief descriptions:
telnet: Remote terminal-like access. Sessions can be easily sniffed and taken over by an attacker. To disable, prevent access to port 23.
SMTP: SMTP service is typically provided by the 'sendmail' program that has historically had multiple vulnerabilities causing remote superuser access to systems. To disable, prevent access to port 25.
whois: This service provides directory information about users, hosts, and domains. It can sometimes be exploited to find out information helpful in an attack. To disable, prevent access to port 43.
hostnames: This service provides details on hostnames, and this can sometimes be exploited to find out information helpful in an attack. To disable, prevent access to port 101.
UUCP Path Lookup: This is an older mail protocol (Unix to Unix CoPy) that provides detailed information on the paths through computers in internal networks. To disable, prevent access to ports 117 and 540.
Network News Transfer Protocol: This protocol is used for distributing news and typically includes topology information about network configurations. To disable, prevent access to ports 119 and 563.
Simple Network Management Protocol: This protocol is used to manage network routers and similar devices and to update other computers on the fastest routes from place to place. It has been accidentally used to take down large portions of the Internet for hours at a time, and in malicious hands could be used to far worse effect. To disable, prevent access to port 161.
rje: This is used for Remote Job Entry, so that a remote user can submit processing to be done on a local computer. If left operational, it permits a remote user to gain access to your computer. To disable, prevent access to port 5.
finger: This provides information on the users on the system, when they login and logout, who is currently using the computer, and so forth. It is helpful in gathering intelligence for an attack. To disable, prevent access to port 79.
exec: This permits a user on another computer to run programs on your computer if they have the same user ID. Since it is simple to set the user ID to anything you want if you are the superuser, this can be quite dangerous. To disable, prevent access to port 512.
login: This permits a user on another computer to log in to your computer if they have the same user ID, or with a password if they do not. Since it is simple to set the user ID to anything you want if you are the superuser, this can be quite dangerous. To disable, prevent access to port 513.
shell: This permits a user on another computer to run a shell on your computer if they have the same user ID. Since it is simple to set the user ID to anything you want if you are the superuser, this can be quite dangerous. To disable, prevent access to port 514.
printer: This permits remote users to print things to your printer. This has been used to run the printer out of paper, printing one character per page, but can also sometimes be exploited to gain remote access to systems or to redirect existing prints to a different computer. To disable, prevent access to port 515.
whod: This allows you to ask for information about who is on the remote computer. It is helpful in gathering intelligence for an attack. To disable, prevent access to port 513.
X server: The X server is what allows X11 to operate on the computer. If remote X11 access is available, users can observe or modify what is on the graphical interface and enter remote commands into your computer. To disable, prevent access to ports 6000-6025.
http: This is the web service port. It allows web services to pass back and forth and is commonly exploited because of: errors in Common Gateway Interface (CGI) scripts that allow remote users to gain privileged server access; configuration errors that allow otherwise protected files to be examined, modified, deleted, or added (sometimes leading to remote access or the introduction of Trojan Horse programs); management interfaces that allow remote users to take control over systems under management; content-based attacks that exploit general purpose functionality or poorly designed browser code to gain access to the computer running it; and downloaded software with Trojan Horse code in it, such as ActiveX scripts with attacker code and Word viruses. To disable, prevent access to port 80.
Talkd: A Trojan Horse in one talk daemon granted remote access to 10,000 or more systems. To disable, prevent access to port 517.
swat: The Samba Web Administration Tool allows you to control access to services used with Microsoft networking run on Linux servers. To disable, prevent access to port 901.
User defined: A user can define any service they want on any port higher than 1024 and grant remote access with all of their privileges. To disable, prevent access to all other ports.
This list is far from comprehensive; it is merely representative of the sorts of things you are likely to encounter when trying to determine how to firewall your systems. If you cut off all services, you will be relatively safer from remote attack, but you will not gain the benefits of the Internet either. If you leave too many services on, you will be attacked often and pay a high price in defending your systems.
All of the defenses described up to this point based on TCP wrappers make assumptions about the associations of ports with services, but these assumptions are just that and nothing more. In fact, any packets that can flow in and out of a network can be used to implement any protocol desired by those who are able to generate and observe those packets. I use the term packets here instead of datagrams for a reason. Datagrams are those sequences of protocol elements defined for IP - in other words, packets that follow the rules for datagrams. But we can put anything we want into packets, and for the most part the Internet will transport them from place to place as they are. The most common techniques are tunneling, steganographic content, and protocol fudging. All three assume an inside system is cooperating with the attack, either intentionally or unwittingly.
Tunneling: embedding one protocol inside another protocol that the firewall allows, so that the inner traffic rides through unexamined.
Steganography: hiding content within apparently innocuous content, such as messages concealed in images or in unused protocol fields, so that the filter sees nothing unusual.
Protocol Fudging: running an unrelated service over a port and packet format associated with an allowed protocol, so that port-based rules misidentify the traffic.
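As a concrete sketch of tunneling, data can be smuggled out in the labels of DNS-style queries to a cooperating server; the carrier domain here is hypothetical:

```python
import base64

def tunnel_query_name(data, carrier="tunnel.example.net"):
    """Sketch of tunneling: encode arbitrary bytes as a DNS-style label
    so they ride out through an allowed protocol.  The carrier domain
    is hypothetical; a cooperating server would decode the label."""
    label = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    return f"{label}.{carrier}"

def tunnel_decode(name, carrier="tunnel.example.net"):
    """Inverse operation the cooperating server would perform."""
    label = name[: -(len(carrier) + 1)].upper()
    label += "=" * (-len(label) % 8)  # restore base32 padding
    return base64.b32decode(label)
```

To a port-based filter this looks like ordinary DNS traffic, which is why tunneling defeats the rules described earlier in this chapter.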
Packet filter configuration is complex to do right. A configuration is usually an ordered sequence of rules, and the first rule to match is applied. For example: (1) allow all hosts with source port 25 into all hosts, (2) deny all '.edu' computers on all ports into all hosts, (3) allow all hosts with port 113 into subnet-2, and so on. Most packet filters have adequate tools for creating rules, but the results are often hard to understand.
Prevent inbound telnet from all but isi.edu, allow outbound telnet, deny everything else:
Source IP | Source Port | Dest IP | Dest Port | What |
---|---|---|---|---|
*.isi.edu | >1023 | Local | 23 | Allow |
All | All | Local | 23 | Deny |
Local | 23 | *.isi.edu | >1023 | Allow |
Local | >1023 | All | 23 | Allow |
All | 23 | Local | >1023 | Allow |
All | All | All | All | Deny |
Allow World Wide Web (http), deny everything else.
Source IP | Source Port | Dest IP | Dest Port | What |
---|---|---|---|---|
All | >1023 | Local | 80 | Allow |
Local | 80 | All | >1023 | Allow |
All | All | All | All | Deny |
The previous 2 combined:
Source IP | Source Port | Dest IP | Dest Port | What |
---|---|---|---|---|
*.isi.edu | >1023 | Local | 23 | Allow |
All | All | Local | 23 | Deny |
Local | 23 | *.isi.edu | >1023 | Allow |
Local | >1023 | All | 23 | Allow |
All | 23 | Local | >1023 | Allow |
All | >1023 | Local | 80 | Allow |
Local | 80 | All | >1023 | Allow |
All | All | All | All | Deny |
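The first-match semantics of these tables can be captured in a short evaluator. This is a sketch in which 'Local' and the '*.isi.edu' wildcard are matched textually, standing in for real address comparisons:

```python
import fnmatch

# The combined rule table, in order; the first match decides.
RULES = [
    ("*.isi.edu", ">1023", "Local",     "23",    "Allow"),
    ("*",         "*",     "Local",     "23",    "Deny"),
    ("Local",     "23",    "*.isi.edu", ">1023", "Allow"),
    ("Local",     ">1023", "*",         "23",    "Allow"),
    ("*",         "23",    "Local",     ">1023", "Allow"),
    ("*",         ">1023", "Local",     "80",    "Allow"),
    ("Local",     "80",    "*",         ">1023", "Allow"),
    ("*",         "*",     "*",         "*",     "Deny"),
]

def match_field(pattern, value):
    if pattern == "*":
        return True
    if pattern == ">1023":
        return isinstance(value, int) and value > 1023
    if isinstance(value, int):
        return pattern == str(value)
    return fnmatch.fnmatch(value, pattern)

def filter_packet(src_ip, src_port, dst_ip, dst_port):
    """Return the verdict of the first rule matching the packet."""
    for r in RULES:
        if all(match_field(p, v) for p, v in
               zip(r[:4], (src_ip, src_port, dst_ip, dst_port))):
            return r[4]
    return "Deny"  # unreachable given the final catch-all rule
```

Running a few packets through this model shows why ordering matters: inbound telnet from isi.edu hits the first Allow before the blanket Deny, while the same packet from anywhere else falls through to the Deny.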
Blocking services has limited utility for ftp (active mode) and some other services because (1) they use random ports, (2) multiple services share ports, and (3) users can open high ports. Packet filtering is limited to pre-designed services and IP address pairs, with no built-in authentication.
Packet filters have strengths: (1) controls tend to be fairly secure, (2) it is easier to prove an implementation correct, and (3) they are inexpensive to implement and maintain. They should (1) eliminate known bad IP addresses, (2) eliminate IP address forgeries, (3) eliminate services not in use, (4) send audit trails to trusted audit servers, (5) prevent control from remote sources, (6) implement reasonable passwords, and (7) be physically secure.
Packet filters have weaknesses: (1) inadequate granularity of control, (2) limited authentication, (3) state-independent decisions, (4) hard-to-manage complex configurations, (5) tunneling goes undetected, (6) poor or no misuse detection, (7) poor or no audit capabilities, (8) remote configuration limitations, (9) weak and default passwords and accesses, etc.
Encryption in packet filters is provided by some routers. This can include key exchange (and introduces key exchange issues), is generally limited to same-manufacturer router pairs, and allows virtual private networks (VPNs), but the chain is only as strong as the weakest link, and remote control and configuration end up being important here.
Routers between networks provide separation of internal networks and control points for information flow paths, trade speed for protection, and require transitivity analysis.
Authentication in packet filters is similar to encryption between networks: time-variant password systems, challenge-response systems, packet authentication, and authentication daemons.