National Info-Sec Technical Baseline

Intrusion Detection and Response

Lawrence Livermore National Laboratory
Sandia National Laboratories
December, 1996


Executive Summary

The state of the art in logical intrusion detection of national information infrastructure (NII) systems is such that a human expert working with a well-developed set of tools can implement a detection system in a few months for a special-purpose computing environment.

These systems are used to reduce the amount of information required by systems administrators to make judgments about whether intrusions have taken place, how they are caused, and who is responsible.

Many attempts have been made over the last 10 years to improve this situation, including, but not limited to, (1) using various artificial intelligence techniques to reduce the expert effort involved, (2) trying to automate the detection of anomalous behavior so as to reduce false negatives, and (3) trying to make systems that work across many different computing environments without the need for customization. These efforts have yielded only modest improvements in the effectiveness of intrusion detection and have not eliminated the requirement for human expert intervention. The most important innovation has been the combination of audit records from multiple sources and the automated removal of irrelevant records.

The state of the art in automated response to detected intrusions is that we can program a wide variety of responses, ranging from increased defenses to offensive counter-strikes. Examples throughout this range have been demonstrated. Several important automated response issues remain essentially unaddressed at this time, including, but not limited to, (1) limiting the effect of automated response so as to prevent cascade and livelock failures that may be caused by the response system, (2) providing safeguards against false-positives and enemy-induced erroneous responses, and (3) using the response system to push the point of attack deflection back toward the attack source.

As a commercial area, automated intrusion detection and response is a healthy and competitive industry that appears to be quite capable of standing on its own. Further research and development funding along existing lines is not needed in order to assure the field's commercial success.

From a scientific standpoint, some substantial gaps in intrusion detection and response remain. Potential research areas include, but are not limited to, (1) basic definitions and mathematical understanding, (2) metrics for comparing systems with each other or to a common standard, (3) weaknesses of intrusion detection systems that could make them ineffective against skilled attackers, (4) consistency and content from information sources, (5) damage assessment and recovery, and (6) unlimited scalability.

It appears that automated intrusion detection and response is an important research area, especially at an NII level. By following fruitful long-term research directions, significant progress with substantial impact can probably be made.


Background and Introduction

Definitions: [Websters75]

The National Info-Sec Technical Baseline

The National Info-Sec Technical Baseline (NITB) documents the state of the national technical capability in critical Info-Sec areas. The purpose of the NITB is to focus the attention of the research community on topics of interest and on the most difficult and challenging problems in need of further scientific explanation. The findings of these NITB studies are collected in the national repository of Info-Sec information, which will be used to guide further research investment. (1)

The Scope of This Study

Intrusion detection is a very broad area of study with elements ranging from motion sensors to real-time financial fraud detection systems. Our scope will be somewhat more limited. In this baseline, we consider only the detection of non-physical intrusions (2) on digital electronic components (3) of the global information infrastructure (GII). Generally, these elements include such things as:

  1. End User Nodes. For example, computers, digital telephones, certain sorts of radios, direct satellite receivers, and set-top-boxes.

  2. Networks. For example, cables, satellite communications systems, local area networks, telephone networks, and radio waves.

  3. Control Systems. For example, the switching systems that control routing in telephone systems, the Internet's Domain Name System, and the ATM signaling layer.

  4. Supporting Infrastructure. For example, the electric power grid and power stations, air conditioning systems, and the emergency response network.

There are many intrusion detection methods and systems oriented toward applications as opposed to the systems in which those applications operate. For example, in the era of electronic finance, financial audit has become increasingly dominated by EDP Audit wherein the consistency of financial values and restrictions on the manner in which transactions take place are used to detect intrusions.

The number of application-level intrusion detection techniques is perhaps as large as the number of applications. To characterize them in detail would be a mammoth task, beyond the scope of this baseline study. There are some common threads to successful application-level intrusion detection techniques, and where feasible, basic notions and techniques are included.

Some Background on Intrusion Detection and Response

Historically, intrusion detection arose out of the need to automate the detection of intrusions that were being detected manually or, more commonly, not being detected at all. For example, early telephone company intrusions resulted in the creation and sale of blue boxes, which allowed non-experts to obtain free telephone calls. [Levy84] At first, AT&T was essentially unaware of these devices, but as the devices proliferated, AT&T altered their signaling and call setup methods to make this particular attack impossible. [Keevers89]

The Need For Detection

In a somewhat larger AT&T incident, a systems administrator noticed that a user who was unlikely to use the computer outside of normal business hours was logged into the system in the middle of the night. After a substantial effort, it was determined that the intruders involved had widespread access to telephone control systems. It ultimately took more than a year for AT&T to dislodge the intruders. As a result of this and similar incidents, AT&T began a major effort to improve intrusion detection technologies within their networks. In a recent book, a researcher from AT&T published the result that 100 times as many incidents were detected with high-quality intrusion detection in place as without it. [Cheswick94]

The need for automated intrusion detection in some systems stems from administrative requirements as well as from threats. A good example is the requirement in some U.S. military systems for the systems administrator to review the audit records each day for signs of intrusion. According to one recent study, the size of the audit records produced in a day of normal use in a typical Unix-based timesharing environment is on the order of 100,000 characters. [Proctor94] A skilled systems administrator is hard-pressed to properly review this quantity of data manually, and it is unlikely that any useful detection results from such an examination. As a result, many DoD elements have introduced automated audit analysis tools that eliminate the vast majority of the mundane information and leave only those items that may be indicative of attacks. Of course any such method is fundamentally limited by the ability of the detection method to successfully discriminate between mundane and important information.
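
As an illustration only, the following sketch (in Python) shows the flavor of such audit reduction. The log format and the patterns treated as mundane are invented for this example; production tools of this kind rely on much larger, site-specific rule sets developed and tuned by expert administrators.

    import re

    # Invented patterns for records judged mundane at a hypothetical site.
    MUNDANE = [
        re.compile(r"cron.*session (opened|closed)"),
        re.compile(r"login succeeded for .* on console"),
        re.compile(r"lpd: job \d+ (queued|completed)"),
    ]

    def reduce_audit(records):
        """Discard records matching any mundane pattern; keep the rest."""
        return [r for r in records if not any(p.search(r) for p in MUNDANE)]

    if __name__ == "__main__":
        with open("audit.log") as f:          # hypothetical daily audit file
            interesting = reduce_audit(f.readlines())
        print(len(interesting), "records retained for administrator review")

The quality of the result is entirely determined by how well the mundane patterns discriminate; anything the rules wrongly classify as mundane is silently lost, which is the fundamental limitation noted above.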

Issues of Time

As the number and frequency of intrusions increase and the time required to cause harm decreases, it becomes increasingly important to identify intrusions earlier and respond to them quickly. In extreme cases, time frames may be well below those where human intervention is feasible. For example, some telephone switch failures and electric power grid faults produce cascading effects that ripple through a large network very quickly. In the case of telephony, there are some systems that must be designed to respond to events within milliseconds to prevent network-wide outages. [Pekarske90] Similarly, power grid cascades have caused two multi-state power outages within the last year. [WSCC96] Although both incidents are believed to be accidental, malicious logical intrusions could produce similar results. In the telephony case, logical intrusions have produced limited outages, and in the power example, a recent logical intrusion caused a power outage to a small city for several hours. [Winkelman95] [Midland95]

Another reason that people have moved toward automation for real-time intrusion detection and response is that malicious attackers now commonly use automation in their attacks. In one published example, more than 2,000 attempted entries were made from more than 500 different locations in less than 8 hours against a single system on the Internet by way of an automated attack. [Cohen96] In this particular case, automated intrusion detection and response allowed the source of the attack to be tracked down within 8 hours of the start of the incident. Without automation, tracking would have been nearly impossible because of the long delays between events and attempts to trace them back.

With higher incident intensity comes an increased need for rapid and automated response; however, even in cases where intensity is not initially very high, automation may be very important. In one case, an experiment performed by the Defense Information Systems Agency (DISA) demonstrated that the threshold of human detection increases in response to a slowly increasing threat. [Cohen93] In this particular experiment, DoD networks were subjected to low-level disruption at first, and the disruption levels were slowly increased over time. Even though the levels of disruption eventually far exceeded levels at which human detection would normally take place, the attack was never noticed. Clearly people are not well suited for detection in this sort of threat environment. A more general conclusion may be drawn from results on reflexive control which indicate, among other things, that many systems can be defeated either by slowly increasing the threshold of detection and response or by overwhelming the detection and response system with false positives. [Giessler93]

Detectability and Response

Without detection there can be no sensible response. For example, many systems produce audit trails that are inadequate for the detection of certain types of incidents. The ability to identify the source of the incident is also vital in response. Otherwise, the response may induce a new incident. An example of such an induced incident is a common denial of service attack resulting from automated responses to password guessing. Many systems respond to attempts to guess passwords by shutting down user accounts used in the attempted entry. An attacker wishing to deny services to legitimate users only needs to give bad passwords a few times for each user in order to succeed in their attack. In some cases, even the systems administration account has been shut down because of the automated response mechanism.
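
The lockout problem described above can be made concrete with a small illustrative sketch. The account names, the threshold of three failures, and the policy itself are assumptions made only for this example.

    # Naive automated response: disable an account after three failed passwords,
    # regardless of where the attempts originate.
    THRESHOLD = 3
    failures = {}
    disabled = set()

    def record_failure(account):
        failures[account] = failures.get(account, 0) + 1
        if failures[account] >= THRESHOLD:
            disabled.add(account)          # the automated response

    # An attacker who only wants to deny service can trip the response for
    # every legitimate user, including the administrative account.
    for account in ("alice", "bob", "root"):
        for _ in range(THRESHOLD):
            record_failure(account)

    print(sorted(disabled))                # ['alice', 'bob', 'root']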

Clearly, the ability to differentiate between different sources of attack is vital to appropriate response. In the example above, associating the location of the attack with the removal of privileges would, at a minimum, permit a systems administrator to gain access from the system console - unless that attack was launched from there. But this example presents only a minor inconvenience compared to some of the possible alternatives.

Responses to intrusions into information systems may range from a notation that the incident occurred to military retaliatory strikes, depending on the circumstances. As an example, an intrusion into systems implementing a fire-on-warning policy could potentially result in military attack. As responses become increasingly severe, there is an increased requirement for high assurance in tracking down the source of the intrusion.


Current Theoretical and Practical Issues

What is an Intrusion?

As a fundamental matter, there is widespread disagreement over what constitutes an intrusion, particularly within the global computing community, and more particularly within the Internet community. Many consider any unauthorized activity to be an intrusion and specify authorized activities as only those to which permission has been explicitly granted. At the other end of the spectrum, there are many people who believe that exploration of the information infrastructure is their right and that anything they do is authorized.

Legal Viewpoints

This report does not cover the legal aspects of intrusion detection and response; however, we note the following: (1) intrusion detection systems are sometimes viewed as intrusive in themselves; (2) legal staff at some sites assert that false-positive rates above 0.01% make these systems unacceptable in their environment; (3) in other organizations the position is taken that all information systems are subject to arbitrary monitoring at any time; (4) as far as we can tell, the legal system has not yet made authoritative judgments on these issues; and (5) legal, regulatory, policy, and organizational considerations are very complex and are covered in other recent reports. [SAIC-IW95] [Schaefer91]

Technical Definitions

From a technical standpoint, there is no generally accepted definition of what constitutes an intrusion, and as such, the field of intrusion detection would seem to be lacking in at least one fundamental way.

One definition from the literature states that an intrusion is any activity that moves a system from a safe state to an unsafe state, but this does little to clarify the situation. Another definition declares, in essence, that an intrusion is anything that violates the policy of the site under consideration, but this also does little to address the issues at hand. Perhaps the best coverage of this issue is given in Kumar's dissertation [Kumar95-2] where he identifies several authors who define intrusions in terms of resulting leakage, corruption, or denial of services.

It is unknown whether an unlimited number of possible intrusion methods exist, in large part because there has been no mathematical work in this area. Because of similar results, such as the undecidability of detecting computer viruses [Cohen86] and the well-known confinement problem, [Lampson73] there is reason to believe that the problem of intrusion detection is also undecidable.

If we use a definition based on what people think (e.g., anything that violates the non-mathematical policy of the site under consideration), coming up with a perfect automated intrusion detection mechanism becomes impossible from a scientific standpoint. Many basic results have not been generated for intrusion detection. This may be because, in most cases, testable theories are not put forth for confirmation or refutation. Without formal definitions, definitive results cannot be produced, and without testable theories, we cannot have scientific progress. (4)

False Positives, False Negatives, and Context Boundedness

Although, or perhaps because, there are no widely accepted technical definitions of an intrusion, it is common for detection systems to concentrate on methods for generating alarms.

It is very common to alarm on any of a set of known attack behaviors. [Denning86] For example, there is a command used in the sendmail electronic mail handling program called wiz that was historically used to allow debugging of sendmail. If the wiz command is enabled, it is likely that the attacker who uses it will gain unlimited access to the computer running sendmail. Since there are few, if any, legitimate uses for this command today, attempts to issue a wiz command to sendmail are commonly detected as intrusion attempts. A slightly more complicated example using a detection threshold is the common practice of alarming on three successive wrong passwords for the same user.
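
A minimal sketch of these two rules follows. The event format is invented for the example; the wiz signature and the three-guess threshold are the ones described above.

    from collections import defaultdict

    def detect(events):
        """Alarm on sendmail 'wiz' attempts and on three consecutive failed
        passwords for the same user. The event dictionaries are hypothetical."""
        consecutive = defaultdict(int)
        for ev in events:
            if ev["type"] == "smtp_command" and ev["data"].strip().lower() == "wiz":
                yield ("known-attack", "sendmail wiz attempt")
            elif ev["type"] == "login_failure":
                consecutive[ev["user"]] += 1
                if consecutive[ev["user"]] == 3:
                    yield ("threshold", "3 bad passwords for " + ev["user"])
            elif ev["type"] == "login_success":
                consecutive[ev["user"]] = 0          # a success resets the counter

    events = [
        {"type": "smtp_command", "user": "-", "data": "WIZ"},
        {"type": "login_failure", "user": "alice", "data": ""},
        {"type": "login_failure", "user": "alice", "data": ""},
        {"type": "login_failure", "user": "bob", "data": ""},
    ]
    for alarm in detect(events):
        print(alarm)

Note that the threshold rule never fires in this run because the third failure belongs to a different account; this is exactly the kind of false negative discussed next.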

Unfortunately, systems that only detect widely known attack behaviors produce large numbers of false negatives (failures to alarm when intrusions occur). For example, an attacker could guess only two passwords for each user and thus never exceed the threshold of three consecutive guesses per user while making hundreds of undetected break-in attempts. With only a bit more sophistication, the attacker could observe external behavior to determine when to make further break-in attempts without being detected.

In an effort to capture more intrusions, systems have been developed based on the idea of characterizing normal activities and alarming on any exception. [Heberlein90] As a simple example, most data entry clerks only run a select set of programs on their computers. If a clerk runs a system reconfiguration program normally only used by systems administrators, this might cause an alarm.
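
As an illustration of this style of exception alarming, the profile below is hard-coded; an actual system would learn normal behavior from historical audit data, and the role and program names are invented.

    # Invented per-role profiles of normally used programs.
    PROFILE = {
        "data_entry": {"entry_app", "mail", "print"},
        "sysadmin": {"entry_app", "mail", "print", "reconfig", "useradd"},
    }

    def check(role, program):
        """Alarm on any program outside the role's normal profile."""
        if program not in PROFILE.get(role, set()):
            return "ALARM: %s ran unexpected program %s" % (role, program)
        return None

    print(check("data_entry", "mail"))        # None: within the profile
    print(check("data_entry", "reconfig"))    # alarm: clerk running an admin tool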

One of the concerns with detection methods is that they tend to be context bound by their information sources. For example, trusted computing bases (TCBs) [Klein] produce audit trails of all security-relevant events, and intrusion detection systems based on TCBs tend to alarm only on such events. Attacks such as computer viruses can operate without triggering any security-relevant events in some of these systems. [Cohen94-2]

There is no widely accepted standard for the form or content of protection-related audit trails, and each context may have unique limitations or features. Attempts at standardizing this information have met with little success. At a more global level, there is a problem of incommensurability wherein different information sources may produce information that cannot be reconciled. For example, information available from Unix systems, which most current intrusion detection and response research work concentrates on, is quite different from information available from DOS-based systems. Some operations available in Unix, like interprocess communication, are not available in DOS, while other operations in DOS, like loading a TSR program, are not available in Unix.
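
The incommensurability problem can be illustrated by attempting to map records from two different environments into a common schema. Both record formats below are invented; the point is that some fields, such as a user identity for a single-user DOS machine, simply have no counterpart.

    # Invented record formats from two hypothetical audit sources.
    def from_unix(rec):
        return {"source": "unix", "who": rec["uid"], "what": rec["syscall"],
                "object": rec["path"], "when": rec["time"]}

    def from_dos(rec):
        return {"source": "dos", "who": None,        # DOS records no user identity
                "what": rec["action"], "object": rec["program"], "when": rec["time"]}

    merged = sorted(
        [from_unix({"uid": 1001, "syscall": "exec", "path": "/bin/sh", "time": 100}),
         from_dos({"program": "GAME.EXE", "action": "load_tsr", "time": 110})],
        key=lambda r: r["when"])
    for r in merged:
        print(r)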

Testing

Overall, protection testing is lacking. Most intrusion detection systems are rarely tested beyond the point of determining that they detect some anomalous events. In the computer virus area, testing has been extensive and is commonly based on shared sets of thousands of virus samples. In other areas of intrusion detection, the research community has not done as good a job of sharing attack examples, and testing has suffered as a result.

The most substantial tests have been done in order to assess statistical properties of detection mechanisms, [Liepins92] but no basic testing theory has been widely applied to intrusion detection, and the notion of coverage is not even developed in the literature. A simple methodology for testing against known attacks has been developed [Puketza96] and early results reveal substantial limitations in current systems. One study of testing showed that intrusion detection results change dramatically when tested against minor variations on known attack patterns [Chung95] indicating that testing has not been a priority for designers. Another study describes tests designed to demonstrate function but not to exhaust or even partition the test space. [Moyer96] Most mathematical results tend to be clustered around the limits of statistical techniques to discriminate intrusions from other activities, [Liepins92] on how to set thresholds so as to optimize results in statistical systems, [Soh95] and on numerical results from specific tests.

Except in the computer virus arena, the literature we reviewed did not indicate any substantial testing against widely available lists of vulnerabilities, [CERT] [AUSCERT] [Kumar95-2] and of the few published testing examples, still fewer have included details of the attacks attempted. (5)

Damage Assessment and Recovery

To the extent that there is automated response today, it consists largely of altering information in system or routing tables so as to prevent further attack through the same route. For example, a table might be altered on a host so as to prevent further login attempts from a particular terminal, or a router might prevent traffic from a particular set of IP addresses.
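
A sketch of this style of table-driven response follows; the address and the deny-table representation are illustrative only, standing in for a host login table or a router filter.

    # Illustrative deny table consulted before any connection is accepted.
    deny = set()

    def respond(source):
        deny.add(source)              # automated response: block further access

    def accept(source):
        return source not in deny

    respond("10.0.0.99")              # hypothetical attacking address
    print(accept("10.0.0.99"))        # False: further attempts are refused
    print(accept("10.0.0.5"))         # True: other traffic is unaffected

The value of such a response depends entirely on how reliably the blocked source identifies the attacker; as discussed earlier, forged or shared source addresses can turn the same table into a denial-of-service mechanism.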

A vital step in most follow-up investigations of intrusions is the damage assessment and recovery. Unfortunately, the vast majority of systems discussed in the literature do not appear to provide useful mechanisms for damage assessment and recovery, and of those that bring the subject up, none detail methods or capabilities required for this sort of follow-up investigation.

Scalability

In order for intrusion detection and response systems to be of significant utility in the vast majority of modern information technology environments, they must be capable of handling large numbers of events from large numbers of systems. Modern information processing almost always involves local area networks (LANs), often involves wide area networks (WANs), commonly interfaces with the NII, and through international links, may be widely distributed through the global information infrastructure (GII). Performance requirements are also increasing as processing and communications speed increase. Many of today's intrusion detection systems are incapable of handling the load of even one fast PC operating over a high-speed LAN.

By contrast, most current intrusion detection and response systems are designed to detect intrusions at the system level. In most cases where network-based solutions have been implemented, they involve primarily the collection of monitoring data from individual systems and intrusion analysis of that data on a system-by-system level. Some recent work has been directed toward cross-matching activities between systems, but this work is in its early stages and much remains to be done.

The largest scale for network-based intrusion detection found during this study was work now underway to monitor 5,000 systems, but this is still at least three orders of magnitude too small for NII-level detection and does not involve automated response. Many anti-virus implementations operate at the system level and have been implemented on tens of thousands of networked systems, but central reporting and response is still lacking.

Summary of Current Theoretical and Practical Issues

There are at least six difficult challenges in intrusion detection systems.

  1. The first challenge is eliminating false positives. This is normally done by systematic tuning of detection to meet the characteristics of the particular system. As alarms are given, skilled administrators examine the detailed audit trails, determine whether an alarm was warranted, and if not, devise a method to eliminate that particular alarm in the future. Over time, fewer and fewer false positives occur until the system reaches a state where the workload created by false positives is acceptable. A minimal sketch of this tuning process appears after this list.

  2. The second challenge is eliminating false negatives. An unfortunate side effect of the practice of eliminating seemingly false positive alarms is that combinations of such alarms may be real positives. In order to be effective while eliminating false positives, we must not incur additional false negatives.

  3. The third challenge is understanding what constitutes a security-relevant event and how to report it. Unfortunately, what many system designers consider a security-relevant event turns out to be inadequate for the detection of many attacks. As a result, defenders are sometimes forced to make alterations to the normal operation of systems in order to generate the audit records needed to detect certain sorts of intrusions. Even in these cases, context boundedness remains an issue.

  4. The fourth challenge is devising methods to test intrusion detection systems. In order for testing to be meaningful, test results should provide assurance with regard to the utility of the system in real applications. The rare examples of testing intrusion detection and response in the current literature indicate that systems do not provide much assurance, even for minor variations on attacks they are designed to detect and respond to.

  5. The fifth challenge is determining what damage was done in a detected attack, limiting further damage, and recovering from the attack. In many systems, detection provides inadequate information for follow-on analysis even by an expert. Current systems predominantly indicate that further investigation is desirable but, in most cases, they don't provide adequate information or capabilities to aid in that investigation.

  6. The sixth challenge is making systems scalable to the size required in today's networks. The majority of current systems are designed to detect anomalies on a single computer system and, optionally, report them to a central computer for reporting, further analysis, response, and archival. In order for such systems to be most useful in current regional, national, and global networks, they must be able to scale in such a way as to provide useful information at all levels.
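
Referring to the first two challenges above, the sketch below shows the tuning loop in miniature: each alarm an administrator judges benign becomes a suppression rule, and every such rule is also a potential source of false negatives. The alarm fields and rules are invented for the example.

    # Suppression rules accumulated as administrators judge alarms benign.
    suppress = []

    def tune(benign):
        """Capture an administrator's judgment as a suppression rule."""
        suppress.append(lambda a, b=benign:
                        a["signature"] == b["signature"] and a["host"] == b["host"])

    def report(alarm):
        """Return the alarm unless a suppression rule filters it out."""
        if any(rule(alarm) for rule in suppress):
            return None               # treated as a known false positive
        return alarm

    tune({"signature": "nightly-backup-writes", "host": "db1"})
    print(report({"signature": "nightly-backup-writes", "host": "db1"}))  # None
    print(report({"signature": "nightly-backup-writes", "host": "www"}))  # reported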


Technologies

Despite the lack of theoretical underpinnings, there are many reasonably effective intrusion detection systems in use today. Some examples of technologies now in use are included here to give a sense of the techniques being used, their applicability, and their limitations.

Cable Intrusions

In the cable television industry, it is common to have trucks that drive through the service area with sensitive RF meters tuned to cable carrier frequencies and with alarms set at particular power thresholds. When these meters alarm, the leakage detection people use the directional properties of antennae in conjunction with the analog output of the meter's display to track down the source of the RF leakage.

In some cases the leaks are caused by inadequate connectors or poor workmanship on the part of the cable company employees, and in these cases, the connections are repaired. In other cases, the leakage is caused by intrusions into the cable system by perpetrators of cable fraud. Cases of fraud are usually differentiated by calling the head-end to verify whether the particular source corresponds to a known customer. Once fraud has been detected, increasing levels of response are used, beginning with disconnection, and, in some cases, escalating to arrest and prosecution.

In cases where legitimate customers get access to unauthorized programming, detection is normally done through the automated billing system using the low bandwidth back-channel of the cable system and polling of the set-top-box.

Cell-Phone Intrusions

Intrusions into the cell-phone system are quite common, usually in the form of the multiple reuse of codes normally used to identify the legitimate cell-phone user to the cellular system. These intrusions typically show patterns of misuse, including high call volume, a high percentage of toll calls, calls to suspicious numbers or locations, calls from suspicious locations, and calls at unusual times of day. Detection methods in common use are based on deviations from normal behavior and detection of known fraud patterns. [Davis92] Recent developments in France [Samfat95] use a simulator in conjunction with (1) detection of baud-rate deviations, (2) impacts of user activities on other network activities, and (3) deviation from normal user signatures, to detect network intrusions from mobile communications equipment.
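
The pattern-of-misuse approach can be sketched as a simple weighted score over the indicators listed above. The weights and the cutoff here are invented; fielded systems tune them against confirmed fraud cases.

    # Invented weights for the misuse indicators described in the text.
    INDICATORS = {
        "high_call_volume": 3,
        "high_toll_fraction": 2,
        "suspicious_destination": 4,
        "suspicious_origin": 3,
        "unusual_hour": 1,
    }
    CUTOFF = 6

    def fraud_score(daily_summary):
        return sum(w for name, w in INDICATORS.items() if daily_summary.get(name))

    summary = {"high_call_volume": True, "suspicious_destination": True}
    if fraud_score(summary) >= CUTOFF:
        print("flag account: disconnect, notify customer, reprogram phone")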

Recent developments have extended formerly military radio-frequency (RF) techniques to enable cell sites to detect RF characteristics of each transmitter and to match them to known patterns of the legitimate telephone using the codes. [McCulley93] If the RF characteristics do not match, an intrusion is indicated. This intrusion detection method holds great promise in an industry where billions of dollars per year are stolen in fraudulent cellular telephone usage.

An automated response capability now in place allows cellular systems to disconnect fraudulent telephone numbers from the cellular network and tear down existing connections. [Davis92] Follow-up response is normally a cut-off of service to the particular telephone number being abused, notification of the affected customer, and reprogramming of their cell-phone for future use. In some cases, perpetrators who intercept cell-phone set-up transmissions are caught by police efforts, but this is relatively rare.

Telephone System Intrusions

The financial impact of large-scale telephone system intrusions has been substantially reduced in the last several years, in large part through the use of automated intrusion detection systems by telephone companies. A good example of this is the drop in the average loss due to PBX break-in from $32,000 in 1992 to $24,500 in 1994. [Mallory94]

A typical intrusion detection program involves the profiling of normal corporate telephone activity by the carrier and the company, and the creation of a set of automated triggering thresholds. [Brewster89] When a threshold is exceeded, the telephone company automatically acts to mitigate harm by stopping the undesired activity and immediately alerts the company representative of the incident. The representative can override any particular decision on an as-needed basis. In one such case, toll fraud was reduced from tens of thousands of dollars per month to less than a hundred dollars per month. (6) Several intrusion detection and response systems are commercially available [Staino94] and, in the limited domain of telephony, they seem to be a cost effective method for reducing toll fraud.
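
The trigger-and-override arrangement described above might be sketched as follows; the baseline, multiplier, account identifier, and override mechanism are all invented for the illustration.

    # Invented per-account baseline of normal daily toll minutes, plus an
    # override list maintained by the company representative.
    BASELINE = {"acct-1001": 300}     # normal toll minutes per day
    MULTIPLIER = 3                    # trigger at three times normal usage
    override = set()                  # accounts the representative has cleared

    def toll_check(account, minutes_today):
        if account in override:
            return None               # representative has overridden the block
        if minutes_today > MULTIPLIER * BASELINE.get(account, 0):
            return "block toll calls for " + account + " and alert representative"
        return None

    print(toll_check("acct-1001", 1200))   # blocked pending review
    override.add("acct-1001")              # representative decision
    print(toll_check("acct-1001", 1200))   # None: override honored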

Responses to telephone system intrusions vary widely, and, in most cases, perpetrators are not caught.

Power Grid Intrusions

Wide area power grid problems are normally detected by high and low voltages, currents, and frequencies or phase shifts in major distribution systems. Local outages are detected either by customer reports or by unusual power consumption patterns, and they are generally handled by sending service representatives to the site of the interruption.

Sharing of power takes place because there is excess power in the northern part of the United States during summer, while the southern part of the United States has a power excess during winter - both because of weather. As a result of sharing, large volumes of energy are transmitted over long distances using an interstate power distribution grid. The power grid is continually adjusted using computer controls to compensate for changes in consumption and availability. These same computer controls automatically detect major outages or fault conditions and isolate portions of the power grid so as to minimize damage. Whether the cause of the disruption is malicious or accidental, the effects are automatically detected and response is automatically made, sometimes within a 60th of a second. Subsequent investigation is then used to determine the origin of the fault, and corrective action is taken as appropriate.

There is at least one unfortunate side effect of automated response in the case of the power grid. When a fault occurs, the automated response is normally to isolate the fault so as to limit damage to the rest of the grid. The resulting reduction in available power through one circuit induces an automated response that draws additional power from other circuits. The added load in these other circuits can cause additional faults, which in turn worsen the situation by further reducing the available power. Again, automated response draws additional power from other circuits, and again this may cause further faults, further isolation, drawing power from still other circuits, and so on. [WSCC96] This cascading effect quickly fragments the power grid until fragments in under-powered areas are unable to support their user base and power to consumers fails. In 1996, two such incidents within two months resulted in loss of power to millions of customers for several hours.
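
The cascade dynamic described above can be illustrated with a toy model in which a tripped line sheds its load evenly onto the surviving lines. The capacities, loads, and redistribution rule are greatly simplified inventions; real grid behavior is governed by power-flow physics and far more sophisticated controls.

    # Toy cascade model: tripping a line spreads its load over the survivors,
    # which may push them past capacity and trip them in turn.
    capacity = {"A": 100, "B": 100, "C": 100, "D": 100}   # invented limits
    load = {"A": 90, "B": 80, "C": 85, "D": 70}           # invented loads

    def trip(line):
        shed = load.pop(line)
        capacity.pop(line)
        survivors = list(load)
        for s in survivors:
            load[s] += shed / len(survivors)              # simplistic redistribution
        return {s for s in survivors if load[s] > capacity[s]}

    overloaded = trip("A")                                # one initial fault...
    while overloaded:
        line = overloaded.pop()
        if line in load:                                  # may already have tripped
            overloaded |= trip(line)
    print("lines still energized:", sorted(load))         # often none remain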

A secondary problem related to automated control is that the information infrastructure underlying the power grid is not adequately protected from logical intrusion. Thus, as indicated by an incident in 1995, [Winkelman95] [Midland95] an individual may be able to disable a portion of the power grid by dialing into a control computer via modem and issuing commands. Among the available commands in many power system control computers are the shutdown of an element of the power grid and changes in voltage, current, or frequency. The implication is that a modification of such a system could be designed so as to cause a cascading failure in the power grid through exploitation of the automated detection and response system. By exploiting a small number of such vulnerabilities at strategic times and locations, much of the electrical power in the United States might be shut down, and perhaps kept down for a significant period of time.

A key challenge to this and many other intrusion detection systems is how to automatically detect and respond while limiting damage, allowing for rapid recovery, and preventing the exploitation of the automated detection and response system to promulgate an attack.

Satellite Intrusion Detection

The satellite components of the information infrastructure are commonly designed with special coding of signal and control information intended to compensate for noise characteristics typical of their environment. To the extent that naive intrusions are attempted, some coverage is provided, but this is not effective against intentional abuse. Against intrusions, signal processing techniques using neural networks for discrimination have been used [Barsoum93] but the overall effectiveness of this technique has not been determined.

An example of how response works for broadcast satellites is most revealing. In 1986, the HBO broadcast satellite was taken over by someone using the name Captain Midnight and was used to send a test pattern containing a protest message in opposition to the use of encryption by HBO to prevent theft of its satellite transmissions. [Pessin86] The perpetrator was caught and prosecuted as the result of clever detective work on the part of the FCC and FBI. [Pegano86] Detection was done by human viewers and the response was a manhunt. More subtle attacks would likely go undetected by human viewers, and if this sort of attack became more commonplace, manhunts would likely be inadequate.

In more sophisticated satellite telecommunications activities, there are increased intrusion detection capabilities. For example, encryption is now commonly used in the control systems of satellites to prevent them from being taken over and caused to maneuver into unusable configurations by malicious attackers. [SatCrypt] These sorts of defenses automatically detect intrusions by virtue of the fact that the commands received decode to gibberish. Because of limited power and computing capacity, the automated response is normally to ignore the commands and produce error reports that can be analyzed elsewhere at a later time and different place.
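
The detect-by-failed-decode idea can be sketched with an authenticated command check. The key, frame layout, and use of an HMAC are purely illustrative; actual satellite command links use mission-specific cryptography.

    import hashlib
    import hmac

    KEY = b"ground-to-satellite-shared-key"   # illustrative key material
    error_log = []                            # deferred reports for ground analysis

    def handle(frame):
        """Execute a command frame only if its authentication tag verifies;
        otherwise ignore it and queue an error report (the passive response
        described above)."""
        body, tag = frame[:-32], frame[-32:]
        expected = hmac.new(KEY, body, hashlib.sha256).digest()
        if hmac.compare_digest(expected, tag):
            return body                       # genuine command
        error_log.append(body)                # possible intrusion, logged only
        return None

    good = b"ADJUST_ATTITUDE +0.5"
    print(handle(good + hmac.new(KEY, good, hashlib.sha256).digest()))   # executed
    print(handle(b"MANEUVER_TO_UNUSABLE_ORBIT" + b"\x00" * 32))          # ignored
    print(len(error_log), "suspect frames queued for later analysis")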

Network Infrastructure Intrusions

Underlying most modern computer networks, but sitting on top of the telecommunications infrastructure formed by telephony, cable, and satellite systems, is the network infrastructure. Substantial network infrastructures have been in widespread use for at least 30 years, beginning with systems that provided remote terminal access to timesharing systems (e.g., Tymenet), extending to corporate networking with technologies such as X.25, [Capel88] and turning, over time, into the current inter-networked set of networks that includes the Internet, intranets, and other similar networks.

Intrusion detection in network infrastructures started when companies like Tymenet became aware of malicious parties exploiting weaknesses in their infrastructure to attack their clients' computers. Over time, these service providers developed techniques that detected intrusions based on time of day, source/destination address pairs, and other similar information available about the connections they provided. [Capel88] Similar techniques are used today to protect X.25 and ATM networks.

Connection-oriented networks were the norm for quite some time, but as the infrastructure moved toward the Internet style of interconnect, this sort of connection-related information became increasingly difficult to reliably obtain and evaluate. In today's Internet, IP address forgery permits attackers to bypass some protection and detection methods that are based on address-related information.

Many modern network infrastructure providers have little in the way of automated intrusion detection and response against malicious attacks not resulting in network collapse. As an example, some of the major backbone providers for components of the Internet design their networks so that outages can be detected at the network control center, but they have no visibility into the traffic passing through the nodes, nor can they detect modifications to the nodes that are controlling traffic using their normal operational methods. (8) Even if network traffic could be monitored, modern encryption techniques make content-based monitoring ineffective, and traffic pattern monitoring is a very crude tool that is incapable of differentiating many current attacks from normal network traffic.

Some examples of network-based intrusions that were not detected for a long time include the 1995 theft of hundreds of thousands of passwords by snooping on infrastructure elements [CERT94-01] and the widespread intrusions into DoD computer systems detected manually by DISA in 1994. [AP]

A particularly important issue in providing incident response for information infrastructure elements is that when infrastructure-level incidents occur, they may make other forms of incident response unavailable. For example, when the Internet Virus of 1988 [Spafford89] [Rochlis89] hit, most of the people responsible for incident response used the Internet as their sole means of contact. Since the Internet was degraded by the attack, they could not coordinate their response. Similarly, a recent outage of pager communication could have had dramatic impacts on emergency response because of the high degree of dependency on pagers for contacting response teams in emergencies. Reliable incident response must not depend on the proper operation of the system under attack.

Computer and LAN Intrusions

One of the key areas where a great deal of recent work has been done is in automated intrusion detection and response within individual computers and local area networks (LANs). The fragmented control of the information infrastructure has made uniform response such as that used by telephone companies or cable systems infeasible. As a result, there are many partial solutions for what has become, for lack of a better term, the wild west of the information age.

Automated intrusion detection systems in computers and LANs range widely in capabilities, but they share a common bond in that they are almost all special-purpose solutions. The few exceptions are in fields where mathematical foundations were formed before protection mechanisms were considered.

There are two kinds of intrusion detection systems used in computer systems and LANs today: (1) those that review information based on events that took place in the past (i.e., audit information) and (2) those that analyze the current state of a system. (9) (10)

Audit-based Detection

Much of the literature on intrusion detection is based on the idea of analyzing audit trails. The most rudimentary systems tend to be hard-coded special-purpose audit analysis programs that look periodically or on demand for known anomalies and report them to the user or administrator. [Toure94] [Dowell88] [Courtney] [Shieh91] [Smaha88] [Crosbie95] More sophisticated systems add real-time detection and response, [Ilgun93] [Teng90] [Lunt88-2] [Venema] [Proctor94] [Lankewicz91] [Bauer88] detection of out-of-pattern behavior (optionally with learned patterns), [Teng90] [Proctor94] [Lankewicz91] [Bauer88] [Winkler88] [Debar92] [Javitz91] [Vaccaro89] [Joyce90] [Denning86] [Lankewicz91] [Smaha88] and detection of intrusions that span multiple systems. [Bauer88] [Jackson91] [Snapp91] [Snapp92] [Mukherjee94] [Banning91] [Heberlein90] [Lunt88] [Lunt92] [Proctor94] [Toure94] [Saroyan96] These systems generally seek to do one of six things: (1) detect known attack patterns, (2) detect deviations from normal behavioral patterns, (3) detect inconsistencies that could not be produced by normal system operation, (4) reduce large volumes of audit data to smaller volumes of more interesting data, (5) filter audit data to provide summaries indicative of trends, or (6) combinations of these things.

  1. Programs that detect known attack patterns are inherently limited because the total number of possible intrusion patterns is unbounded and we don't have a mathematical basis for describing them all. This means that these types of systems result in unlimited numbers of false negatives. [Cohen86] Detection techniques vary significantly in sophistication, often involve a descriptive language to allow people to specify what is to be detected, and produce a wide range of different responses. They also have a significant advantage over other detection techniques in that, if properly tuned, they produce a very low proportion of false positives.

  2. Programs that detect deviations from normal behavioral patterns have two basic challenges to meet. The first challenge is characterizing normal behavior and the second challenge is detecting deviations in a meaningful way. [Liepins92] [Helman93] These sorts of systems tend to produce surprising and valid detections in some cases, but also produce a high proportion of false positives, [Lunt88] false negatives, [Liepins92] and hard-to-interpret results. False positives can be reduced, but only at the expense of increased false-negatives. Some systems identify detected events or sequences of this sort as speculative in order to help weight follow-up investigative efforts toward more definitive detections.

  3. Programs that detect inconsistencies that could not be produced by normal system operation are far less common and have only been explored at a rudimentary level. Current tools produce false positives, usually associated with processes that terminate without producing expected side effects, and only detect select classes of attacks. [Ko94] [Cohen95-2] [Bishop95] Experimental systems have only recently been demonstrated.

  4. Audit reduction is the core of many audit-based intrusion detection systems. Rather than try to analyze information in great detail and differentiate all attacks from non-attacks, designers implement systems that eliminate audit information believed to be unimportant to intrusion detection and produce reduced and/or summarized audit records for follow-up investigation by humans. While the goals of many systems may not simply be audit reduction, audit reduction is almost universally used as a tool for reducing the complexity of analysis.

  5. Trend analysis detects trends and changes in trends as a means to inform investigators about factors in the environment that may be of interest. This sort of analysis provides a background for investigation that often leads to detections but is not necessarily indicative of intrusions. For example, if disk write statistics are increasing, this may indicate a disk that is about to fail or excessive activity associated with some sorts of intrusions. With this awareness, investigators may examine information they would otherwise ignore.

  6. Some detection programs use combinations of these techniques in order to balance false positives, false negatives, and performance issues.

Another important way that designs vary is by where and what they monitor. Most of the scientific investigation has involved (1) host-based monitoring, where a single host is examined, and (2) network-based monitoring, where hosts within networks are instrumented so as to feed information to a central monitoring station or the monitor examines network traffic and tries to derive what is happening on the hosts. Many real-world implementations use only choke-point monitoring, where they detect anomalies only at a gateway or firewall computer. In this case, the vast majority of internal threats are ignored except as they cross the choke-point.

Summaries of the research areas covered in this field are given by Lunt, [Lunt88] [Lunt93] who characterizes techniques as including expert systems, statistical detectors, neural networks, and model-based reasoning systems, and by Kumar, [Kumar95] [Kumar95-2] who itemizes expert systems, model-based reasoning, state-transition analysis, and keystroke monitoring. [Eliot95] Detection techniques are used to detect unusual behaviors, deviations from known-good behaviors, and known-bad behaviors. Lunt also points out limitations of many of these approaches, discusses remote audit analysis, real-time and off-line analysis, and legal issues, and suggests that combinations of techniques are necessary in order to build a more comprehensive detection capability. Several other summaries are also available. [McAuliffe90]

Expert effort is almost always required in order to produce usable intrusion detection systems in an application environment. [Toure94] [Proctor94-2] [White96] The customization of expert system rules so as to reduce false negatives and increase detections depends on specifics of the environment. Gathering of statistical data for anomaly detection is environment-specific. Detecting inconsistencies between audit trails depends on the nature and form of redundancy available in the particular environment. Audit reduction for human analysis is almost always customized to the client's interests. Trend analysis is often dependent on specific sorts of trends of interest to particular environments. In systems that combine these techniques, a great deal of effort may be required to produce a suitable implementation. Typical figures from implementers in industry indicate that several months of expert time are required for each substantial environment, and that more time is required for more complex environments.

Developers in this field identify the most important advances as (1) the combined analysis of audit trails from many sources and systems and (2) the insight gained by thinking about the results generated by automated anomaly detection systems (but not the systems themselves). They cite the major impediments to intrusion detection as (1) differences between audit sources, both in terms of format and content, and (2) a lack of adequate audit content useful for detecting intrusions. They also note that by combining application-level audit information with system-level audit information, they can do a far better job of eliminating false positives and false negatives.

State-based Detection

Substantial efforts have been made to detect intrusions by analyzing state information in real-time [Ilgun93] [Porras92] and non-real-time. [Farmer90] [Cohen88] [Cohen88-2] [Safford93] [Kim93] [Feingold95] [Yau75] [Joseph88] [Pozzo86] [Pozzo86-2] Well over 1,000 papers have been published in this area, predominantly related to virus and other malicious program detection. Still more papers have been published on similar techniques for application-based intrusion detection. Since computer virus detection is one of the most well covered subareas of state-based system-level intrusion detection, results in that area may be quite revealing, and we investigate them here.

Computer virus detection is undecidable. [Cohen86] [Adleman90] This means that there can never be a perfect virus detection system so long as we allow programming. It has also been shown that in any system allowing general purpose programming, sharing, and transitive information flow, viruses cannot be completely prevented. [Cohen86] Many other mathematical properties of viruses are now known, and these properties have been exploited in the development of virus detection techniques.
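
The flavor of the underlying argument can be conveyed with a short illustrative sketch: assume a perfect detector exists and construct a program whose behavior contradicts whatever the detector says about it. The detector stub and program text below are placeholders; this is a restatement of the standard diagonal argument, not executable detection logic.

    # Suppose, for contradiction, that is_virus() is a perfect, always-terminating
    # detector. The stub below stands in for any such claimed detector.
    def is_virus(program_text):
        return True        # the argument works whatever this function returns

    CONTRARY = """
    if is_virus(CONTRARY):
        pass               # detector said 'virus', so the program does not spread
    else:
        spread()           # detector said 'clean', so the program spreads
    """

    verdict = is_virus(CONTRARY)
    print("detector verdict:", verdict,
          "(the program's actual behavior is the opposite, so the verdict is wrong)")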

There are about 10,000 known viruses today, several hundred of which are now spreading in the global computing community. About five new viruses are identified each day by the virus defense community, and the vast majority of businesses responding to industry surveys claim to detect many computer viruses per year. The number of known viruses and the rate of new virus discovery exceed the number and rate of other new attacks discovered by at least one order of magnitude. There are far more known virus attacks than known non-virus attacks. Since several anti-virus products detect almost 100% of the known viruses in published tests, anti-virus products detect far more known intrusion types than any other intrusion detection technology.

After more than 10 years of fairly intensive work in this area, many mathematical, theoretical, and philosophical results have been generated [Cohen86] [Murray89] [Gleissner89] [Davida89] [Cohen94-2] and widespread commercialization has resulted. Practical virus detection now comes in one of six forms: (1) programs that detect known viruses, (2) programs that detect code fragments that are similar to those in known viruses, (3) programs that detect behaviors common to many known viruses, (4) programs that detect known-good states and alarm on all others, (5) programs that detect changes in state information, and (6) programs that combine these techniques with prevention to achieve defense-in-depth. [Cohen94-2]

  1. Programs that detect known viruses were very popular when there were a small number of viruses to be detected. As the number of viruses grew, performance was negatively affected, but scanning for known viruses is still a prevalent technique. From an epidemiological point of view, scanning is an effective method for reducing viruses in the general computing population. [Kephart93] This technique is of little or no value against custom viruses designed for a specific attack, as it produces a potentially infinite number of false negatives. On the other hand, it produces almost no false positives if properly implemented, which leads to very effective automated response. An improved variation on this theme is the virus monitor technology [Hirst90] that followed from theoretical results on integrity shells [Cohen88] [Cohen88-2] for optimal virus detection in untrusted computing environments.

  2. Programs that detect code fragments that are similar to those in known viruses are used in a few products and are used by engineering staff in companies that specialize in analyzing viruses, but they have largely failed in the market and do not efficiently detect many known viruses. This technique produces a potentially infinite number of false negatives and false positives, but the false positives tend to be fairly limited when used in conjunction with checksums of known viruses. This result followed an effort by a German research team to break known viruses down into component parts, classify the components, and automate their detection and removal.

  3. Programs that detect behaviors common to many known viruses have been relatively unsuccessful in the market, in part because they produce large numbers of false positives and false negatives. [Trend] [Shieh91] Substantial progress has been made in reducing false alarms, and when used in conjunction with exception lists, these techniques offer some promise. A related technique that has also failed to produce much success is the use of Trojan horse victims designed to be attacked so that the detection method can easily find attacks and decode how the attacks work. This is similar to the honey pots and lightning rods sometimes used in the Internet to entice attackers away from real targets and detect their activities.

  4. Programs that detect known-good states and alarm on all others have been unsuccessful. The major failing is that there are large numbers of valid programs in the world and new versions of new programs appear at a very rapid rate in today's market. There are also many programs that install with minor customizations, making detection of valid versions very hard. This technique has been moderately successful in detecting illegal copies of known software.

  5. Programs that detect changes in state information (e.g., file alterations) [Cohen88-2] have gained only about 20% of the antivirus market. This technique produces no false negatives when comprehensively used, but it produces false positives in any environment without strong change control. This growing commercial segment of the antivirus market grew out of theoretical results showing the optimality of integrity shells. A minimal checksum-based sketch of this approach appears after this list.

  6. By far the most successful programs in the market today combine more than one of these techniques in varying ways in order to balance detection, performance, false positives, and false negatives against each other. [Cohen94-2] Defense-in-depth grows out of long-standing and widely known historical results.
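
Relating to the fifth category above, change detection can be sketched with cryptographic checksums over protected files. The file list is illustrative and a modern hash function is used purely for convenience; the essential point is that any modification is caught, at the price of alarms on every legitimate update.

    import hashlib
    import os

    def snapshot(paths):
        """Record a checksum for each protected file."""
        return {p: hashlib.sha256(open(p, "rb").read()).hexdigest() for p in paths}

    def check(baseline):
        """Alarm on every file whose contents differ from the baseline."""
        changed = []
        for path, digest in baseline.items():
            current = hashlib.sha256(open(path, "rb").read()).hexdigest()
            if current != digest:
                changed.append(path)
        return changed

    protected = [p for p in ("/bin/ls", "/bin/sh") if os.path.exists(p)]  # illustrative
    baseline = snapshot(protected)
    print("changed files:", check(baseline))      # empty until something is altered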

Virus detection systems have been widely tested and there are many standard test suites used in published product evaluations. Many products have been tested periodically for many years based on a standard suite of tests, and coverage results are widely published. Economic analysis has been done on anti-virus techniques [Jones89] [Cohen90] and cost effectiveness results are in widespread use. Epidemiological models have been used to understand the large-scale impact of defenses [Kephart93] [Forrest96] and results have been widely used. Results have also been developed for high-integrity situations [Yau75] [Joseph88] [Pozzo86] [Pozzo86-2] where a high degree of assurance is required. Attempts have also been made to use expert systems, neural networks, and many other related techniques for virus detection, but these techniques are not widely used in commercial systems because of false positives, false negatives, and performance limitations.

Response in Networked Systems

As a precursor to some forms of response, it may be vital to create a high degree of assurance that the intrusion has been traced to its proper source. Many techniques are available for tracing intrusions: in the telephone system, Caller Number Identification (CNID) provides source number information; in cellular communications, individual phones are identified by serial numbers and fraud detection systems can authenticate their electrical behavior; [McCulley93] in cable systems, line taps can be found by using time domain reflectometers; and in satellite systems, radio detection equipment can triangulate perpetrators relatively quickly.

Perhaps the largest exception to this traceback capability is in the computer networking environment. In today's Internet, there are no central controls or methods to trace through the infrastructure without the cooperation of a potentially large number of independent infrastructure providers. In recent years some limited tools have been developed to try to trace an intrusion to a source, but these tools are only effective against the least sophisticated attackers. They do not allow traceback when IP address forgery is used, when intermediate nodes fail to cooperate, when firewalls block further traceback, or when the intruder breaks into an intermediate site in order to launch the attack. [Cohen96] In these cases, cooperation among many service providers may be required. Recent results indicate that only a small percentage of systems administrators respond to requests for assistance in tracing an attack to its source, and of those who respond, only a small percentage maintain adequate audit trails to allow multi-hop traceback.

Regulatory changes in the telecommunications industry may lead to a worsening of traceback problems. For example, regulatory changes in 1996 allow cable companies, long distance carriers, regional Bell operating companies, independent phone companies, and others to compete across the full range of information services. This means that an attacker at a pay phone might connect to an intermediate computer system through a telephone call involving two cable carriers, two local telephone companies, a long distance carrier, and an Internet Service Provider (ISP). From there, the attackers might connect to another ISP through a similar chain of links involving another continent, repeat the last step multiple times, and then launch an attack against the victim's computer systems. Today's technology and coordinated response makes traceback of such an attack essentially impossible in anything like real-time. In this environment, increased emphasis may be placed on behavior-based detection, [Chen95] multi-hop audit trail gathering and analysis, [Cohen96] or line tapping techniques.

With rare exceptions, computer intrusions do not result in the kind of legal response found in other information infrastructure break-ins. The sorts of automated responses in use today generally involve cutting off user accounts, stopping known-malicious programs from executing, slowing performance for intrusive connections, returning false information, creating a jail to monitor the intruder's methods, increasing monitoring in other locations, and, in more extreme cases, rebooting computers. More aggressive forms of response have included (1) sending electronic mail to systems administrators at the attacking site, (2) sending electronic mail to the sites providing connectivity to the attacking site, (3) eliminating all access from the attacking site by altering tables in a screening router, (4) informing human operators, and (5) sending mail to mailing lists to try to generate social pressure and manage other peoples' perceptions of the perpetrator.
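
As a minimal sketch of one such automated response (the event data, threshold, and rule syntax below are hypothetical and are not taken from any product or router mentioned in this report), the fragment counts suspicious events per source address and, past a threshold, emits a deny rule of the kind a screening router might load:

    from collections import Counter

    # Hypothetical event stream: (source_address, event_type)
    events = [
        ("10.1.2.3", "failed_login"),
        ("10.1.2.3", "failed_login"),
        ("192.0.2.7", "failed_login"),
        ("10.1.2.3", "failed_login"),
    ]

    THRESHOLD = 3   # hypothetical: block a source after three suspicious events
    failures = Counter()
    blocked = set()

    for source, kind in events:
        if kind != "failed_login":
            continue
        failures[source] += 1
        if failures[source] >= THRESHOLD and source not in blocked:
            blocked.add(source)
            # In a real deployment this would become a table entry in a
            # screening router; here we only print the rule we would install.
            print("deny ip from %s to any" % source)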

In some cases, defenders have taken a more aggressive approach to stopping persistent attacks. A good example of this was the response to a distributed coordinated attack (DCA), [Cohen96] in which the defender sent a 1-page email message to the systems administrator, owner, and web-master of the attacking site for each incoming attempted entry made as a result of the attack. Similarly, it has been suggested that commonly known denial of service attacks be used against the computers used by attackers in order to suppress their attacks. For example, several such suggestions were made to Internet mailing lists in response to the SYN storm attacks carried out in September of 1996. (11)

Historically, systems administrators on the Internet have been reluctant to deal with automated messages from other systems administrators, sometimes treating them as a nuisance in and of themselves. Yet in an era of automated attack without infrastructure-wide automated defense, automated response involving messages to remote administrators may be the only viable option for a defender.

As higher intensity attacks become commonplace, there will likely be an increasing need for automated response at the network level throughout the GII. The goal of such response is ultimately to push the defense away from the victim and closer to the attacker until the attacker is identified and cut off from further attacks. But, as in the power grid, automated reaction to automated attacks at the network level introduces the issue of abuse of the response system. We found no publications on the long-term implications of automated response.

In select areas such as virus detection and response, some classes of known viruses are automatically removed, and in extreme cases, system operation continues without the user even noticing the response. Because of the high degree of indirection involved, tracing computer virus authors is very difficult, especially at the point of attack. For this reason, most virus defenses make no attempt to trace the virus to its source.

The Big Picture

At the overall information infrastructure level in the United States, there is no comprehensive or all-encompassing intrusion detection or response system or method. In effect, each individual and organization is left to fend for itself. Perhaps even more disconcerting is the potential for widespread confusion at the Federal level if serious infrastructure-wide attacks such as those anticipated for information warfare should arise.

The teams responsible for response at the Federal level are woven into a complex fabric that defies overall understanding. At some levels of intensity for some sorts of intrusions, there may be five or more federal agencies with some degree of responsibility for response. [SAIC-IW95] Current efforts to coordinate closely between the CERT and CIAC incident response teams are a good start toward improved cooperation in the response phase, but without any NII-level intrusion detection mechanism, their efforts can only be in response to local detections. The FIRST teams around the globe also offer some response capability based on a cooperative international effort, but again, there is no global intrusion detection scheme.

The recent changes in U.S. telecommunications laws are likely to substantially complicate the detection and response situation. This would seem to be a ripe area for automated tools designed to gather and analyze widely divergent audit information from extremely heterogeneous networks. To get a sense of the heterogeneity, consider that telephone switches, cable converters, Internet connections, gateway computers, satellite links, X.25 networks, Local Area Networks, Wide Area Networks, and other technologies may all be involved in a single wide-scale attack.

In the aggregate, there may be no choice but to use intrusion detection and response technology for NII-level defenses at this time because of a lack of adequate alternatives. Every other technical alternative at the national level today involves securing a substantial portion of NII systems, and today, the vast majority of those systems are very insecure. The cost of securing many of the elements of the NII is clearly substantial and may be beyond reach, so some alternative is needed. Intrusion detection is not the only hope, but it may be the best alternative available for some time.

Some Comments

Despite the wide range of capabilities, except in the limited area of computer viruses, few current systems appear to have a sound theoretical or mathematical basis for deciding what to call an intrusion; there is no notion of coverage developed in the literature; only a few papers include any mathematical analysis or notion of efficiency; and the published literature has not seriously considered the issue of context-boundedness.

At a more detailed level, many current intrusion detection systems seem to fail against attacks that take the possibility of such a defense into account. Just as many motion sensors can be fooled by moving at a pace below their thresholds of detection, logical intrusion detection systems seem to have detection thresholds below which attacks may persist. Similarly, many of the models and systems use aggregate behavior for making decisions, and a clever attacker may be able to carry out a relatively high intensity attack using one parameter while covering it up using another compensatory parameter.
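
To make the compensation point concrete, the following sketch (our own construction; the parameter names, weights, and threshold are hypothetical and do not describe any system discussed here) shows a detector that alarms on a weighted aggregate of two measurements, and an attack that moves the same total volume while never crossing the aggregate threshold.

    # Hypothetical aggregate-behavior detector: alarm when a weighted sum of
    # two per-interval measurements exceeds a fixed threshold.
    WEIGHTS = {"connections": 1.0, "bytes_per_conn": 0.01}
    THRESHOLD = 150.0

    def score(interval):
        return sum(WEIGHTS[k] * interval[k] for k in WEIGHTS)

    # A blunt attack trips the detector...
    blunt = {"connections": 200, "bytes_per_conn": 1000}
    # ...while a compensated attack moves the same total volume by raising one
    # parameter and lowering the other, never crossing the aggregate threshold.
    compensated = {"connections": 40, "bytes_per_conn": 5000}

    for name, interval in (("blunt", blunt), ("compensated", compensated)):
        volume = interval["connections"] * interval["bytes_per_conn"]
        print(name, "volume:", volume, "score:", score(interval),
              "alarm:", score(interval) > THRESHOLD)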

Even in the relatively narrow domains covered by some of the special purpose intrusion detection systems, there are fundamental flaws that might be exploited. For example, the analysis of audit trails for real-time intrusion detection depends on real-time generation of audit trails, but events such as program execution are often not reported until after the program terminates. In many systems, an intruder who can successfully bypass the operating system protections in a short period of time can also prevent the audit trails from reflecting their activities. Similarly, it is often possible to overwhelm audit capabilities so that they do not record all of the actions taken.


Findings

Based on the examples in this report, the written literature reviewed during this study, discussions with intrusion detection system developers, and demonstrations of intrusion detection systems, we believe that the following description is indicative of the current state of the art in logical intrusion detection and response for NII systems.

What Can Be Done Today

There seems to be widespread agreement that current intrusion detection systems are advisory in nature and that they are most useful in providing human experts with assistance in the detection of and response to intrusions. There is essentially universal agreement that no current or anticipated system is adequate to eliminate both false positives and false negatives. When more false positives are eliminated, more false negatives appear, and when more false negatives are eliminated, more false positives appear.
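
A toy example (the scores and labels below are invented purely for illustration) shows the shape of this trade-off: as a detection threshold rises, false positives fall while false negatives rise, and lowering the threshold reverses the effect.

    # Invented detector scores with ground-truth labels, for illustration only.
    # Higher score means the detector considers the event more suspicious.
    events = [
        (0.20, "benign"), (0.40, "benign"), (0.55, "benign"), (0.70, "benign"),
        (0.50, "intrusion"), (0.65, "intrusion"), (0.80, "intrusion"), (0.90, "intrusion"),
    ]

    def errors(threshold):
        false_pos = sum(1 for s, label in events if s >= threshold and label == "benign")
        false_neg = sum(1 for s, label in events if s < threshold and label == "intrusion")
        return false_pos, false_neg

    for t in (0.45, 0.60, 0.75):
        fp, fn = errors(t)
        print("threshold", t, "-> false positives:", fp, "false negatives:", fn)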

The process of creating an effective balance between false positives and false negatives currently involves a substantial amount of effort by experienced experts in the intrusion detection field and well-developed tools intended for customization by those experts for the particular situation. The amount of time and effort required for customization varies with the specific requirements, but a reasonable estimate for a typical limited-use industrial environment is that 2-5 months of effort are required for this customization process.

Except in limited situations such as the detection and removal of known viruses, these systems are predominantly used to reduce the amount of information required by systems administrators to make judgments about whether intrusions have taken place, how they are caused, and who is responsible.

Recent Efforts at Improvement

A substantial amount of effort has been made over the last ten years to improve this situation, but these efforts have only been marginally beneficial. For example:

These efforts have resulted in the development of some tools that make the experts more productive, but none of them has had a substantial impact. Of the implementers we talked to, most asserted that researchers' testing of new techniques produced some revealing results, but that the prototype systems built on these techniques were almost all failures because of a lack of understanding of, or attention to, the real needs of customers.

This sort of difference of opinion between researchers and product developers is not uncommon, and it does not, in and of itself, indicate that anything is amiss. There are also points of agreement. For example, many researchers and developers agree that the combination of audit records from multiple sources was an important research contribution and that techniques for lossless or low-loss audit reduction are very beneficial.

Automated Response

The most controversial area we examined was automated response. Many of the reviewers of initial drafts commented that this area is very dangerous. Clearly the cascading effects in the power grid and phone systems indicate the potential for harm. Movies like War Games give a clear indication of the risks associated with automated military response, and we would clearly like to avoid this level of automation under any circumstance.

On the other hand, automated response is clearly necessary, especially in critical infrastructure elements involving speeds or volumes beyond the human capacity to respond. The DCAs in the Internet demonstrate that certain classes of automated response may be the only hope for stopping an ongoing attack. Response within a millisecond as required in some telephony applications is clearly unattainable without automation. Response to massive leakage of information vital to national security over a 650 million bit per second ATM connection must be automated in order to prevent large volumes of information from leaking before a human could react.
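
A quick back-of-the-envelope calculation (a sketch; the reaction times are assumed values, not measurements) shows the scale involved in the ATM example:

    # Data leaked over a 650 million bit per second link before a response occurs.
    link_bits_per_second = 650_000_000
    for reaction_seconds in (0.001, 1.0, 60.0):   # automated, attentive human, paged human
        leaked_megabytes = link_bits_per_second * reaction_seconds / 8 / 1_000_000
        print(reaction_seconds, "second reaction ->", round(leaked_megabytes, 1), "megabytes leaked")

Even with only a one-second delay, on the order of eighty megabytes have already left before anyone could act.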

Although the published research results we found did not cover this area, we believe that the combination of necessary automated response and the potential for harm from that response calls for the development of fail-safe methods, so that any failures caused by a response occur in a safe mode. This would seem to be a vital area for research.

The options for response are limited only by our ability to cause them to take place. There is no technical restriction to causing a particular sort of response to happen when a particular sort of incident is detected. Most current systems respond by making a record of activities and notifying human respondents. People then investigate further and take actions suitable to the situation.

In even slightly more automated systems, reflexive control attacks have a tendency to produce substantial problems. For this reason, most companies keep people in the control loop except when the system being monitored is very well engineered to the specific purpose.

Commercial Viability

Commercial vendors we talked to asserted that further research into intrusion detection was not required in order for them to progress in their intrusion detection businesses. Not surprisingly, researchers asserted that the commercial advancements were the direct result of research in intrusion detection.

The reports we heard indicated that, as products, intrusion detection systems are becoming or have been profitable. If commercial interests believe that their products are viable, non-private research money is probably not necessary in order to support the industry.

It would seem to follow that the areas where support may be helpful to advancing the field are areas where industry is not viable.

Scientific Research Areas

From a scientific standpoint, some substantial gaps in intrusion detection and response remain. The most substantial gaps appear to be:

Conclusion

As a research area, intrusion detection is still viable, but the most vital areas for research are not currently being followed. Some rethinking and redirection of research efforts would seem to be most appropriate.

One of the things we heard repeatedly from the research teams was that research money is being directed at development efforts involving only small amounts of research. This appears to be a side effect of the way research funding now works, with graphical demonstrations required in short time frames and deliverables including prototypes that are immediately usable in application environments. Although we do not offer a solution to this issue, we note that the lack of deep scientific research results may be related to this funding issue. To the extent that funding can include an apportionment for fundamental research, we believe it will likely benefit this particular field at this particular time.



Footnotes

  1. No reference is available at this time.
  2. Intrusions resulting from received bits of information - often referred to as logical intrusions
  3. Components dealing with discrete logical inputs, states, and outputs rather than continuous analog values
  4. Some reviewers have commented that they disagree with this result. Perhaps the most cogent remarks were:

    My personal view is that the reason there is no formal definition of an intrusion is that what constitutes an intrusion is completely dependent on site specific policy. Since policy varies very widely from site to site, all ID can do is provide general mechanisms which can be used to detect violations of common kinds of policies.

    I don't view this as a problem. Lots of engineering disciplines get along just fine without precise mathematical definitions of their fundamentals. (e.g., ask a civil engineer to give you a mathematical definition of "bridge" that would enable you to reliably distinguish all instances of "bridge" from anything else. They couldn't, but that does not mean that civil engineering does not contribute helpfully to the design and construction of bridges.)

    It's fairly clear that there are things we would all understand as intrusions that cannot be detected by any means because there is no information in the intrusion which distinguishes it from a non-intrusion. (e.g., masquerade across the Internet - assuming a close enough impersonation).

  5. Several reviewers of this report have asserted that one of the reasons that testing is inadequate is that standard attack suites are not available. This challenge was met by the anti-virus community through organizations like CARO and by attackers posting new viruses to bulletin board systems accessed by researchers who then shared the data. We also note that known attack suites fail to resolve the underlying issue of a poorly defined set of faults and failures required for a deeper understanding of the protection testing issue.
  6. This example is based on personal and confidential contacts within a large corporation.
  7. A personal experience of one of the authors of this report.
  8. This information comes from confidential conversations with those who operate some of these networks.
  9. There was some confusion about this distinction, particularly in the example of a system that observes network traffic in near-real-time. In our discussion, we consider this to be audit information because it examines events after they take place.
  10. In discussing the distinction between these two views, we noted examples where audit information fails to reflect the actual state of a system and examples where state information fails to reveal what took place in the past. This seems to imply that both are necessary, but perhaps not sufficient, for comprehensive detection. We did not find a reference in the literature reflecting this result.
  11. These responses might not have been effective because of the use of forged IP addresses in these attacks.
  12. These test results were privately communicated under conditions of anonymity but were not published.

