Networks dominate today's computing landscape and commercial technical protection is lagging behind attack technology. As a result, protection program success depends more on prudent management decisions than on the selection of technical safeguards. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
Managers are always looking for analogies to make things easier to understand - for themselves and for others. The analogy of biological viruses to computer viruses was one of the reasons they were called computer viruses in the first place, so it seems logical that we should talk about them together in these terms.
The analogy of biology to computers is limited - at least at the low levels - in that the underlying mechanisms of cellular biology and of computer functioning are quite different. But when we look at the macro view of how computers and information operate in the large, the strength of the analogy to the way the biological systems of Earth function can be quite striking.
Today, I am going to talk about things from the perspective of biology and reflect on how close the analogy is to the computing environment today. I should pause here to note that in 1988, Bill Murray wrote a paper for Computers and Security (V7 p 139) titled "The Application of Epidemiology to Computer Viruses" which covers similar material, and interested readers would be enriched by reviewing that article.
I am reading a very good book on emergent diseases ("Emerging Viruses", edited by Stephen S. Morse, Oxford University Press, 1993), and on page 72, Bernard Fields' article has a nice description of how viruses and hosts interact (table 7.1). I reproduce the table below for reference, along with a new table ("Emerging Computer Viruses") that I will explain shortly:
[Table 7.1 - Stages in Viral Pathogenesis (after Fields): stability in environment; entry into host (portal of entry); localization near the portal of entry; primary replication; non-specific immune response; spread from primary site; tropism; secondary replication; immune response; release from host. The companion table, "Emerging Computer Viruses", lists the corresponding stages for the computing world, each of which is discussed below.]
Stability in environment: Just as biological diseases must be able to survive in the environment outside the host for long enough to get from host to host, so must computer viruses remain stable in the environment so they can survive the trip from host to host. For example, a gigabyte-sized computer virus is probably too big to survive in the Internet of today, and a virus that only functions in a very tightly coupled, time-dependent fashion with other replicants is unlikely to make the trip from host to host over a substantial distance.
Entry into host - portal of entry: As with biological viruses, even though there are hundreds or thousands (biological) or hundreds of thousands (computer) of viruses, they only enter hosts through a relatively small number of portals. Thus the portal of entry of a new or emerging virus is likely to be one of the ones that the hosts and their environment facilitate. Today, that is the Internet - typically via email, the Web, or some common service - or via floppy or CD-ROM. A new virus will only succeed if it can use a portal of entry that functions well, and the nature of the portal and the rate at which the portal is available limit the ability of the virus to spread. Some viruses (biological and computer) have multiple portals of entry, which makes them spread more effectively but also limits their structure. Just as the GI tract and nasal passages have their own micro-environments, computer entry points have their own micro-environments. The virus has to survive the path into the host.
Localization near the portal of entry: Just as biological agents need to find a fertile place to attach to upon entry (or they float around and back out again), computer viruses need to latch onto something in the computing environment that will either store them for later interpretation or interpret them right away. If they don't get a chance to interact with the host, they will not be able to reproduce. As with biological viruses, minor changes in the virus or environment - as little as a single amino acid difference or a single bit flip - can make the difference between rapid reproduction and stasis. A classic example from biology is a single surface amino acid change that makes influenza pandemic in geese; in computing, a single bit flip in the Xerox worm (1970s) caused it to go from a controlled software device to a rampant infection throughout the Xerox network.
Primary replication: Some sort of primary replication has to take place in order for the virus to produce any amplification of the single infective entry agent. While a computer virus or biological virus could simply reproduce and send itself out into the environment in multiple copies without leaving any host infection, this is unlikely to be successful because of the challenge of transport and the inability of the virus to survive a single failure in the host. The viruses that survive reproduce in the host, build up a biomass, and then spread beyond the host, or at least build up a biomass while spreading beyond the host.
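To make the core idea of replication concrete, here is a minimal, benign sketch (not code from any actual virus): a Python "quine" that reproduces its own source text exactly. Real viruses couple this kind of self-reproduction to a mechanism for attaching the copy to a host program or message; this one infects nothing.

```python
# A benign illustration of self-reproduction: this program prints an
# exact copy of its own source code. Nothing here does any infecting;
# real viruses attach such a copy to a host program or document.
s = 's = %r\nprint(s %% s)'
print(s % s)
```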
Non-specific immune response: There are a variety of changes that happen in the body, as well as in computers, when viruses infect them. For example, a computer may consume extra computing cycles or disk accesses, or display other behavioral changes. Similarly, the body has various non-specific responses, such as irritations and changes in blood flow. This area is one where I don't understand the issues very well.
Spread from primary site: From the site of first infection, it is common for biological and computer viruses to spread within the host. For many biological viruses, the precise port of entry is not known, but they clearly spread throughout the host. In the case of computer viruses, spread might include the movement into more and more executable files, but a better analogy is found in systems such as Unix where the spread goes from user to user, perhaps even crossing the user/operating system barrier and reaching the operating system with direct access to the CPU and hardware. Biological viruses spread throughout the body systems, sometimes crossing the blood/brain barrier and reaching the central nervous system. Biological viruses may exploit macrophages or be defeated by them depending on the specifics of the interaction. They spread through blood flow and spread along with other biological components that move throughout the body in the normal course of bodily function. Similarly, viruses spreading between users in timesharing systems tend to spread along the pathways of shared programs and data. We may think of a timesharing system or local area network as a body and a virus spreading through it as following normal data and program transport mechanisms, such as the file sharing mechanisms (free in blood) or the execution of infected programs by uninfected users (in association with cells).
Tropism: Different viruses spread in different ways depending on how they are introduced. The same biological virus will have a different course of infection and follow different pathways depending on (1) host factors like age, nutrition, and immunity, (2) entry point, (3) what kinds of internal components the virus can attach to and affect, and (4) how internal components are configured. Was I talking about computers or biology? Both! More complex viruses, or viruses with a broader range of infective capabilities and modes of spread, will have more and more complex host interactions.
Secondary replication: As the virus invades more and more of the body, infected areas generate more and more replicas of the virus, mutations may occur either from random chance or as a direct result of the programming of the virus, the viral biomass grows, and there may be noticeable effects on the ability of the host to function.
Immune response: Bodily systems react to invasion by detection (virus-specific and generic) and response. This is different from prevention, which takes place at every phase of the virus lifecycle, through things ranging from barriers at entry points to the hostile environment of the stomach. Similarly, computer systems today include virus-specific detection systems, such as known-virus scanners, and general-purpose detection systems, such as integrity mechanisms (which are not widely used), and of course symptomatic detection at the level of human detection thresholds. The overall computer virus response process, when it comes to humans noticing viruses, is similar to the human public health response system. Doctors (systems administrators) and patients (users) start the analysis, and if the disease is doing real harm, they report it to public health officials (the antivirus industry and law enforcement). Cures are devised in very short times for computer viruses (relative to biological cures) and, in some cases, law enforcement seeks out the root cause and tries to eradicate it (e.g., pest control).
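To make the known-virus scanner idea concrete, here is a toy sketch in Python. It is not how any particular commercial scanner works; the signatures are made-up placeholders, and real scanners use far more sophisticated matching.

```python
# Toy known-virus scanner: search files for fixed byte signatures.
# The signatures below are invented placeholders, not real patterns.
import sys

SIGNATURES = {
    b"HYPOTHETICAL-SIGNATURE-1": "Example.Virus.A",
    b"HYPOTHETICAL-SIGNATURE-2": "Example.Virus.B",
}

def scan(path):
    with open(path, "rb") as f:
        data = f.read()
    return [name for sig, name in SIGNATURES.items() if sig in data]

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for name in scan(path):
            print(f"{path}: matches signature for {name}")
```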
Release from host: People sneeze, cough, sweat, bleed, breathe, salivate, urinate, defecate, and sexually exude viruses once the biomass of the virus within the body reaches a stage where it is expelled either by force of physical pressure (as in coughs and sneezes) or by a different vector (such as an insect bite). Similarly, computers release viruses through the mechanisms by which information moves from host to host. Today, these typically include network protocols (e.g., UDP viruses), file sharing (e.g., NFS and SMB), disk sharing (e.g., floppies, CDs, Zip, Jaz, and other disks), application-level exchanges (e.g., email, embedded files, cross-platform links, macros), and human exchanges (e.g., the 'good times' perception management virus and chain letters). As computing technology provides new methods of information exchange, new viruses will find fertile ground.
Darn, these biologists are good!
Just as computer viruses can be a close match to the biological situation, manual attacks by expert humans tend to follow the same methodology. In fact, this is really doing it backwards: the automated attacks are written after the manual techniques are well developed and tested.
[Table - Stages in Manual Attack: stability in environment; entry into host (portal of entry); localization near portal of entry; primary modifications; non-specific immune response; spread from primary site (privilege expansion); program and data tropism (hiding); secondary replication; human and program immune response; release from host (spread on). Each stage is discussed below.]
Stability in environment: In the case of the human attack against hosts and facilities, a very similar process takes place. There is, of course, the requirement that the attack capability be stable in the environment. For example, in an underwater facility, diving gear may be required in order for the attacker to get to the host.
Entry into host - portal of entry: Entry is through a portal of one form or another. It could be a door in a physical entry, a protocol in a network entry, a Trojan horse sent to a user in the host, a replacement for a software distribution, a corrupted component, or any of a host of other mechanisms that exploit different portals.
Localization near portal of entry: In many cases, an attacker who enters a system first secures a position within that system for reentry and removes immediate signs of entry.
Primary modifications: Once within the system, the attacker commonly makes modifications to assure reentry. Examples include the positioning of a .rhosts file in Unix hosts and the implantation of a program that tries to make external contact for execution of commands through a firewall, such as the recent ILOVEYOU virus which, once implanted, uses the Web to retrieve further instructions.
Non-specific immune response: The host may have a variety of responses to this sort of attack that are not specifically designed for the attack, but which are nonetheless effective to some extent. For example, the host might run periodic checks of system files such as the .rhosts file and detect changes, or it might look for processes that don't have logged in users associated with them and shut the processes down.
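A minimal sketch of that kind of periodic check, in Python, assuming a simple hash-based baseline (the watched paths and baseline location are examples only): hash the watched files once to establish a baseline, then re-run to report anything that has changed.

```python
# Sketch of a non-specific check: record SHA-256 hashes of watched
# system files, then report any file that differs from the baseline.
# The watched paths and baseline location are illustrative examples.
import hashlib, json, os

WATCHED = ["/etc/passwd", os.path.expanduser("~/.rhosts")]
BASELINE = os.path.expanduser("~/.file-baseline.json")

def digest(path):
    try:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        return None            # a missing file is also a recordable state

if __name__ == "__main__":
    current = {p: digest(p) for p in WATCHED}
    if os.path.exists(BASELINE):
        with open(BASELINE) as f:
            old = json.load(f)
        for path in WATCHED:
            if old.get(path) != current[path]:
                print("changed since baseline:", path)
    with open(BASELINE, "w") as f:
        json.dump(current, f)   # update the baseline for the next run
```

Run it once to create the baseline, then run it periodically (e.g., from cron) to be told when a watched file changes.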
Spread from primary site (privilege expansion): Many attackers who have gained a foothold expand privileges by using an attack that works from within a system but would not work from the outside. There are many examples of mechanisms that gain root access once implanted in a timesharing system.
Program and data tropism (hiding): Common practice is to use a rootkit or similar technique to conceal the presence of attack code within a system. Modern rootkits are quite complex and involve modifications to almost every key component of the host's operating system that would otherwise be used to detect the presence of the attack. This process may also have other side effects, such as altering apparent disk usage and memory usage as well as network behavior and similar facilities.
Secondary replication: Once hidden and implanted, many attackers will continue to add new entry points and secondary capabilities to enable assured reentry. They may also do other kinds of damage.
Human and program immune response: If the attacker is detected, a wide range of responses may be undertaken, ranging from investigation and traceback to automated removal and replacement with original system information.
Release from host (spread on): Once well implanted within a system and well hidden from the system owner, attackers may use the host as a jumping-off point to other hosts, both within the local network and in other networks. This technique is often used in attacks like one recently discovered that involved DNS exploits; in one such case, at least 200 other hosts all over the network were successfully attacked from a single compromised host.
It appears that, to a limited extent, even manual computer attack is similar to biological processes. But now let's look at how the analogy works in the large.
The host-by-host course of disease seems quite similar across the biological and informational domains. Perhaps this is just the nature of living organisms competing for survival. But in the larger scale, still more factors come into play. Let's take a look.
The emergence of communicable diseases in the biological world is generally regarded as an emergence because the life forms that produce disease in people are not being created from the ether. Rather, they are existing life forms competing for survival in a changing environment. When the environmental conditions change to the point where the ecological balance shifts, diseases that were relatively dormant - at least in the sense of going unnoticed by human populations - become dominant and thus seem to 'emerge'. In the information world, the situation is a bit different today. Very few computer viruses from ten years ago emerge today, per se. This is not to say that they never will or can again emerge, only that this is not the situation we normally face. Rather, we see two phenomena: the same viruses - in concept - emerge again because attackers reuse old ideas under environmental conditions similar to those of the past, and new variations on old viruses are created through minor modifications of existing code.
The second kind of emergence - slight modifications of existing computer viruses - corresponds roughly to point mutations in the biological world. Most of the mutants have little impact. Some fail to operate at all, while others fail to gain a foothold in the current ecology. Even very rapid evolutions - mutants appearing within hours of the emergence of a major new strain such as Christma.Exe (1988), Melissa (1999), or ILOVEYOU (2000) - do poorly by comparison, presumably because the environment changes during the period of a successful disease so as to increase its hostility to the new virus strains.
The first kind of emergence - recreations of old successful viruses updated for the new environment, such as Melissa (1999) and ILOVEYOU (2000), which are recreations of Christma.Exe (1988) - tends to be quite successful, and such viruses often become pandemic.
We do not see highly sophisticated viruses such as those postulated in theoretical discussions of viruses. While a serious scientist can easily create these strains in the laboratory, it appears that the people who release viruses into the general population have not yet reached that level of sophistication.
The whole attack evolution process seems to go something like this:
(1) Isolate: A new attack idea pops into somebody's mind. They try to use this idea and, if it succeeds in a test case, they either use it manually for a while, or if it requires some coding, they manually control the coded portion as part of an attack. If this is done by a defender (as part of their research), it is reported to the vendor for countermeasure creation and release but is never used in an actual exploit at this point.
(2) Release: The attack tool is released into the social environment of the Internet and is exploited in independent attacks under manual control. As soon as a defender sees it, it is reported to the vendor for countermeasure creation and release and reported widely to the security community.
(3) Enhancement: Somebody decides to automate more of the process and make a more rapid and easier-to-use attack tool. This is then used by many more people and spread through the Internet again. Again, this is rapidly reported to the vendor and the security community.
(4) New Viral Strains: Somebody decides to include the new attack tool in a fully automated process - such as a virus - and the attack process, along with either remotely alterable or fully pre-programmed damage, is released and spreads through the world. This reaches the general public and is rapidly reported to the vendor, the security community, and the general public.
(5) Adaption: Minor modifications are made to the emergent disease within a few hours of its epidemic spread, and the results are released. Some of these are reported to the public, but soon only the vendor and security community are alerted to new strains.
In each step of this process, the pathogenesis steps are also involved on a local basis, and for each strain, large-scale epidemiological effects are seen.
The epidemic process is normally only seen on a large scale for situations that go beyond isolates. Isolates can sometimes be held in confidence for many years and exploited only when needed, so that they remain stealthy and exploitable by a small population of attackers. Unless they cause obvious harm on a large scale or are detected by a strong defender, they will continue to be effective in the small, but they are unlikely to have large-scale effects unless they attack critical infrastructure elements upon which many other things depend. Isolates are similar to some of the murder technologies and spycraft technologies seen in historical cases involving governments.
Once attacks reach release, large-scale social effects take hold. Over periods ranging from minutes to months, the community responds. For people who are under immediate attack and able to detect it, a wide range of defensive strategies are employed on a case-by-case basis. The information on these defenses is shared and becomes part of the knowledge structure for overall public health in the computing arena. This information is generally tracked by professional systems administrators, just as details of diseases and treatments are generally tracked only by doctors. Companies that make products susceptible to these attacks are also notified and, in some cases, respond rapidly with patches to reduce susceptibility. In the best case, these responses take hours, and the global delivery of the patches over the Internet is achieved in hours. More typically, patches take from days to weeks to complete and are installed over a period of months to years. In some cases, such repairs are not undertaken by vendors, and systems remain susceptible for a long period of time. For example, Microsoft has never provided adequate protection capabilities for its macro and Visual Basic systems, even though some minimal controls and improved defaults would largely eliminate many of their vulnerabilities.
Enhancement is typically undertaken to allow more people to more easily exploit a vulnerability. The ongoing enhancement and release of new and improved versions of attacks is only effective against susceptible systems, so vendors who effectively repair the underlying problem escape this process to a large extent. For the sorts of issues we see today in mobile code (i.e., systems fundamentally based on trusting general-purpose programs sent from system to system by untrusted third parties), typical repairs are ineffective against such enhancements, because it is the fundamental nature of computation that sharing and general-purpose function lead to the potential for viruses.
New viral strains take the automation a step further by providing the means to automatically reproduce the automated attack. That is, they transform the previously manual aspects of the process into fully automated processes. When viruses enter the picture, epidemics are possible in the medical sense, and we see pandemic situations arise several times a year. For computer viruses, the analogy to biological systems is quite strong. New strains (e.g., Melissa and ILOVEYOU) look quite different, from an epidemiological view, than minor adaptions of those viruses, so we will treat the adaptions below and constrain the discussion here to new strains. Depending on the portion of susceptible hosts in the population and their communications interactions, new strains spread from host to host in periods ranging from seconds to hours. Although slower strains may be devised, except for UDP viruses, no current strains can spread in less than a second, and those that spread in less than a second typically only spread between a very small number of hosts (one or two). For a sense of proportion, Melissa and ILOVEYOU typically spread only when the user reads email, so even though the virus may be within the host, it typically does not leap from host to host for minutes to hours.

Systems administrators who run major installations typically detect the change in activity patterns within a few minutes to a few hours of the spread of a rapid virus in their environment and devise local solutions to limit epidemic effects. Some devise a filtering mechanism, which has the effect of reducing or eliminating stability in the environment. Some shut down servers or block services to buy themselves time, which prevents entry into the host by eliminating the portal of entry. The anti-virus vendors typically take from two to twelve hours to understand a new strain well enough to devise a temporary fix, which is then disseminated to major sites in under an hour. From a public health standpoint, these measures are most important in reducing the overall condition by reducing the paths of transmissibility. Typically, such a virus will reach epidemic proportions within 8 hours of first release (based on an early weekday release), become pandemic within 24 hours, start to wane within 36 hours because of the reduction in susceptible hosts and the public health effects of antivirus products, and be reduced to a minor inconvenience within 48 hours. Small outbreaks may remain for up to a week, and an occasional copy will appear over the next month (typically as users who were on vacation return and invoke a local copy not previously destroyed). Long-term solutions to virus problems are not currently undertaken in a significant way.
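The shape of that timeline can be illustrated with a toy susceptible/infected/removed (SIR) style simulation. Every parameter below is invented for illustration only; real outbreak curves depend on network topology, user behavior, and response speed, as described above.

```python
# Toy SIR-style outbreak model: infections grow hour by hour until
# antivirus "fixes" begin removing infected hosts. All numbers are
# invented for illustration; this models no particular virus.
def simulate(hosts=1_000_000, beta=0.9, gamma=0.4, fix_start=12, hours=72):
    s, i, r = hosts - 1.0, 1.0, 0.0   # susceptible, infected, removed
    for h in range(1, hours + 1):
        new_i = beta * i * (s / hosts)               # infections this hour
        new_r = gamma * i if h >= fix_start else 0.0 # cleanups once fixes ship
        s, i, r = s - new_i, i + new_i - new_r, r + new_r
        if h % 12 == 0:
            print(f"hour {h:3d}: ~{int(i):,} infected, ~{int(r):,} cleaned")

simulate()
```

Playing with beta (the spread rate) and fix_start (how quickly fixes ship) shows why faster public-health response matters far more than any single host's defenses.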
Adaption of new viral strains typically happens within a few hours of the initial virus release. This is facilitated in cases where viruses are not stealthy and don't use concealment techniques to keep their functions hidden. Simple adaptions to the Melissa and ILOVEYOU strains were made the same day each virus started to spread; before ILOVEYOU became pandemic (but after it became epidemic), several adaptions were spreading. Adaptions typically fare far less well than the strains they follow: the first few adaptions reach only about one fifth of the host-population saturation of the original virus, and subsequent adaptions reach far smaller numbers. Generally, the further the adaption is from the original strain, the more likely it is to have a larger effect; however, the high alert level of the susceptible population once a strain becomes pandemic results in significant changes in behavioral patterns, producing a far different environment for the adaptions than existed for the original strain. Generally, this environmental change makes introducing new adaptions at a high rate a far less successful strategy than one might initially think. The introduction of new strains at a rapid rate would likely cause far more dramatic changes in the environment, possibly even impacting the way hosts interact and the design of future hosts.
As we can see from the descriptive analysis above, the relationship of biological and informational viruses is useful in understanding the situations that arise today in dealing with virus outbreaks. One important thing to notice is that there are a lot of steps involved in the pathogenesis and emergence of computer viruses that are not being widely used for defensive purposes. A second item of interest is that, from a public health perspective, defenders are not always considering their decisions as carefully as they might, and they may be able to do a more cost-effective job by better understanding this issue.
Pathogenesis: The 10 steps involved in pathogenesis are all places where we can implement or improve our defenses. We tend to focus on stability in the environment (e.g., virus scanners), primary replication (e.g., behavioral controls to prevent people from invoking the strains), and disease-specific human and program immune response (i.e., behavioral changes and known-attack detectors). But we can also make progress in other areas. For example, entry into the host can be controlled (e.g., by turning off unnecessary services, detecting attempts to use them, and responding by terminating all service to the remote host); localization near the portal of entry can be blocked (e.g., by turning off interpretation of Visual Basic when content enters from remote hosts); non-specific immune response can be enhanced (e.g., by using integrity controls); spread from the primary site can be reduced (e.g., by using isolated subfile-system areas such as chroot environments for services); tropism can be reduced (e.g., by using read-only file systems and files); secondary replication can be reduced (e.g., by detecting close similarities between inbound and outbound content and checking further when this happens; a sketch of this appears below); and release from hosts can be affected (e.g., by requiring user confirmation of outgoing connections and messages before they are permitted). While some of these may or may not be ideal for any given environment, each is applicable in some environments.
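As one example, the inbound/outbound similarity check mentioned above is easy to sketch. This toy Python version uses the standard library's difflib; the 0.9 threshold and the in-memory message store are arbitrary illustrations, not a recommendation.

```python
# Sketch of inbound/outbound similarity checking: hold any outgoing
# message that closely resembles a recently received one (a common
# self-mailing virus pattern). Threshold and store are illustrative.
from difflib import SequenceMatcher

recent_inbound = []              # bodies of recently received messages

def record_inbound(body):
    recent_inbound.append(body)

def outbound_allowed(body, threshold=0.9):
    for inbound in recent_inbound:
        if SequenceMatcher(None, inbound, body).ratio() >= threshold:
            return False         # hold for human confirmation instead
    return True
```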
Emergence: The process of emergence can be controlled by changes in the social and legal aspects of the environment we work in. While isolates are unlikely to be prevented, deterrence can be effective against the use of isolates for real attacks. Release is currently legal, and there are social groups that form their social bonds through the creation of releases. To prevent release would seem to violate the right of free speech, but this is not as clear as it might seem. The release of attack code could be likened to the release of bomb plans, chemical and biological warfare agent design methods, and other similar items that may fall under the category of non-protected speech. It is one thing to be able to talk freely about vulnerabilities in a general sense, but the widespread release of attack code could be held to be equivalent to inciting a riot or providing burglar tools. Enhancement, once release has taken place, is hard to stop, but it might be effective to introduce legal punishments for the publication of these enhancements. Similarly, civil lawsuits may be effective against release and enhancements. Another social change would be the use of social pressure (and product purchasing pressure) against companies that do a poor job of responding to release and enhancement by failing to improve their products. Unbiased tracking of how well companies perform in this arena might be a useful addition to consumer reports. New viral strains are the natural side effect of our computing environment and malicious people with expertise. Still, these strains would be significantly reduced by limiting the processes that lead to their exploitation and by catching and punishing more of the people who initially release these strains. Adaption is currently treated with only minimal effort, but if those who adapted viruses for rerelease were sought and prosecuted with the same vigor as those who initially released the strains, it might have a significant deterrent impact. All of these aspects of emergence would also be greatly impacted by a reduction in anonymity in the Internet. The ability to rapidly trace all such activities to their sources would certainly change the nature of attack and defense, as well as dramatically changing the profile of the successful attacker.
Epidemiology: The epidemiological situation from the standpoint of viral attacks is highly reactive today, and reasonably effective at reducing the time between initial infection and the availability of prophylaxis and, in some cases, cure. This has the effect of reducing susceptibility and reducing the infected population. Unfortunately, this situation is a poor one. For example, if either the Melissa or the ILOVEYOU virus had overwritten all noncritical files in the host with copies of the virus, the damage would have been astounding. These epidemics were the direct result of a lack of environmental diversity, an environment in which mobile code has become the norm (but in which it is largely used for trivia rather than truly valuable functions), an environment in which untrained users are given carte blanche to use powerful tools in a global environment, and an environment in which ease of use and time to market have been treated as more important than safety. The spread rates are also the direct result of higher-speed and lower-delay communication, faster computers, a lack of caution in the retention and control of confidential information in computer systems, and poorly chosen defaults. If we continue to use these after-epidemic detection and cure methodologies, it is highly likely that very serious damage will result from a more malicious virus in the near future. Clearly, we need a better system.
While I don't think I have solved all your management ills in this exposition, I do hope that I have provided you with some ideas about how to better use your resources, or at least some ideas about where to start trying to do better.
People ask me why a teenager can bring down a significant portion of the Internet, and I guess the answer is that the Internet is being built by a bunch of relatively unskilled and undertrained workers. And did I mention that this situation is getting worse? With the current job situation, several million new IT workers are needed in the next few years, but only a few hundred thousand will be educated. We need to change this in a serious way if we are going to continue to move toward a biological environment. And don't kid yourself: the information age, for all of its promise of benefitting humankind, is still an environment of survival of the fittest. If you don't make your defenders more fit for this job, the implications for the survival of your organization are clear.
The logic of biology is inescapable in the Internet of today. The relationships are clear, and we need to start learning from the long history of biology in order to be effective in the information world of the future. Similarly, biology will ultimately benefit from the lessons we learn in the computing environment.
About The Author:
Fred Cohen is exploring the minimum raise as a Principal Member of Technical Staff at Sandia National Laboratories, helping clients meet their information protection needs as the Managing Director of Fred Cohen and Associates in Livermore, California, and educating defenders over-the-Internet on all aspects of information protection as a practitioner in residence in the University of New Haven's Forensic Sciences Program. He can be reached by sending email to fred at all.net or visiting http://all.net/