A Preliminary Classification Scheme for
Information System Threats, Attacks, and Defenses;
A Cause and Effect Model; and
Some Analysis Based on That Model


by Fred Cohen
Cynthia Phillips, Laura Painton Swiler, Timothy Gaylor,
Patricia Leary, Fran Rupley, Richard Isler, and Eli Dart

Sandia National Laboratories, September, 1998
(Some of this work was funded by the U.S. Department of Energy)

This article is in pre-publication for The Encyclopedia of Computer Science and Technology

Abstract:

This paper describes (in classification lists placed at the end for readability) 37 different types of actors that may Cause Information System Failure (Threats), 94 different Mechanisms by Which Information Systems are Caused to Fail (Attacks), and 140 different Mechanisms Which May Prevent, Limit, Reduce, or Mitigate Harm (Defenses). These are gathered from many different sources. Where a single source for a single item is available, it is cited in the text. The most comprehensive sources used are not cited throughout the text but rather listed here: [Cohen95] [Neumann95]. Other major sources not identified by specific citation are listed here: [Bellovin92] [Bishop96] [Bellovin89] [Cheswick94] [Cohen91] [Cohen94-2] [Denning82] [Feustal73] [Hoffman90] [Knight78] [Lampson73] [Landwehr83] [Linde75] [Neumann89] [Spafford92]

We describe a cause-effect model of information system attacks and defenses based on the notions that particular threats use particular attacks to cause desired consequences and that successful defenders use particular defensive measures to defend against those attacks and thus limit the consequences. Human defenders and attackers also use a variety of different viewpoints to understand and analyze their attacks and defenses, and this notion is also brought to bear. We then describe some analytical methods by which this model may be analyzed to derive useful information from available and uncertain information. This useful information can then be applied to meeting the needs of defenders (or, if turned on its head, attackers) to find effective and minimal-cost defenses (or attacks) on information systems. Next we consider the extension of this method to networks and describe a system that implements some of these notions in an experimental testbed called HEAT.


A Cause-Effect Model of Information System Attacks and Defenses:

There is a common notion of cause and effect that has been debated in philosophical terms many times over the ages. Some believe in fate - the happenings of the universe are predetermined. Some believe in chance - God plays dice with the Universe. Regardless of the underlying nature of things, as a fundamental assumption to further work in this area, we take the position that the world works through a system of causes, mechanisms, and effects. Thus we have the picture of systems shown in figure 1.

In this depiction of our assumption, we assert that Causes (also called threats) use Mechanisms (previously published under the name Attacks and also called Attack Mechanisms) to produce Effects (also called consequences). Protective Mechanisms (also called Defenses) are used to mitigate harm by acting to limit the causes, mechanisms, or effects.

Our schemes of describing Causes, Mechanisms, Effects, and Defenses are based on the collection of sets of specific causes, mechanisms, effects, and defenses into named groupings, but underlying each of these sets there are specific actors, mechanisms, and consequences that could be analyzed at a detailed level. Since the sets we describe are not strictly classification schemes, there are many-to-many mappings between specifics and sets in our descriptions.

The viewpoints in our model represent the notion that there may be many properties of elements of our model that can be related, analyzed, and used by people and by automation to deal with the issues of information protection.

We believe that the value in this particular model lies in two areas: the reduction of complexity from a model based on all of the specifics allows meaningful computations to be performed in reasonable times, while the increased differentiation over simplified models containing only a few items (e.g., corruption, denial of services, information leakage) allows useful results to be derived. This belief will, we hope, be justified by the analyses we are able to perform.

We also feel it is important to note that this is not the only model of this sort available today. Many other authors have tried to form similar models, and we have borrowed freely from them. Some of the models we have considered in our efforts include John Howard's model, the other models he cited and analyzed in his work, and Donn Parker's model, particularly in the area of consequences.

The specific mappings used in our analysis vary with time, in large part because attackers and defenders are always learning, and in smaller part because new mechanisms are being discovered over time. We also find new ways of correlating issues over time, and this adds new viewpoints. The current viewpoints include:

The size of the mapping today is on the order of 37x94/3 links from threats to attacks, 94x140/3 links from attacks to defenses, and 28x250/3 links between these items and the viewpoints. The actual count today is about 7,000 links, which are stored as a numerical table for analysis purposes. While it is impractical to provide that entire table here, it can be obtained from the principal author in electronic form and can be viewed as a linked database on the Internet at http://all.net/

Some Analytical Methods Applicable to the Cause-Effect Model:

Given the above model of cause and effect, we have generated a set of analytical techniques that enable us to do three things:

Indications and Warnings Analysis:

For indications and warnings it is desirable that, based on the information available, a prediction be produced that (1) indicates the possible causes of the observed information and ranks those causes in order of confidence that they could be a cause, (2) based on those indications, predicts other observable mechanisms and consequences associated with those causes, and (3) provides the means to warn potential defenders and victims of consequences prior to their occurrence.

The analysis we perform is based on a set of observed mechanisms. Given a set of mechanisms that are known or thought to be in use, we reverse the cause-effect model to produce a ranking of all causes that could be associated with each of the observed effects. This ranking is based on the confidence level associated with each observed phenomenon and the correlation between the capabilities and characteristics of the causes and the observed mechanisms.
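
To make the shape of this computation concrete, the following sketch (written in Python; the causes, mechanisms, link sets, and confidence values are invented for illustration and are not drawn from the actual link database) scores each candidate cause by aggregating the confidence levels of those observed mechanisms that fall within its capabilities and characteristics:

    # Hypothetical sketch of the cause-ranking step; the real CID link table and
    # aggregation rule are not reproduced here.

    # cause_mechanisms[cause] is the set of attack mechanisms within that cause's
    # capabilities and characteristics (in practice, taken from the link database).
    cause_mechanisms = {
        "insiders": {"data diddling", "process bypassing", "audit suppression"},
        "crackers": {"password guessing", "imperfect daemon exploits"},
        "maintenance people": {"viruses", "device access exploitation"},
    }

    # observed maps each observed mechanism to a confidence level between 0 and 1.
    observed = {"password guessing": 0.9, "audit suppression": 0.4}

    def rank_causes(cause_mechanisms, observed):
        """Score each cause by the total confidence of the observed mechanisms it could use."""
        scores = {}
        for cause, mechanisms in cause_mechanisms.items():
            scores[cause] = sum(confidence for mechanism, confidence in observed.items()
                                if mechanism in mechanisms)
        # Highest score first: the causes most consistent with the observations.
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    for cause, score in rank_causes(cause_mechanisms, observed):
        print(cause, round(score, 2))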

In the current implementation, a selection of mechanisms is made from a menu of checkboxes using a Web browser. For example, we might select the following set of observed mechanisms:

With this selection, analysis yields several results:

Given the resulting metric associated with each cause, we then produce an aggregate metric for each of the mechanisms within the capability and characteristics of those causes. The result is a metric associated with each mechanism indicating how closely it matches the observed phenomena within the context of the model. For the example above:

Given the resulting metric associated with each mechanism, we then produce an aggregate metric for each of the consequences that can result from each of those mechanisms. The result is a metric associated with each consequence indicating how closely it matches the observed phenomena within the context of the model.

These results have been used for several different purposes:

Covering Analysis:

The covering analysis we used is based on the covering analysis used in optimization. The notion of covering analysis is that we have a set of attacks and a set of defenses, where each attack has a metric indicating the relative consequence of its use, each defense has a metric indicating the relative cost of its use, and a covering table indicates whether or to what extent each defense mitigates the consequences of each attack. The analytical challenge is to find an efficient mathematical method for determining an optimal selection of defenses. Optimality can be determined against a set of different measures to produce different results. We have examined two particular measures: (1) minimizing the cost of achieving a particular total level of consequence, and (2) minimizing the consequence given a particular budget for defense.

The classic set covering problem has been widely studied because of its use in airline crew scheduling. We can formulate the defense-allocation problem as a generalization of the set cover problem. Table entries represent the probability that a particular defense can prevent a particular attack. If we assume attack consequences are independent and defense probabilities are independent, then we can calculate the expected consequence from the full set of attacks.
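
Under these independence assumptions, one plausible form of the expected-consequence calculation (our notation, offered as a sketch rather than as the formulation used in the covering table itself) is

    E[C \mid D] = \sum_{a \in A} c_a \prod_{d \in D} \left(1 - p_{a,d}\right)

where A is the set of attacks, c_a is the consequence metric for attack a, D is the selected set of defenses, and p_{a,d} is the covering-table entry giving the probability that defense d prevents attack a; each attack contributes its consequence weighted by the probability that none of the selected defenses stops it.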

Although this computation is significantly more complicated than classic set cover, we can modify much of the previous work on set cover for this setting. In particular, since set cover is a restrictive special case of the defense-allocation problem, hardness results for set cover apply directly. Thus, theoretically, we cannot approximate the best defense set to within a factor of log d, where d is the number of defenses. However, exhaustive methods may be practical for the size of instances generated by small-to-medium sized networks. We have generalized the greedy iterative methods which are asymptotically optimal for set cover. It is also possible that they remain asymptotically optimal in the more general setting. If we also specify a target consequence for each attack rather than for all attacks combined, this problem can be modeled as an integer program similar to the set-cover integer program, and therefore one can apply generalized versions of the vast number of exact and heuristic methods tuned for this application.
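
A minimal sketch of such a greedy method, assuming the expected-consequence model sketched above, follows; the attacks, defenses, costs, and prevention probabilities are purely illustrative and are not taken from the actual covering table. At each step, the defense with the largest reduction in expected consequence per unit cost is added until the budget is exhausted.

    # Hypothetical greedy defense selection under the expected-consequence model.

    attacks = {          # attack -> relative consequence of its use
        "input overflow": 10.0,
        "password guessing": 6.0,
        "viruses": 8.0,
    }
    defenses = {         # defense -> relative cost of its use
        "filtering devices": 3.0,
        "access limitations": 4.0,
        "redundancy": 5.0,
    }
    prevent = {          # (defense, attack) -> probability the defense prevents the attack
        ("filtering devices", "input overflow"): 0.8,
        ("filtering devices", "viruses"): 0.5,
        ("access limitations", "password guessing"): 0.7,
        ("access limitations", "viruses"): 0.4,
        ("redundancy", "viruses"): 0.9,
    }

    def expected_consequence(selected):
        """Sum of consequences weighted by the chance that no selected defense prevents the attack."""
        total = 0.0
        for attack, consequence in attacks.items():
            unprevented = 1.0
            for defense in selected:
                unprevented *= 1.0 - prevent.get((defense, attack), 0.0)
            total += consequence * unprevented
        return total

    def greedy_defenses(budget):
        """Repeatedly add the defense with the best consequence reduction per unit cost."""
        selected, spent = set(), 0.0
        while True:
            base = expected_consequence(selected)
            best, best_ratio = None, 0.0
            for defense, cost in defenses.items():
                if defense in selected or spent + cost > budget:
                    continue
                gain = base - expected_consequence(selected | {defense})
                if gain / cost > best_ratio:
                    best, best_ratio = defense, gain / cost
            if best is None:
                return selected, base
            selected.add(best)
            spent += defenses[best]

    print(greedy_defenses(budget=8.0))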

A covering analysis based on the greedy algorithm is currently implemented. Based on the above example, and with the additional constraints that we wish only to detect corruption and to do so with only commercially available mechanisms, we get the following list of potential detection defense mechanisms:

The covering analysis indicates that these methods have the potential to detect all of the identified mechanisms, that the defenses with the best coverage are (1) time, location, function, and other similar access limitations, (2) redundancy, and (3) filtering devices, and that, in combination, these cover almost all of the identified attack mechanisms. The detailed results of the covering table indicate which classes of defense mechanisms cover which classes of attack mechanisms and use coloring to indicate uncovered attack mechanisms.

Linguistic Analysis:

The third extension is to describe techniques that can be used to analyze systems based on this model. In this extension, we treat the set of all cause, mechanism, effect, defense, and viewpoint sequences as the set of legal statements within a language. We then take a user-specified sentence in the form of menu selections from the set of possible causes, mechanisms, effects, defensive capabilities, and viewpoints, and produce all valid sentences within the language that meet those specifications. A simple example is the input phrase (selected from menus):

Based on this question, the system responds with the following set of applicable attack and defense sentences:

The Extension of These Results to Networked Systems:

The fourth extension is to use these techniques to analyze networked systems. In this analysis, we generate an attack graph by identifying the vulnerabilities of each of a set of systems within a network and characterizing the way in which those systems can communicate. The attack graph is, in essence, the set of all possible sequences of mechanisms that an attacker can use to achieve a particular goal within a network. We analyze attack graphs and defense postures (sets of defensive mechanisms that can be placed in the network) to (1) increase the minimum cost of attack within a fixed budget for defense and (2) minimize the cost of defense for a given attack budget. We then analyze this graph to find, for example, a minimum-cost cut of the attack graph, where cuts represent the effect of protective mechanisms.

The attack graph method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The analysis system requires a database of common attacks broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is then "matched" with the network configuration information and attacker profile to create a "superset" attack graph. Nodes in this graph identify a stage of attack (e.g., the class of machines the attacker has accessed and the privilege levels compromised). Arcs in this graph represent the paths through which mechanisms can be used to change the stage of attack. By assigning costs representing level-of-effort for the attacker (or alternatively probabilities of success) to the arcs, graph analysis algorithms such as shortest-path algorithms can be used to identify the attack paths with the lowest cost (or highest probability) of success. Defense postures (i.e., sets of defensive mechanisms that can be placed in the network) can also be analyzed for their impact on cost (or probability) to increase the minimum cost (or maximum probability) of attack within a fixed budget for defense or minimize the cost (or probability) of successful defense for a given attack budget.
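
As a sketch of the shortest-path step by itself (the stages, arcs, and level-of-effort costs below are invented for illustration and are not drawn from the attack database or any real network), a standard algorithm such as Dijkstra's yields the lowest-cost attack path through such a graph:

    import heapq

    # Hypothetical attack graph: nodes are stages of attack, arc weights are
    # level-of-effort costs for the attacker.
    graph = {
        "outside":       {"dmz user": 3.0, "dial-in user": 5.0},
        "dmz user":      {"dmz root": 4.0, "internal user": 6.0},
        "dial-in user":  {"internal user": 2.0},
        "dmz root":      {"internal user": 1.0},
        "internal user": {"target root": 7.0},
        "target root":   {},
    }

    def cheapest_attack_path(graph, start, goal):
        """Dijkstra's algorithm: return (total cost, path) of the lowest-cost attack."""
        queue = [(0.0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for nxt, effort in graph[node].items():
                if nxt not in visited:
                    heapq.heappush(queue, (cost + effort, nxt, path + [nxt]))
        return float("inf"), []

    print(cheapest_attack_path(graph, "outside", "target root"))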

Once the attack graph has been generated, we can also apply analysis methods to determine high-risk attack paths. The graph may also be used to run simulations of attacks and defenses, both in a batch mode for optimizing a set of fixed defenses and in a real-time mode for predicting the impacts of future attacks given a current situation. This then forms the basis for the simulation components of a proposed capability for model-based situation anticipation and constraint for flexible defenses, as well as a potential method for use in indications and warnings for attacks against information systems and networks.

The CID System:

An automated system (named CID) demonstrates these analyses in useful application. CID is integrated with a set of specially configured hardware, software, and rooms and the HEAT computer network to provide a cyber-warfare experimentation, analysis, training, and gaming environment called the Cyberwarfare Center. In order to understand the role of CID in this environment, we will begin by describing the environment and its uses.

HEAT is a computer network located at Sandia National Laboratories in California. It was originally designed for experimentation with heterogeneous MIMD parallel processing. HEAT consists of more than 60 networked computers (10 each of Suns, SGIs, IBMs, HPs, and DECs, plus a larger number of PCs) running a wide range of operating systems and versions, intended to provide a rich environment for testing computer and network security systems and methods. To our knowledge, this is the largest computer security testbed in service today, and it is used on a daily basis to test attacks and defenses.

In order to manage attacks and defenses in HEAT at an affordable cost, automation was needed. Coincidentally, an earlier version of CID was, at that time, being rewritten to operate on a combination of an Oracle database server and a Netscape web server. This left several computers previously used for CID development available for use in controlling HEAT. This control is complex enough, and watching what is happening on more than 60 computers simultaneously requires so much display space, that a 'situation room' was constructed for this purpose. The capabilities of this room included several projection displays, a small network of computers, and a set of tables and chairs. Given the constraints of space and other facilities, the room was designated for multiple uses, including instruction, strategic gaming, and experimentation with HEAT. Thus the Cyberwarfare Center emerged.

Because experiments on attack and defense against live computer networks require many samples of attack and defense systems, CID, which already had a substantial collection of tools and a database capability, was called into service for coordinating the experimental capabilities needed for work on HEAT, the demonstration capabilities and course materials used in training, the scenarios used for gaming, and the analysis used for experimenting with automated flexible defenses. Today, CID is used for all of these purposes and also forms the core of a repository for research and analysis in information security at Sandia's California site.

CID Management of HEAT

In order to manage HEAT, CID provides a Web-based interface that permits a menu selection of the actions to be performed along with check boxes for the HEAT machines the operation is to be performed on. For example, in order to launch automated attacks against a set of HEAT machines, the user might select "FTP attacks" against the checked machines. Those attacks would then be run with results reported to the browser. The table below depicts this interface with bold italics used in place of check boxes.

Use only FTP Attacks on

SGI-1    IBM-1    HP-1    DEC-1    SUN-1
SGI-2    IBM-2    HP-2    DEC-2    SUN-2
SGI-3    IBM-3    HP-3    DEC-3    SUN-3
SGI-4    IBM-4    HP-4    DEC-4    SUN-4
SGI-5    IBM-5    HP-5    DEC-5    SUN-5
SGI-6    IBM-6    HP-6    DEC-6    SUN-6
SGI-7    IBM-7    HP-7    DEC-7    SUN-7
SGI-8    IBM-8    HP-8    DEC-8    SUN-8
SGI-9    IBM-9    HP-9    DEC-9    SUN-9
SGI-10   IBM-10   HP-10   DEC-10   SUN-10

Similar interfaces either exist or are being completed to allow control of processes, control of defenses in the form of wrapper programs, audit extraction and analysis, and attacks that simulate modeled threat profiles. In addition to this manual sort of management, tools are being contemplated for automating the placement and control of defenses based on a technique called model-based situation anticipation and constraint.

This management interface can run on multiple machines simultaneously, thus allowing an attacker and a defender to participate in a simulated cyber-battle, with the attacker granted only attack capabilities and the defender granted only defensive capabilities based on access controls available in CID. For demonstrations, the attack and defense can run on different displays in the Cyberwarfare Center's training and control facility, with the center screen reserved for projecting briefing material related to the demonstrated attacks and defenses. Through the use of collaborative tools, attackers and defenders in separate gaming suites can be observed from the control facility, with each on a different screen. Game control functions, including sending briefings, status reports, orders, and other information to the participants, are handled via the central display; observers or referees can watch the process from the central site; and trainees can watch the battle either as it happens or in replay. This permits attackers and defenders to review their actions in much the same way as other sorts of exercises provide feedback.

Future experiments are anticipated using automated and human defenses against automated and human attackers both for training humans and for testing automated technologies.

Other Features of CID

In addition to the elements described above, CID provides several important features. In each of the examples above, as well as throughout CID, details of all technical terms are available at the press of a mouse button. This drill-down capability includes citations to relevant literature, which may also have embedded citations. We are in the process of scanning in all of the references related to material in CID for instantaneous access, thus allowing rapid literature search and analysis. CID also has a search engine to allow general searches of content and drill-down material for the purpose of finding examples and other related information. Under attacks and defenses, drill-down capabilities lead to specific examples of techniques, in some cases including source code for attacks and defenses that can be applied directly against HEAT or elsewhere. Detailed drill-down is also provided to allow intel-based detailing of threats and linkage of case examples to all elements of the database. In coming implementations, additional capabilities will be provided for linking alternative classification schemes into CID and allowing the same analysis of those schemes as is provided for CID's internal schemes. In such an online collection, access controls are also required to allow need-to-know access to specific details. CID currently has limited access control on all records and is being augmented to allow fine-grained access control to all database elements, including access-controlled search and analysis. This permits a user with limited access to perform all of the analytical functions based only on the knowledge available, while users with more complete access may find better solutions.

Summary, Conclusions, and Future Work

The use of a cause-effect model for analyzing attacks and defenses in computer networks appears to have a bright future. Initial results seem to indicate that computational complexity can be effectively traded off against model resolution and that this allows analysis of complex networks for interesting security properties to be done. The creation of tools in combination with experimental testbeds has allowed much of this work to be validated through experiments, but clearly this work is still in its infancy. Future work is currently being proposed to move forward from the initial results shown here toward the design and implementation of increasingly automated and integrated tools for analyzing and managing larger networks more effectively.


Properties of the Classification Scheme

Property1: non-orthogonality - Complexity: This property makes analyzing the space as a whole quite difficult.


Property2: synergistic -

Complexity: Synergistic effects are not yet understood fully in this context, however, this makes analysis of attack and defense quite complex and may make optimization impossible until synergies are better understood.


Property3: non-specificity -

Complexity: In some classes (for example viruses) there are more than 10,000 distinct examples known to exist. The broadness of these classes makes them each a substantial area of research.


Property4: descriptive only -

Complexity: Except in those few cases where mathematics has been developed, it is hard to even characterize the issues underlying these classes, much less attempt to provide comprehensive understandings.


Property5: limited applicability -

Complexity: Despite this general property, in most cases, analysis is possible and produces reasonably reliable results over a reasonably short time frame.


Property6: incompleteness -

Complexity: We can only form a complete system by completely characterizing these issues in a mathematical form - something nobody has even approached doing to date.


Actors that may Cause Information System Failure (Threats)


Threat1: insiders - Complexity: Insiders typically have special knowledge of internal controls that is unavailable to outsiders, and they have some amount of access. In some cases, they perform only authorized actions - as far as the information systems have been told. They are typically trusted, and those in control often trust them to the point where placing internal controls against their attacks is considered offensive.


Threat2: private investigators -

Complexity: Investigators are willing to do a substantial amount of targeted work toward accomplishing their goals, and in some cases they may be willing to violate the law. They often have contacts in government and elsewhere that provide information not commonly available, and they commonly use bribes of one form or another to advance their ends.


Threat3: reporters -

Complexity: Reporters often gain access that others do not have, often use misleading cover stories or false pretenses, commonly try to become friendly with insiders in order to get information, and have extraordinary power to publicly punish what they perceive to be or can construe as misdeeds.


Threat4: consultants -

Complexity: Consultants often have insider access but are not controlled as insiders are. Technical consultants who use client information technology present a technical threat, while management consultants, who often have access to more of the sensitive information in a company, present a human threat.


Threat5: vendors -

Complexity: Vendors are often in competition with each other over sales and with you over pricing and terms. They tend to be in long-term relationships and often work closely with your people. Their economic motives are often not aligned with yours, and in some cases they take advantage of information in order to gain economic advantage in negotiations.


Threat6: customers -

Complexity: Customers are often in competition with you over pricing and terms. Their economic motives are often not aligned with yours, and in some cases they take advantage of information in order to gain economic advantage in negotiations. In some cases, customers have worked their way into companies, extracted information, and taken over their suppliers' businesses by taking advantage of the knowledge gained through their interactions.


Threat8: competitors -

Complexity: Competitors are commonly perceived as an economic threat, but in large businesses, they are often collaborators on some projects and competitors on others. As a result, information technology is often used to provide access for some purposes. It can be quite tempting to exploit this access and these relationships in competitive areas.


Threat9: whistle blowers -

Complexity: Whistle blowers are often sincere in their beliefs, have insider access, and sometimes have legitimate cases.


Threat10: hackers -

Complexity: While not generally malicious, these people tend to gather and exploit tools that open holes to other attackers. They also sometimes make mistakes or become afraid and feel they have to cover their tracks, thus causing incidental harm.


Threat11: crackers -

Complexity: These people have tools similar to those of hackers, but they use these tools for malicious purposes and can sometimes cause a great deal of harm. They are often bold, and often exploit indirect links to make it hard to trace them back to their source.


Threat12: club initiates -

Complexity: Club initiates commonly use copy-cat attacks with minor modifications. A typical example is writing minor variants on viruses that bypass a known virus detector.


Threat13: cyber-gangs -

Complexity: These groups are generally willing to exploit commonly known attacks as well as an occasional novel attack. Perception management and dumpster diving are some of their favorite tools. They are often emboldened by group dynamics.


Threat14: tiger teams -

Complexity: These people are usually honest, but sometimes they are not. In addition, they often fail to properly repair the systems they try to break into, thus leaving residual vulnerabilities. Their skills vary widely, from rank amateurs using off-the-shelf software to true experts with a high degree of sophistication. It is often hard to tell which is which unless you are an expert.


Threat15: maintenance people -

Complexity: Maintenance people commonly introduce viruses by accident. They often have far more physical access than even highly trusted employees, they are often allowed in sensitive areas alone and at off-hours, they are usually poorly paid and assumed to have little knowledge, and they are often trusted with items of high value.


Threat16: professional thieves -

Complexity: Professional thieves typically use the best tools they can find, practice ahead of time for major thefts, and use highly coordinated efforts to achieve their goals. They have historically tended toward physical means, but this may be changing.


Threat17: hoodlums -

Complexity: They often extract information in a brutish way, exploiting human frailty and family relationships rather than technical means.


Threat18: vandals -

Complexity: Vandals typically use the path of least resistance, fear being caught, and rapidly flee the scene of the crime.


Threat19: activists -

Complexity: These people can be extremely zealous - even when they are misdirected. They often consider one viewpoint to the exclusion of all others, try to maximize harm to their victim without regard to competitive issues or personal gains, and typically use physical means - sometimes with the additional element of publicity as part of their motive.


Threat20: crackers for hire -

Complexity: These people combine technical skills, tools, and money, and can be quite successful, hard to trace, and difficult to defend against.


Threat21: deranged people -

Complexity: The sky is the limit with a person who doesn't act rationally. The danger is heightened when combined with other threat elements.


Threat22: organized crime -

Complexity: These people tend to have money (but usually don't want to spend it on information system attacks), use physical threats to get what they want, and exploit human weaknesses.


Threat23: drug cartels -

Complexity: These groups typically have a lot of money and are willing to spend it in order to get what they want. They typically want to launder money, eliminate competition, retain control over their dealer networks, and keep law enforcement away. They use violence and physical coercion easily.


Threat24: terrorists -

Complexity: These people typically blow things up and target things that have maximum publicity and effect on everyday people's perceptions of safety.


Threat25: industrial espionage experts -

Complexity: These people tend to be highly skilled, well paid, and stealthy. They tend to use subtle techniques rather than brute force.


Threat26: foreign agents and spies -

Complexity: These people are highly trained, highly funded, backed by substantial scientific capabilities, directed toward specific goals, and skillful at avoiding detection. They can be very dangerous to life and property.


Threat27: police -

Complexity: These people often have powers of search and seizure, are usually poorly paid, wield guns, have powers of arrest, and in much of the world are easily corrupted. They tend to use physical means.


Threat28: government agencies -

Complexity: These groups are highly funded and often made up largely of professionals. They commonly have indirect powers of search and seizure, sometimes wield guns, have indirect powers of arrest, and in much of the world are easily corrupted. They often use highly sophisticated means.


Threat29: infrastructure warriors -

Complexity: These groups typically have access to accurate weapons and high explosives, are oriented toward causing serious physical harm, often have the goal of causing permanent harm, do not hesitate to kill people, and act at the behest of governments with their full and open support.


Threat30: economic rivals -

Complexity: While economic rivals are usually merely competitive, sometimes they become rather extreme in their desire for technical information and attack in order to gain technical expertise. They tend to be well funded, have a lot of expertise, and typically operate from locations which provide legal cover for their actions.


Threat31: nation states -

Complexity: When countries decide to attack other countries in the information arena, they often use stealth to try to provide for plausible deniability; however, this is not always the case, and they often fail to achieve true anonymity. Responses may lead to escalation - and in some cases - escalation can lead to full-scale war.


Threat32: global coalitions -

Complexity: Global coalitions - of corporations, groups, countries, cartels, and other bodies - combine their forces to increase their impact and make it harder to fight them off.


Threat33: military organizations -

Complexity: Militaries tend to blow things up; however, in the more advanced military organizations, information is exploited to maximize their advantage and neutralize opponent capabilities. Physical destruction is often avoided in order to preserve infrastructure that will be used after the conflict has ended. They tend to have and use exotic as well as everyday capabilities.


Threat34: paramilitary groups -

Complexity: Paramilitary groups, militias, and similar organizations tend to be poorly funded and oriented toward physical destruction.


Threat35: information warriors -

Complexity: Information warriors may use any or all of the known techniques as well as techniques developed especially for their use and kept secret in order to attain military advantage. They tend not to kill people unnecessarily.


Threat36: extortionists -

Complexity: Extortion is commonly used to get money in exchange for not causing harm. It is closely related to kidnapping.


Threat37: nature -

Complexity: Most natural phenomena can be characterized by statistics and dealt with using probabilistic techniques.

Mechanisms by Which Information Systems are Caused to Fail (Attacks)


Attack1: errors and omissions -

Attack2: power failure -

Complexity: Power failure is not usually a complex issue to address, although the underlying causes may be.


Attack3: cable cuts -

Complexity: The general issue of cable cutting is quite complex and appears to involve solving many large min-cut problems.


Attack4: fire -

Complexity: The fire issue is not normally a very complex one.


Attack5: flood -

Complexity: Although floods are generally considered relatively simple issues to address, there are occasionally somewhat more complex flooding issues than are anticipated.


Attack6: earth movement -

Complexity: Statistical techniques and historical data appear to be quite sufficient to analyze Earth movement.


Attack7: solar flares -

Complexity: Statistical techniques and historical data appear to be quite sufficient to analyze solar flares.


Attack8: volcanos -

Complexity: Statistical techniques and historical data appear to be quite sufficient to analyze volcanos.


Attack9: severe weather -

Complexity: Statistical techniques and historical data appear to be quite sufficient to analyze severe weather.


Attack10: static -

Complexity: The static issue is not normally a very complex one.


Attack11: environmental control loss -

Complexity: Statistical techniques and historical data appear to be quite sufficient to analyze environmental control losses in most cases.


Attack12: relocation -

Complexity: Statistical techniques and historical data appear to be quite sufficient to analyze relocation.


Attack13: system maintenance -


Attack14: testing -

Complexity: Testing issues are quite complex, and some well-known testing problems are exponential in time and space. Much of the current analysis of protection testing is based on naive assumptions.


Attack15: inadequate maintenance -


Attack16: Trojan horses -

Complexity: Detecting Trojan horses is almost certainly an undecidable problem (although nobody has apparently proven this, it seems clear), but too little mathematical analysis has been done on this subject to provide further clarification.


Attack17: dumpster diving -

Complexity: Statistical techniques and historical data appear to be quite sufficient to analyze dumpster diving.


Attack18: fictitious people -

Complexity: This appears to be a very complex social, political, and analytical issue that is nowhere near being solved.


Attack19: protection missetting exploitation -

Complexity: Setting protections properly is not a trivial matter, but there are linear time algorithms for automating settings once there is a decision procedure in place to determine what values to set protection to. No substantial mathematical analysis has been published in this area and no results have been published for the complexity of building a decision procedure, however it is known that, under some conditions, it is impossible to have settings that both provide all appropriate access and deny all inappropriate access. [Cohen91] It is known to be undecidable for a general purpose subject/object system whether a given subject will eventually gain any particular right over any particular object. [Harrison76]


Attack20: resource availability manipulation -

Complexity: Most of the issues with resource availability result from the high cost of making worst-case resources available. As a result, a tradeoff is made in the design of systems that assures that under some (hopefully unlikely) conditions resources will be exhausted while providing a suitably high likelihood of availability under almost all realistic situations. The general complexity involved with most resource allocation problems in which limited resources are available is at least NP-complete.


Attack21: perception management a.k.a. human engineering -

Complexity: This has been a security issue since the beginning of time and appears to be a very complex human, social, political, and legal issue. No substantial progress has been made to date in resolving this issue.


Attack22: spoofing and masquerading -

Complexity: Although no deep mathematical analysis of this area has been published to date, it appears that this issue does not involve any difficult mathematical limitations. Limited results in providing secure channels have indicated that such a process is not complex but that it may depend on cryptographic techniques in some cases, which lead to substantial mathematical issues.


Attack23: infrastructure interference -

Complexity: Although no mathematical analysis has been published on this issue to date, it appears that analyzing infrastructure interference is quite complex and involves analysis of all of the infrastructure dependencies if the attack is to be directed and controlled. Similarly, the detection and countering of such an attack appears to be quite complex. It would appear that this is at least as complex as solving multiple large min-cut problems. Some initial analysis of U.S. information infrastructure dependencies has been done and has led to a report of about 1,000 pages which only begins to touch the surface of the issue. [SAIC-IW95]


Attack24: infrastructure observation -

Complexity: Except in cases where cryptography, spread spectrum, or other similar technology is used to defend against such an attack, it appears that infrastructure observation is simple to accomplish and expensive to detect. No mathematical analysis has been published to date.


Attack25: insertion in transit -

Complexity: Although there appears to be a widespread belief that insertion in transit is very difficult, in most cases it is technically straightforward. Complexity only arises when defensive measures are put in place to detect or prevent this sort of attack.


Attack26: observation in transit -

Complexity: Except in cases where cryptography, spread spectrum, or other similar technology is used to defend against such an attack, it appears that observation in transit over physically insecure communications media is simple to accomplish and expensive to detect. In cases where the media is secured (e.g., interprocess communication within a single processor under a secure operating system) some method of getting around any system-level protection is also required.


Attack27: modification in transit -

Complexity: Modification in transit is roughly equivalent in complexity to the combination of observation in transit and insertion in transit, however, because of the real-time requirements for some sorts of modification in transit, the difficulty of successful attack may be significantly increased.


Attack28: sympathetic vibration -

Complexity: In some underdamped systems, sympathetic vibration is easily induced. It sometimes even happens accidentally. In over-damped systems, sympathetic vibration requires additional energy. In logical systems - such as protocol driven networks - the complexity of finding an oscillatory behavior is often very low. A simple search of the Internet protocols leads to several such cases. More generally, finding such cases may involve N-fold combinations of protocol elements which is exponential in time and linear in space. Proving that protocols are free of such behaviors is known to be at least NP-complete. [Bochmann77] [Danthine82] [Hailpern83] [Merlin79] [Palmer86] [Sabnani85] [Sarikaya82] [Sunshine79]


Attack29: cascade failures -

Complexity: Only cursory examination of select cascade failures has been completed, but initial indications are that the complexity of creating a cascade failure varies with the situation. In systems operating at or near capacity, cascade failures are easily induced and must be actively prevented or they occur accidentally. [WSCC96] As systems move further away from being tightly coupled and near capacity, cascade failures become far more difficult to accomplish. No general mathematical results have been published to date, but it appears that analyzing cascade failures is at least as complex as fully analyzing the networks in which the cascades are to be created, and this is known for many different sorts of networks.


Attack30: bribes and extortion -

Complexity: This issue is as complex as the general problem of insider attacks. It appears to be uncharacterizable mathematically, but may be modeled by statistical techniques.


Attack31: get a job -

Complexity: This issue is as complex as the general problem of insider attacks. It appears to be uncharacterizable mathematically, but may be modeled by statistical techniques.


Attack32: password guessing -

Complexity: Password guessing has been analyzed in painstaking detail by many researchers. In general, the problem is as hard as guessing a string from a language chosen by an imperfect random number generator. [Cohen85] The complexity of attack depends on the statistical properties of the generator. For most human languages there are about 1.2 bits of information per symbol, [Shannon49] so for an 8-symbol password we would expect about 9.8 bits of information and thus an average of about 500 guesses before success. Similarly, at 2 attempts per user name (many systems use thresholds of 3 bad guesses before reporting an anomaly) we would expect entry once every 250 users. For 8-symbol passwords chosen uniformly and at random from an alphabet of 100 symbols, 5 quadrillion guesses would be required on average.
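
The arithmetic behind these figures can be reproduced in a few lines; the sketch below simply halves the size of the keyspace implied by each entropy estimate, and the estimates themselves are the rough figures quoted above rather than measured values:

    import math

    def expected_guesses(entropy_bits):
        """Average guesses to hit a password of the given entropy,
        treating the 2**entropy_bits possibilities as roughly equally likely."""
        return 2 ** entropy_bits / 2

    # Human-chosen 8-symbol password at roughly 9.8 bits of information:
    print(round(expected_guesses(9.8)))                   # a few hundred guesses
    # At 2 guesses per user name, roughly this many users probed before one entry:
    print(round(expected_guesses(9.8) / 2))
    # Random 8-symbol password over a 100-symbol alphabet:
    print(f"{expected_guesses(8 * math.log2(100)):.1e}")  # about 5e15 (5 quadrillion)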


Attack33: invalid values on calls -

Complexity: In most cases, only a few hundred well-considered attempts are required to find a successful attack of this sort against a program. No mathematical theory exists for analyzing this in more detail, but a reasonable suspicion would be that several hundred common failings make up the vast majority of this class of attacks and that those sorts of flaws could be systematically attempted. There is some speculation that software testing techniques [Lyu95] could be used to discover such flaws, but no definitive results have been published to date.


Attack34: undocumented or unknown function exploitation -

Examples include undocumented system calls commonly inserted by vendors to enable special functions resulting in economic or other market advantages, and program sequences accessible in unusual ways as a result of improperly terminated conditionals. Complexity: Back-doors and other intentional functions are normally either known or not known. If they are known, the attack takes little or no effort. Finding back-doors is probably, in general, as hard as demonstrating program correctness or similar problems that are at least NP-complete and may be nearly exponential depending on what has to be shown. There is some speculation that decision and data flow analysis might lead to the detection of such functions, but no definitive results have been published to date.


Attack35: inadequate notice exploitation -

Complexity: Notice is trivially demonstrated to be given or not given depending on the method of entry. The most effective overall protection from this sort of exploit would be the change of laws regarding certain classes of attacks.


Attack36: excess privilege exploitation -

Complexity: Determining whether a privileged program grants excessive capabilities to an attacker appears, in general, to be as hard as proving program correctness, which is at least NP-complete and may be nearly exponential depending on what has to be shown. Determining what privileges a program should be granted and has been granted may be somewhat easier but no substantial analysis of this problem has been published to date.


Attack37: environment corruption -

Complexity: In most computing environments, there are only a relatively small number of ways that environment variables get set or used. This limits the search for such vulnerabilities substantially, however, the ways in which environmental variables might be used by programs in general is unlimited. Thus the theoretical complexity of identifying all such problems would likely be at least NP-complete. This would seem to give computational leverage to the attacker.


Attack38: device access exploitation -

Complexity: Since hardware devices are, in general, at least as complex as software devices, the complexity of detecting such a flaw would appear to be at least NP-complete. Injecting such a flaw, on the other hand, appears to be quite simple - given physical access to a device.


Attack39: modeling mismatches -

Complexity: There is some theory about the adequacy of modeling, however, there is no general theory that addresses the protection-related issues of modeling flaws. This appears to be a very complex issue.


Attack40: simultaneous access exploitations -

Complexity: This problem has been analyzed in a cursory fashion and the number of possible sequences of events appears to be factorial in the combined lengths of the programs coexisting in the environment. [Cohen94-3] Clearly a full analysis is infeasible for even simplistic situations. It is closely related to the interrupt sequence mishandling problem.


Attack41: implied trust exploitation -

Complexity: In general, analyzing this problem would seem to require analyzing all of the interdependencies of programs. In today's networked environment, this would appear to be infeasible, but no detailed analysis has been published to date.


Attack42: interrupt sequence mishandling -

Complexity: This problem has been analyzed in a cursory fashion and the number of possible sequences of events appears to be factorial in the combined lengths of the programs coexisting in the environment. [Voas93] Clearly a full analysis is infeasible for even simplistic situations. It is closely related to the simultaneous access exploitation problem.


Attack43: emergency procedure exploitation -

Complexity: In most cases, emergency procedures bypass many normal controls, and thus many attacks become feasible during an emergency that would be far more difficult during normal operations. No complexity measure has been made of this phenomenon to date.


Attack44: desynchronization and time-based attacks -

Complexity: This problem appears to be similar in complexity to the interrupt sequence mishandling problem. [Voas93] It appears, in general, to be factorial in the number of time-based decisions made in a system; however, there may be substantial results in the field of communicating sequential processes that lead to far simpler solutions for large subclasses.


Attack45: imperfect daemon exploits -

Complexity: In general, this problem is at least as complex as proving program correctness, which is at least NP-complete and may be nearly exponential depending on what has to be shown. Only a few daemons have ever been shown to avoid large subsets of these exploits [Cohen97] and those daemons are not widely used.


Attack46: multiple error inducement -

Complexity: The limited work on multiple error effects indicates that even the most well-designed and trusted systems fail unpredictably under multiple error conditions. This problem appears to be even more complex than proving program correctness, perhaps even falling into the factorial time and space realm. For an attacker, producing multiple errors is often straightforward, but for a defender to analyze them all is essentially impossible under current theory.


Attack47: viruses -

Complexity: Virus detection has been proven to be undecidable in the general case. [Cohen86] [Cohen84] Viruses are also trivial to write and highly effective against most modern systems.


Attack48: data diddling -

Complexity: Data diddling is a relatively simple task. If the data is writable, it can be easily diddled, and if it is not writable, diddling is impossible until this condition changes.


Attack49: van Eck bugging -

Complexity: van Eck bugging is relatively easy to do and requires only cursory knowledge of electronics and antenna theory. [vanEck85]


Attack50: electronic interference -

Complexity: Simplistic jamming is straightforward; however, power-efficient jamming is necessary in order to have good effect against spread spectrum and similar anti-jamming systems, and this is somewhat more complex to achieve.


Attack51: PBX bugging -

Complexity: In cases where functions that support bugging are provided by the PBX, this attack is straightforward. In cases where no such function is provided, it is essentially impossible. Determining which is the case is non-trivial in general, but in practice it is usually straightforward.


Attack52: audio/video viewing -

Complexity: Audio and video viewing attacks normally depend on breaking into the operating system and then enabling a built-in function. The complexity lies primarily in breaking into the system and not in turning on the viewing function.


Attack53: repair-replace-remove information -

Complexity: This attack requires involvement in the repair process and is normally not directed at a particular victim from its inception but rather directed toward an audience (market segment). There is little complexity involved in carrying out the attack once the position as a repair provider is established.


Attack54: wire closet attacks -

Complexity: Wire closet attacks require only technology knowledge, access to the wire closet, and a goal. The complexity of finding the proper circuits to attack is normally within the knowledge level of a telephone service person or other wire-person.


Attack55: shoulder surfing -

Complexity: This is a trivial attack to carry out.


Attack56: data aggregation -

Complexity: Data aggregation can be quite complex both to perform and to protect against. Some work on protecting against these attacks has led to identifying NP-complete problems, while gathering information through this technique may involve solving a large number of equations in a large number of unknowns and is similar to integer programming problems in complexity.
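
As a toy illustration of the equations-in-unknowns view (the queries and salary figures below are invented), two individually permitted aggregate queries can reveal a value that neither discloses alone:

    # Toy data-aggregation inference: individual salaries are protected, but
    # aggregate (sum) queries over overlapping groups are permitted.
    salaries = {"alice": 50_000, "bob": 62_000, "carol": 71_000}

    sum_all = sum(salaries.values())                          # query 1: whole department
    sum_without_bob = salaries["alice"] + salaries["carol"]   # query 2: a subgroup

    # Subtracting one aggregate from the other solves for bob's salary exactly.
    print(sum_all - sum_without_bob)                          # 62000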


Attack57: process bypassing -

Complexity: This attack is often accomplished by a relatively unsophisticated attacker using only knowledge gained while on the job. The complexity of many such attacks is low; however, in the general case it may be quite difficult to assure that no such attacks exist without a particular level of collusion. No formal analysis has been published to date.


Attack58: content-based attacks -

Complexity: Many content-based attacks are quite simple or are easily derived from published information. They tend to be quick to operate and simple to program. More sophisticated attacks exploiting a content-based flaw may require far more attack prowess. No mathematical analysis has been published of this class of attacks to date.


Attack59: backup theft, corruption, or destruction -

Complexity: Except in cases where backup information is encrypted, back-up attacks are straightforward and introduce little complexity. In the case of aging backup tapes some signal processing capabilities may be required in order to reliably read sections of media, but this is not very complex or expensive.


Attack60: restoration process corruption or misuse -

Complexity: Creating fake backups may be complicated by having to reproduce much of what is present on actual backups on the particular site, by having to create CRC codes for replaced components of a backup and by having to recreate an overall CRC code for the entire backup when altering only one component. None of these operations are very complex and all can be accomplished with near-linear time and space techniques.


Attack61: hangup hooking -

Complexity: These classes of attacks are normally simple to carry out with probabilistic effects depending on the environment.


Attack62: call forwarding fakery -

Complexity: This class of attacks is relatively simple to carry out but often requires a precondition of breaking into a system involved in the forwarding operation.


Attack63: input overflow -

Complexity: In the case of denial of service, these attacks are trivial to carry out with a high probability of success. If the attacker wishes to gain access for more specific results, it is usually necessary to identify characteristics of the system under attack and create a customized attack version for each victim configuration. This is not very complex, but it is time- and resource-consuming.


Attack64: illegal value insertion -

Complexity: Most such attacks are easily carried out once discovered, but systematically discovering such attacks is, in general, similar to the complexity of gray box testing until the first fault is found.


Attack65: residual data gathering -

Complexity: Residual data gathering in the case of simple undeletions or allocating large volumes of space and examining their content is straightforward. Looking for residual data on magnetic media using electromagnetic measurements and electron microscopy is somewhat more complex and requires statistical analysis and correlation of signals in a signal processing component. While this is not trivial, it is within the capability of most electrical engineers and electronics specialists.


Attack66: privileged program misuse -

Complexity: Once a vulnerability has been identified, exploitation is straightforward. Systematically discovering such attacks is, in general, similar to the complexity of gray box testing until the first fault is found.


Attack67: error-induced misoperation - Complexity: Many of these attacks appear to be trivial to accomplish.


Attack68: audit suppression -

Complexity: This class of attacks has not been thoroughly analyzed from a mathematical standpoint, but it appears that in most systems, audit trail suppression is straightforward. It may be far more difficult to accomplish this in a system designed to provide a high assurance of audit completeness.


Attack69: induced stress failures -

Complexity: Although some attacks of this sort appear to be available without substantial effort, in general, understanding the implications of stress on multiprocessing systems is beyond the current theory. It appears from a cursory examination that this is at least as complex as the interrupt sequence problem which appears to be factorial in the number of instructions in each of the simultaneous processes.


Attack70: hardware failure - system flaw exploitation -

Complexity: Discovering hardware flaws is, in general, similar in complexity to discovering software flaws, which makes this problem at least NP-complete.


Attack71: false updates -

Complexity: This attack appears to be easily carried out against many installations and examples have shown that even well-trained and adequately briefed employees fail to prevent such an attack. In cases where relatively secure distribution techniques are used, the complexity may be driven up, but more often than not, the addition of a disk will bypass even this sort of process.


Attack72: network service and protocol attacks -

Complexity: Analyzing protocol specifications to find candidate attacks appears to be straightforward, and implementing many of these attacks has proven within the ability of an average programmer. In general, this problem would appear to be as complex as protocol analysis, which has been studied in depth and shown to be at least NP-complete for certain subclasses of protocol elements.


Attack73: distributed coordinated attacks -

Complexity: Devising DCAs appears to be simple while tracing a DCA to a source can be quite complex. Early results indicate that tracking a DCA to a source is exponential in the number of intermediaries involved, while detecting a high-volume DCA appears to be straightforward.


Attack74: man-in-the-middle -

Complexity: Man-in-the-middle attacks normally require the implementation of a near-real-time capability, but there are no mathematical impediments to most such attacks.


Attack75: selected plaintext -

Complexity: Selected plaintext attacks have differing complexity depending on the system under attack. Attacks on RSA systems have been shown to be linear in time and polynomial in space.


Attack76: replay attacks -

Complexity: Replay attacks are typically simple to perform and require little or no sophistication. In some cases, relatively complex coding may be required in order to reproduce CRC codes or checksums, but this is normally not required for replay attacks.


Attack77: cryptanalysis -

Complexity: Cryptanalysis is a widely studied mathematical area and typically involves a great deal of expertise and computing power against modern cryptographic systems. Cryptanalysis of improperly designed systems and of systems invented before the 1940s is almost universally accomplished by relatively simple automation.


Attack78: breaking key management systems -

Complexity: Many key management attacks require a substantial amount of computing power, but this is normally on the order of only a few million computations to break a key that could not be broken exhaustively under any feasible scheme. The complexity of these attacks tends to be specific to the particular key management system. In many cases, the weakest link is the computer housing the keys and this is often attacked in a relatively small amount of time through other techniques.


Attack79: covert channels -

Complexity: It has been shown that in any system using shared resources in a non-fixed fashion, covert channels exist. They are typically easy to exploit, using Shannon's communications theory to provide arbitrary reliability at a bit rate determined by the bandwidth and signal-to-noise ratio of the covert channel. Avoiding detection depends primarily on remaining below the threshold used by techniques that try to detect covert channel activity.
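
As a rough worked example (the figures below are assumptions, not measurements), Shannon's capacity formula C = B log2(1 + S/N) bounds the reliable bit rate of such a channel:

    from math import log2

    bandwidth_hz = 10.0       # assumed signalling rate offered by the shared resource
    signal_to_noise = 0.5     # assumed S/N ratio seen by the covert receiver

    capacity = bandwidth_hz * log2(1 + signal_to_noise)
    print(round(capacity, 2), "bits per second")   # about 5.85 bits per second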


Attack80: error insertion and analysis -

Complexity: The complexity of error insertion is not known; however, many researchers have recently claimed to have produced efficient and reliable insertion techniques. The mathematics in this area is quite new and definitive results are still pending.


Attack81: reflexive control -

Complexity: The concept of reflexive control is easily understood, and for simplistic automated response systems, finding exploitations appears to be quite simple, but there has been little mathematical work in this area (other than general work in control theory) and it is premature to assess a complexity level at this time. In general, it appears that this problem may be related to the problems in producing and analyzing cascade failures in that causing a desired reflexive reaction with a reasonable degree of control may be quite complex.


Attack82: dependency analysis and exploitation -

Complexity: The analysis of dependencies appears to require substantial detailed knowledge of an operation or similar operations. Finding common critical dependencies appears to be straightforward, but producing desired and controllable effects may be more complex. Mathematical analysis of this issue has not been published to date. Common mode faults and systemic flaws are of particular utility in this sort of attack.


Attack83: interprocess communication attacks -

Complexity: Interprocess communication attacks oriented toward disruption appear to be easily accomplished, but no mathematical analysis of this class of attacks has been published to date.


Attack84: below-threshold attacks -

Complexity: Remaining below detection thresholds is straightforward if the thresholds are known and not possible to guarantee if they are unknown. In most cases, estimates based on comparable policies or widely published standards are adequate to accomplish below-threshold attacks.


Attack85: peer relationship exploitation -

Complexity: Exploiting peer relationships appears to be easily accomplished, requiring only a cursory examination of history for a set of candidate peers and trial and error for exploitation.


Attack86: inappropriate defaults -

Complexity: It may be quite difficult to create a comprehensive list of appropriate defaults for any nontrivial system because the optimal settings are determined by the application. No substantial mathematics has been done on analyzing the complexity of finding proper settings, but many lists of improper defaults published for select operating systems appear to require only linear time and space in the number of files in a system to verify and correct mis-settings.


Attack87: piggybacking -

Complexity: No published measures of complexity for piggybacking attacks have been made to date; however, certain types of these attacks appear to be trivial to carry out.


Attack88: collaborative misuse -

Complexity: Collaborative misuse has not been extensively analyzed mathematically, but limited analysis has been done from a standpoint of identifying effects of collaborations on leakage and corruption in POset networks and results indicate that detecting or limiting collaborative effects is not highly complex if the individual attacks are detectable.


Attack89: race conditions -

Complexity: Race conditions are not easy to detect. In general, they require at least NP-complete time and space and may require factorial time in some cases. Some automated analysis tools have been implemented to detect certain classes of race conditions in source code and have shown promise.


Attack90: strategic or tactical deceptions -

Complexity: In general, deceptions comprise a complex class of techniques, some subclasses of which are known to be undecidable to detect and trivial to create, while other subclasses have not been analyzed.


Attack91: combinations and sequences -

Complexity: Combinations and sequences of attacks are at least as complex as their individual components, and may be more complex to create in coordination. Detection may be less complex because detection of any subset or subsequence may be adequate to detect the combined attack. This has not been studied in any mathematical depth to date.


Attack92: kiting -

Complexity: The complexity of kiting schemes has not been mathematically analyzed in published literature to date, but indications from actual cases are that substantial computing power is required to track substantial kiting schemes. The first case where such a scheme was detected and prosecuted was detected because the kiter's computer failed for a long enough period of time that the set of transactions and delays could no longer be tracked - the kite fell out of the sky.


Attack93: salami attacks -

Complexity: Attacks of this sort are relatively easy to create. Mathematical analysis of the general class of salami attacks has not been done but it seems likely to be similar in complexity to analysis of data aggregation effects.


Attack94: repudiation -

Complexity: Repudiation has been addressed with cryptographic techniques, but for the most part, these techniques are easily broken - in the sense that a person wishing to repudiate a future transaction can always act to make repudiation supportable. In stock trades, this problem has been around for a long time and is primarily addressed by recording all telephone calls (which form the basis for each transaction) and using the recorded message to resolve the issue when a disagreement is identified.


Mechanisms Which May Prevent, Limit, Reduce, or Mitigate Harm (Defenses)


Defense1: strong change control -

Complexity: Strong change control typically triples programming costs, does not permit rapid changes, and prohibits the use of macros and other similar general-purpose mechanisms now widely embedded in software environments. It also prohibits programming in the production system, requires a strong management commitment to the effort, and is limited to environments where the cost and inconvenience are acceptable.



Defense2: waste data destruction -

Complexity: In most cases, well known and reasonably inexpensive techniques are available for destruction of waste. Cost and complexity go up for destruction of some materials in a very short time frame with very high assurance, but this is rarely necessary.


Defense3: detect waste examination -

Complexity: Detecting some classes of waste examination may be quite complex. For example, detecting the gathering of waste product through collection of electromagnetic emanations may be quite hard. In most cases, relatively low-cost solutions are available.


Defense4: sensors -

Complexity: The creation and placement of sensors is usually a reasonably simple matter, but analyzing sensor data and acting upon it is a substantially more complex issue.


Defense5: background checks -

Complexity: The cost of performing background checks can range from as low as a few hundred dollars to tens of thousands of dollars, depending on the depth and level of detail desired and the activities involved in the individual's history.


Defense6: feeding false information -

Complexity: Generating plausible misleading information can be difficult and expensive, but it is often neither.


Defense7: effective mandatory access control -

Complexity: Despite more than fifteen years of substantial theoretical and development efforts and hundreds of millions of dollars in costs, almost no systems to date have been built that provide fully effective mandatory access control for general purpose computing with reasonable performance. This appears to involve many undecidable problems and some theoretical limitations that appear to be impossible to fully resolve. Examples of unsolvable problems include perfect access control decisions and non-fixed shared resources without covert channels. Highly complex problems include viruses, data aggregation controls, and unlimited granularity in access control decision-making.


Defense8: automated protection checkers and setters -

Complexity: If proper protection settings can be decided by a fixed time algorithm, it takes linear time to check (and/or set) all of the protection bits in a system. Making a decision of the proper setting may be quite complex and may interact in non-trivial ways with the design of programs. In many cases, commonly used programs operate in such a way that it is impossible to set privileges properly with regard to the rest of the system - for example database engines may require unlimited read access to the database while internal database controls limit access by user. Since external programs can directly read the entire database, protection should prohibit access by non-privileged users, but since the database fails under this condition, protection has to be set incorrectly in order for other functions to work.
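
A minimal sketch of such a checker, assuming a POSIX-style file system and a deliberately simplified policy (no world-writable files), shows why the checking pass itself is linear in the number of files:

    import os, stat

    def check_tree(root):
        # one lstat and one test per file: linear time in the number of files
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.lstat(path).st_mode
                except OSError:
                    continue
                if mode & stat.S_IWOTH:
                    print("improper setting (world-writable):", path)

    check_tree("/usr/local")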


Defense9: trusted applications -

Complexity: Trust is often given but rarely fully deserved - in programs that is. The complexity of writing and verifying a trusted program is at least NP-complete. In practice, only a few hundred useful programs have ever been proven to meet their specifications and still fewer have had specifications that included desirable security properties.


Defense10: isolated sub-file-system areas -

Complexity: Implementing this functionality is not very difficult, but implementing it so that it cannot be bypassed under any conditions has proven unsuccessful. There appears to be no fundamental reason that this cannot be done, but in practice, interaction with other portions of the system is almost always required - for example - in order to perform input and output, to afford interprocess communication, and to use commonly available system libraries.


Defense11: quotas -

Complexity: Implementing quotas is straightforward and relatively reliable, but it is rarely used in distributed computing environments because of the control complexity, and it is widely considered Draconian by users who encounter quota limits when trying to do legitimate work.


Defense12: properly prioritized resource usage -

Complexity: The resource allocation problem is well known to be at least NP-complete, and in practice, optimal resource allocation is not feasible. Resource limitations are usually addressed by providing more and more resources until the problems go away - but under malicious attack, this rarely works.


Defense13: detection before failure -

Complexity: The general area of fault detection includes many NP-complete, exponential, and factorial problems. The complexity of many other such problems, however, is well within the normal computing capability available today. These issues have to be addressed on a case-by-case basis.


Defense14: human intervention after detection -

Complexity: Reactive human responses are limited to situations in which real-time (within a small number of computer cycles) response is not required, and are limited by the skills of the defenders.


Defense15: physical security -

Complexity: Physical security is expensive and never perfect, but without physical security, any other type of information protection is infeasible. The best attainable physical security has the effect of restricting attacks to insiders acting with expert outside assistance.


Defense16: redundancy -

Complexity: Redundancy costs, and more redundancy costs more. If privacy is to be retained, redundant systems increase the risk of exposure and must also be protected. Avoiding common-mode failures requires that redundancy be implemented using separate and different methods, and this increases costs still further. Redundancy must be analyzed and considered on a case-by-case basis.


Defense17: uninterruptable power supplies and motor generators -

Complexity: This is standard off-the-shelf equipment in widespread use. Periodic verification and testing is needed in order to assure proper operation over time.


Defense18: encryption -

Complexity: Shannon's 1949 paper [Shannon49] on cryptanalysis asserted that, with the exception of the perfect protection provided by the one-time pad, cryptography is based on driving up the workload for the attacker to break the code. The goal is to create computational leverage so that the encryption and decryption processes are relatively easy for those in possession of the key(s) while the same processes are relatively hard for those without the key(s). Proper use of cryptography requires proper key management, which in many cases is the far harder problem. Encryption algorithms which provide the proper leverage are now quite common.
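
A minimal sketch of the one-time pad illustrates both the exception Shannon identified and why key management dominates the problem: the pad must be truly random, as long as the message, kept secret, and never reused (all assumptions made explicit in the comments below):

    import os

    def xor_pad(data, pad):
        # the same XOR operation both encrypts and decrypts
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"attack at dawn"
    pad = os.urandom(len(message))       # key as long as the message, used only once
    ciphertext = xor_pad(message, pad)
    assert xor_pad(ciphertext, pad) == message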


Defense19: overdamped protocols -

Complexity: The complexity of analyzing a protocol for being overdamped is closely related to livelock and deadlock analysis which is probably NP-complete for most current protocols. Nevertheless, protocol analysis is a well-studied field and it is feasible to assure that protocols are overdamped.


Defense20: temporary blindness -

Complexity: No detailed analysis of this technique has been performed to date.


Defense21: fault isolation -

Complexity: If designed into a system, fault isolation is feasible. In cases where fault isolation was not previously considered, a lot of effort may be required to implement isolation - primarily because nobody knows the list of links to be cut in order to form the partition, the location of the physical links that need to be severed, or the effect of partitioning on the two subsets of the network.


Defense22: out-of-range detection -

Complexity: In properly designed systems, legitimate ranges for values should be known and violations should be detectable. Many systems with limited detection capability turn off bounds and similar checking for performance reasons, but few make the implications explicit.
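
As a minimal sketch (the field name and limits are illustrative assumptions, not drawn from any particular system), out-of-range detection reduces to an explicit bounds check at the point of input:

    def check_temperature(value):
        # legitimate sensor range assumed to be -40..125 degrees C
        if not (-40.0 <= value <= 125.0):
            raise ValueError("out-of-range value rejected: %r" % value)
        return value

    for reading in (21.5, 900.0):
        try:
            check_temperature(reading)
            print(reading, "accepted")
        except ValueError as err:
            print(err)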


Defense23: reintegration -

Complexity: Reintegration is relatively straightforward if properly planned beforehand. Otherwise, it may be quite difficult, and sometimes even impossible, to fully reintegrate.


Defense24: training and awareness -

Complexity: It is fairly straightforward to make people aware of most classes of attacks directed toward them and to provide them with defenses against those attacks.


Defense25: policies -

Complexity: Making and disseminating policy has organizational and interpersonal complexity, but is easily implemented.


Defense26: rerouting attacks -

Complexity: All of these techniques are easily implemented at some level.


Defense27: standards -

Complexity: Standards tend to reduce the complexity of meeting assurance requirements by structuring them and sharing the work. They also tend to make interoperability easier to attain.


Defense28: procedures -

Complexity: Developing and implementing standard procedures is not difficult and can be greatly aided by the use of standard procedure notebooks, checklists, and other similar aids.


Defense29: auditing -

Complexity: Generating audit records is not difficult. Care must be taken to secure the audit records from illicit observation and disruption and to prevent audit trails from using excessive time or space.


Defense30: audit analysis -

Complexity: Analyzing audit trails can be quite complex and several NP-complete audit analysis problems appear to have been found.


Defense31: misuse detection -

Complexity: In general, misuse detection appears to be undecidable because it potentially involves detecting all viruses which is known to be undecidable.


Defense32: anomaly detection -

Complexity: In general, anomaly detection involves a tradeoff between false positives and false negatives. [Liepins92]
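
A minimal sketch, assuming a simple statistical baseline (the data and threshold are illustrative only), makes the tradeoff concrete: lowering the threshold k raises false positives, while raising it lets more genuine anomalies pass as false negatives.

    from statistics import mean, stdev

    history = [120, 131, 118, 125, 129, 122, 127]   # e.g. logins per day (assumed)

    def is_anomalous(observation, baseline, k=3.0):
        mu, sigma = mean(baseline), stdev(baseline)
        return abs(observation - mu) > k * sigma

    print(is_anomalous(240, history))   # True: well outside the baseline
    print(is_anomalous(126, history))   # False: within normal variation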


Defense33: capture and punishment -

Complexity: There are considerable technical complexities in tracing detected attacks and many legal complexities in punishment, particularly when international issues are added to the mix.


Defense34: improved morality -

Complexity: Unknown


Defense35: awareness of implications -

Complexity: Unknown


Defense36: periodic reassessment -

Complexity: Periodic reassessment is easily accomplished.


Defense37: least privilege -

Complexity: This is sometimes difficult to achieve in practice because the granularity of control is inadequate for the granularity of functions performed by individuals.


Defense38: financial situation checking -

Complexity: Privacy and cost issues abound.


Defense39: good hiring practices -

Complexity: It's hard to do this right.


Defense40: separation of duties -

Complexity: Analyzing and implementing such controls are not difficult but may involve increased cost.


Defense41: separation of function -

Complexity: In practice, efficiency often takes precedence over effectiveness and separation of function is typically placed lower in priority.


Defense42: multi-person controls -

Complexity: Although some such systems are quite complex, there is no fundamental complexity with such a system that prohibits its use when the risk warrants it.


Defense43: multi-version programming -

Complexity: This technique typically multiplies the cost of implementation by the desired level of redundancy and substantially increases the requirement for detailed specification.


Defense44: hard-to-guess passwords -

Complexity: There are no difficult complexity issues in this technology, but there are some human limitations that limit the value of this technique.
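
A back-of-the-envelope entropy estimate (the lengths and alphabet sizes below are assumptions) makes the human limitation concrete: the guessing work grows with length times log2 of the alphabet size, but only if the characters are actually chosen at random.

    from math import log2

    def entropy_bits(length, alphabet_size):
        # upper bound for a truly random password of this length and alphabet
        return length * log2(alphabet_size)

    print(round(entropy_bits(8, 26), 1))    # ~37.6 bits: 8 random lowercase letters
    print(round(entropy_bits(12, 94), 1))   # ~78.7 bits: 12 random printable characters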


Defense45: augmented authentication devices, time or use variant -

Complexity: Despite several potential vulnerabilities associated with these devices, they are basically encryption devices used for authentication and the complexity issues are similar to those involved in other areas of cryptography.


Defense46: biometrics -

Complexity: While the overall field of biometric identification and authentication is quite complex, these methods are normally used to authenticate identity or differentiate between identities from relatively small populations (typically a few thousand individuals), and this process is feasible with fairly common data correlation techniques.


Defense47: authorization limitation -

Complexity: Limiting authorization is not complex; however, arbitrarily complex restrictions may be devised, and current techniques do not provide for consistency or completeness checks on complex authorization limitation rules.


Defense48: security marking and/or labeling -

Complexity: Labeling is easy to do, but requires systematic procedures that are rigorously followed.


Defense49: classifying information as to sensitivity -

Complexity: The process of determining proper classification of information is quite complex and normally requires human experts who are trained in the field.


Defense50: dynamic password change control -

Complexity: This makes things a bit harder on the users.


Defense51: secure design -

Complexity: It's much harder to make things secure than to make them functional. Nobody knows exactly how much harder, but there is some notion that making something secure might imply verifying the security properties, and this is known to be at least NP-complete.


Defense52: testing -

Complexity: Testing abounds with complexity issues. For example, complete tests are almost never feasible, and methods for performing less-than-complete tests have a tendency to leave major missing pieces.


Defense53: known-attack scanning -

Complexity: This class of detection methods is almost always used to identify a finite subset of an infinite class of attacks, and as such is only effective against commonly used copies of known attacks.


Defense54: accountability -

Complexity: Accountability can be quite complex and the issues are further muddied by legal limitations on some organizations.


Defense55: integrity shells -

Complexity: The objective of an integrity shell is to drive up the complexity of a corruptive attack by forcing the attacker to make a modification that retains an identical cryptographic checksum. The complexity of attack is then tied to the complexity of breaking the cryptographic system.
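
A minimal sketch of the idea (the digest algorithm, table, and file paths are illustrative assumptions, not the original integrity shell design) checks a program's cryptographic checksum against a stored known-good value before allowing it to run:

    import hashlib, subprocess

    KNOWN_GOOD = {"/usr/local/bin/report": "d2a1...0f"}   # digests recorded at install time (placeholder)

    def run_if_intact(path):
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != KNOWN_GOOD.get(path):
            raise RuntimeError("integrity check failed for " + path)
        subprocess.run([path], check=True)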


Defense56: fine-grained access control -

Complexity: In general, determining accessibility of data analyzed by Turing capable programs is an NP-complete problem, [Denning82] and in some cases it may be undecidable.


Defense57: change management -

Complexity: Proper change control demands that, in the production system, no programming capability be available. The verification of the propriety of changes is complex and, in general, may be comparable to proof of program correctness which is well known to be at least NP-complete.


Defense58: configuration management -

Complexity: Configuration management normally requires a tool to describe policy controls, a tool to translate policy into the methods available for protection, and a set of tools which implement those controls on each of the controlled machines. In some cases, policy may be incommensurable with implemented protection capabilities, in other cases, proper configuration may require a substantial amount of effort, and the process of changing from one control setting to the next may introduce unresolvable insecurities.


Defense59: lockouts -

Complexity: The major complexity in lockouts comes when they are used automatically. This creates the possibility that an enemy will use reflexive control to cause denial of services. Analyzing this class of behaviors is quite complex.


Defense60: drop boxes and processors -

Complexity: No major complexity issues arise in this context.


Defense61: authentication of packets -

Complexity: Complexity issues are typically peripheral to the concept of authentication of packets but the use of computational leverage is fundamental to the effectiveness of these techniques against malicious attackers.
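
A minimal sketch, assuming a shared secret key and a keyed hash (HMAC) as the per-packet checksum, shows where the computational leverage comes from: the tag is cheap to compute with the key and infeasible to forge without it.

    import hmac, hashlib

    key = b"shared secret key"                # assumed to be distributed securely
    payload = b"seq=42;payload bytes"

    tag = hmac.new(key, payload, hashlib.sha256).digest()
    packet = payload + tag

    # receiver: recompute the tag and compare in constant time
    body, received_tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    print("authentic" if hmac.compare_digest(expected, received_tag) else "rejected")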


Defense62: analysis of physical characteristics -

Complexity: No fundamental complexity limitations are inherent in the use of physical techniques, however, they may be quite difficult to use in networked environments over which physical processes can not easily be controlled.


Defense63: encrypted authentication -

Complexity: The basic issues in encrypted authentication are how to use encryption to improve the effectiveness of the process and what encryption algorithm to use to attain the desired degree of effectiveness.


Defense64: tempest protection -

Complexity: Tempest protection is not highly complex, but requires substantial engineering effort.


Defense65: increased or enhanced perimeters -

Complexity: There may be substantial cost involved in increasing or enhancing perimeters, and it may be hard to definitively determine when the perimeter is strong enough or wide enough.


Defense66: noise injection -

Complexity: Noise injection is fairly simple to do, but it may be hard to definitively determine when the noise level is high enough to meet the threat.


Defense67: jamming -

Complexity: The basic issue in jamming is the efficient use of limited energy. With enough energy, almost any signal can be overwhelmed, but overwhelming signals across a wide frequency band may take more energy than is easily available, and it is normally relatively easy to locate and destroy such a large noise generation system. The complexity then lies in determining the most effective use of resources for jamming.


Defense68: spread spectrum -

Complexity: The basic issues in spread spectrum are how to rapidly change frequencies and how to schedule frequency sequences so as to make jamming and listening difficult. The former is an electronics problem that has been largely solved even for low-cost units, while the latter is a cryptographic problem equivalent to other cryptographic problems.
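
A minimal sketch of the scheduling half of the problem (the key, slot encoding, and channel count are assumptions) derives each time slot's channel from a keyed hash, so that a listener or jammer without the key cannot predict the hop sequence:

    import hmac, hashlib

    KEY = b"shared hopping key"
    CHANNELS = 79                       # assumed number of available channels

    def channel_for_slot(slot):
        digest = hmac.new(KEY, slot.to_bytes(8, "big"), hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big") % CHANNELS

    print([channel_for_slot(s) for s in range(8)])   # first eight hops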


Defense69: path diversity -

Complexity: The basic issues in path diversity are how to rapidly change paths and how to schedule path sequences so as to make jamming and listening difficult. The former is a logistics problem that has been solved in some cases, and the latter is a cryptographic problem equivalent to other cryptographic problems.


Defense70: quad-tri-multi-angulation -

Complexity: While Euclidean spatial location is fairly straightforward, additional mapping capability is required in order to use this technique in cyber-space, and the number of observation points and complexity of location has not been definitively published. This would appear to be closely related to mathematical results on finding cuts that partition a network.


Defense71: Faraday boxes -

Complexity: The only complexity involved in the design of Faraday cages comes from determining the frequency range to be protected and calculating the size of mesh required to eliminate the relevant signals. Some care must be taken to assure that the shielding is properly in place and fully effective.


Defense72: detailed audit -

Complexity: Audits tend to be quite complex to perform and require that the auditor have knowledge and skills suitable to making these determinations.


Defense73: trunk access restriction -

Complexity: This appears to be of minimal complexity but may involve substantial cost or require special expertise.


Defense74: information flow controls -

Complexity: Flow control has been studied in some detail and there are some published results on the complexity of select aspects. [Denning75] [Denning76] [Cohen86] [Cohen85-2] [Biba77] [Bell73]


Defense75: disconnect maintenance access -

Complexity: This is a relatively simple matter except in large networks or locations where physical access is difficult, expensive, or impossible.


Defense76: effective protection mindset -

Complexity: Dealing with people is always a complex business, but fostering a protection-oriented attitude is not usually difficult if management is supportive.


Defense77: physical switches or shields on equipment and devices -

Complexity: This is not a complex issue to address, but substantial costs may be involved if implemented through a retrofit.


Defense78: trusted repair teams -

Complexity: People-related issues dominate this protection method.


Defense79: inventory control -

Complexity: Inventory control is not a very complex process to do well, but it requires a consistent level of effort and a systematic approach.


Defense80: secure distribution -

Complexity: Some aspects of secure distribution, such as the elimination of covert channels related to bandwidth utilization, can be quite complex - in some cases as complex as cryptography in general - while other aspects are relatively easy to do at low cost.


Defense81: secure key management -

Complexity: Key management is one of the least understood and hardest problem areas in cryptography today, and has been the cause of many cryptosystem failures - perhaps the most widely publicized being the inadequate key management by Germany during World War II that led to rapid decoding of Enigma cyphers. Physical key management is equally daunting and has led to many lock and key design schemes. To date, as far as can be determined from the available literature, no foolproof key management scheme has been devised.


Defense82: locks -

Complexity: Lock technology has been evolving for thousands of years, and the ideas and realizations vary from the simple to the sublime. In general, locking mechanisms can be as complex as general informational and physical properties.


Defense83: secure or trusted channels -

Complexity: In general, trusted channels can be as complex as general informational and physical properties.


Defense84: limited function -

Complexity: It has been shown that systems with even a very limited set of functions can be used to implement general purpose systems. [Turing36] For example, NOR gates can be used to create any digital electronic function, and, subject to tolerance requirements, general purpose physical machines also exist. [Cohen94-4]


Defense85: limited sharing -

Complexity: Effectively limiting sharing with other than purely physical means has proven to be a highly complex issue. For example, more than 20 years of effort has been put forth in the design of trusted systems to try to achieve this goal and it appears that another 20 years will be required before the goal is actually realized.


Defense86: limited transitivity -

Complexity: This is not a highly complex area, but there is a tendency for information to rapidly reach its limit of transitivity and there are restrictions on the granularity at which control can be done without producing high mathematical complexity on time and space used to track transitive flow. [Cohen84] [Cohen94-2]


Defense87: disable unsafe features -

Complexity: This is not difficult to do, but often the unsafe feature is one that is actually used, and this introduces a risk/benefit tradeoff. It is also common to find new vulnerabilities, and if this policy is followed, it may result in numerous changes in how systems are used and thus create operational problems that make the use of many features infeasible.


Defense88: authenticated information -

Complexity: Redundancy required for authentication and the complexity of high assurance authentication combine to limit the effectiveness of this method, however, there is an increasing trend towards its use because it is relatively efficient and reasonably easy to do with limited assurance.


Defense89: integrity checking -

Complexity: In general, the integrity checking problem can be quite complex, however, there are many useful systems that are quite efficient and cost effective. There is no limit to the extent to which integrity can be checked and the question of how certain we are based on which checks we have done is open. As an apparently fundamental limitation, information used to differentiate between two otherwise equivalent things can only be verified by independent means.


Defense90: infrastructure-wide digging hotlines -

Complexity: This requires tracking all infrastructure elements and introduces a substantial risk that gathered and co-located infrastructure information will be exploited by attackers.


Defense91: conservative resource allocation -

Complexity: In general, the allocation problem is at least NP-complete. As a far more important limitation, most systems are designed to handle 90th to 99th percentile load conditions, but the cost of handling worst case load is normally not justified. In such systems, stress-induced failures are almost certain to be possible.


Defense92: fire suppression equipment -

Complexity: This is off-the shelf equipment and its limitations and properties are well understood by fire departments worldwide.


Defense93: fire doors, fire walls, asbestos suits and similar fire-limiting items -

Complexity: This is off-the shelf equipment and its limitations and properties are well understood by fire departments worldwide.


Defense94: concealed services -

Complexity: This type of concealment is often referred to as security through obscurity because the effectiveness of the protection is based on a lack of knowledge by attackers. This is generally held to be quite weak except in special circumstances because it's hard to keep a secret about such things, because such things are often found by clever attackers, and because insiders who have the special knowledge are often the sources of attacks.


Defense95: traps -

Complexity: In general, no theory of traps has been devised, however, it is often easy to set traps and easy to evade traps, and, in legal cases, entrapment may void any attempt at prosecution.


Defense96: content checking -

Complexity: In a limited function system, content checking is limited by the ability to differentiate between correct and incorrect values within the valid input range, while in systems with unlimited function, most of the key things we commonly wish to verify about content are undecidable.


Defense97: trusted system technologies -

Complexity: The certification process introduces substantial complexities and delays, while the advantage of standardization comes from an economy of scale in not having to independently verify the already certified properties of a trusted system for each application.


Defense98: perception management -

Complexity: It is always tricky trying to fool the opponent into doing what you want them to do.


Defense99: deceptions -

Complexity: Deceptions are one of the most interesting areas of information protection but little has been done on the specifics of the complexity of carrying out deceptions. Some work has been done on detecting imperfect deceptions.


Defense100: retaining confidentiality of security status information -

Complexity: Many refer to this practice as security through obscurity. There is a tendency to use weaker protection techniques than are appropriate under the assumption that nobody will be able to figure them out. History shows this to be a poor assumption.


Defense101: regular review of protection measures -

Complexity: Regular review is simple to do, but the quality of the results depends heavily on the reviewers.


Defense102: independent computer and tool use by auditors -

Complexity: It can be quite difficult to attain a high degree of assurance against all hardware and software subversions, particularly in the cases of storage media and deliberately corrupted hardware devices.


Defense103: standby equipment -

Complexity: This is common practice and is not difficult to do.


Defense104: protection of data used in system testing -

Complexity: This is approximately as complex as protecting live data except that test systems can often be physically isolated from the rest of the environment.


Defense105: Chinese walls -

Complexity: Chinese walls can become quite complex if they are being used to enforce fine-grained access controls and if they are expected to deal with the time transitivity of information flow.


Defense106: tracking, correlation, and analysis of incident reporting and response information -

Complexity: This is not a complex thing to do, but it is rarely done well. In general, the analysis of information may be quite complex depending on what is to be derived from it.


Defense107: minimizing copies of sensitive information -

Complexity: There is a tradeoff between the advantage of retaining fewer copies to prevent leakage and having enough copies to assure availability.


Defense108: numbering and tracking all sensitive information -

Complexity: This is a bit expensive but is not complex.


Defense109: independent control of audit information -

Complexity: This is not complex to accomplish with a reasonable degree of assurance, however, under high load conditions, such mechanisms often fail.


Defense110: low building profile -

Complexity: This is not an inherently complex thing to do, but in many companies, computer rooms are showcases and this makes low profiles hard to maintain.


Defense111: minimize traffic in work areas -

Complexity: This is not a highly complex protective measure.


Defense112: place equipment and supplies out of harm's way -

Complexity: This is not hard to do.


Defense113: universal use of badges -

Complexity: Badging technology can be quite complex depending on the control requirements the badges are intended to fulfill.


Defense114: control physical access -

Complexity: Although the complexity of this area is not extreme in the technical sense, there is a substantial body of knowledge on physical access control.


Defense115: separation of equipment so as to limit damage from local events -

Complexity: Separation can be quite a complex issue because emanations can travel over space and can be detected by a variety of means. Within a secured area, for example, Sandia is required to maintain appropriate separation between classified computing equipment and unclassified equipment, phone lines, etc. The required separations are: 6 inches between classified equipment and unclassified equipment, 6 inches between classified equipment and unclassified cables, and 2 inches between classified cables and unclassified cables. Cables include monitor/keyboard/printer cables, phone lines, etc., but NOT power cables. These separations are valid for classified systems up to and including SRD.


Defense116: inspection of incoming and outgoing materials -

Complexity: Because information can be encoded in an unlimited number of ways, it can be arbitrarily complex to determine whether inbound or outbound information is inappropriate. This is equivalent to the general computer virus and Trojan horse detection problems.


Defense117: suppression of incomplete or obsolete data -

Complexity: Tracking obsolescence is rarely done well; however, if tracking is properly done, suppression is not complex.


Defense118: document and information control procedures -

Complexity: Non-repudiation is a fairly complex issue in information systems, typically involving complex cryptographic protocols. In general, it is as complex an issue as cryptography itself.


Defense119: individual accountability for all assets -

Complexity: This is not complex to do, but may require substantial overhead.


Defense120: clear line of responsibility for protection -

Complexity: The challenges of organizational responsibility are substantial because they involve getting people to understand information protection at a level appropriate to their job functions.


Defense121: program change logs -

Complexity: Change control is often too expensive to be justified and is commonly underestimated. Sound change control is hard to attain and approximately doubles the software costs of a system. [Cohen94-2]


Defense122: protection of names of resources -

Complexity: Operations security is a complex field - in large part because of the effects of data aggregation.


Defense123: compliance with laws and regulations -

Complexity: The complexity of the legal system is legend.


Defense124: legal agreements -

Complexity: The complexity of the legal system is legend.


Defense125: time, location, function, and other similar access limitations -

Complexity: The only real complexity issue is in identifying the proper sets of controls over the conditions for access and non-access for each individual relating to each system.


Defense126: multidisciplinary principle (GASSP) -

Complexity: Security is achieved by the combined efforts of data owners, custodians, and security personnel. Essential properties of security cannot be built-in and preserved without other disciplines such as configuration management and quality assurance. Decisions made with due consideration of all relevant viewpoints will be better decisions and receive better acceptance. If all perspectives are represented when employing the least privilege concept, the potential for accidental exclusion of a needed capability will be reduced. This principle also acknowledges that information systems are used for different purposes. Consequently, the principles will be interpreted over a wide range of potential implementations. Groups will have differing perspectives, differing requirements, and differing resources to be consulted and combined to produce an optimal level of security for their information systems.


Defense127: integration principle (GASSP) -

Complexity: The most effective safeguards are not recommended individually, but rather are considered as a component of an integrated system of controls. Using these strategies, an information security professional may prescribe preferred and alternative responses to each threat based on the protection needed or budget available. This model also allows the developer to attempt to place controls at the last point before the loss becomes unacceptable. Since developers will never have true closure on specification or testing, this model prompts the information security professional to provide layers of related safeguards for significant threats. Thus if one control is compromised, other controls provide a safety net to limit or prevent the loss. To be effective, controls should be applied universally. For example, if only visitors are required to wear badges, then a visitor could look like an employee simply by removing the badge.


Defense128: timeliness principle (GASSP) -

Complexity: Due to the interconnected and transborder nature of information systems and the potential for damage to systems to occur rapidly, organizations may need to act together swiftly to meet challenges to the security of information systems. In addition, international and many national bodies require organizations to respond in a timely manner to requests by individuals for corrections of privacy data. This principle recognizes the need for the public and private sectors to establish mechanisms and procedures for rapid and effective incident reporting, handling, and response. This principle also recognizes the need for information security principles to use current, certifiable threat and vulnerability information when making risk decisions, and current certifiable safeguard implementation and availability information when making risk reduction decisions. For example, an information system may also have a requirement for rapid and effective incident reporting, handling, and response. In an information system, this may take the form of time limits for reset and recovery after a failure or disaster. Each component of a continuity plan, continuity of operations plans, and disaster recovery plan should have timeliness as a criterion. These criteria should include provisions for the impact the event (e.g., disaster) may have on resource availability and the ability to respond in a timely manner.


Defense129: democracy principle (GASSP) -

Complexity: It is important that the security of information systems is compatible with the legitimate use and flow of data and information in the context of the host society. It is appropriate that the nature and amount of data that can be collected is balanced by the nature and amount of data that should be collected. It is also important that the accuracy of collected data is assured in accordance with the amount of damage that may occur due to its corruption. For example, individuals' privacy should be protected against the power of computer matching. Public and private information should be explicitly identified. Organization policy on monitoring information systems should be documented to limit organizational liability, to reduce potential for abuse, and to permit prosecution when abuse is detected. The monitoring of information and individuals should be performed within a system of internal controls to prevent abuse. Note: The authority for the following candidate principles has not been established by committee consensus, nor are they derived from the OECD principles. These principles are submitted for consideration as additional pervasive principles.


Defense130: internal control principle (GASSP) -

Complexity: This principle originated in the financial arena but has universal applicability. As an internal control system, information security organizations and safeguards should meet the standards applied to other internal control systems. "The internal control standards define the minimum level of quality acceptable for internal control systems in operation and constitute the criteria against which systems are to be evaluated. These internal control standards apply to all operations and administrative functions but are not intended to limit or interfere with duly granted authority related to development of legislation, rulemaking, or other discretionary policymaking in an organization or agency."


Defense131: adversary principle (GASSP) -

Complexity: Natural hazards may strike all susceptible assets. Adversaries will threaten systems according to their own objectives. Information security professionals, by anticipating the objectives of potential adversaries and defending against those objectives, will be more successful in preserving the integrity of information. It is also the basis for the practice of assuming that any system or interface that is not controlled is assumed to have been compromised.


Defense132: continuity principle (GASSP) -

Complexity: Organizations' needs for continuity may reflect legal, regulatory, or financial obligations of the organization, organizational goodwill, or obligations to customers, board of directors, and owners. Understanding the organization's continuity requirements will guide information security professionals in developing the information security response to business interruption or disaster. The objectives of this principle are to ensure the continued operation of the organization, to minimize recovery time in response to business interruption or disaster, and to fulfill relevant requirements. The continuity principle may be applied in three basic concepts: organizational recovery, continuity of operations, and end user contingent operations. Organizational recovery is invoked whenever a primary operation site is no longer capable of sustaining operations. Continuity of operations is invoked when operations can continue at the primary site but must respond to less than desirable circumstances (such as resource limitations, environmental hazards, or hardware or software failures). End user contingent operations are invoked in both organizational recovery and continuity of operations.


Defense133: simplicity principle (GASSP) -

Complexity: Simple safeguards can be thoroughly understood and tested. Vulnerabilities can be more easily detected. Small, simple safeguards are easier to protect than large, complex ones. It is easier to gain user acceptance of a small, simple safeguard than a large, complex safeguard.


Defense134: periods processing and color changes -

Complexity: It is rather difficult to thoroughly and with certainty eliminate all cross talk between processing on the same hardware at different periods of time.


Defense135: alarms -

Complexity: The technology for detection is quite broad and there are many complexity issues that have not been thoroughly researched. [Cohen96-4]


Defense136: insurance -

Complexity: Insurance is a rather complex issue involving actuarial tables, decisions about whether to self-insure, and a heavy risk management aspect.


Defense137: choice of location -

Complexity: Statistical data is available on physical locations and can often be used to analyze risks with probabilistic risk assessment techniques.


Defense138: filtering devices -

Complexity: Filtering can be quite complex depending on what has to be filtered.


Defense139: environmental controls -

Complexity: These devices are commonplace and well-understood.


Defense140: searches and inspections -

Complexity: This activity is typified by manual labor augmented with various special-purpose devices.