Leading Attackers Through Attack Graphs with Deceptions

by Fred Cohen+ and Deanna Koike*

May 29, 2002

* Sandia National Laboratories - College Cyber Defenders Program

+ Principal Member of Technical Staff - Sandia National Laboratories

Abstract

This paper describes a series of experiments in which specific deceptions were created in order to induce red teams attacking computer networks to attack network elements in a desired sequence. It demonstrates the ability to control an attacker's path through the use of deceptions and allows us to associate metrics with paths and their traversal.


Background and Introduction

A fairly complete review of the history of deception in this context was recently undertaken, and the reader is referred to it [1] for more details on the background of this area. Experimental results were also recently published, and the reader is referred to that paper [2] for further details of that effort.

One of the key elements in associating metrics with experimental outcomes in our previous papers was the use of attack graphs and time to show differences between attackers acting in the presence and absence of deceptions. After running a substantial number of these experiments, we were able to show that deception is effective, but we explored little about the nature of the attack processes themselves and how they are affected by specific deceptions. One of the things we noticed in these experiments was that patterns seemed to arise in the paths taken through attacks. While such patterns have long been described in the literature that seeks to associate metrics with the design of layered defenses, and in the physical world they have long been exploited to drive prey into kill zones, to date we have not seen examples of defenses in the information arena designed to lead attackers down desired paths.

Our ongoing theoretical work led us to the notion that, in addition to measuring paths through attack graphs over time, we should also be able to design attack graphs so that they would be explored in a particular sequence. By inducing exploration sequences, we should then be able to drive attackers into desired systems and content within those systems. Indeed, if we become good enough at this, we might be able to hold attackers off for specified time periods with specific techniques, change tactics automatically as attackers explore the space so as to continue driving them away from actual targets, and otherwise exploit this knowledge for both deception and counterdeception.

In this paper, we describe a set of experiments in which we used a generic attack graph and specific available techniques to create sets of deceptions and system configurations designed to lead attackers through desired paths in our attack graph.


The Attack Graph

Based on previous work already cited, we developed the following generic attack graph, which is intended to describe, at a specific level of granularity, the processes an attacker might use in attacking a computer system:

The process begins at 'Start' and is divided into a set of 'levels' numbered -4 through 4 inclusive. The attacker starts at level 0 and generally moves toward increasingly negative values as they are taken into a deception and toward increasingly positive values as they succeed at attacking real victims. Lines with arrows represent transitions, and each node in the graph represents a complex process that we do not yet fully understand. Many transitions cross multiple levels of the graph. For example, an attacker in a real system can be led into a deception by 'tripping across' a deception within that system that deflects the attack. In addition, there is a general 'warp' that extends throughout the graph, in the sense that from any given state it is possible to leap directly to any other state; however, this appears to be a fairly low-probability event and has not yet been well characterized.

Two processes are defined here, one starting with a systematic exploration of the target space and the other with random guessing. We have sought other strategies to depict but have found none. It appears that transitions in this attack graph are associated with cognitive processes in the groups, individuals, and systems used in the attack process as they observe, orient, decide, and act on signals from their environment.
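
For concreteness, the graph can be represented directly in code. The following minimal Python sketch is illustrative only: the node names, the edge set, and the warp probability are assumptions for exposition rather than measured values.

    import random

    # Illustrative node -> level map (levels -4..4; negative values are deeper
    # into deception, positive values are progress against real targets)
    LEVEL = {"start": 0, "seek_target": 0,
             "find_false_target": -1, "seek_vuln_fake": -2,
             "try_enter_fake": -3, "exploit_access_fake": -4,
             "find_real_target": 1, "seek_vuln_real": 2,
             "try_enter_real": 3, "exploit_access_real": 4}

    # Simplified ordinary transitions (an assumed subset of the full graph)
    EDGES = {"start": ["seek_target"],
             "seek_target": ["find_false_target", "find_real_target", "seek_target"],
             "find_false_target": ["seek_vuln_fake", "seek_target"],
             "seek_vuln_fake": ["try_enter_fake", "seek_target"],
             "try_enter_fake": ["exploit_access_fake", "seek_vuln_fake"],
             "exploit_access_fake": ["seek_target"],
             "find_real_target": ["seek_vuln_real", "seek_target"],
             "seek_vuln_real": ["try_enter_real"],
             "try_enter_real": ["exploit_access_real", "seek_vuln_real"],
             "exploit_access_real": ["exploit_access_real"]}

    WARP_P = 0.01  # the low-probability 'warp'; its true value is not yet characterized

    def step(node: str) -> str:
        """One transition: usually an ordinary edge, rarely a warp to any node."""
        if random.random() < WARP_P:
            return random.choice(list(LEVEL))
        return random.choice(EDGES[node])

    node = "start"
    for _ in range(20):
        node = step(node)
        print(node, LEVEL[node])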

Our Experimental Design

Early in 2002, we created a series of experiments in which we attempted to design sets of interacting deception-based defenses with the objective of inducing attackers to follow specific paths through the generic attack graph. For example, in our first experiment, we decided to try to induce attackers to (1) seek targets, (2) fail to find real targets, (3) find false targets, (4) attempt to differentiate false targets from real ones, (5) seek other targets, (6) find false targets, (7) differentiate them from other false targets, (8) decide to seek vulnerabilities, (9) try to enter, (10) fail to find vulnerabilities, (11) fail to enter, (12) eventually succeed in gaining limited entry, (13) attempt to exploit access, (14) decide to try to expand access, and (15) continue the process over a period of 4 hours. We will use these numbers in the following paragraphs to associate our mechanisms with the actions we sought to induce.

Our planning process consisted of creating sets of possible targets of attack with characteristics that could be identified and differentiated with different levels of effort using available tools and known techniques. This process was driven by the team's 'assignment' (1), which was to find user systems and try to gain specific information about a criminal conspiracy from those systems. By making the more easily identified targets more obviously false, we were able to induce the behaviors associated with the loop in which attackers (3) find false targets, (4) differentiate them as false, and (5) seek other targets. Similarly, we used (2) concealment techniques to make it difficult to find real targets, so that the attackers would be far more likely to miss them and find false targets instead.

To get attackers to proceed to seek vulnerabilities and try to gain entry, (6) we created real systems that appeared, under normal observation, to be in use. For example, (7) these systems appeared to generate traffic that would commonly be associated with user activity, (8) they apparently had services running on them, they appeared to respond to various probes, and so forth. The goal was for the attackers to become convinced enough that these were legitimate targets to (9) try to gain entry. After (11) some number of failed entry attempts, (12) relatively simple entry paths were found that allowed rapid entry through apparent misconfigurations, and (13) select content implying the need for more access to get to more important content was placed in those computers to (14) entice the attackers to try to escalate privileges in the belief that this might gain them the information they sought. Some of the information that could only be obtained under escalated privileges made it very clear that the system was not the real target, thus driving the attacker back to the target acquisition phase. In addition, IP addresses were changed every few minutes and user access was terminated periodically, causing the attacker to return to the target acquisition and attempted entry processes respectively. It was anticipated that over time these targets would be identified as false and that other targets would be sought. (15) Other, less obvious targets were provided in a similar vein for more in-depth examination. Specific methods associated with these processes are described in a companion paper still in draft form [3]. We also note that the deceptions in these experiments are fully automatic and largely static, in that the same input sequence from the attacker triggers the same response mechanism in the deception system throughout the experiment.
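
As one concrete illustration of these mechanisms, IP rotation of the sort described above can be driven by a simple loop. The sketch below is ours and assumes a Linux host, the 'ip' tool, and a made-up interface and address pool; it is not the specific mechanism used in the experiments.

    import random
    import subprocess
    import time

    IFACE = "eth0"                                    # assumed decoy interface
    POOL = [f"10.0.0.{i}" for i in range(100, 140)]   # assumed unused address range
    ROTATE_SECS = 300                                 # "every few minutes"

    current = None
    while True:
        new = random.choice([ip for ip in POOL if ip != current])
        if current is not None:
            # Drop the old decoy address (requires root privileges)
            subprocess.run(["ip", "addr", "del", f"{current}/24", "dev", IFACE])
        # Bring up the new decoy address, forcing attackers back to target acquisition
        subprocess.run(["ip", "addr", "add", f"{new}/24", "dev", IFACE])
        current = new
        time.sleep(ROTATE_SECS)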

In the first experiment, the systems being defended were on the same network as the attackers and were configured to ignore packets from unauthorized IP addresses. Forged responses to ARP requests were generated for all IP addresses not otherwise in use (2) to prevent ARP information from revealing real targets, and ICMP responses were suppressed to prevent their use in identifying real targets.
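
A forged-ARP responder of this kind can be sketched in a few lines with the scapy packet library. The interface, decoy MAC address, and set of concealed real hosts below are assumptions for illustration, not the configuration used in the experiment.

    # Illustrative forged-ARP responder (requires root and the scapy package)
    from scapy.all import ARP, Ether, sendp, sniff

    IFACE = "eth0"                      # assumed network interface
    DECOY_MAC = "02:00:00:00:00:01"     # assumed locally administered MAC
    IN_USE = {"10.0.0.5", "10.0.0.9"}   # assumed addresses of real machines to leave alone

    def answer(pkt):
        # Answer ARP who-has requests for every address no real machine owns,
        # so real targets cannot be distinguished from unused addresses.
        if pkt.haslayer(ARP) and pkt[ARP].op == 1 and pkt[ARP].pdst not in IN_USE:
            reply = Ether(src=DECOY_MAC, dst=pkt[Ether].src) / ARP(
                op=2, hwsrc=DECOY_MAC, psrc=pkt[ARP].pdst,
                hwdst=pkt[ARP].hwsrc, pdst=pkt[ARP].psrc)
            sendp(reply, iface=IFACE, verbose=False)

    sniff(iface=IFACE, filter="arp", prn=answer, store=0)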

Subsequent experiments were carried out with variations on these design principles. Specifically, we created situations in which we controlled the available information so as to limit the decision processes of attackers. When we wished to hide things, we made them look like the rest of the seemingly all-false environment, and when we wished to reveal things, we made them stand out by making them differentiable in various ways.

Unfortunately, we did not have the resources necessary to carry out a full-fledged study in which the presence and absence of deception, or better and worse planned deceptions, were used to differentiate specific effects and associate statistically meaningful metrics with our outcomes. Strictly speaking, we did not even have the resources to create repeatable experiments. Unlike our earlier experiments [2], in which we ran 5 rounds of each experiment with deception enabled and disabled, we had only one in-house group of attackers available to us, and of course they were tainted by each experience.

As an alternative, we created a series of experiments in which our in-house attack team was guided, unbeknownst to them, and with increasing accuracy, through a planned attack graph. We then carried out an experiment at a conference in which attack groups were solicited to win prizes (up to $10,000) for defeating defenses. The specific deception defenses were intended to induce the attackers to take a particular path through the attack graph. All attack groups acted simultaneously and in competition with each other to try to win prizes by breaking into systems and attaining various goals. No repetitions were possible, and a trained observer who knew what was real and what was deception followed the attacker activities and measured their progress.

Experimental Methodology:

In each case the experiment began with a planning session in which defense team members designed a set of specific deceptions and predicted sequences of steps in the attack graph that they believed attackers would take in attempting to attack real targets. The configuration was documented and implemented and the attack sequences were discussed and put into written form as a series of states and transitions in the attack graph depicted. Numbers were associated with attack graph locations for convenience of abbreviation. These locations in the attack graph can also be roughly associated with the levels used in our previous experiments on deception. The numerical values are as follows:

Number  Node name                         Level
0       Start                              0
1       Seek Target                        0
2       Fail to find false target         -1
3       Find false target                 -1
4       Differentiate (Fake)              -1
5       Think Fake (from 4)                0
6       Think Real (Fake)                 -1
7       Seek Vulnerabilities (Fake)       -2
8       Try to enter (Fake)               -3
9       Exploit Access (Fake)             -4
10      Expand Access (Fake)              -4
11      Find Real Target                   1
12      Differentiate (Real)               1
13      Fail to find false target (Real)   1
14      Don't Know                         0
15      Think Real (Real)                  1
16      Seek Vulnerability (Real)          2
17      Try to Enter (Real)                3
18      Think Fake (Real)                  0
20      Exploit Access (Real)              4
21      Expand Access (Real)               4
30      Select Arbitrary Target            0
31      No Target                          0
32      False Target                      -1
33      Try Arbitrary Exploit             -2
34      Real Target                        1
35      Try Arbitrary Exploit              2
Attack graph numbering
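
This numbering lends itself to direct mechanical analysis. The short sketch below, which is ours rather than part of the experimental tooling, transcribes the table into a map from node number to level so that an observed trace can be scored by the deepest deception level and highest real level it reaches:

    # Node number -> level, transcribed from the table above
    NODE_LEVEL = {0: 0, 1: 0, 2: -1, 3: -1, 4: -1, 5: 0, 6: -1, 7: -2,
                  8: -3, 9: -4, 10: -4, 11: 1, 12: 1, 13: 1, 14: 0,
                  15: 1, 16: 2, 17: 3, 18: 0, 20: 4, 21: 4,
                  30: 0, 31: 0, 32: -1, 33: -2, 34: 1, 35: 2}

    def depth_summary(trace):
        """Return (deepest deception level, highest real level) for a trace."""
        levels = [NODE_LEVEL[n] for n in trace]
        return min(levels), max(levels)

    # Hypothetical trace: sought targets, entered a fake box, never found a real one
    print(depth_summary([0, 1, 2, 1, 3, 4, 6, 8, 9, 8, 1]))  # -> (-4, 0)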

A predicted outcome takes the form of sequences of node numbers, with notes on transitions and loops indicating the anticipated events. For example, the prediction for the first run starts like this:

Sequence Comment
0 Start the run
1 Seek target per assignment
2 Fail to find target (missed topology due to concealment)
3 Find false target via open ports
4,5,1 Obvious dazzlements
1,3,4,6 Limited dazzlements easily differentiated
Example Predictions
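
Predictions written this way can be checked mechanically against the observed stream of node numbers. The following sketch is illustrative only; the observed stream shown is made up rather than experimental data:

    from typing import List

    def occurrences(predicted: List[int], observed: List[int]) -> List[int]:
        """Start indices where a predicted node sequence appears
        contiguously in the observed node stream."""
        n, m = len(observed), len(predicted)
        return [i for i in range(n - m + 1) if observed[i:i + m] == predicted]

    # Hypothetical observed stream from an experiment log
    observed = [0, 1, 2, 1, 13, 1, 3, 4, 6, 8, 1, 3, 4, 6]
    print(occurrences([1, 3, 4, 6], observed))  # -> [5, 10]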

The experiment was run with one of the defense team members taking notes on the sequence of events in terms of the attack graph and identifying associated times. It was necessary for this team member to know specifically which targets were deceptions and which were real in order to accurately identify the location in the attack graph. With a combination of knowledge of the attack graph, the configuration, and background on deception, it is relatively easy to guess which paths are likely to occur under which deceptions. For this reason it was impossible to have the observer not know what predictions were made. Observers were trained not to reveal information about the situation to the attackers; however, this is a less than ideal arrangement. It represents an as-yet-unresolved experimental limitation that can easily produce erroneous results because of the lack of an unbiased observer. Note that the model implicitly assumes that at any time an attacker can revert to a previous state and that there is a low probability that an arbitrary state transition (a warp) can occur at any time from any location to any other. Attack sequence prediction implicitly assumes this sort of backtracking is always possible, and it is not noted unless it is specifically driven as part of the experiment.

To help compensate for this, we introduced two additional controls. During experiments, we videotaped the process so that it could be independently reviewed. After the sequence of experiments, we reviewed the results with those who participated and asked them whether our depictions were accurate.

Experiment 1:

In experiment 1, the following prediction was made and documented prior to the start of the experiment:

Sequence Comment
0 Start the run
1,2,1 Seek target per assignment, fail to find, return to seek
1,3,4,5,1 Find false target via open ports, obvious dazzlements, search on
1,3,4,6 Find false target, limited dazzlements easily differentiated
6,7 or 6,8,7 Obvious things to try don't work
7,8 loop Apparent vulnerability - weak services - some not vulnerable
7,8,9 Locatable vulnerability gives user access with obvious content
9,10 Obvious content not relevant - less obvious apparent but requires privilege
9,8 or 10,8 Internal kill response mechanism kicks user out
6-10,1 or 3-4,1 Rotating IP addresses force search to restart
Experiment 1 - Predictions

Experiment 1 proceeded on 2002-04-12 using a team of 7 university students specializing in computer science, all part of the College Cyber Defenders program at Sandia National Laboratories. These students have strong academic credentials, range from sophomores to graduate students, and have limited experience in computer attack but substantial training and experience in defending computers against attack. They all had several weeks of previous experience with similar deception defense techniques, practice with the available tools, and experience in the experimental environment.

The following measured results were observed (all times relative to experiment start time):

Time Sequence Comment
0 Configure network
0:01 1 Passive network sniffing
0:04 1,2,1,13 Designers forgot 13 would be present when seeking targets
0:40 1,3,4,6 Probe with broadcast ping > consistent ARPs lead to deception box
0:45 6,8 Believe it is a RedHat Linux box, try simple entry, give up too soon
0:46 8,1 Rotating IP addresses force search to restart
0:46 1,3,4,6 Find apparent real (False) target and differentiate rapidly
0:46 6,8,7 Try obvious remote root password, fail, seek vulnerabilities
0:50 7,5,4 Express that this could be a fake box - continue plan
1:01 4,7 See IP addresses changing, associate ssh service
1:20 7,1,2 After group discussion, decide to try other search methods, fail
1:30 8 Using previous results, try to access false target
1:43 8,7 Try other services
1:44 7,8 loop Try various guesses, seek exploits
1:46 8,9 Guess valid password, gain access, see simple false content, read, believe need an exploit to escalate privileges
1:50 9,8 Internal kill response mechanism kicks user out
1:53 8,9 Regain access, identify system more closely, seek exploit
1:55 9,1 Rotating IP addresses force search to restart
1:55 1,3,4,6 Find apparent real (False) target and differentiate rapidly
2:05 6,7 Convinced they need to find ways to escalate privileges, meet to discuss
2:15 1,3,4,6 loop Identify pattern of IP address changes for prediction of next IP address
2:30 6,7 loop Seeking remote root attacks on fake box
2:40 6,7 loop Notice password file (considered but did not try to crack it)
3:10 7,8 Run known remote attack, failed to work
3:15 END Terminated for end of allotted time
Experiment 1 - Observed Behaviors

By comparing sequences, we can readily see that the predicted sequences occur frequently and that there are no radical departures from the paths laid out in the prediction. The following summary of predicted and non-predicted sequences clarifies the comparison:

Predicted        Observed         Comment
0                yes              Obvious
1,2,1            1,2,1,13         Designers forgot 13 would be present - otherwise correct
1,3,4,5,1        no               Attackers were so caught up in 1,3,4,6 that they never returned
1,3,4,6          1,3,4,6          at 0:46, 1:55, 2:15
6,7 or 6,8,7     6,8; 6,8,7; 6,7  at 0:45, 0:46, 2:05, 2:30 (loop)
7,8 loop         7,8              7,8 loop at 1:44, 3:10
7,8,9            8,9              after 7,8 loops at 1:46 (missed 8,9 loop implied below)
9,10             no               never reached
9,8 or 10,8      9,8              at 1:50
6-10,1 or 3-4,1  9,1; 7,1; 8,1    at 0:46, 1:20, 1:55, 2:15
---------------  ---------------  -----------------------------------------------------------
                 1,13             Designers forgot to indicate real targets would be missed
                 8,1              Implicit in all graphs
                 7,5,4,7          Never anticipated this path (0:50-1:01)
                 2,8              Use of previous results for 'direct' jump - part of other sequence
Experiment 1 - Results

The design seems to have worked as intended, driving attackers through specific sequences of attack methods and patterns of attack. For example, there were no instances of unanticipated movement from deception to real targets, no cases in which the attackers found real targets instead of deceptions, and, despite the attackers' awareness of the potential for deception, no strong efforts to seek out new methods to detect other systems as long as the mysteries of the already identified systems were still being unraveled. The paths described by the attack graph were followed as if they were well-worn grooves in the attackers' methods. We also note that the deception was highly effective in that the attackers never moved toward the positive 'levels' of the attack graph.

Experiment 2:

In experiment 2, a more complex scenario was presented involving three networks. The attackers could move from the most distant ('Outside') network to an apparently closer ('DMZ') network, and then to an 'Inside' network, with the provision that once they had moved inward they were considered to have given up at the more distant location. There was thus little impetus to remain on the outside in this experiment. This translates into three somewhat different but interrelated experiments. Each of the three experimental situations was predicted:

Sequence Comment
0 Start - from the outside only active searches will operate
1,2 loop Searches will fail to find real targets
1,13 loop Searches will often fail to find false targets
1,3,4,5 loop Some targets will be declared deceptions
1,3,4,6 A lot of seemingly different false targets will be found, some explored
6,7 loop Attempted remote exploitation may be tried - unlikely to work
6,8 loop Attempted direct entry may be tried very briefly (guest, guest works on some fakes)
8,9 If they gain entry, they will see obvious content
*,1 Lots of returns to 1 because of IP rotation mechanisms
Likely to move to DMZ or Inside soon
Experiment 2 - "Outside" Predictions
Sequence Comment
0 Start - from DMZ passive observation will show content
1,2 loop Often fail to find real targets
1,13 loop Often fail to find false targets
1,3,4,5 loop Some targets will be declared deceptions, much traffic will be dismissed
1,3,4,6 A lot of seemingly different false targets will be found, some explored
6,7 loop Attempted remote exploitation may be tried - unlikely to work
6,8 loop Attempted direct entry will be tried (guest, guest works on some fakes)
7,8 loop Try to find other vulnerabilities and exploit them
8,9 If they gain entry, they will see obvious content
9,10 If they gain entry, they may try to autoguess the root password
8,10 Slim chance they will exploit access to stop IP rotations, defeat deception
*,1 Lots of returns to 1 because of IP rotation mechanisms
Likely to move to Inside soon, perhaps correlate results
Experiment 2 - "DMZ" Predictions
Sequence Comment
0 Start
1,2 loop Often fail to find real targets
1,13 loop Often fail to find false targets
1,3,4,5 loop Some targets will be declared deceptions, much traffic will be dismissed
1,3,4,6 A lot of seemingly different false targets will be found, some explored
6,7 loop Attempted remote exploitation may be tried - unlikely to work
6,8 loop Attempted direct entry will be tried (guest, guest works on some fakes)
7,8 loop Try to find other vulnerabilities and exploit them
8,9 If they gain entry, they will see obvious content
9,10 If they gain entry, they may try to autoguess the root password
8,10 Slim chance they will exploit access to stop IP rotations, defeat deception
1,11,12,18 Some real target information may be found and dismissed
1,11,12,15 Some real target information may be found and thought real
15,16 loop Real targets may be scanned to find possible entry points
15,17 Simple direct entry attempts may be made, denial of service attempts may be made
16,17 Scanned services will yield complex bypass mechanisms, may be bypassed
15,20 Sniffed content may be accumulated to achieve a goal
30,34,35 loop Denial of service attempts against the network in general may be tried
30,31,32,33 loop Denial of service attempts against the network in general may be tried
*,1 Lots of returns to 1 because of IP rotation mechanisms and real target concealment
Experiment 2 - "Inside" Predictions

Experiment 2 used the same attackers as experiment 1, but they were required to split into two teams and work in parallel in the same room. The following measured results were observed:

Time Team 1 Seq Team 2 Seq Comment
0 0 0 Start - Configure networks
0:24 1,2 1,13 loop Ping sweep of network, scripts
1,2,4 Scripts seem to fail - verify tools by testing
0:30 1,2 1,13 loop Arpwatch and ethereal
0:36 1,2 loop 1,2 loop arping, nmap - yield no data
0:49 1,3,4,5 traffic from other group seen and reconciled
1:03 1,3,4,6 1,3,4,6 observe deception traffic and examine
1:15 ==> DMZ team 2 decides to move to DMZ network
1:20 1,3,4,5 dazzled responses to ethereal, arping, ping, tcpdump
1,3,4,6 testing det select returns (traffic actually from team 1)
1:27 ==> DMZ team 1 decides to move to DMZ network
1:27 confusion traffic ceases (other team moved) and confusion occurs
1:30 1,3,4,5 active probing sees fake traffic - actually themselves
1:34 1,3,4,5 seeing lots of content in ethereal - result of team 2's scans
1:39 1,3,4,5 loop 1,3,4,5 loop dazzlement of each by themselves and others
1:42 30,31,33 try arping flood - no reason - no result
1:49 1,3,4,5 loop 1,3,4,5 loop "nmap useless"
2:00 ==> Inside team 1 decides to move to inside network
2:14 1,2 loop confusion by not getting 'fakes' anymore (team 1 gone)
2:29 1,11,12,15 observe real traffic - unsure of what it is
2:47 1,3,4,5 loop found self as only system in network
2:47 ==> Inside team 2 decides to move to Inside network
2:51 1,3,4,6,8,5,1 try to ssh to every IP, mirror ssh to self, return to start
2:56 1,3,4,5 confirmed mirroring behavior, noticed strange packet type
3:05 30,31,33 30,31,33 both teams created 'mirrors' and are mirroring each other into oblivion
3:22 1,3,4,5 loop 1,3,4,5 loop groups seeing different things - starting to talk more, confused
3:45 1,3,4,5 loop hint provides a step forward - into other deceptions
3:50 1,11,12,15 trying DoS against correct network
3:57 1 DoS only partly worked, return to retry
4:00 END
Experiment 2 - Observed Behaviors

It appears that time pressure prevented many of the predicted paths from being explored in this experiment. The exercise was simply too complex for the time available. While we don't yet have a good model for the time associated with detecting and defeating various deceptions, it seems clear that the time factor played a major role in this exercise.

Predicted Observed Comment
0 OUTSIDE yes Obvious OUTSIDE
1,2 loop 1,2 loop 0:24, 0:30, 0:36
1,13 loop 1,13 loop 0:24
1,2,4 0:24 - no data led to doubt results
1,3,4,5 loop 1,3,4,5 0:49
1,3,4,6 1,3,4,6 1:03
6,7 loop no never got to it
6,8 loop no never got to it
8,9 no never got to it
*,1 yes all the time
0 DMZ yes Obvious DMZ 1:15, 1:27
1,2 loop 1,2 loop 2:14
1,13 loop no never got near anything real
1,3,4,5 loop 1,3,4,5 1:20, 1:30, 1:34, 1:39, 1:49, 2:47
1,3,4,6 1,3,4,6 1:20
30,31,33 1:42 arbitrary action with no direction or effect - blowing off steam
6,7 loop no never got to it
6,8 loop no never got to it
7,8 loop no never got to it
8,9 no never got to it
9,10 no never got to it
8,10 no never got to it
*,1 yes all the time
0 INSIDE yes Obvious INSIDE 2:00, 2:47
1,2 loop no never got to it
1,13 loop no never got to it
1,3,4,5 loop 1,3,4,5 loop 2:56, 3:22, 3:45
1,3,4,6 1,3,4,6 2:51
6,7 loop no never got to it
6,8 loop 6,8,5,1 2:51
7,8 loop no never got to it
8,9 no never got to it
9,10 no never got to it
8,10 no never got to it
1,11,12,18 no never got to it
1,11,12,15 1,11,12,15 2:29
15,16 loop no never got to it
15,17 no never got to it
16,17 no never got to it
15,20 no never got to it
30,34,35 loop no never got to it
30,31,33 loop 30,31,33 3:05
*,1 yes all the time
Experiment 2 - Results

Experiment 3:

In experiment 3, we repeated experiment 2 under somewhat different conditions. In this case, 9 hours were provided to the attackers. Attack groups included volunteers at a conference who were attending classes in attacking computer systems, as well as participants in a contest in which they could win thousands of dollars in prizes for defeating the defenses. We used the same predictions for this experiment as for experiment 2.

The following behaviors were observed over a 9-hour period of attempted entry. Times were not accurately kept because the situation was less amenable to close observation.

Time Sequence Comment
0 Configure network - start in Outside network
0:10 1,3,4,5 loop Thought they found computers but were confused
1,3,4,6,7 loop Occasionally thought something was real but then not
1,3,4 loop Thought equipment might be bad.
1,2 loop Never found many real targets
1,13 loop Never found several false targets
5:30 ==> DMZ All decide to move to DMZ network
1,2 loop Never found real targets
1,13 loop Never found several false targets
1,3,4,5 loop
1,13,1 loop
1,11,12,18 loop
1,11,12,15,16,18 loop
1,11,12,15,16,8 loop
30,31,33 Frustration led to random attempts at exploits
9:00 END OF TIME
Experiment 3 - Observed Behaviors

In this experiment, it seems clear that the attackers were less able to make progress. This appears to have a great deal to do with the attackers' level of experience against the defenses in place. Despite having more than twice the available time, the attackers were unable to penetrate many of the deceptions at all and were unable to succeed even against simple targets. It took nearly 8.5 hours before the attackers started taking detailed notes of all the things they saw in order to try to correlate their observations. By comparison, the students who had been trained in red teaming against deceptions in earlier efforts started taking notes immediately.

Predicted Observed Comment
0 OUTSIDE yes Obvious OUTSIDE
1,2 loop 1,2 loop
1,13 loop 1,13 loop
1,3,4,5 loop 1,3,4,5 loop
1,3,4,6 1,3,4,6
6,7 loop 6,7 loop
6,8 loop no never got to it
8,9 no never got to it
*,1 yes all the time
0 DMZ yes Obvious DMZ
1,2 loop 1,2 loop most of the time
1,13 loop 1,13 loop several of them
1,3,4,5 loop 1,3,4,5 loop Much of the time
1,3,4,6 no never got to it
30,31,33 Frustration led to random attempts at exploits
6,7 loop no never got to it
6,8 loop no never got to it
7,8 loop no never got to it
8,9 no never got to it
9.10 no never got to it
8,10 no never got to it
*,1 yes all the time
1,11,12,18 loop Unanticipated, but within the attack graph
1,11,12,15,16,18 loop Unanticipated, but within the attack graph
1,11,12,15,16,8 loop Unanticipated, but within the attack graph
Experiment 3 - Results

The only unpredicted behavior was the movement toward attempts at random exploits (i.e., 30,31,33). It appears that this resulted from frustration in other areas. This is particularly important because we had anticipated that such things could happen but did not understand the circumstances under which they might happen. We now believe that we have a better basis for understanding this behavior and that we will be able to generate conditions that specifically induce or prevent it.

Summary, Conclusions, and Further Work

It appears, based on this limited set of experiments, that in cases where attackers are guided by specific goals, the methods identified in this and previous papers can be used to intentionally guide those attackers through desired paths in an attack graph. Specifically, combining directed objectives with the induction and suppression of signals interpreted by computer and human group cognitive systems makes it possible to induce specific errors in the group cognitive system, producing guided movement through the attack graph.

The ability to guide groups of human attackers and their tools through the deception portions of attack graphs, keeping them away from their intended targets, appears to provide a new capability for the defense of computer systems and networks. This method operated successfully for periods of 4-9 hours against skilled human attack groups with experience in attack and defense and access to high quality tools, and it may operate for far longer periods. The number of experiments of this sort is clearly too small for meaningful statistical data to be gleaned, and further experimental studies are called for to refine these results.

One area of particular interest is the ability of deceptions of this sort to operate successfully over extended periods of time. It appears that these defenses can operate successfully over time, but it also seems clear that with ongoing effort an attacker will eventually come across a real system and penetrate it unless these defenses lead to adaptation of the defensive scheme. We foresee a need for additional metrics of time and information-theoretic results to understand how long such deceptions can realistically be depended upon and to what extent they will remain effective over time in both static and adaptive situations.

References:

[1] Fred Cohen, Dave Lambert, Charles Preston, Nina Berry, Corbin Stewart, and Eric Thomas, "A Framework for Deception", 2001.

[2] Fred Cohen, Irwin Marin, Jeanne Sappington, Corbin Stewart, and Eric Thomas, "Red Teaming Experiments with Deception Technologies", 2001.

[3] Fred Cohen, Deanna Koike, "Errors in the Perceptions of Computer-Related Information", 2002.