Managing Network Security

Balancing Risk

by Fred Cohen



Series Introduction

Computing operates in an almost universally networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.


The Tightrope

Network security walks a tightrope, trying to balance risk against benefit while hoping not to take a big fall. We often slip and have to grab on quickly, responding to incidents as they occur and working very hard to prevent a catastrophic failure. Up there on the rope, we are constantly balancing and re-balancing.

Now imagine watching a tightrope walker crossing with a long pole that has most of its weight off to one side. The walker would have to lean heavily toward the other side to keep from being pulled off, and that would make the walk all the more precarious. That's what I see in network security today.

Everybody knows that it's easier to stay on the rope if the pole's weight is nearly equally distributed between the walker's two sides. And it's the same with network security. If your protection scheme is properly balanced, it's much easier to keep safe. To give you some idea of what off-balance security looks like, I'll use some examples.


Some Off-Balance Security Programs

Example 1: Cryptography vs. Host Security

Example 2: Physical Security vs. Information Security

Example 3: Technical Safeguards vs. Testing

Each of these situations might be appropriate in some obscure set of circumstances - particularly if very specific threat profiles are present to the exclusion of others - but this is almost never the case and certainly was not the case in any of the real-life situations these examples came from.

Getting the Balance Back

Balancing your information protection program is a risk management function in that imbalances are the result of either too little or too much protection for the level of threat in one or more of the areas where protection is applied.

In example 1, there is usually too little host protection and too much cryptographic protection for the threat. There are at least two good reasons for this: (1) increasing cryptographic protection is generally easy (it usually involves changing a parameter relating to key length), and (2) increasing host protection is usually hard (it usually involves controlling configurations). It is tempting to get as much as you can for as little as you can, and other than compatibility and performance issues, using more bits in your cryptography certainly seems reasonable. But depending on the threat, even this limited effort may be a waste of time. For example, it is worth noting that even with perfect (i.e., unbreakable) cryptography in place, the reduction in risk from typical Internet attacks is negligible. The reason is that breaking cryptosystems is almost never the cause of actual attacks. Even when it is, the break almost never comes from defeating the cryptography itself, but rather from some flaw in the implementation that allows the programs implementing the cryptosystem to be attacked. The solution is to spend more time and effort on host security, perhaps at the expense of that next increase in cryptographic complexity.
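To make the first half of that asymmetry concrete, here is a minimal sketch of how "more cryptographic protection" often amounts to changing one number. It assumes the third-party Python cryptography package, and the key sizes shown are illustrative rather than recommendations:

    # A minimal sketch: "more cryptographic protection" is often just a
    # larger key_size argument. (Assumes the third-party "cryptography"
    # package: pip install cryptography.)
    from cryptography.hazmat.primitives.asymmetric import rsa

    def make_key(bits):
        # Generating a stronger RSA key means changing one parameter.
        return rsa.generate_private_key(public_exponent=65537, key_size=bits)

    modest = make_key(2048)    # illustrative baseline
    stronger = make_key(4096)  # "more protection" - one number changed
    print(modest.key_size, stronger.key_size)

Hardening the host, by contrast, has no single parameter to turn up, which is exactly why the easy side of the pole tends to get all the weight.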

In example 2, we see that imbalance is the norm. This, in my experience, stems from two factors: (1) the changing nature of the threat over time, and (2) the people making protection decisions. The changing nature of the threat relates to the fact that only recently have we become so networked that network security is a dominant factor in the success of defenses. The historical emphasis on, and deep investment in, physical security stems from the long period over which that emphasis was appropriate. Similarly, the low investment in host security stems from the lack of any requirement for it when only a limited number of authorized users had access to hosts. Of course this was not true in all environments, but it is largely true. The people now in charge of information protection are also largely the legacy personnel who were tasked with physical security for so long. This means that we need to update our people (not replace them). Other people in charge of information protection are heavily oriented toward cyber threats and are not as knowledgeable about physical security issues as they might be. They also need to be retrained. Over time, assuming proper analysis is done and its results are used, this imbalance will change.

In example 3, we have an almost universal challenge to face. Very few organizations do adequate protection testing, and as a result, the technical safeguards they implement often fail to provide the coverage they are believed to provide. There are several reasons for this as well, the most common being a lack of awareness that testing is necessary or appropriate and inadequate knowledge of how to do effective testing. These challenges can be met by improved understanding, but more often than not they are met with simplistic methods such as security scanners and similar efforts.
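To see why scanner-style testing is simplistic, consider what such a tool actually does. The minimal Python sketch below checks only whether a few TCP ports accept connections - it says nothing about misconfigurations, insider misuse, or whether the safeguards behind those ports actually work. The host address and port list are placeholders:

    # A minimal sketch of what a simple "security scanner" really tests:
    # whether TCP ports accept connections. It reveals exposure, not
    # whether safeguards actually work. Host and ports are placeholders.
    import socket

    def scan(host, ports, timeout=1.0):
        open_ports = []
        for port in ports:
            try:
                # Attempt a TCP connection; success means the port is open.
                with socket.create_connection((host, port), timeout=timeout):
                    open_ports.append(port)
            except OSError:
                pass  # closed, filtered, or unreachable
        return open_ports

    if __name__ == "__main__":
        print(scan("127.0.0.1", [22, 25, 80, 443]))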

Underlying all of these examples is the fundamental notion of using risk management methods to make prudent decisions about relative levels of protection. Unfortunately, modern risk management techniques in information protection are inadequate to the task, and there are no clear-cut quantitative measures that allow us to make prudent decisions about these tradeoffs. In fact, there isn't even a widely accepted list of what these tradeoffs are. It's hard to do risk management when you don't have a comprehensive list of the options.

Perception of Risk vs. Risk

The nuclear power industry has a big public relations problem. People think of nuclear power as very risky, especially from the standpoint of enormous losses of life. It doesn't matter what the safety record of the nuclear industry is (it is in fact far better than any of the alternatives in terms of loss of life relative to energy generated), because it is not the actual risk that people act on, it is the perceived risk. The same problem is present in the information protection business.

In information protection, leaking of secrets is widely believed to be the greatest risk we face, and since cryptography is one way of protecting secrets, it seems obvious that better and better encryption is always worthwhile. Unfortunately, neither the assumption nor the conclusion is supported by the facts. The assumption - that secrecy is the greatest risk - is wrong in most cases, and earlier articles in this series have discussed this at some length. The conclusion - that better encryption is always worthwhile - is wrong in many cases even if the assumption were right, because even relatively trivial encryption diverts the path of least resistance for most attackers from reading messages to subverting the operating environment.

The perception of risk associated with attack from over the Internet is very high, and while there are certainly threats in this area, statistics gathered year after year continue to show that insiders are involved in 80 percent of the losses known to be incurred from information-related crime. Thus we have Internet attack scanners in widespread use, but very little emphasis on internal protection through such techniques as configuration management and auditing. Again the perception of risk leads to poor judgment.
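As one concrete illustration of the kind of internal safeguard that gets neglected, here is a minimal sketch in Python of a configuration audit: record cryptographic hashes of important files once, then compare later to detect unauthorized change. The watch list and baseline location are placeholders, and a real program would cover far more ground:

    # A minimal sketch of configuration auditing: hash known config files
    # and compare against a stored baseline to detect unauthorized change.
    # The file list and baseline path are placeholders for illustration.
    import hashlib, json, os

    FILES = ["/etc/passwd", "/etc/hosts"]  # placeholder watch list
    BASELINE = "baseline.json"

    def snapshot(paths):
        digests = {}
        for path in paths:
            with open(path, "rb") as f:
                digests[path] = hashlib.sha256(f.read()).hexdigest()
        return digests

    if not os.path.exists(BASELINE):
        with open(BASELINE, "w") as f:
            json.dump(snapshot(FILES), f)  # first run: record the baseline
    else:
        with open(BASELINE) as f:
            old = json.load(f)
        for path, digest in snapshot(FILES).items():
            if old.get(path) != digest:
                print("CHANGED:", path)  # flag for the auditor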

I don't want to enumerate all of these sorts of assumptions here, but I think the point is made. It is the perception of risk that drives risk management in most information protection programs, and it is the reality of risk that must be used in order to make better decisions.


So What Are The Real Risks?

Nobody knows! Having stuck my nose into the middle of this nasty subject, it's time to get it cut off. A very legitimate question is that posed in the title of this section. Unfortunately, my experience with risk management tells me that this is a very complicated issue and that nobody really knows the answer. Furthermore, the real risks change over time, so doing the job right involves predicting the future and investing for it - not something I can be very precise about.

If you think that looking at the past is a good way to predict the future, you're probably right, but not in the way you might think. Predicting the future based on the past is good in terms of things that repeat - such as human behavior. But in terms of technological trend specifics, it's a wild guess at best. We can reasonably predict that there will be technology fads - at least in the United States - largely because that has driven the market for a long time. We can also reasonably predict that executable content will increase, Web-based commerce will increase, and networking will become more endemic in the way we deal with information. No surprises here, nor should there be any. I cannot tell you whether the Linux or MacOS operating system will dominate over the next 20 years - but I'm pretty sure it won't be Windows - talk about going out on a limb!


How Do We Rebalance?

Prudence lies somewhere between "you can't get fired for buying IBM" and "I'm pretty sure it won't be Windows." Risk management has to do with placing your bets so that you will not lose regardless of the real future. In some cases, that means not spending money on prevention because you can't predict well enough, while in other cases it means infrastructure protection investments on a 20-year time frame.

So the effective balancing act must take into account the range of potential futures, the time scales within which we have to adapt, and the budgetary constraints we live under. Then we have the threats both present and future, the vulnerabilities that we have today and will likely have more of tomorrow, and of course the consequences of protection failures over time. It's getting complicated, isn't it? So let's simplify.

I tend to take a simpler approach. Here it is:

Step 1: Bring the standard technical safeguards - including both cryptographic protection and host security - up to a reasonable level of due care, using an accepted standard such as BS7799 or GASSP as the yardstick.

Step 2: Identify the areas where a protection failure would have particularly high consequences for your business, and lean your protection toward fulfilling those requirements.

Step 3: Continue, indefinitely, to update protection as vulnerabilities and threats change over time, improving whatever is weak.


Give Me An Example

Because it is so common today, we'll take a simple example from the cryptography vs. host security imbalance described earlier.

Suppose you are running a Web-based Internet commerce server. You advertise various items over the Web, take credit card information over the Web, process it through your back end computers, and deliver the goods and services you promise. Regardless of your total business volume over the Web, it has grown to be very important to your business.

Where's the balance between cryptography and host security? To quote an old commercial - it's in there.

In step 1, you looked at technical safeguards, which normally include both cryptographic protection and host security. If either was not up to standard, it should now have been brought up to a reasonable level of protection. In most cases, if you were already using SSL, it is unlikely that improvements to the cryptographic protocol would be suggested, but if host security allowed remote access through a known sendmail hole, that should have been identified as a serious weakness and addressed. Similarly, if there was no cryptography in use and the server had strong operating system protection in place, it is likely that cryptographic coverage would be among the mitigation strategies suggested.
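As a rough illustration of the transport half of step 1, here is a minimal sketch in Python of a server socket wrapped in TLS, the successor to the SSL mentioned above. The certificate and key file names are placeholders, and a real commerce server would run a full Web stack on top of this:

    # A minimal sketch of TLS transport protection for a server socket.
    # (TLS is the modern successor to SSL.) Certificate and key paths are
    # placeholders; a real commerce site runs a full web stack over this.
    import socket, ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")

    with socket.create_server(("0.0.0.0", 8443)) as server:
        with context.wrap_socket(server, server_side=True) as tls_server:
            conn, addr = tls_server.accept()  # TLS handshake happens here
            print("TLS connection from", addr)
            conn.close()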

If you used BS7799, you would have encountered questions like "Where are special protections required for electronic data interchange and why? What protections are in place to protect those interchanges and how were they determined?" and "How do you make certain that the person at the other end of a dial-in or network-based access is who they claim to be? How do you make certain that once a connection is established, the user on the other end or the connection in between doesn't change? What is the basis for asserting that this level of assurance is adequate to the business need for protection?"
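Part of what the TLS layer contributes to those identity questions is certificate verification on the connecting side, sketched minimally in Python below. The host name is a placeholder, and note that certificates authenticate the endpoint, not the human using it:

    # A minimal sketch of endpoint authentication: the client verifies the
    # server's certificate and host name before trusting the channel.
    # This authenticates the machine at the other end, not the person.
    import socket, ssl

    HOST = "shop.example.com"  # placeholder host name

    context = ssl.create_default_context()  # verifies certs and host names

    with socket.create_connection((HOST, 443)) as raw:
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            # If we get here, the certificate chain and name checks passed.
            print("verified peer:", tls.getpeercert()["subject"])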

If you used the GASSP standard, you would have had to address this criterion: "Security levels, costs, measures, practices, and procedures should be appropriate and proportionate to the value of and degree of reliance on the information systems and to the severity, probability, and extent of the potential for direct and indirect harm."

In step 2, you might have decided that there were particularly high consequences associated with alteration of the prices specified on your Web server. In this case, you would probably have decided that host security was far more important to mitigation than cryptographic protection of information in transit (though you might also have used cryptographic checksums), and you would have leaned your protection effort toward fulfilling that requirement.
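For the cryptographic-checksum aside, here is a minimal sketch in Python of one way to detect alteration of a price table: compute a keyed checksum (an HMAC) over the price data when it is published and verify it before serving. The file name and key handling are placeholders - in practice the key must be kept out of the attacker's reach:

    # A minimal sketch of a cryptographic checksum over price data: a keyed
    # HMAC detects alteration by anyone who lacks the key. File name and
    # key handling are placeholders; keep the real key off the web host.
    import hashlib, hmac

    KEY = b"placeholder-secret-key"

    def checksum(data: bytes) -> str:
        return hmac.new(KEY, data, hashlib.sha256).hexdigest()

    with open("prices.txt", "rb") as f:
        stored = checksum(f.read())  # computed when prices are published

    # ... later, before serving the page ...
    with open("prices.txt", "rb") as f:
        current = checksum(f.read())
    if not hmac.compare_digest(stored, current):
        raise RuntimeError("price data altered - refuse to serve")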

In step 3, which is ongoing indefinitely, you continue to update host security and cryptographic protection to meet the need as vulnerabilities and threats change over time. Balance is achieved by improving what is weak.


Are We Done Yet?

Not quite. We have discussed the notion of balance in a security program and given some examples of typical imbalances and methods for re-balancing. But I would be remiss if I didn't mention that large systems tend to have large momentum. Re-balancing may take a long time, and if it does, we will never be able to stay on the tightrope. The most important thing to do in order to get and keep your protection balance is to find ways to become nimble in your information protection program. But that discussion is for another day.


About The Author:

Fred Cohen is a Principal Member of Technical Staff at Sandia National Laboratories and a Managing Director of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection. He can be reached by sending email to fred at all.net or by visiting http://all.net/