Computing operates in an almost universally networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
Network security walks a tightrope, trying to balance risk against benefit while hoping not to take a big fall. We often slip and have to grab on quickly, responding to incidents as they occur and working very hard to prevent a catastrophic failure. Up there on the rope, we are constantly balancing and re-balancing.
Now imagine watching a tightrope walker crossing with a long pole, most of its weight off to one side. The walker would have to lean heavily to the other side to keep from being pulled off, and that would make the walk all the more tenuous. That's what I see in network security today.
Everybody knows that it's easier to stay on the rope if the pole's weight is distributed nearly equally between the walker's two sides. It's the same with network security: if your protection scheme is properly balanced, it's much easier to stay safe. To give you some idea of what off-balance security looks like, I'll use some examples.
Example 1: Cryptography vs. Host Security
One of the most common discussions I observe is the seemingly eternal debate over cryptography. A sub-debate concerns whether the Data Encryption Standard (DES) is adequate for some purpose (today it usually is not - but that's another issue). This debate centers on how many days of effort by sufficiently skilled and well-resourced cryptanalysts are required to do something bad to a system that uses DES to protect some function.
Now this is all a fine and interesting discussion, but I almost never hear anyone who is deciding between one technology and another mention that the system they are using cryptography to protect has scores of other technical vulnerabilities - vulnerabilities that can be exploited remotely, regardless of the cryptographic system in place, in a matter of seconds with off-the-Internet attack tools.
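A back-of-envelope calculation makes the imbalance concrete. This is only a sketch: the search rate and exploit time below are assumptions for illustration, not measurements, though the 2^56 DES keyspace is real.

```python
# Compare the effort to brute-force DES against the effort to run a
# known remote exploit. The search rate and exploit time are assumed
# figures for illustration only.
DES_KEYSPACE = 2 ** 56          # the real size of the DES key space

keys_per_second = 10 ** 9       # assumed rate for a well-resourced attacker
seconds = DES_KEYSPACE / keys_per_second
days = seconds / 86_400
print(f"Exhaustive DES search at 1e9 keys/s: about {days:,.0f} days")

# Contrast: a known remote hole attacked with an off-the-Internet tool
# succeeds in seconds, regardless of the cryptography in use.
exploit_seconds = 10            # assumed
print(f"Known remote exploit: about {exploit_seconds} seconds")
```

Whatever rate you assume, the point survives: the attacker takes the path that costs seconds, not the one that costs years.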
Example 2: Physical Security vs. Information Security
This balance seems to be particularly difficult to strike, but it's hard for me to understand why. I have seen organizations that engage in heated debate over whether they should add a third layer of firewall to their network when their facilities are essentially publicly accessible. In my consulting career, I have walked into buildings, found my way to floors containing financial details, and gained access to all the information I needed to steal millions of dollars, while the corporations were debating issues related to whether to allow users to add one more protocol to their firewall operations.
On the other end of the spectrum, I have seen organizations where physical security is highly sophisticated and multi-layered, to the point where gaining physical access to a computer system almost certainly means rapid capture by armed guards. In those same organizations, I commonly see insiders loading executable programs from Internet sites with no idea of the integrity of the software they are downloading. They place this software on their systems and run it without further protection, and they have no safeguards against exploitation through this method.
Example 3: Technical Safeguards vs. Testing
This balance is almost always one-sided - away from testing and toward untested technical safeguards. While I appreciate that testing may not sound very appealing, according to most developers whose writing on the subject I have read, testing is about half of the effort involved in writing software.
I have personally tested scores of systems as part of the audit process, and I can tell you that almost every system I have audited has had gaping holes that were easily revealed by even simplistic testing. In almost every case, the people who implemented these systems insisted that the thing I was trying would fail, and after I demonstrated the vulnerability by testing, they were able to fix it relatively quickly. If they had only taken the time to do simple tests, they would have known that their technical safeguards were ineffective and been able to make them more effective; instead, they almost universally added more safeguards.
Each of these situations might be appropriate in some obscure set of circumstances - particularly if very specific threat profiles are present to the exclusion of others - but this is almost never the case and certainly was not the case in any of the real-life situations these examples came from.
Balancing your information protection program is a risk management function in that imbalances are the result of either too little or too much protection for the level of threat in one or more of the areas where protection is applied.
In example 1, there is usually too little host protection and too much cryptographic protection for the threat. There are at least two good reasons for this: (1) increasing cryptographic protection is generally easy (it usually involves changing a parameter related to key length), and (2) increasing host protection is usually hard (it usually involves controlling configurations). It is tempting to get as much as you can for as little as you can, and aside from compatibility and performance issues, using more bits in your cryptography certainly seems reasonable. But depending on the threat, even this limited effort may be a waste of time. It is worth noting that even with perfect (i.e., unbreakable) cryptography in place, the reduction in risk from typical Internet attacks is negligible. The reason is that breaking cryptosystems is almost never the cause of actual attacks. Even when it is, the break almost never comes from defeating the cryptography itself, but rather from some flaw in the implementation that allows the programs implementing the cryptosystem to be attacked. The solution is to spend more time and effort on host security, perhaps at the expense of that next increase in cryptographic complexity.
In example 2, we see that imbalance is the norm. This, in my experience, stems from two factors: (1) the changing nature of the threat over time, and (2) the people making protection decisions. The changing nature of the threat relates to the fact that we have only recently become so networked that network security is a dominant factor in the success of defenses. The historical emphasis and deep investment in physical security stems from the long period over which that emphasis was appropriate. Similarly, the low investment in host security stems from the lack of a requirement when only a limited number of authorized users had access to hosts. Of course this was not true in all environments, but it is largely true. The people now in charge of information protection are also largely the legacy personnel who were tasked with physical security for so long. This means that we need to update our people (not replace them). Other people in charge of information protection are heavily oriented toward cyber threats and are not as knowledgeable about physical security issues as they might be. They also need to be retrained. Over time, assuming proper analysis is done and the results are used, this imbalance will change.
In example 3, we have an almost universal challenge to face. Very few organizations do adequate protection testing, and as a result, the technical safeguards they implement often fail to provide the coverage they are believed to provide. There are several reasons for this, the most common being a lack of awareness that testing is necessary or appropriate, and inadequate knowledge of how to do effective testing. These challenges can be met by improved understanding, but more often than not they are met by simplistic methods such as the use of security scanners and similar efforts.
Underlying all of these examples is the fundamental notion of using risk management methods to make prudent decisions about relative levels of protection. Unfortunately, modern risk management techniques in information protection are inadequate to the task, and there are no clear-cut quantitative measures that allow us to make prudent decisions about these tradeoffs. In fact, there isn't even a widely accepted list of what these tradeoffs are. It's hard to do risk management when you don't have a comprehensive list of options.
In the nuclear power industry, they have a big public relations problem. People think of nuclear power as being very risky, especially from the standpoint of enormous losses of life. It doesn't matter what the safety record of the nuclear industry is (it is in fact far better than any of the alternatives in terms of loss of life relative to energy generated), because it is not the actual risk that people act on, it is the perceived risk. The same problem is present in the information protection business.
In information protection, leaking of secrets is widely believed to be the greatest risk we face, and since cryptography is one way of protecting secrets, it seems obvious that better and better encryption is always worthwhile. Unfortunately, neither the assumption nor the conclusion is supported by the facts. The assumption - that secrecy is the greatest risk - is wrong in most cases, and earlier articles in this series have discussed this at some length. The conclusion - that better encryption is always worthwhile - is wrong in many cases even if the assumption were right, because even relatively trivial encryption diverts the path of least resistance for most attackers from reading messages to subverting the operating environment.
The perception of risk associated with attack from over the Internet is very high, and while there are certainly threats in this area, statistics gathered year after year continue to show that insiders are involved in 80 percent of the losses known to be incurred from information-related crime. Thus we have Internet attack scanners in widespread use, but very little emphasis on internal protections such as configuration management and auditing. Again the perception of risk leads to poor judgment.
I don't want to enumerate all of these sorts of assumptions here, but I think the point is made. It is the perception of risk that drives risk management in most information protection programs, and it is the reality of risk that must be used in order to make better decisions.
Nobody knows! Having stuck my nose into the middle of this nasty subject, it's time to get it cut off. A very legitimate question is the one posed in the title of this section. Unfortunately, my experience with risk management tells me that this is a very complicated issue and that nobody really knows the answer. Furthermore, the real risks change over time, so doing the job right involves predicting the future and investing for it - not something I can be very precise about.
If you think that looking at the past is a good way to predict the future, you're probably right, but not in the way you might think. Predicting the future based on the past is good in terms of things that repeat - such as human behavior. But in terms of technological trend specifics, it's a wild guess at best. We can reasonably predict that there will be technology fads - at least in the United States - largely because that has driven the market for a long time. We can also reasonably predict that executable content will increase, Web-based commerce will increase, and networking will become more endemic in the way we deal with information. No surprises here, nor should there be any. I cannot tell you whether the Linux or MacOS operating system will dominate over the next 20 years - but I'm pretty sure it won't be Windows - talk about going out on a limb!
Prudence lies somewhere between "you can't get fired for buying IBM" and "I'm pretty sure it won't be Windows." Risk management has to do with placing your bets so that you will not lose regardless of the actual future. In some cases, that means not spending money on prevention because you cannot predict well enough; in other cases, it means infrastructure protection investments on a 20-year time frame.
So the effective balancing act must take into account the range of potential futures, the time scales within which we have to adapt, and the budgetary constraints we live under. Then we have the threats, both present and future; the vulnerabilities that we have today and will likely have more of tomorrow; and of course the consequences of protection failures over time. It's getting complicated, isn't it? So let's simplify.
I tend to take a simpler approach. Here it is:
Step 1: Except in extreme circumstances, all aspects of protection need to meet some standard of due care. If any of them is very weak, there are enough threats out there to exploit the vulnerabilities. Unless consequences are very low - and that includes liability issues - we need at least the standard level of protection. My standards tend to be fairly high compared to most. For example, I think that people should not be able to crawl in and out of a secured area under a computer floor undetected - but that's just me. Anything that easy should be blocked no matter what form it comes in - informational, systemic, or physical.
Step 2: Higher consequences lead to more attention. I often don't examine items in a corporate environment in significant detail simply because the consequences of anything that could happen involving those items are too inconsequential to investigate relative to the other items within the constraints of time and budget available for the task. The higher the value, the harder I work at understanding threats that might defeat the standard level of protection, vulnerabilities that are not fully covered by that protection, and consequences of those threats exploiting those vulnerabilities.
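The triage in Step 2 can be sketched in a few lines. This is only an illustration of the idea, not a real risk model: the items, dollar figures, and threshold below are all hypothetical.

```python
# Rank items by worst-case consequence and spend scarce review effort
# on the high-consequence ones first. All names and figures are
# hypothetical examples.
items = {
    "payroll database": 5_000_000,
    "public web brochure": 1_000,
    "customer credit cards": 10_000_000,
    "cafeteria menu server": 100,
}

# Skip anything whose worst case is too small to justify review time.
threshold = 10_000
worth_reviewing = sorted(
    (name for name, loss in items.items() if loss >= threshold),
    key=lambda name: items[name],
    reverse=True,
)
print(worth_reviewing)
# ['customer credit cards', 'payroll database']
```

The point of the sketch is the ordering, not the numbers: effort flows to where a protection failure would cost the most.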
Step 3: Revisit everything often. Balancing risk is not a one-time affair. Like the person on the tight rope, constant adjustment is needed, and if you let yourself get too out of balance, you will not be able to recover before the fall. I constantly track changes in the environment and try to keep an understanding of how they might impact my previous decisions. When I come across something important, I make adjustments as soon as possible. I also track trends and try to think about what the future holds. I sometimes prepare for futures before they happen - but I also avoid over-committing.
Because it is so common today, we'll take a simple example from the cryptography vs. host security imbalance described earlier.
Suppose you are running a Web-based Internet commerce server. You advertise various items over the Web, take credit card information over the Web, process it through your back end computers, and deliver the goods and services you promise. Regardless of your total business volume over the Web, it has grown to be very important to your business.
Step 1: Due diligence: I usually use a list of areas that have to be covered and do an external review. My favorite list includes management, policy, standards, procedures, documentation, audit, testing, technical safeguards, personnel, incident handling, legal, physical, awareness, training, education, and organization. If you like, you can go with the BS7799 or GASSP standards instead. This sort of review process should produce lists of areas where coverage is strong, moderate, and weak. Anywhere coverage is weak, it should be brought up to moderate.
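The output of such a review can be summarized very simply. Here is a minimal sketch using the coverage areas from my list; the strong/moderate/weak ratings assigned below are hypothetical, standing in for the findings of a real external review.

```python
# Rate each coverage area from the due-care review and flag anything
# weak for remediation. The ratings here are hypothetical.
AREAS = ["management", "policy", "standards", "procedures",
         "documentation", "audit", "testing", "technical safeguards",
         "personnel", "incident handling", "legal", "physical",
         "awareness", "training", "education", "organization"]

ratings = {area: "moderate" for area in AREAS}          # default finding
ratings.update({"audit": "weak", "testing": "weak",     # assumed findings
                "physical": "strong"})

weak = [area for area in AREAS if ratings[area] == "weak"]
print("Bring up to moderate:", weak)
# Bring up to moderate: ['audit', 'testing']
```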
Step 2: High consequences: Identify the high consequences, validate that the threats and vulnerabilities are such that these consequences can reasonably happen, and determine whether these possibilities are adequately covered by the protection in place or planned. Where protection is not adequate and budget is available to mitigate the threats, do so.
Step 3: Revisit: In the Web server business, things are changing very rapidly. In essence, everything has to be revisited on an ongoing basis. This is done by subscribing to electronic mailing lists covering vulnerabilities in the operating system, the Web server, and all other hardware and software related to the security requirements of your computing environment. Every time something new comes up, it has to be checked against your configuration to determine whether you are vulnerable, and a decision has to be made about how to handle it.
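The core of that revisiting loop is a simple match of new advisories against what you actually run. The sketch below uses made-up package names and version numbers; a real advisory feed and inventory would replace them.

```python
# Check newly announced vulnerabilities against the installed
# configuration. Package names and versions are hypothetical.
installed = {"webserverd": "1.3.6", "mailerd": "8.8.5", "osd-kernel": "2.0.34"}

# Each advisory from a mailing list names a package and the versions
# known to be vulnerable.
advisories = [
    {"package": "mailerd", "vulnerable": {"8.8.4", "8.8.5"}},
    {"package": "webserverd", "vulnerable": {"1.2.0"}},
]

def affected(installed, advisories):
    """Return installed packages matched by a current advisory."""
    return [a["package"] for a in advisories
            if installed.get(a["package"]) in a["vulnerable"]]

print(affected(installed, advisories))
# ['mailerd']
```

Each hit then forces the decision the text describes: patch, mitigate, or accept, but decide.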
Where's the balance between cryptography and host security? To quote an old commercial - it's in there.
In step 1, you looked at technical safeguards, which normally include both cryptographic protection and host security. If either was not up to standard, it should now be brought up to a reasonable level of protection. In most cases, if you were already using SSL, it is unlikely that improvements to the cryptographic protocol would be suggested, but if host security allowed remote access by exploiting a known sendmail hole, that should have been identified as very weak and should have been addressed. Similarly, if there was no cryptography in use and the server had strong operating system protection in place, it is likely that cryptographic coverage would be among the mitigation strategies suggested.
If you used BS7799, you would have encountered questions like "Where are special protections required for electronic data interchange and why? What protections are in place to protect those interchanges and how were they determined?" and "How do you make certain that the person at the other end of a dial-in or network-based access is who they claim to be? How do you make certain that once a connection is established, the user on the other end or the connection in between doesn't change? What is the basis for asserting that this level of assurance is adequate to the business need for protection?"
If you used the GASSP standard, you would have had to address this criterion: "Security levels, costs, measures, practices, and procedures should be appropriate and proportionate to the value of and degree of reliance on the information systems and to the severity, probability, and extent of the potential for direct and indirect harm."
In step 2, you might have decided that there were particularly high consequences associated with alteration of prices specified on your Web server. In this case, you would probably have decided that host security was far more important to mitigation than cryptographic protection of information in transit (you may have decided to use cryptographic checksums of course) and you would have proceeded to try to lean toward the fulfillment of the requirement.
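The cryptographic checksum mentioned parenthetically above can be sketched briefly. This is an illustration only, using a modern keyed hash (HMAC with SHA-256) as an assumed choice; the key and price data are hypothetical.

```python
import hmac, hashlib

# Use a keyed cryptographic checksum to detect alteration of prices
# on the Web server. Key and data are hypothetical examples.
key = b"server-side-secret"  # kept out of the server's public tree

def price_tag(item, price):
    """Return an integrity tag binding an item name to its price."""
    msg = f"{item}:{price}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

tag = price_tag("widget", "19.95")  # computed when the price is set

# Later, verify that the advertised price has not been altered.
ok = hmac.compare_digest(tag, price_tag("widget", "19.95"))
tampered = hmac.compare_digest(tag, price_tag("widget", "1.95"))
print(ok, tampered)
# True False
```

Note that this detects alteration but does not prevent it: an attacker with full control of the host can recompute tags, which is exactly why host security carries more of the weight here.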
In step 3, which is ongoing indefinitely, you continue to update host security and cryptographic protection to meet the need as vulnerabilities and threats change over time. Balance is achieved by improving what is weak.
Not quite. We have discussed the notion of balance in a security program and given some examples of typical imbalances and methods for re-balancing. But I would be remiss if I didn't mention that large systems tend to have large momentum. Re-balancing may take a long time, and if it does, we will never be able to stay on the tightrope. The most important thing to do in order to get and keep your protection balance is to find ways to become nimble in your information protection program. But that discussion is for another day.
About The Author:
Fred Cohen is a Principal Member of Technical Staff at Sandia National Laboratories and a Managing Director of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection. He can be reached by sending email to fred at all.net or visiting /