Managing Network Security

Why it was done that way.

by Fred Cohen



Series Introduction

Computing operates in an almost universally networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.


The Millennium Article

I have thought long and hard about what to write as my final piece of the millennium, and I think that my selection reflects the notion that history is as much a part of our future as it is of our past. I am not going to talk about the future of information protection today; instead, I am going to talk about the past - how we have ignored the lessons of history - and how our rush to forget the past is leading us to an uncertain future.

The question 'Why?' is the central focus of today's discussion, and I feel I should start by noting that, in the last 15 years or so, I have almost never seen a single piece of documentation associated with a program that tells me why things were done the way they were done. I know that in the few 'more important' programs I have written, I have been very explicit about precisely why things were done the way they were done. For example, the documentation for the secure web server I implemented and made available over the Internet for all to use more than 5 years ago is very detailed as to why what was done where. It has since been mathematically proven to meet various of its asserted properties, and I was pleased that the proof process did not find any serious flaws (although it did find an off-by-one error; the 'whys' of the program included a provision for such a possibility).

The why, it turns out, is often more important than the what or the how - at least from a security perspective. In lots of cases, we make a back-of-the-envelope calculation, use it as a basis for some decision, and fail to pass on the details of the why. Some time later, the next person in the chain of evolution of a system, who doesn't know the basis for the selection, proceeds without questioning it and extends it to an application it was never intended to deal with. As the chain gets longer and longer, we soon find that the why is lost, and we start to make poorer and poorer decisions, until the original why is no longer even close to being right. When things fall over, we then hear the common exclamation in disgust about how foolish the past programmer was to make such a stupid mistake - when in fact, it was the lack of understanding 'why' that caused those later in the chain to void the assumptions that the original decision was based on.

The notion of why is not just important in software development - and of course absolutely critical in security software development - it is central to the way we do risk management - and therefore central to all aspects of security. Any time we turn 'whys' into rote learning or checklists, we lose the underlying reason for making decisions.


I Hear Your Cry

I understand that checklists are valuable things to have. We all know that they help make sure we don't miss things that are fairly obvious, and they are helpful in reminding us to look at issues that are not always so obvious - especially for those of us without photographic memories. But just because we have checked every box doesn't mean we are safe. In fact, when you understand the why, you may decide that some boxes shouldn't be checked off - that there are cases when doing the things necessary to truthfully check the box are not the right things to do in this particular situation. That's why you have to know why.

For example... Suppose we have a checklist that says that all passwords must be at least 8 characters long and include upper and lower case letters, numbers, and special characters. It looks like a pretty good rule. But passwords exist for a reason, and once you ask the question 'why', you may look at a computer in a physically secure room - one that requires retinal authentication for entry, has an armed guard outside at all times, and has a built-in 7-letter hardware password that could be replaced with a compliant but inherently weaker software password for $8000 (a figure that includes the cost of auditing the software, installing it with a trusted installation team, and so forth) - and decide that the rule doesn't apply here.
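To make the contrast concrete, here is a minimal sketch - in Python, and purely illustrative - of the rote rule reduced to a checklist test, alongside a hypothetical context check that stands in for the 'why'. The function names and context fields are assumptions of mine for this sketch, not part of any real checklist or product.

    # An illustrative sketch of the checklist rule described above.
    # The rote test is easy to automate; the 'why' behind it is not.
    import string

    def meets_checklist(password):
        """The rote rule: at least 8 characters, with upper case letters,
        lower case letters, digits, and special characters."""
        return (len(password) >= 8
                and any(c.isupper() for c in password)
                and any(c.islower() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password))

    def software_password_rule_applies(context):
        """Hypothetical judgment call: if the machine already sits behind
        retinal authentication, an armed guard, and a hardware password,
        the reason behind the software password rule may no longer hold."""
        return not (context.get("physically_secured", False)
                    and context.get("hardware_password", False))

Checking the first box is easy; knowing when the second function should say the rule does not apply is exactly the kind of 'why' that a checklist cannot carry.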

But that was a real special case example, and I don't want you to think that asking why is only for special cases.


Why not automate email reading?

The promotions have been around for some time now. Some day soon, your personal digital assistant will review your emails for you and tell you the important things. It sounds fantastic. But there are very good historical reasons that such a thing would be extraordinarily dangerous unless it was a very carefully controlled process - something we are not likely to see in the commercial systems of today or the foreseeable future.

The reason should be clear from the 1988 case in which IBM mainframe computers all around the world were brought down. The IBM Christmas card virus depended on a different kind of automated email reading - a culture in which people were used to getting email Christmas cards in the form of command scripts and running them to see what their colleague had sent them in the way of a nice card that year. After that, the culture changed - for a while. Fast forward to 1999 and we have the Melissa virus - the same 11-year-old idea implemented in a slightly different command language - Word macros. But to understand why, we have to go back a little bit further.

Alan Turing proved more than 50 years ago that one general purpose programming language is more or less the same as another - and that no program could be trusted to tell a good program from a bad one. And more than 25 years ago, Butler Lampson - with help from Claude Shannon's work of more than 50 years ago - told us that confining a program could not really be done in a system with sharing as we do things today. And more than 15 years ago, even I knew that macro viruses made the emailing of programs a dangerous thing.


Do You Know These People?

If you don't know the details of the work of the four authors listed above, you should not be allowed to make computer security decisions! That sounds pretty arrogant, but I am really serious about it. I'm not telling you this because I am old or because I think these papers are great - both of which are true. It is because the things they put forth are so fundamental to the nature of making computer security decisions that without a clear understanding of what they say, I do not believe you can make prudent decisions. I know that you can probably get the same information elsewhere, but I have seen how the game of telephone works, and I believe that going to the original source is worth the effort in these cases.

There are a lot of other authors and papers you should also know about if you are going to make good decisions, and I am - mercifully - not going to list them all here. My point, however, remains the same. In order to make good decisions, you need to understand why decisions are and were made. That means reading the history behind those decisions - which means understanding the context of the time when the decision-maker made those decisions.


Fast Forward to Today

Everybody today is rushing toward more computer networking. The military is doing it, banking concerns are doing it, and I'm sure that lots of other people are doing it. If it's not connected, it will be - that's the cry of the information technology visionary - it's the subject of lots of books - and it seems to be the fashion and the philosophy of the information age.

Now a lot of the systems being attached to these networks were designed to be adequately secure... in the environment assumed by their designers. Read "environment" as "stand-alone". No computer system I am aware of today is secure in a networked environment. And the notion that a computer system is secure is an oxymoron anyway. Computer systems are not secure - the total system is the thing that can be secure - or more properly put - we can manage overall systems so as to effectively balance risk with benefit.

The fact is, we don't really know how to balance much in the way of networking with much in the way of potential for harm. Small amounts of harm combined with a modicum of networking are no problem. We just put lots of stuff out there and don't worry about it. When something goes wrong, we fix it, and as long as not too much goes wrong too often, we are fine. When big bad things happen, since they are usually not really sophisticated, we have an outside vendor fix them for us. Risk managed. But when high value is involved, the security management community has shown again and again its inability to manage the risks.


Conclusions

In the future that the visionaries are putting forth for us today, the information age will be thrust upon us, and we will like it. I admit it, I like being able to find what I am looking for in seconds from my desk instead of in hours or days or weeks with the requirement to fly all over the place and the likelihood that I will miss a lot of it. I like buying things over the Internet - especially books and such - especially when time is less important than money and I can afford to buy some things I might not end up using. I'm sure the sellers like that part too.

But the future we are creating is not the one of my visions. I don't believe in technology for technology's sake. I think that much of our technology is just plain wasted, and a lot of it produces waste. I believe that we rush headlong into things without thinking about them or bothering to look up what others have done before us. I think that, as a group, information technologists are arrogant about their technology and their skills, and that they are falling over much of the time because they refuse to stand on the shoulders of the giants who came before them.

So the millennium article - and the millennium - are just about over. Not really. Those of you who know about such things have probably been bothered throughout this article by the fact that the millennium is still a year away from ending. This is great! It means that I get to be accurate in the end, and I also get to write another millennium article a year from now.

Sleep well, have a safe new year, century, and thousand-year period (not millennium), and always remember that the old ways are - at a minimum - old.


About The Author:

Fred Cohen is exploring the minimum raise as a Principal Member of Technical Staff at Sandia National Laboratories, Managing Director of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection, and a practitioner in residence in the University of New Haven's Forensic Sciences Program, where he educates cybercops on digital forensics. He can be reached by sending email to fred at all.net or by visiting http://all.net/