Networks dominate today's computing landscape and commercial technical protection is lagging behind attack technology. As a result, protection program success depends more on prudent management decisions than on the selection of technical safeguards. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
Most people who hire programmers to write security-critical programs believe that one programmer is more or less the same as another. A C programmer with 5 years of experience - they're all the same.
This could not be further from the truth. Programmers who write good security-critical software are a special breed with special training and special skills. To understand this, you probably need to start by understanding something about the difference between security-critical programs and other programs. And to start to understand that, you need some notion of what we expect from security-critical programs.
Here is a short list of some of the things I expect from security-critical programs in some environments. Compare it to the programming specifications you normally use and the experience of your programmers, and consider whether they know how to do this:
The program is no larger than it absolutely must be to do the job.
The program allocates no memory once execution begins.
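As a sketch of what this can look like in C - the names and limits here (store_record, MAX_RECORDS, RECORD_LEN) are illustrative assumptions, not anything from the original - all working storage is declared statically before execution begins, and the program refuses input rather than growing at run time:

```c
#include <stddef.h>
#include <string.h>

/* All working storage is allocated statically, before execution begins.
   MAX_RECORDS and RECORD_LEN are illustrative design parameters. */
#define MAX_RECORDS 64
#define RECORD_LEN  128

static char records[MAX_RECORDS][RECORD_LEN];
static size_t record_count = 0;

/* Store a record in pre-allocated storage. No malloc() is ever called;
   when the static storage is exhausted or the input does not fit, the
   program refuses rather than allocating. Returns 0 on success, -1 on refusal. */
int store_record(const char *text)
{
    if (record_count >= MAX_RECORDS)
        return -1;                       /* storage full: refuse, don't grow */
    if (strlen(text) >= RECORD_LEN)
        return -1;                       /* input too long: refuse */
    strcpy(records[record_count], text); /* safe: length checked just above */
    record_count++;
    return 0;
}
```

The design choice is that every resource bound is fixed at build time, so memory exhaustion at run time becomes impossible by construction rather than merely unlikely.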
All inputs and input sequences are validated as to syntax and semantics.
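A minimal sketch of syntax-and-semantics validation in C, using a made-up example (a TCP port number supplied as text); the function name and the exact checks are assumptions for illustration:

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Validate a TCP port number supplied as text.
   Syntax: one to five decimal digits and nothing else.
   Semantics: the value must lie in the valid port range 1..65535.
   Returns the port on success, -1 on any invalid input. */
long parse_port(const char *s)
{
    size_t len = strlen(s);
    if (len == 0 || len > 5)
        return -1;                        /* syntactically impossible */
    for (size_t i = 0; i < len; i++)
        if (!isdigit((unsigned char)s[i]))
            return -1;                    /* reject signs, spaces, letters */
    long v = atol(s);                     /* safe: at most 5 digits, all checked */
    if (v < 1 || v > 65535)
        return -1;                        /* syntactically fine, semantically out of range */
    return v;
}
```

Note that the syntax check ("is it digits?") and the semantic check ("is it a legal port?") are separate: "70000" passes the first and fails the second, and both must pass before the value is used.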
For any sequence of inputs, the program is guaranteed to perform a known calculation in a finite and bounded amount of time and then terminate.
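One simple way to get this guarantee in C - again a sketch, with MAX_SCAN as an assumed design parameter - is to make every loop bound a compile-time constant rather than something derived from untrusted input:

```c
#include <stddef.h>

/* Measure a string, but never scan past a fixed maximum. The loop
   bound is a compile-time constant, so for any input whatsoever the
   routine performs a known calculation in a bounded number of steps
   and then terminates - even if the input is not NUL-terminated. */
#define MAX_SCAN 256

size_t bounded_len(const char *s)
{
    size_t n = 0;
    while (n < MAX_SCAN && s[n] != '\0')  /* bound fixed at build time */
        n++;
    return n;
}
```

Contrast this with strlen(), whose running time (and termination) depends entirely on what the input happens to contain.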
All information flows within the program are defined by the programmer and demonstrated to affect only the things that must be affected by them, in only the legitimate ways they can have these effects.
The program uses redundant protective measures to assure not only that errors do not happen, but also that if they do happen, they are caught and appropriate responses are made.
Every possible response from every system call or library call is well defined and properly handled. This includes responses that would not seem possible based on results of previous system calls.
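As a sketch of what "every possible response is handled" means for even a trivial operation - reading one byte from a file with the standard C library (the function first_byte is an illustrative name):

```c
#include <stdio.h>

/* Read the first byte of a file, handling every response the standard
   library defines for these calls: fopen can fail; fgetc returns EOF
   both at end-of-file and on a read error (ferror distinguishes them);
   and even fclose can fail. Returns the byte (0..255) or -1 on any
   failure, including an empty file. */
int first_byte(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;                  /* missing file, permission denied, ... */

    int c = fgetc(f);
    if (c == EOF) {
        /* ferror(f) would distinguish a read error from an empty file;
           both are failures for this routine. */
        fclose(f);                  /* best effort: we are already failing */
        return -1;
    }
    if (fclose(f) != 0)
        return -1;                  /* close itself can report errors */
    return c;
}
```

Three library calls, and each has a failure response that most application code silently ignores - including fclose, which "would not seem possible" to fail after a successful read.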
The interactions with the rest of the operating environment are thoroughly understood and the program handles all such interactions and resulting error conditions appropriately.
All environmental factors that the code depends on are identified so that the code can be evaluated in the context of its environment.
All lengths of all inputs, outputs, and variables are defined and all operations on all of those items are guaranteed to fit within the storage areas allocated for their use.
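A sketch of a copy operation that meets this requirement - both lengths are explicit and checked before any byte moves, in contrast to strcpy; the name copy_bounded and the refuse-rather-than-truncate policy are illustrative assumptions:

```c
#include <stddef.h>

/* Copy src into a destination of known size, guaranteeing the result
   fits within the storage allocated for it. Returns 0 on success,
   -1 if src would not fit; the destination is left as a valid empty
   string on refusal rather than being silently truncated. */
int copy_bounded(char *dst, size_t dst_size, const char *src)
{
    if (dst_size == 0)
        return -1;                 /* no room even for the terminator */
    size_t i = 0;
    while (src[i] != '\0') {
        if (i + 1 >= dst_size) {   /* would overflow: refuse outright */
            dst[0] = '\0';
            return -1;
        }
        dst[i] = src[i];
        i++;
    }
    dst[i] = '\0';                 /* always fits: checked in the loop */
    return 0;
}

/* Illustrative fixed-size destination for demonstration. */
static char demo_buf[8];
```

Every operation on the buffer is provably within its allocated storage, and the failure mode (refuse and report) is chosen explicitly rather than inherited from whatever the library happens to do.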
The program specifically identifies the length of all entities unless it uses length-independent coding techniques, and all lengths are either architecture-independent or compensated for across architectures.
All compiler-specific and system-specific calls are avoided wherever possible and where they are required they are identified and placed in separate modules.
All code elements have test programs that verify both their proper operation under legitimate input and state conditions and their lack of improper operation under illegitimate input or state conditions, and the coverage of these test sets is known and identified.
All code has been reviewed by others for simplicity, straightforwardness, proper operation, and all of the other properties identified above.
Privileges associated with code in context are minimized both in size and in time.
The ordering of all code elements is identified, along with the reasons the selected ordering was chosen.
No constants within the program are unexplained and any constant that appears twice with the same semantic meaning is identified and defined as a constant in the program.
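A small sketch of the point - the constant names and the lockout policy are invented for illustration - showing a value defined once with its meaning, so two uses with the same semantics cannot drift apart during maintenance:

```c
/* Every constant is named and its meaning explained. A value that
   appears twice with the same semantic meaning is defined exactly
   once, so a later policy change happens in one place. */
#define MAX_LOGIN_ATTEMPTS 3    /* policy: lock the account after 3 failures */
#define LOCKOUT_SECONDS    300  /* policy: 5-minute lockout window */

/* Returns 1 if the account should be locked out, 0 otherwise. */
int should_lock_out(int failures)
{
    return failures >= MAX_LOGIN_ATTEMPTS;  /* never a bare, unexplained "3" */
}
```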
Every line of code can be readily explained as to what it does, why it does it that way, and why there is no better way to do it.
All operations are audited so that a complete trace of all actions taken can be readily generated to track effects of all actions.
Every protection setting of every call grants the least privilege possible to do the job and every setting has been tested and verified as to this minimization.
All failures must produce identified and predetermined fail-safe responses.
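As a sketch of a predetermined fail-safe response in C - the permission-lookup scenario and both function names are hypothetical - any outcome other than an explicit "allow" collapses to the safe answer:

```c
/* Hypothetical policy lookup: returns 1 (allow), 0 (deny), or -1 when
   the policy store is unreachable. Stubbed here for illustration. */
static int lookup_permission(int user_id)
{
    if (user_id == 1) return 1;   /* known permitted user */
    if (user_id == 2) return 0;   /* known denied user */
    return -1;                    /* everyone else: the lookup "fails" */
}

/* Fail-safe wrapper: the failure response is identified and
   predetermined - anything other than an explicit allow is treated
   as deny, so an error can never grant access. */
int access_allowed(int user_id)
{
    int r = lookup_permission(user_id);
    return (r == 1) ? 1 : 0;      /* error and deny both fail safe to deny */
}
```

The key property is that the response to failure was decided at design time, not left to whatever the error path happens to produce.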
All design parameters, such as processing rates and lengths of buffers, should be calculated based on identified and clearly stated assumptions. When and if those assumptions change, the documentation should describe how to adapt to them. This should include all of the dependencies that those changes generate in the rest of the program. At a minimum, the extent to which changes can be made before they have larger implications must be made explicit. For example, changes in constants may indirectly affect the maximum values of computed results, forcing a change in the word size. The documentation has to indicate this, or seemingly minor changes will have major effects.
Forward-referenced functions will be avoided whenever possible, except when there are co-routines, in which case the co-routines will be placed together with an explanation.
I also expect that every program will follow all of the principles of the GASSP standards. If the programmers don't know what these are, it is highly unlikely that they can fulfill this requirement.
This is not harsh - it is just a starting point. I also expect that my programmers will be incredibly clever in their ability to make things that seem complicated as simple as they can be - but no simpler. I expect them to be intimately familiar with program optimization and to be able to verify that the assembly code produced by their compilers is in keeping with the semantics they thought their programs were to create. I expect that they will come up with clever and correct ways to solve hard programming problems and be able to explain what they did, why they did it, and what tradeoffs are associated with their selections of methods.
If your programmers think this is harsh, don't let them write security-relevant programs. If you do, you will end up with poor security and poorly functioning programs.
Every line of security-relevant program code should be reviewed by an independent expert code reviewer who will require that all of the above requirements are met, who understands how and why the program operates as it does, and who has checked all of the calculations and assumptions of the authors and tested them.
In code reviews, the code reviewer stops the review at the first point where something is no longer clear until such time as it is made clear or corrected. The review can then continue. After a small finite number of these stoppages, the code review is halted and the programmer is to address all such questions before the review will be restarted - and when it is restarted it will be restarted from scratch.
Code that has passed review will be change controlled from that time forward. Changes that impact previous parts of the review will require that the review restart from the earliest change. Because of the complexity associated with this backtracking, it will be tolerated only rarely, and not more than a few times before the code review stops and has to be restarted.
I expect that once I have a production system, I will never - and I mean never ever ever - have to make an emergency code change. The only changes I should ever have to make are 'upgrades' for new requirements (which should be quite rare) and changes to design parameters to meet changes in the operating environment.
Change control will be based on sound change control principles. If you don't know how to do sound change control, you are not the right person to be running a security programming operation. These principles include not allowing the change control person to alter the code, not allowing any updates to Turing-capable interpreters without going through change control, only passing on change-controlled source code, and so on.
Of course the change control process is generally estimated to cost as much as the original programming tasks - but that's the cost of getting it right.
Security-critical programs are only as critical as what they are intended to secure. The effort required to achieve high assurance levels is not always justified by the application or the circumstances. But the programmers who do this work need to know and understand these issues well in order to be effective at writing security-critical software, even if all of these requirements don't get applied every time.
It takes a long time, special training, and experience to get to the point where you can really do this sort of programming well, and it is very different from the normal sort of programming you get from the C programmer with 5 years of experience. In addition, not all programs are security-relevant to the same extreme. A firewall is likely more security-critical than a typical application program because it is subjected to more severe threats and has higher consequences of failure.
While this small piece hasn't solved the problem of building programmers for writing security-critical code, I hope it helps to clear up some of the misimpressions people have about all programmers being more or less equal. Security programmers are a special breed.
About The Author:
Fred Cohen is helping clients meet their information protection needs at Fred Cohen & Associates and Security Posture and doing research and education as a Research Professor in the University of New Haven's Forensic Sciences Program. He can be reached by sending email to fred at all.net or visiting http://all.net/