The fundamental problem of protection policy is the development of guiding principles and/or plans of action which are expedient, prudent, and advantageous in keeping our symbolic representations from causing harm. In order for principles and/or plans to be expedient, they must be specific enough to allow implementation. In order for them to be prudent, they must be sound in terms of their implications. In order to be advantageous, they must approach some optimality in terms of the tradeoffs they imply.
Some of the policy issues addressed by current techniques include preventing information leaks and corruption, maintaining service availability, identifying and authenticating individuals and systems to each other, maintaining accountability for actions, and providing electronic signatures. Future techniques will likely cover a wide variety of applications and address a wide variety of issues, but it is important to understand the state of the art in order to assess the feasibility of protection policies, the tradeoffs associated with their implementation, the issues that require further research, and the issues that are unlikely to ever be resolved.
When we speak of a protection policy in the technical sense, we are speaking of an explicit set of rules which govern the use of information in order to prevent it from causing harm. The precise specification of an explicit set of rules has considerable technical consequence. If the rules are impossible to implement, they will not lead to the desired protection. If they are inconsistent, they may result in undesired behavior, because in an inconsistent system, anything can be proven to meet or violate policy depending on how the rules are applied. We are thus guaranteed only chaotic behavior. If the rules are incomplete, there will be situations where the policy doesn't cover the behavior of the system. In these situations, we cannot decide, based on the policy, what will happen. This leads to unpredictable behavior. If the specification of the rules is imprecise, we cannot even tell whether they are consistent or complete, and we certainly cannot be certain of the resulting behavior.
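The consistency and completeness problems above can be made concrete with a small sketch. The rules, subjects, and objects here are invented for illustration; the point is only that an explicit rule set can be mechanically checked for conflicting decisions (inconsistency) and uncovered requests (incompleteness).

```python
# Hypothetical rule set: (subject, object, action, decision).
# The names and rules are illustrative, not from any real policy.
rules = [
    ("alice", "payroll", "read", "allow"),
    ("alice", "payroll", "read", "deny"),   # conflicts with the rule above
    ("bob",   "payroll", "read", "deny"),
]

def decisions(subject, obj, action):
    """Collect every decision the rule set yields for one request."""
    return {d for s, o, a, d in rules if (s, o, a) == (subject, obj, action)}

def check(requests):
    for req in requests:
        ds = decisions(*req)
        if len(ds) > 1:
            print(req, "-> INCONSISTENT: rules both allow and deny")
        elif not ds:
            print(req, "-> INCOMPLETE: no rule covers this request")
        else:
            print(req, "->", ds.pop())

check([("alice", "payroll", "read"),    # inconsistent
       ("bob",   "payroll", "read"),    # decided: deny
       ("carol", "payroll", "read")])   # incomplete
```

For a finite rule set such checks are straightforward; the difficulty described in the text arises because real policies are rarely this explicit or this small.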
In many cases, informal methods are sufficient, presumably because the cost involved with applying formal techniques to protection policy outweighs the benefits attainable through their application. It is almost always appropriate to seek some expert advice in the development of a protection policy, because experts tend to have more experience in knowing what must be considered in the development of such a policy and more knowledge about the state of the art than non-experts. Identifying an expert is difficult without at least a little expertise, and the harm that can result from applying inadequate expertise may be quite severe.
The technical problem of precise specification of policy is often left to technical experts. In 'security' aspects of information protection, the result of the application of technical expertise to the problem of policy specification is called a 'formal security policy model' (FSPM). An FSPM is a mathematically precise statement of a security policy [Klein83]. In the more general case of a protection policy, we call a mathematically precise statement of policy a 'formal protection policy'.
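As an example of what an FSPM states precisely, consider the mandatory access rules of the well-known Bell-LaPadula model. The sketch below encodes its two core properties; the level names are illustrative.

```python
# A minimal sketch of the Bell-LaPadula mandatory access rules.
# Level names and their ordering are illustrative.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def dominates(a, b):
    """Level a dominates level b if it is at least as sensitive."""
    return LEVELS[a] >= LEVELS[b]

def may_read(subject_level, object_level):
    # Simple security property: no read up.
    return dominates(subject_level, object_level)

def may_write(subject_level, object_level):
    # *-property: no write down.
    return dominates(object_level, subject_level)
```

The value of stating the policy this precisely is that claims like "no information flows from a more sensitive level to a less sensitive one" become mathematical assertions about `may_read` and `may_write` rather than informal intentions.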
When we use the term 'model' in a technical sense, we mean a 'symbolic representation of a policy'. Thus the use of models is closely tied to the development of policy. Information protection applies not only to the system for which we are devising rules, but also to the rules themselves. The mapping from desires of policy makers into a set of rules that model those desires is expensive and imprecise. As we tie ourselves down with models, we trade away flexibility for the stringent enforcement provided by precise rules. We also tie ourselves down to the context of the model, and may therefore be unable to achieve future policy desires at reasonable costs because of context boundedness.
Whenever we apply symbolic representations to the solution of problems, we are modeling the solution of problems with symbolic representations. The manner in which and degree to which we model determine the accuracy of our models, and therefore the accuracy of our problem solutions. Inaccurate solutions may cause harm, and the degree to which they are inaccurate presumably affects the degree to which they may cause harm. Modeling is an imprecise art, and many of the advances that have been made in our society are a direct result of the skillful application of erroneous results. There is a fine line between what we perceive as harmful and what is actually harmful, and the blind application of symbolic representations to problem solutions does not help clarify this distinction.
For some limited situations, there are some fairly well understood models of information protection. When we speak of models in the formal sense, we will be addressing a few well understood symbolic representations which may be applicable in limited situations. We will attempt to specify in a fairly precise manner what situations these models are intended to model, and the effects of their use in the modeling of these situations, but we are by no means attempting to elicit blind faith responses to the modeling issue. In the 'security' area, blind faith responses and strict adherence to poorly understood rules are often the response we see, and in fact the response practitioners are usually trained to elicit. We simply present some models that have proven useful in the past and may prove useful in the future.
The specification of protection typically involves the transformation of the models specified in the FSPM (or in less formal situations, of informal constraints) into a formal top level specification (FTLS) (or in the less formal case, a top level specification (TLS)). An FTLS is written in a formal mathematical language so that theorems can be proven showing the correspondence of the FTLS to the FSPM and of the implementation to the FTLS. In the case of a TLS, this level of rigor is not available, but we would still like to be able to show, in as formal a manner as possible, that the TLS meets the informal constraints, and that the implementation meets the TLS.
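The correspondence obligation can be illustrated in miniature. The specification and implementation below are invented; in a real FTLS the correspondence would be proven as a theorem, not tested, but for a finite state space an exhaustive check conveys the idea.

```python
# Hypothetical example of checking that an implementation corresponds
# to a top level specification over a small, finite state space.
LEVELS = range(4)  # illustrative ordered sensitivity levels

def spec_may_read(subj, obj):
    """TLS statement of the rule: no read up."""
    return subj >= obj

def impl_may_read(subj, obj):
    """A candidate implementation, written differently from the spec."""
    return obj - subj <= 0

def corresponds():
    """Exhaustively check that implementation and spec agree."""
    return all(spec_may_read(s, o) == impl_may_read(s, o)
               for s in LEVELS for o in LEVELS)
```

Exhaustive checking only works because the state space here is tiny; the appeal of formal mathematical languages for the FTLS is precisely that they allow such correspondence to be established for state spaces far too large to enumerate.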
There is no fixed set of techniques for transforming an FSPM into an FTLS, nor is there a particularly good mathematical language for writing an FTLS or an FSPM. Even if there were such things, they would most likely only be applicable to small classes of policies operating under a restricted set of assumptions. There is a great deal of work under way which examines formal program verification, proof of correctness, and automatic program generation [Hopcroft79] (see references in [Klein83], [Landwehr83], [Popek79], [Popek75], and [Benzel84]). A great deal of work remains to be done, even in the relatively restricted protection areas of preventing information leaks and corruption, while more complex policy issues haven't been substantially addressed to date.
The implementation of protection systems generally involves the transformation of specifications into realizations. Many systems designers and implementers prefer to 'fly by the seat of their pants' in the implementation of protection systems because of the astounding successes attained in other fields by experimental methods. We are not intending to imply that experimental methods are out of place in information protection research. In point of fact, many aspects of the development of protection policies, models, specifications, and implementations are based on empirical methods. Nevertheless, if we want to enforce a policy with an implementation, the only hope for real enforcement appears, from empirical evidence described later in this book, to lie in formal methods.
There are currently few formal techniques for implementing specifications or verifying implementations, and those techniques that exist tend to cover only a small subset of the policies that may be of interest to a policy maker. Where formal methods fail, we are forced to use less formal methods of implementation, with a resulting degradation in the degree of protection provided.
Improper management of protection systems often causes substantial harm. It would be nice to have formal management methods designed to prevent information from causing harm, but again there are many more problems in this field than there are solutions. It is often possible to provide management tools to aid in the decision making process, particularly in technical areas such as risk analysis, configuration management, and day-to-day administrative operations.
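One common risk analysis aid of the kind mentioned above is an annualized loss expectancy (ALE) calculation, which weights each threat's expected loss by its expected frequency. The threat list and figures below are invented for illustration.

```python
# Hypothetical ALE calculation as a management decision aid.
# Threats, rates, and loss figures are illustrative only.
threats = [
    # (name, expected incidents per year, expected loss per incident)
    ("disk failure",     0.5,  20_000),
    ("operator error",   4.0,   1_000),
    ("information leak", 0.1, 250_000),
]

def ale(rate, loss):
    """Annualized loss expectancy: expected yearly loss from one threat."""
    return rate * loss

for name, rate, loss in threats:
    print(f"{name}: ALE = {ale(rate, loss):,.0f} per year")
```

A tool like this does not make the decision; it helps a manager compare the cost of a protective measure against the expected losses it reduces, which is exactly the kind of informed tradeoff the text calls for.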
As a fundamental issue, we should understand that without a human manager, it would be very difficult to keep humans from coming to harm, if only because harm is related to people's mental as well as their physical well-being. As long as there are choices to be made, there will be benefits and harm that result from those choices. The key to effective management of protection is a combination of understanding the implications of decisions, assistance in the prevention of catastrophic mistakes, and enhancements to the ability of the manager to make well informed decisions.
Things fall apart. We can attempt to reduce the chances of them falling apart over a given period of time by using highly reliable components and performing regular preventive maintenance. When things fall apart, we can either suffer with degraded conditions, or perform corrective maintenance to return them to non-degraded conditions. Systems which act to prevent harm can fail to provide the desired protection if and when they fall apart. We therefore must maintain protection systems or suffer the consequences of increasingly degraded protection. The fields of fault tolerant computing [Siewiorek82] and safety engineering use many techniques for assuring protection, but there are many problems that are yet to be resolved.
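The tradeoff between reliable components, maintenance, and degraded protection can be quantified in a standard way using steady state availability, computed from mean time between failures (MTBF) and mean time to repair (MTTR). The figures below are invented for illustration.

```python
# Standard availability arithmetic; the component figures are illustrative.
def availability(mtbf_hours, mttr_hours):
    """Steady state fraction of time a component is working."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series_availability(parts):
    """A protection system whose parts must all work (in series) is only
    as available as the product of its parts' availabilities."""
    a = 1.0
    for mtbf, mttr in parts:
        a *= availability(mtbf, mttr)
    return a
```

The series form makes the text's point numerically: each additional component a protection system depends on lowers overall availability, so either the components must be made more reliable (higher MTBF) or repairs must be made faster (lower MTTR) to keep protection from degrading.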