Friday, January 11, 2008

Security by Design

The technologies of computer security are based on logic. There is no universal standard notion of what secure behavior is; "security" is a concept that is unique to each situation. Security is extraneous to the function of a computer application, rather than ancillary to it, and thus security necessarily imposes restrictions on the application's behavior.

There are several approaches to security in computing; sometimes a combination of approaches is valid:

  1. Trust all the software to abide by a security policy but the software is not trustworthy (this is computer insecurity).
  2. Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch and path analysis for example).
  3. Trust no software but enforce a security policy with mechanisms that are not trustworthy (again this is computer insecurity).
  4. Trust no software but enforce a security policy with trustworthy mechanisms.

Many systems unintentionally result in the first possibility. Approaches one and three lead to failure. Since approach two is expensive and non-deterministic, its use is very limited. Because approach four is often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of approaches two and four are often used in a layered architecture with thin layers of two and thick layers of four.

There are myriad strategies and techniques used to design security systems. There are few, if any, effective strategies to enhance security after design.

One technique enforces the principle of least privilege to a great extent: an entity has only the privileges that are needed for its function. That way, even if an attacker gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest.
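As a concrete illustration, here is a minimal sketch in C of privilege dropping on a POSIX system: the one action that genuinely needs root (such as binding a low-numbered port) would happen first, and the process then permanently gives up root before doing anything else. The unprivileged account name "daemon" is only a placeholder.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <pwd.h>
    #include <grp.h>

    /* Minimal sketch of least privilege on a POSIX system: do the one
     * privileged action, then permanently drop to an unprivileged account
     * ("daemon" here is only an example) before any further work. */
    static int drop_privileges(const char *username)
    {
        struct passwd *pw = getpwnam(username);
        if (pw == NULL) {
            fprintf(stderr, "unknown user: %s\n", username);
            return -1;
        }
        /* Order matters: drop supplementary groups and the group id first,
         * because they can no longer be changed once the user id is dropped. */
        if (setgroups(0, NULL) != 0) return -1;
        if (setgid(pw->pw_gid) != 0) return -1;
        if (setuid(pw->pw_uid) != 0) return -1;
        /* If the drop worked, regaining root must now be impossible. */
        if (setuid(0) == 0) return -1;
        return 0;
    }

    int main(void)
    {
        /* ... privileged setup (e.g. binding a low-numbered port) would go here ... */
        if (geteuid() == 0 && drop_privileges("daemon") != 0) {
            fprintf(stderr, "failed to drop privileges, refusing to continue\n");
            return EXIT_FAILURE;
        }
        /* From here on the process holds only the privileges it needs. */
        printf("running with uid %d\n", (int)getuid());
        return EXIT_SUCCESS;
    }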

Furthermore, breaking the system up into smaller components reduces the complexity of the individual components, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed form solution to security that works well when only a single well-characterized property can be isolated as critical, and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to make modules secure.
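When a crucial subsystem is kept that small, its critical property can at least be pinned down by assertion-based unit tests. A hypothetical sketch in C: a bounds-checked copy routine whose single critical property, that the destination is always NUL-terminated and never overrun, is exercised directly.

    #include <assert.h>
    #include <string.h>
    #include <stdio.h>

    /* Hypothetical crucial subsystem: a copy routine whose single critical
     * property is "the destination buffer is never overrun and is always
     * NUL-terminated".  Keeping the component this small is what makes the
     * property checkable, whether by a prover or by exhaustive tests. */
    static void safe_copy(char *dst, size_t dst_size, const char *src)
    {
        if (dst_size == 0) return;
        strncpy(dst, src, dst_size - 1);
        dst[dst_size - 1] = '\0';
    }

    int main(void)
    {
        char buf[8];

        /* Unit tests asserting the critical property for representative inputs. */
        safe_copy(buf, sizeof buf, "short");
        assert(strcmp(buf, "short") == 0);

        safe_copy(buf, sizeof buf, "a string that is far too long");
        assert(strlen(buf) == sizeof buf - 1);   /* truncated, not overrun */
        assert(buf[sizeof buf - 1] == '\0');     /* always NUL-terminated  */

        puts("all assertions held");
        return 0;
    }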

The design should use "defense in depth", where more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds. Defense in depth works when the breaching of one security measure does not provide a platform to facilitate subverting another. Also, the cascading principle acknowledges that several low hurdles do not make a high hurdle, so cascading several weak mechanisms does not provide the safety of a single stronger mechanism.
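A small, hypothetical sketch of one form this takes in code (the names parse_length and store_payload are illustrative): the protocol parser validates a length field from the network, and the copy routine independently enforces its own bound, so breaching either check alone is not enough to overrun the buffer.

    #include <stdio.h>
    #include <string.h>

    #define MAX_PAYLOAD 64

    /* Layer 1: the protocol parser validates the claimed length before use. */
    static int parse_length(size_t claimed, size_t *out)
    {
        if (claimed > MAX_PAYLOAD)
            return -1;                /* first hurdle */
        *out = claimed;
        return 0;
    }

    /* Layer 2: the copy routine does not trust its caller and re-checks the
     * bound against the real destination size, so a bug in the parser alone
     * is not enough to overrun the buffer. */
    static int store_payload(char *dst, size_t dst_size, const char *src, size_t len)
    {
        if (len >= dst_size)
            return -1;                /* second, independent hurdle */
        memcpy(dst, src, len);
        dst[len] = '\0';
        return 0;
    }

    int main(void)
    {
        char buf[MAX_PAYLOAD + 1];
        const char *packet = "hello";
        size_t len;

        if (parse_length(strlen(packet), &len) == 0 &&
            store_payload(buf, sizeof buf, packet, len) == 0)
            printf("stored: %s\n", buf);
        return 0;
    }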

Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
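Here is a minimal sketch of what "default to secure settings" and "fail secure" look like in code, using a hypothetical policy_lookup function as a stand-in for a real policy store: the decision variable starts out as deny, and every error path falls through to that default rather than to allow.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical policy lookup: returns 0 on success and fills *allowed,
     * or -1 if the policy database cannot be consulted. */
    static int policy_lookup(const char *user, const char *resource, bool *allowed)
    {
        /* Illustrative stand-in for a real policy store. */
        if (strcmp(user, "alice") == 0 && strcmp(resource, "report.txt") == 0) {
            *allowed = true;
            return 0;
        }
        *allowed = false;
        return 0;
    }

    /* Fail-secure access check: the default answer is "deny", and every
     * error path falls through to that default rather than to "allow". */
    static bool access_permitted(const char *user, const char *resource)
    {
        bool allowed = false;               /* secure default */
        if (user == NULL || resource == NULL)
            return false;                   /* fail secure on bad input */
        if (policy_lookup(user, resource, &allowed) != 0)
            return false;                   /* fail secure if the policy is unavailable */
        return allowed;
    }

    int main(void)
    {
        printf("alice/report.txt: %s\n", access_permitted("alice", "report.txt") ? "allow" : "deny");
        printf("bob/report.txt:   %s\n", access_permitted("bob", "report.txt") ? "allow" : "deny");
        return 0;
    }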

In addition, security should not be an all or nothing issue. The designers and operators of systems should assume that security breaches are inevitable. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
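As one concrete fragment of such an audit mechanism, here is a minimal POSIX sketch that appends timestamped records to a log opened with O_APPEND; the file name audit.log is arbitrary, and shipping the records to a separate append-only remote collector (not shown) is what actually keeps intruders from covering their tracks.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    /* Minimal sketch of writing an append-only audit record.  O_APPEND makes
     * every write land at the end of the file; a remote, append-only collector
     * (not shown) is what keeps a local intruder from rewriting history. */
    static int audit_log(const char *event)
    {
        char line[256];
        time_t now = time(NULL);
        int fd = open("audit.log", O_WRONLY | O_CREAT | O_APPEND, 0600);
        if (fd < 0)
            return -1;
        snprintf(line, sizeof line, "%ld %s\n", (long)now, event);
        if (write(fd, line, strlen(line)) < 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }

    int main(void)
    {
        if (audit_log("login user=alice result=success") != 0)
            perror("audit_log");
        return 0;
    }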

Early History of Security by Design

The early Multics operating system was notable for its early emphasis on computer security by design, and Multics was possibly the very first operating system to be designed as a secure system from the ground up. In spite of this, Multics' security was broken, not once, but repeatedly. The strategy was known as 'penetrate and test' and has become widely known as a non-terminating process that fails to produce computer security. This led to further work on computer security that prefigured modern security engineering techniques producing closed form processes that terminate.

Secure Coding

If the operating environment is not based on a secure operating system capable of maintaining a domain for its own execution, capable of protecting application code from malicious subversion, and capable of protecting the system from subverted code, then high degrees of security are understandably not possible. While such secure operating systems are possible and have been implemented, most commercial systems fall in a 'low security' category because they rely on features not supported by secure operating systems (such as portability). In low security operating environments, applications must be relied on to participate in their own protection. There are 'best effort' secure coding practices that can be followed to make an application more resistant to malicious subversion.

In commercial environments, the majority of software subversion vulnerabilities result from a few known kinds of coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflow, and code/command injection.
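To make these defect classes concrete, the short C sketch below contrasts the vulnerable idioms with their conventional fixes; only the safe variants are executed, and code/command injection is omitted because it involves an external interpreter.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const char *user_input = "AAAAAAAAAAAAAAAAAAAA%x%x%n";  /* attacker-controlled */
        char buf[16];
        size_t count = 123456;          /* pretend this size arrived from the network */

        /* Buffer overflow: strcpy(buf, user_input) would write past the end of
         * buf; the bounded copy below truncates instead. */
        snprintf(buf, sizeof buf, "%s", user_input);

        /* Format string vulnerability: printf(user_input) would let the input's
         * own %x / %n directives read or write memory; passing it as data does not. */
        printf("stored: %s\n", buf);

        /* Integer overflow: a size computation such as count * sizeof(int) can
         * wrap around and yield an undersized allocation; check before multiplying. */
        if (count > SIZE_MAX / sizeof(int)) {
            fprintf(stderr, "size computation would overflow, rejecting request\n");
            return 1;
        }
        printf("allocating %zu bytes would be safe\n", count * sizeof(int));
        return 0;
    }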

Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord, "Secure Coding in C and C++"). Other languages, such as Java, are more resistant to some of these defects, but are still prone to code/command injection and other software defects which facilitate subversion.

Recently another bad coding practice has come under scrutiny: dangling pointers. The first known exploit for this particular problem was presented in July 2007. Before this publication the problem was known but considered to be academic and not practically exploitable.
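A minimal illustration of the defect in C: the use-after-free itself is left in a comment, and the common (partial) mitigation of nulling the pointer immediately after freeing is shown.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch of the dangling-pointer problem: after free(), the pointer still
     * holds the old address, and a later dereference (a "use after free") reads
     * or writes memory the allocator may already have handed to someone else. */
    int main(void)
    {
        char *session = malloc(32);
        if (session == NULL)
            return 1;
        strcpy(session, "user=alice");

        free(session);
        /* session is now dangling; printf("%s", session) here would be a
         * use-after-free, the class of defect described above. */

        /* A common mitigation is to null the pointer immediately after freeing,
         * so an accidental later use fails loudly (or can be checked) instead of
         * silently reusing stale memory. */
        session = NULL;

        if (session == NULL)
            puts("session already released");
        return 0;
    }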

In summary, 'secure coding' can provide significant payback in low security operating environments and is therefore worth the effort. Still, there is no known way to provide a reliable degree of subversion resistance with any degree or combination of 'secure coding.'

Secure Operating Systems

One use of the term computer security refers to technology to implement a secure operating system. Much of this technology is based on science developed in the 1980s and used to produce what may be some of the most impenetrable operating systems ever. Though still valid, the technology is almost inactive today, perhaps because it is complex or not widely understood. Such ultra-strong secure operating systems are based on operating system kernel technology that can guarantee that certain security policies are absolutely enforced in an operating environment. An example of such a computer security policy is the Bell-LaPadula model. The strategy is based on a coupling of special microprocessor hardware features, often involving the memory management unit, to a special correctly implemented operating system kernel. This forms the foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure the absolute impossibility of penetration by hostile elements. This capability is enabled because the configuration not only imposes a security policy, but in theory completely protects itself from corruption. Ordinary operating systems, on the other hand, lack the features that assure this maximal level of security. The design methodology to produce such secure systems is precise, deterministic and logical.
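The Bell-LaPadula model mentioned above can be stated very compactly. Here is a minimal sketch in C of its two mandatory rules (no read up, no write down) over a linear ordering of sensitivity levels; a real reference monitor also handles categories and compartments and is enforced by the kernel on every access, which this sketch does not attempt.

    #include <stdbool.h>
    #include <stdio.h>

    /* Minimal sketch of the two mandatory Bell-LaPadula rules over a linear
     * ordering of sensitivity levels (0 = Unclassified ... 3 = Top Secret). */

    enum level { UNCLASSIFIED = 0, CONFIDENTIAL = 1, SECRET = 2, TOP_SECRET = 3 };

    /* Simple security property: no read up. */
    static bool may_read(enum level subject, enum level object)
    {
        return subject >= object;
    }

    /* *-property: no write down. */
    static bool may_write(enum level subject, enum level object)
    {
        return subject <= object;
    }

    int main(void)
    {
        printf("SECRET subject reads TOP_SECRET object:   %s\n",
               may_read(SECRET, TOP_SECRET) ? "allow" : "deny");
        printf("SECRET subject writes UNCLASSIFIED object: %s\n",
               may_write(SECRET, UNCLASSIFIED) ? "allow" : "deny");
        printf("SECRET subject reads CONFIDENTIAL object:  %s\n",
               may_read(SECRET, CONFIDENTIAL) ? "allow" : "deny");
        return 0;
    }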

Systems designed with such methodology represent the state of the art of computer security, and the capability to produce them is not widely known. In sharp contrast to most kinds of software, they meet specifications with verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this way are used primarily to protect national security information and military secrets. These are very powerful security tools and very few secure operating systems have been certified at the highest level (Orange Book A1) to operate over the range of "Top Secret" to "unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN). The assurance of security depends not only on the soundness of the design strategy, but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria quantifies the security strength of products in terms of two components, security capability (as a Protection Profile) and assurance levels (as EAL levels). None of these ultra-high assurance secure general purpose operating systems have been produced for decades or certified under the Common Criteria.

Computer Security

Computer security is a branch of information security applied to both theoretical and actual computer systems, and a branch of computer science that addresses enforcement of 'secure' behavior on the operation of computers. The definition of 'secure' varies by application, and is typically defined implicitly or explicitly by a security policy that addresses confidentiality, integrity and availability of electronic information that is processed by or stored on computer systems.

The traditional approach is to create a trusted security kernel that exploits special-purpose hardware mechanisms in the microprocessor to constrain the operating system and the application programs to conform to the security policy. These systems can isolate processes and data to specific domains and restrict access and privileges of users. This approach avoids trusting most of the operating system and applications.

In addition to restricting actions to a secure subset, a secure system should still permit authorized users to carry out legitimate and useful tasks. It might be possible to secure a computer against misuse using extreme measures:

The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts.
Eugene H. Spafford, director of the Purdue Center for Education and Research in Information Assurance and Security.

It is important to distinguish the techniques used to increase a system's security from the issue of that system's security status. In particular, systems which contain fundamental flaws in their security designs cannot be made secure without compromising their usability. Most computer systems cannot be made secure even after the application of extensive "computer security" measures. Furthermore, if they are made secure, functionality and ease of use often decrease.

Computer security can also be seen as a subfield of security engineering, which looks at broader security issues in addition to computer security.

Network Tomography

Network tomography is the study of a network's internal characteristics using information derived from end point data. The word tomography is used to link the field, in concept, to other processes that infer the internal characteristics of an object from external observation, as is done in magnetic resonance imaging or positron emission tomography. The field is a recent development in electrical engineering and computer science, founded in 1996. Network tomography holds that it is possible to map the path data takes through the Internet by examining information from "edge nodes," the computers where data originates and is requested.
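A tiny, self-contained illustration of the inference step, using made-up numbers: a sender reaches two receivers over one shared link that then branches, and, assuming losses on different links are independent, the per-link loss rates follow from nothing but the end-to-end reception counts observed at the edge.

    #include <stdio.h>

    /* Illustrative sketch of loss tomography on the simplest multicast tree:
     * a sender reaches two receivers through one shared link (link 0) that
     * branches into link 1 and link 2.  Only end-to-end observations are used:
     * for each probe we see whether receiver 1 and/or receiver 2 got it.
     * Assuming losses on different links are independent, the per-link success
     * rates can be inferred from the three observable frequencies.
     * All counts below are invented for illustration. */
    int main(void)
    {
        double probes = 10000.0;
        double recv1  = 8550.0;   /* probes seen by receiver 1     */
        double recv2  = 8075.0;   /* probes seen by receiver 2     */
        double both   = 7268.0;   /* probes seen by both receivers */

        double p1  = recv1 / probes;          /* = s0 * s1      */
        double p2  = recv2 / probes;          /* = s0 * s2      */
        double p12 = both  / probes;          /* = s0 * s1 * s2 */

        double s0 = (p1 * p2) / p12;          /* shared link           */
        double s1 = p12 / p2;                 /* branch to receiver 1  */
        double s2 = p12 / p1;                 /* branch to receiver 2  */

        printf("estimated loss: shared link %.1f%%, link 1 %.1f%%, link 2 %.1f%%\n",
               100.0 * (1.0 - s0), 100.0 * (1.0 - s1), 100.0 * (1.0 - s2));
        return 0;
    }

With these made-up counts the sketch recovers roughly 5%, 10% and 15% loss for the shared link and the two branches, none of which was measured directly.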

The field is useful for engineers attempting to develop more efficient computer networks. Data derived from network tomography studies can be used to increase quality of service by limiting link packet loss and increasing routing optimization.

Internalnet

An internalnet is a computer network composed of devices inside and on the human body. Such a system could be used to link nanochondria, bionic implants, wearable computers, and other devices.

Interfaces and Their Use in Ambient Networks

The ACS (Ambient Control Space) is the internal control space of an Ambient Network. It contains the functions that can be accessed, and it is in full control of the resources of the network. The Ambient Networks infrastructure does not deal with nodes; instead it deals with networks, though at the beginning all the "networks" might consist of just one node: these "networks" need to merge in order to form a network in the original sense of the word. A composition establishment consists of the negotiation and then the realization of a Composition Agreement. This merging can be fully automatic. The decision whether to merge is made using pre-configured policies.

There are three interfaces present to communicate with an ACS. These are:

  • ANI: Ambient Network Interface. If a network wants to join in, it has to do so through this interface.
  • ASI: Ambient Service Interface. If a function needs to be accessed inside the ACS, this Interface is used.
  • ARI: Ambient Resource Interface. If a resource inside a network needs to be accessed (e.g. the volume of the traffic), this interface is used.

Interfaces are used in order to hide the internal structures of the underlying network.
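Purely as a hypothetical sketch (none of the type or function names below come from the Ambient Networks specification), the three interfaces could be modelled as the only handles an outside party ever touches, with the network's internal structure hidden behind them.

    #include <stdio.h>

    /* Hypothetical modelling of the three ACS interfaces: outsiders interact
     * with an ACS only through ANI, ASI and ARI, while its internals stay
     * behind an opaque structure.  All names are illustrative only. */

    struct acs;                       /* internal structure deliberately hidden */

    struct ani { int  (*compose)(struct acs *self, struct acs *peer); };    /* networks join    */
    struct asi { void (*invoke)(struct acs *self, const char *function); }; /* access functions */
    struct ari { long (*traffic_volume)(struct acs *self); };               /* access resources */

    struct acs {
        struct ani ani;
        struct asi asi;
        struct ari ari;
        long traffic;                 /* stand-in for the controlled resources */
    };

    static int compose_impl(struct acs *self, struct acs *peer)
    {
        /* A Composition Agreement would be negotiated here according to
         * pre-configured policies; the sketch just merges the resource view. */
        self->traffic += peer->traffic;
        return 0;
    }

    static void invoke_impl(struct acs *self, const char *function)
    {
        (void)self;
        printf("ACS executing function: %s\n", function);
    }

    static long traffic_impl(struct acs *self) { return self->traffic; }

    static struct acs make_acs(long traffic)
    {
        struct acs a = { { compose_impl }, { invoke_impl }, { traffic_impl }, traffic };
        return a;
    }

    int main(void)
    {
        struct acs a = make_acs(100), b = make_acs(40);
        a.ani.compose(&a, &b);                 /* b joins through a's ANI     */
        a.asi.invoke(&a, "mobility-support");  /* a function accessed via ASI */
        printf("traffic via ARI: %ld\n", a.ari.traffic_volume(&a));
        return 0;
    }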

If two networks meet and decide to merge, a new ACS will be formed from the two (though each network will keep its own ACS, along with its interfaces, inside this new global ACS). The newly composed ACS will of course have its own ANI, ASI and ARI, and will use these interfaces in order to merge with other Ambient Networks. Other options for composition are not to merge the two Ambient Networks (Network Interworking) or to establish a new virtual ACS that exercises joint control over a given set of shared resources (Control Sharing).