Principles in analysis, design, implementation, and operation to ensure secure IT systems

From the course: CISSP Cert Prep (2021): 3 Security Architecture and Engineering

Secure design principles

- [Instructor] Systems engineering is an important technology discipline where practitioners are charged with taking many different and complex technical components and assembling them into a functional system that meets business objectives and security requirements at the same time. We'll be exploring many different aspects of security engineering throughout this course, including many of the technologies used to build secure systems, but let's begin with a look at some general security engineering principles, including incorporating security in the design process, the Subject/Object Model, failure modes, isolation, and validation.

Security must be treated as a key component of the design process. If you look back at major security failures over the past decade, many shared a common theme: some aspect of security was either entirely overlooked or treated as an afterthought after the system design was complete. In this approach, known as bolt-on security, security engineers attempt to retrofit an existing system with security features designed to protect confidentiality, integrity, and availability. While this approach is sometimes successful, it is much less effective and much more expensive than simply including security in the design from the start.

One of the core security models underlying many system designs is the Subject/Object Model of computer security. In this approach, every access request is seen as having two different components: a subject, who is requesting access to a resource, and an object, which is the resource being requested. For example, if a user requests access to a file, the user is the subject of the request and the file is the object of the request. Similarly, a process might request access to a memory location. In this example, the process is the subject of the request and the memory contents are the object. When designing a system, it's often helpful to describe access requests in these terms, making the identity of the subject and object explicitly clear for each request. Working in this way allows the design of access control systems with easily defined and implemented rules.

Another important security decision is how a system should behave in the event of a failure. Now there are two possible failure modes. In a fail open system, if the security controls fail, they are automatically bypassed. This approach favors continued business operations, but it may pose an unacceptable security risk. In a fail secure (or fail closed) system, if a security control fails, the system locks itself down to a state where no access is granted. An easy way to understand this is by imagining a firewall at the perimeter of a network. If the firewall fails and it's configured in a fail open state, the network will continue operating after the failure without the benefit of firewall protection. Now, this probably is not a good idea, and firewalls are usually configured in a fail secure state that blocks traffic flow in the event of a failure. As a counterexample, an intrusion detection system may play a less critical role in a cybersecurity program. While it is highly desirable to inspect traffic for signs of malicious activity, administrators might decide that, even in the event of an intrusion detection system failure, the cost of disrupting network activity doesn't warrant putting the system in a fail secure mode.

Other core principles that you'll find throughout discussions of security engineering are the concepts of isolation and segmentation.
In many cases, engineers design secure systems so that different components can't communicate with each other unless absolutely necessary. Let's look at a few examples. In the case of network segmentation, an organization's accounting department, executive team, and sales team might reside on different network segments to protect their computers from each other. This way, an intruder gaining access to one network segment would still have some work to do before compromising another network segment. Inside a computing system, process isolation ensures that different programs don't have access to each other. This isolates software so that one malicious program has a harder time affecting other processes running on the same system. Along those same lines, systems also perform memory segmentation to prevent different processes from accessing memory assigned to other processes. And finally, in the world of virtualization, the hypervisor is responsible for virtual machine isolation, ensuring that different virtual machines don't have access to each other's resources.
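
To make the Subject/Object Model concrete, here is a minimal sketch in Python of an access check phrased in those terms. The rule set, names, and actions are all illustrative, not drawn from any particular product:

```python
# Minimal sketch of the Subject/Object Model: every access request names
# a subject (who is asking) and an object (what is being asked for).
from dataclasses import dataclass

@dataclass(frozen=True)
class Subject:
    name: str          # e.g., a user or a process

@dataclass(frozen=True)
class Obj:
    name: str          # e.g., a file or a memory region

# Access rules keyed by (subject, object, action); entries are illustrative.
RULES = {
    ("alice", "payroll.txt", "read"),
    ("backup", "payroll.txt", "read"),
}

def is_allowed(subject: Subject, obj: Obj, action: str) -> bool:
    """Check whether the subject may perform the action on the object."""
    return (subject.name, obj.name, action) in RULES

print(is_allowed(Subject("alice"), Obj("payroll.txt"), "read"))   # True
print(is_allowed(Subject("alice"), Obj("payroll.txt"), "write"))  # False
```

Describing every request this way makes the access control rules easy to state, and just as easy to implement.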


13.2 Design Principles

The principles of secure design discussed in this section express common-sense applications of simplicity and restriction in terms of computing. We will discuss detailed applications of these principles throughout the remainder of Part 5, and in Part 8, "Practicum." However, we will mention examples here.

13.2.1 Principle of Least Privilege

This principle restricts how privileges are granted.

Definition 13–1.

The principle of least privilege states that a subject should be given only those privileges that it needs in order to complete its task.

If a subject does not need an access right, the subject should not have that right. Furthermore, the function of the subject (as opposed to its identity) should control the assignment of rights. If a specific action requires that a subject's access rights be augmented, those extra rights should be relinquished immediately on completion of the action. This is the analogue of the "need to know" rule: if the subject does not need access to an object to perform its task, it should not have the right to access that object. More precisely, if a subject needs to append to an object, but not to alter the information already contained in the object, it should be given append rights and not write rights.
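
As an illustration of append rights versus write rights, POSIX systems can approximate append-only access at the file-descriptor level. A minimal sketch, assuming a hypothetical audit.log file (this is only an approximation; as the next paragraph observes, most systems lack a true append-only privilege):

```python
# Sketch: request append access only, per the principle of least privilege.
# O_APPEND forces every write through this descriptor to the end of the
# file, so existing log entries cannot be overwritten through it.
import os

fd = os.open("audit.log", os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o640)
os.write(fd, b"event: user login\n")
os.close(fd)  # relinquish the right as soon as the task is complete
```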

In practice, most systems do not have the granularity of privileges and permissions required to apply this principle precisely. The designers of security mechanisms then apply this principle as best they can. In such systems, the consequences of security problems are often more severe than the consequences for systems that adhere to this principle.

Example

The UNIX operating system does not apply access controls to the user root. That user can terminate any process and read, write, or delete any file. Thus, users who create backups can also delete files. The administrator account on Windows has the same powers.

This principle requires that processes be confined to as small a protection domain as possible.

Example

A mail server accepts mail from the Internet and copies the messages into a spool directory; a local server will complete delivery. The mail server needs the rights to access the appropriate network port, to create files in the spool directory, and to alter those files (so it can copy the message into the file, rewrite the delivery address if needed, and add the appropriate "Received" lines). It should surrender the right to access the file as soon as it has finished writing the file into the spool directory, because it does not need to access that file again. The server should not be able to access any user's files, or any files other than its own configuration files.
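
A minimal sketch of this confinement, with the spool path and the unprivileged uid/gid invented for illustration:

```python
# Sketch of the confinement described above: the server writes a message
# into the spool and then surrenders access to the file. The spool path
# and the uid/gid values are illustrative assumptions, not real values.
import os

SPOOL_DIR = "/var/spool/mymail"        # hypothetical spool directory

def spool_message(message: bytes, msg_id: str) -> None:
    """Copy a message into the spool, then give up access to it."""
    path = os.path.join(SPOOL_DIR, msg_id)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, message)
    finally:
        os.close(fd)                   # surrender the right once finished

def drop_privileges(uid: int = 5000, gid: int = 5000) -> None:
    """If started as root (e.g., to bind port 25), shed that power early."""
    if os.getuid() == 0:
        os.setgid(gid)                 # order matters: group before user
        os.setuid(uid)
```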

13.2.2 Principle of Fail-Safe Defaults

This principle restricts how privileges are initialized when a subject or object is created.

Definition 13–2.

The principle of fail-safe defaults states that, unless a subject is given explicit access to an object, it should be denied access to that object.

This principle requires that the default access to an object is none. Whenever access, privileges, or some security-related attribute is not explicitly granted, it should be denied. Moreover, if the subject is unable to complete its action or task, it should undo those changes it made in the security state of the system before it terminates. This way, even if the program fails, the system is still safe.
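
One way to picture the "undo on failure" requirement is a spooling routine that removes its partial file before terminating. This is a sketch under assumed names; spool_safely and the demo directory are illustrative:

```python
# Sketch of fail-safe behavior: if the task cannot be completed, undo the
# changes made to the system state, then fail closed.
import os
import tempfile

def spool_safely(message: bytes, spool_dir: str) -> str:
    """Write a message into spool_dir, removing the partial file on failure."""
    fd, path = tempfile.mkstemp(dir=spool_dir)
    try:
        os.write(fd, message)
        return path
    except OSError:
        os.unlink(path)   # undo: leave no partial state behind
        raise             # fail closed: report the error and stop
    finally:
        os.close(fd)

demo_dir = tempfile.mkdtemp()          # stand-in for the spool directory
print(spool_safely(b"hello", demo_dir))
```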

Example

If the mail server is unable to create a file in the spool directory, it should close the network connection, issue an error message, and stop. It should not try to store the message elsewhere or to expand its privileges to save the message in another location, because an attacker could use that ability to overwrite other files or fill up other disks (a denial of service attack). The protections on the mail spool directory itself should allow create and write access only to the mail server and read and delete access only to the local server. No other user should have access to the directory.

In practice, most systems will allow an administrator access to the mail spool directory. By the principle of least privilege, that administrator should be able to access only the subjects and objects involved in mail queueing and delivery. As we have seen, this constraint minimizes the threats if that administrator's account is compromised. The mail system can be damaged or destroyed, but nothing else can be.

13.2.3 Principle of Economy of Mechanism

This principle simplifies the design and implementation of security mechanisms.

Definition 13–3.

The principle of economy of mechanism states that security mechanisms should be as simple as possible.

If a design and implementation are simple, fewer possibilities exist for errors. The checking and testing process is less complex, because fewer components and cases need to be tested. Complex mechanisms often make assumptions about the system and environment in which they run. If these assumptions are incorrect, security problems may result.

Example

The ident protocol [861] sends the user name associated with a process that has a TCP connection to a remote host. A mechanism on host A that allows access based on the result of an ident query makes the assumption that the originating host is trustworthy. If host B decides to attack host A, it can connect and then send any identity it chooses in response to the ident request. This is an example of a mechanism making an incorrect assumption about the environment (specifically, that host B can be trusted).

Interfaces to other modules are particularly suspect, because modules often make implicit assumptions about input or output parameters or the current system state; should any of these assumptions be wrong, the module's actions may produce unexpected, and erroneous, results. Interaction with external entities, such as other programs, systems, or humans, amplifies this problem.

Example

The finger protocol transmits information about a user or system [1072]. Many client implementations assume that the server's response is well-formed. However, if an attacker were to create a server that generated an infinite stream of characters, and a finger client were to connect to it, the client would print all the characters. As a result, log files and disks could be filled up, resulting in a denial of service attack on the querying host. This is an example of incorrect assumptions about the input to the client.
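
A client can defend against this by refusing to read an unbounded response. Here is a sketch of such a finger client; the byte limit and timeout are illustrative choices:

```python
# Sketch of a finger client that does not trust the server to be well
# behaved: it caps how much output it will accept. Finger runs over TCP
# port 79; the client sends a user name terminated by CRLF.
import socket

MAX_BYTES = 64 * 1024          # refuse to read an unbounded stream

def finger(user: str, host: str, timeout: float = 5.0) -> bytes:
    data = b""
    with socket.create_connection((host, 79), timeout=timeout) as sock:
        sock.sendall(user.encode("ascii") + b"\r\n")
        while len(data) < MAX_BYTES:
            chunk = sock.recv(4096)
            if not chunk:      # server closed the connection normally
                break
            data += chunk
    return data[:MAX_BYTES]
```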

13.2.4 Principle of Complete Mediation

This principle restricts the caching of information, which often leads to simpler implementations of mechanisms.

Definition 13–4.

The principle of complete mediation requires that all accesses to objects be checked to ensure that they are allowed.

Whenever a subject attempts to read an object, the operating system should mediate the action. First, it determines if the subject is allowed to read the object. If so, it provides the resources for the read to occur. If the subject tries to read the object again, the system should check that the subject is still allowed to read the object. Most systems would not make the second check. They would cache the results of the first check and base the second access on the cached results.

Example

When a UNIX process tries to read a file, the operating system determines if the process is allowed to read the file. If so, the process receives a file descriptor encoding the allowed access. Whenever the process wants to read the file, it presents the file descriptor to the kernel. The kernel then allows the access.

If the owner of the file disallows the process permission to read the file after the file descriptor is issued, the kernel still allows access. This scheme violates the principle of complete mediation, because the second access is not checked. The cached value is used, resulting in the denial of access being ineffective.
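
By contrast, a reference monitor that honors complete mediation re-checks the policy on every access, so revocation takes effect immediately. A minimal sketch (the policy store and names are illustrative):

```python
# Sketch of complete mediation: the monitor consults the policy on every
# read instead of caching the result of the first check.
class ReferenceMonitor:
    def __init__(self):
        self.policy = {("alice", "report.txt")}   # allowed (subject, object)

    def revoke(self, subject: str, obj: str) -> None:
        self.policy.discard((subject, obj))

    def read(self, subject: str, obj: str) -> str:
        # Checked on *every* access, so revocation is effective at once.
        if (subject, obj) not in self.policy:
            raise PermissionError(f"{subject} may not read {obj}")
        return f"<contents of {obj}>"

rm = ReferenceMonitor()
print(rm.read("alice", "report.txt"))    # allowed
rm.revoke("alice", "report.txt")
try:
    rm.read("alice", "report.txt")       # now denied
except PermissionError as e:
    print(e)
```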

Example

The Domain Name Service (DNS) caches information mapping host names into IP addresses. If an attacker is able to "poison" the cache by implanting records associating a bogus IP address with a name, one host will route connections to another host incorrectly. Section 14.6.1.2 discusses this in more detail.

13.2.5 Principle of Open Design

This principle suggests that complexity does not add security.

Definition 13–5.

The principle of open design states that the security of a mechanism should not depend on the secrecy of its design or implementation.

Designers and implementers of a program must not depend on secrecy of the details of their design and implementation to ensure security. Others can ferret out such details either through technical means, such as disassembly and analysis, or through nontechnical means, such as searching through garbage receptacles for source code listings (called "dumpster-diving"). If the strength of the program's security depends on the ignorance of the user, a knowledgeable user can defeat that security mechanism. The term "security through obscurity" captures this concept exactly.

This is especially true of cryptographic software and systems. Because cryptography is a highly mathematical subject, companies that market cryptographic software or use cryptography to protect user data frequently keep their algorithms secret. Experience has shown that such secrecy adds little if anything to the security of the system. Worse, it gives an aura of strength that is all too often lacking in the actual implementation of the system.

Keeping cryptographic keys and passwords secret does not violate this principle, because a key is not an algorithm. However, keeping the enciphering and deciphering algorithms secret would violate it.
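
Modern cryptographic practice follows this principle: the algorithm is public and only the key is secret. A short sketch using HMAC with SHA-256, both published standards (the key handling here is simplified for illustration):

```python
# Sketch of open design applied to cryptography: the algorithms (HMAC and
# SHA-256) are completely public; the key is the only secret in the system.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)            # the only secret
msg = b"transfer $100 to account 42"

tag = hmac.new(key, msg, hashlib.sha256).digest()

# The verifier recomputes the tag with the shared key; an attacker who
# knows the algorithm but not the key cannot forge a valid tag.
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
```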

Issues of proprietary software and trade secrets complicate the application of this principle. In some cases, companies may not want their designs made public, lest their competitors use them. The principle then requires that the design and implementation be available to people barred from disclosing it outside the company.

Example

The Content Scrambling System (CSS) is a cryptographic algorithm that protects DVD movie disks from unauthorized copying. The DVD disk has an authentication key, a disk key, and a title key. The title key is enciphered with the disk key. A block on the DVD contains several copies of the disk key, each enciphered by a different player key, and a checksum of the disk key. When a DVD is inserted into a DVD player, the algorithm reads the authentication key. It then deciphers the disk keys using the DVD player's unique key. When it finds a deciphered key with the correct hash, it uses that key to decipher the title key, and it uses the title key to decipher the movie [971]. (Figure 13–1 shows the layout of the keys.) The authentication and disk keys are not located in the file containing the movie, so if one copies the file, one still needs the DVD disk in the DVD player to be able to play the movie.

Figure 13–1 DVD key layout. KA is the authentication key, KT the title key, KD the disk key, and KPi the key for DVD player i. The disk key is enciphered once for each player key.

In 1999, a group in Norway acquired a (software) DVD playing program that had an unenciphered key. They also derived an algorithm completely compatible with the CSS algorithm from the software. This enabled them to decipher any DVD movie file. Software that could perform these functions rapidly became available throughout the Internet, much to the discomfort of the DVD Copyright Control Association, which promptly sued to prevent the code from being made public [783, 798]. As if to emphasize the problems of providing security by concealing algorithms, the plaintiff's lawyers filed a declaration containing the source code of an implementation of the CSS algorithm. When they realized this, they requested that the declaration be sealed from public view. By then, the declaration had been posted on several Internet sites, including one that had more than 21,000 downloads of the declaration before the court sealed it [671].
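
To make the key-unwrapping logic concrete, here is a structural toy sketch. The real CSS cipher is a proprietary 40-bit stream cipher; XOR and a truncated SHA-256 stand in purely to show the structure, and every key value below is made up:

```python
# Toy sketch of the CSS key hierarchy: the disk key is stored enciphered
# under each licensed player key, together with a checksum so a player can
# recognize a correct decryption. XOR is its own inverse, so the same
# routine enciphers and deciphers.
import hashlib
from typing import Optional

def toy_decipher(key: bytes, blob: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

def toy_hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()[:8]

disk_key = b"DISKKEY!"
player_keys = [b"PLAYER_1", b"PLAYER_2", b"PLAYER_3"]

disk_block = [toy_decipher(pk, disk_key) for pk in player_keys]
checksum = toy_hash(disk_key)

def recover_disk_key(my_player_key: bytes) -> Optional[bytes]:
    for enciphered in disk_block:
        candidate = toy_decipher(my_player_key, enciphered)
        if toy_hash(candidate) == checksum:   # found the right entry
            return candidate                  # now usable on the title key
    return None

assert recover_disk_key(b"PLAYER_2") == disk_key   # a licensed player succeeds
assert recover_disk_key(b"BADKEY!!") is None       # an unknown key fails
```

As the Norwegian episode shows, once a single player key leaked, this entire structure could be unwound; the secrecy of the algorithm added nothing.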

13.2.6 Principle of Separation of Privilege

This principle is restrictive because it limits access to system entities.

Definition 13–6.

The principle of separation of privilege states that a system should not grant permission based on a single condition.

This principle is equivalent to the separation of duty principle discussed in Section 6.1. For example, company checks for more than $75,000 must be signed by two officers of the company. If either does not sign, the check is not valid. The two conditions are the signatures of both officers.

Similarly, systems and programs granting access to resources should do so only when more than one condition is met. This provides a fine-grained control over the resource as well as additional assurance that the access is authorized.

Example

On Berkeley-based versions of the UNIX operating system, users are not allowed to change from their accounts to the root account unless two conditions are met. The first condition is that the user knows the root password. The second condition is that the user is in the wheel group (the group with GID 0). Meeting either condition is not sufficient to acquire root access; meeting both conditions is required.
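
A sketch modeling that two-condition check (the stored hash, group list, and use of plain SHA-256 are illustrative; a real system would use a salted, slow password hash):

```python
# Sketch of separation of privilege: root access requires knowing the root
# password AND membership in the wheel group. Either alone is insufficient.
import hashlib

ROOT_PW_HASH = hashlib.sha256(b"correct horse").hexdigest()
WHEEL_GROUP = {"alice", "sysadmin"}

def may_become_root(user: str, supplied_password: str) -> bool:
    knows_password = (
        hashlib.sha256(supplied_password.encode()).hexdigest() == ROOT_PW_HASH
    )
    in_wheel = user in WHEEL_GROUP
    return knows_password and in_wheel   # both conditions, not either

assert may_become_root("alice", "correct horse")        # both conditions met
assert not may_become_root("mallory", "correct horse")  # not in wheel
assert not may_become_root("alice", "guess")            # wrong password
```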

13.2.7 Principle of Least Common Mechanism

This principle is restrictive because it limits sharing.

Definition 13–7.

The principle of least common mechanism states that mechanisms used to access resources should not be shared.

Sharing resources provides a channel along which information can be transmitted, and so such sharing should be minimized. In practice, if the operating system provides support for virtual machines, the operating system will enforce this principle automatically to some degree (see Chapter 17, "Confinement Problem"). Otherwise, it will provide some support (such as a virtual memory space) but not complete support (because the file system will appear as shared among several processes).

Example

A Web site provides electronic commerce services for a major company. Attackers want to deprive the company of the revenue it obtains from that Web site. They flood the site with messages and tie up the electronic commerce services. Legitimate customers are unable to access the Web site and, as a result, take their business elsewhere.

Here, the sharing of the Internet with the attackers' sites caused the attack to succeed. The appropriate countermeasure would be to restrict the attackers' access to the segment of the Internet connected to the Web site. Techniques for doing this include proxy servers such as the Purdue SYN intermediary [893] or traffic throttling (see Section 26.4, "Availability and Network Flooding"). The former targets suspect connections; the latter reduces the load on the relevant segment of the network indiscriminately.
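
Traffic throttling is commonly implemented with a token bucket, which indiscriminately caps the rate of requests reaching the service. A sketch with illustrative rate and burst parameters:

```python
# Sketch of traffic throttling with a token bucket: requests consume
# tokens, tokens refill at a fixed rate, and excess traffic is shed.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate             # tokens added per second
        self.capacity = capacity     # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, up to bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                 # drop or delay the excess request

bucket = TokenBucket(rate=100.0, capacity=20.0)
handled = sum(bucket.allow() for _ in range(1000))
print(f"{handled} of 1000 burst requests admitted")  # roughly the burst size
```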

13.2.8 Principle of Psychological Acceptability

This principle recognizes the human element in computer security.

Definition 13–8.

The principle of psychological acceptability states that security mechanisms should not make the resource more difficult to access than if the security mechanisms were not present.

Configuring and executing a program should be as easy and as intuitive as possible, and any output should be clear, direct, and useful. If security-related software is too complicated to configure, system administrators may unintentionally set up the software in a nonsecure manner. Similarly, security-related user programs must be easy to use and must output understandable messages. If a password is rejected, the password changing program should state why it was rejected rather than giving a cryptic error message. If a configuration file has an incorrect parameter, the error message should describe the proper parameter.
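
A sketch of a password-change check in this spirit, reporting exactly why a candidate password was rejected (the policy rules are illustrative):

```python
# Sketch of psychologically acceptable error reporting: the program states
# why a password was rejected rather than giving a cryptic error.
def check_new_password(password: str) -> list[str]:
    problems = []
    if len(password) < 12:
        problems.append("password must be at least 12 characters long")
    if password.isalpha():
        problems.append("password must contain a digit or punctuation")
    if password.lower() == password:
        problems.append("password must contain an uppercase letter")
    return problems

for reason in check_new_password("secret"):
    print(f"rejected: {reason}")   # tells the user exactly what to fix
```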

Example

The ssh program [1065] allows a user to set up a public key mechanism for enciphering communications between systems. The installation and configuration mechanisms for the UNIX version allow one to arrange that the private key be stored locally without any passphrase protection. In this case, one need not supply a password to connect to the remote system, but will still obtain the enciphered connection. This mechanism satisfies the principle of psychological acceptability.

On the other hand, security requires that the messages impart no unnecessary information.

Example

When a user supplies the wrong password during login, the system should reject the attempt with a message stating that the login failed. If it were to say that the password was incorrect, the user would know that the account name was legitimate. If the "user" were really an unauthorized attacker, she would then know the name of an account for which she could try to guess a password.
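
A sketch of such a login check, returning the same message regardless of which condition failed (the user table and hashing are illustrative):

```python
# Sketch of a login check that leaks no unnecessary information: the same
# message is returned whether the account name or the password is wrong.
import hashlib
import hmac

USERS = {"alice": hashlib.sha256(b"hunter2xyz").hexdigest()}

def login(username: str, password: str) -> str:
    stored = USERS.get(username, "")
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if stored and hmac.compare_digest(stored, supplied):
        return "login succeeded"
    return "login failed"           # never "no such user" or "bad password"

print(login("alice", "wrong"))      # login failed
print(login("nobody", "anything"))  # login failed (same message)
```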

In practice, the principle of psychological acceptability is interpreted to mean that the security mechanism may add some extra burden, but that burden must be both minimal and reasonable.

Example

A mainframe system allows users to place passwords on files. Accessing the files requires that the program supply the password. Although this mechanism violates the principle as stated, it is considered sufficiently minimal to be acceptable. On an interactive system, where the pattern of file accesses is more frequent and more transient, this requirement would be too great a burden to be acceptable.
