Computer Security:
Principles and Practice
Fourth Edition
By: William Stallings and Lawrie Brown
Lecture slides prepared for “Computer Security: Principles and Practice”, 4/e, by William Stallings and Lawrie Brown, Chapter 1 “Overview”.
1
Chapter 1
Overview
This chapter provides an overview of computer security. We begin with a discussion
of what we mean by computer security. In essence, computer security deals
with computer-related assets that are subject to a variety of threats and for which
various measures are taken to protect those assets. Accordingly, the next section of
this chapter provides a brief overview of the categories of computer-related assets
that users and system managers wish to preserve and protect, and a look at the
various threats and attacks that can be made on those assets. Then, we survey the
measures that can be taken to deal with such threats and attacks. This we do from
three different viewpoints, in Sections 1.3 through 1.5. We then lay out in general
terms a computer security strategy.
The focus of this chapter, and indeed this book, is on three fundamental
questions:
1. What assets do we need to protect?
2. How are those assets threatened?
3. What can we do to counter those threats?
2
The NIST Internal/Interagency Report NISTIR 7298 (Glossary of Key Information Security Terms, May 2013) defines the term computer security as follows:
“Measures and controls that ensure confidentiality, integrity, and availability of information system
assets including hardware, software, firmware, and information being processed, stored, and communicated.”
3
This definition introduces three key objectives that are at the heart of computer
security:
• Confidentiality: This term covers two related concepts:
— Data confidentiality: Assures that private or confidential information is
not made available or disclosed to unauthorized individuals.
— Privacy: Assures that individuals control or influence what information
related to them may be collected and stored and by whom and to whom
that information may be disclosed.
• Integrity: This term covers two related concepts:
— Data integrity: Assures that information and programs are changed only
in a specified and authorized manner.
— System integrity: Assures that a system performs its intended function in
an unimpaired manner, free from deliberate or inadvertent unauthorized
manipulation of the system.
• Availability: Assures that systems work promptly and service is not denied to
authorized users.
4
These three concepts form what is often referred to as the CIA triad. The three
concepts embody the fundamental security objectives for both data and for information
and computing services. For example, the NIST standard FIPS 199 (Standards for Security
Categorization of Federal Information and Information Systems, February 2004) lists confidentiality,
integrity, and availability as the three security objectives for information and
for information systems.
Although the use of the CIA triad to define security objectives is well established,
some in the security field feel that additional concepts are needed to present a
complete picture (see Figure 1.1). Two of the most commonly mentioned are as follows:
• Authenticity: The property of being genuine and being able to be verified and
trusted; confidence in the validity of a transmission, a message, or message
originator. This means verifying that users are who they say they are and that
each input arriving at the system came from a trusted source.
• Accountability: The security goal that generates the requirement for actions
of an entity to be traced uniquely to that entity. This supports nonrepudiation,
deterrence, fault isolation, intrusion detection and prevention, and after-action
recovery and legal action. Because truly secure systems are not yet an achievable
goal, we must be able to trace a security breach to a responsible party.
Systems must keep records of their activities to permit later forensic analysis
to trace security breaches or to aid in transaction disputes.
Note that FIPS 199 includes authenticity under integrity.
Key Security Concepts
FIPS 199 provides a useful characterization of these three objectives in terms of requirements
and the definition of a loss of security in each category:
• Confidentiality: Preserving authorized restrictions on information access
and disclosure, including means for protecting personal privacy and proprietary
information. A loss of confidentiality is the unauthorized disclosure of
information.
• Integrity: Guarding against improper information modification or destruction,
including ensuring information non-repudiation and authenticity. A loss of
integrity is the unauthorized modification or destruction of information.
• Availability: Ensuring timely and reliable access to and use of information.
A loss of availability is the disruption of access to or use of information or an
information system.
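The FIPS 199 loss definitions above lend themselves to a small illustration: classifying an incident report by the objective whose loss it describes. This is a toy sketch, assuming invented keyword rules and incident strings; none of it comes from the standard itself.

```python
from enum import Enum
from typing import Optional

class Objective(Enum):
    CONFIDENTIALITY = "confidentiality"  # loss = unauthorized disclosure
    INTEGRITY = "integrity"              # loss = unauthorized modification or destruction
    AVAILABILITY = "availability"        # loss = disruption of access or use

# Hypothetical keyword rules mapping an incident description to the
# objective whose loss it represents (following the FIPS 199 definitions).
RULES = {
    Objective.CONFIDENTIALITY: ("disclosed", "leaked", "exposed"),
    Objective.INTEGRITY: ("modified", "destroyed", "altered"),
    Objective.AVAILABILITY: ("unavailable", "disrupted", "slow"),
}

def violated_objective(report: str) -> Optional[Objective]:
    """Return the security objective whose loss the report describes, if any."""
    text = report.lower()
    for objective, keywords in RULES.items():
        if any(word in text for word in keywords):
            return objective
    return None

print(violated_objective("Student records were disclosed on the Web"))  # CONFIDENTIALITY
print(violated_objective("Grades were modified in the database"))       # INTEGRITY
```

Real incident classification is far subtler than keyword matching; the point is only that each loss definition names a distinct, testable condition.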
5
Confidentiality
Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information
Integrity
Guarding against improper information modification or destruction, including ensuring information nonrepudiation and authenticity
Availability
Ensuring timely and reliable access to and use of information
Levels of Impact
We use three levels of impact on organizations or
individuals should there be a breach of security (i.e., a loss of confidentiality, integrity,
or availability). These levels are defined in FIPS 199:
• Low: The loss could be expected to have a limited adverse effect on organizational
operations, organizational assets, or individuals. A limited adverse effect
means that, for example, the loss of confidentiality, integrity, or availability
might (i) cause a degradation in mission capability to an extent and duration
that the organization is able to perform its primary functions, but the effectiveness
of the functions is noticeably reduced; (ii) result in minor damage to
organizational assets; (iii) result in minor financial loss; or (iv) result in minor
harm to individuals.
• Moderate: The loss could be expected to have a serious adverse effect on
organizational operations, organizational assets, or individuals. A serious
adverse effect means that, for example, the loss might (i) cause a significant
degradation in mission capability to an extent and duration that the organization
is able to perform its primary functions, but the effectiveness of the functions
is significantly reduced; (ii) result in significant damage to organizational
assets; (iii) result in significant financial loss; or (iv) result in significant harm
to individuals that does not involve loss of life or serious, life-threatening
injuries.
• High: The loss could be expected to have a severe or catastrophic adverse
effect on organizational operations, organizational assets, or individuals. A
severe or catastrophic adverse effect means that, for example, the loss might
(i) cause a severe degradation in or loss of mission capability to an extent
and duration that the organization is not able to perform one or more of its
primary functions; (ii) result in major damage to organizational assets; (iii)
result in major financial loss; or (iv) result in severe or catastrophic harm to
individuals involving loss of life or serious life-threatening injuries.
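FIPS 199 expresses a system's security categorization as an impact level for each of the three objectives, and the companion standard FIPS 200 applies a "high-water mark": the overall impact level is the most severe of the three. A minimal sketch of that rule:

```python
# FIPS 199 impact levels, ordered from least to most severe.
LEVELS = ["low", "moderate", "high"]

def overall_impact(confidentiality: str, integrity: str, availability: str) -> str:
    """High-water mark: the overall impact level is the most severe of the
    three per-objective levels (the rule FIPS 200 applies to FIPS 199
    security categorizations)."""
    return max((confidentiality, integrity, availability), key=LEVELS.index)

# SC = {(confidentiality, low), (integrity, moderate), (availability, moderate)}
print(overall_impact("low", "moderate", "moderate"))  # moderate
```

For example, a system categorized low for confidentiality but moderate for integrity and availability is treated overall as a moderate-impact system.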
6
Low
The loss could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals
Moderate
The loss could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals
High
The loss could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals
Computer Security Challenges
7
Computer security is both fascinating and complex. Some of the reasons follow:
1. Computer security is not as simple as it might first appear to the novice. The
requirements seem to be straightforward; indeed, most of the major requirements
for security services can be given self-explanatory one-word labels:
confidentiality, authentication, nonrepudiation, and integrity. But the mechanisms
used to meet those requirements can be quite complex, and understanding
them may involve rather subtle reasoning.
2. In developing a particular security mechanism or algorithm, one must always consider
potential attacks on those security features. In many cases, successful attacks
are designed by looking at the problem in a completely different way, thereby
exploiting an unexpected weakness in the mechanism.
3. Because of Point 2, the procedures used to provide particular services are often
counterintuitive. Typically, a security mechanism is complex, and it is not obvious
from the statement of a particular requirement that such elaborate measures are
needed. Only when the various aspects of the threat are considered do elaborate
security mechanisms make sense.
4. Having designed various security mechanisms, it is necessary to decide where to
use them. This is true both in terms of physical placement (e.g., at what points in
a network are certain security mechanisms needed) and in a logical sense [e.g.,
at what layer or layers of an architecture such as TCP/IP (Transmission Control
Protocol/Internet Protocol) should mechanisms be placed].
5. Security mechanisms typically involve more than a particular algorithm or
protocol. They also require that participants be in possession of some secret
information (e.g., an encryption key), which raises questions about the creation,
distribution, and protection of that secret information. There may also be a reliance
on communications protocols whose behavior may complicate the task of
developing the security mechanism. For example, if the proper functioning of the
security mechanism requires setting time limits on the transit time of a message
from sender to receiver, then any protocol or network that introduces variable,
unpredictable delays may render such time limits meaningless.
6. Computer security is essentially a battle of wits between a perpetrator who tries
to find holes, and the designer or administrator who tries to close them. The great
advantage that the attacker has is that he or she need only find a single weakness,
while the designer must find and eliminate all weaknesses to achieve perfect
security.
7. There is a natural tendency on the part of users and system managers to perceive
little benefit from security investment until a security failure occurs.
8. Security requires regular, even constant monitoring, and this is difficult in today’s
short-term, overloaded environment.
9. Security is still too often an afterthought to be incorporated into a system after
the design is complete, rather than being an integral part of the design process.
10. Many users and even security administrators view strong security as an impediment
to efficient and user-friendly operation of an information system or use
of information.
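Challenge 5's remark about transit-time limits can be made concrete with a freshness check: a receiver that rejects messages outside a fixed time window will misbehave on any network with large, unpredictable delays. The window value and timestamps below are hypothetical.

```python
import time

FRESHNESS_WINDOW = 5.0  # seconds: hypothetical limit on sender-to-receiver transit

def is_fresh(sent_at: float, received_at: float) -> bool:
    """Accept a message only if it arrived within the freshness window.
    On a network with variable, unpredictable delay, legitimate messages
    get rejected: exactly the complication challenge 5 describes."""
    return 0.0 <= received_at - sent_at <= FRESHNESS_WINDOW

now = time.time()
print(is_fresh(now, now + 2.0))   # prompt delivery: accepted
print(is_fresh(now, now + 30.0))  # long, variable delay: rejected
```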
1. Computer security is not as simple as it might first appear to the novice
2. In developing a particular security mechanism or algorithm, one must always consider potential attacks on those security features
3. Procedures used to provide particular services are often counterintuitive
4. Physical and logical placement needs to be determined
5. Security mechanisms typically involve more than a particular algorithm or protocol and also require that participants be in possession of some secret information which raises questions about the creation, distribution, and protection of that secret information
6. Attackers only need to find a single weakness, while the designer must find and eliminate all weaknesses to achieve perfect security
7. There is a natural tendency on the part of users and system managers to perceive little benefit from security investment until a security failure occurs
8. Security requires regular and constant monitoring
9. Security is still too often an afterthought to be incorporated into a system after the design is complete, rather than being an integral part of the design process
10. Many users and even security administrators view strong security as an impediment to efficient and user-friendly operation of an information system or use of information
Table 1.1
Computer Security Terminology, from RFC 2828, Internet Security Glossary, May 2000
Adversary (threat agent)
Individual, group, organization, or government that conducts or has the intent to conduct detrimental activities.
Attack
Any kind of malicious activity that attempts to collect, disrupt, deny, degrade, or destroy information system resources or the information itself.
Countermeasure
A device or technique that has as its objective the impairment of the operational effectiveness of undesirable or adversarial activity, or the prevention of espionage, sabotage, theft, or unauthorized access to or use of sensitive information or information systems.
Risk
A measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of 1) the adverse impacts that would arise if the circumstance or event occurs; and 2) the likelihood of occurrence.
Security Policy
A set of criteria for the provision of security services. It defines and constrains the activities of a data processing facility in order to maintain a condition of security for systems and data.
System Resource (Asset)
A major application, general support system, high impact program, physical plant, mission critical system, personnel, equipment, or a logically related group of systems.
Threat
Any circumstance or event with the potential to adversely impact organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, other organizations, or the Nation through an information system via unauthorized access, destruction, disclosure, modification of information, and/or denial of service.
Vulnerability
Weakness in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source.
(Table can be found on page 8 in the textbook)
We now introduce some terminology that will be useful throughout the book, relying
on RFC 2828, Internet Security Glossary. Table 1.1 defines these terms.
8
9
Figure 1.2, based on [CCPS12a], shows the relationship among some of these terms.
We start with the concept of a system resource, or asset, that users and owners wish to protect.
Assets of a Computer System
The assets of a computer system can be categorized as follows:
• Hardware: Including computer systems and other data processing, data storage,
and data communications devices.
• Software: Including the operating system, system utilities, and applications.
• Data: Including files and databases, as well as security-related data, such as
password files.
• Communication facilities and networks: Local and wide area network
communication links, bridges, routers, and so on.
10
Hardware
Software
Data
Communication facilities and networks
Vulnerabilities, Threats, and Attacks
Categories of vulnerabilities
Corrupted (loss of integrity)
Leaky (loss of confidentiality)
Unavailable or very slow (loss of availability)
Threats
Capable of exploiting vulnerabilities
Represent potential security harm to an asset
Attacks (threats carried out)
Passive – attempt to learn or make use of information from the system that does not affect system resources
Active – attempt to alter system resources or affect their operation
Insider – initiated by an entity inside the security perimeter
Outsider – initiated from outside the perimeter
11
In the context of security, our concern is with the vulnerabilities of system
resources. [NRC02] lists the following general categories of vulnerabilities of a
computer system or network asset:
• The system can be corrupted, so it does the wrong thing or gives wrong answers.
For example, stored data values may differ from what they should be because
they have been improperly modified.
• The system can become leaky. For example, someone who should not have access to
some or all of the information available through the network obtains such
access.
• The system can become unavailable or very slow. That is, using the system or network
becomes impossible or impractical.
These three general types of vulnerability correspond to the concepts of integrity,
confidentiality, and availability, enumerated earlier in this section.
Corresponding to the various types of vulnerabilities to a system resource are
threats that are capable of exploiting those vulnerabilities. A threat represents a
potential security harm to an asset. An attack is a threat that is carried out (threat
action) and, if successful, leads to an undesirable violation of security, or threat
consequence. The agent carrying out the attack is referred to as an attacker, or
threat agent. We can distinguish two types of attacks:
• Active attack: An attempt to alter system resources or affect their operation.
• Passive attack: An attempt to learn or make use of information from the
system that does not affect system resources.
We can also classify attacks based on the origin of the attack:
• Inside attack: Initiated by an entity inside the security perimeter (an “insider”).
The insider is authorized to access system resources but uses them in a way not
approved by those who granted the authorization.
• Outside attack: Initiated from outside the perimeter, by an unauthorized or
illegitimate user of the system (an “outsider”). On the Internet, potential
outside attackers range from amateur pranksters to organized criminals, international
terrorists, and hostile governments.
Countermeasures
12
Finally, a countermeasure is any means taken to deal with a security attack.
Ideally, a countermeasure can be devised to prevent a particular type of attack from
succeeding. When prevention is not possible, or fails in some instance, the goal is to
detect the attack and then recover from the effects of the attack. A countermeasure
may itself introduce new vulnerabilities. In any case, residual vulnerabilities
may remain after the imposition of countermeasures. Such vulnerabilities may be
exploited by threat agents representing a residual level of risk to the assets. Owners
will seek to minimize that risk given other constraints.
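The residual-risk idea above, together with Table 1.1's definition of risk as a function of adverse impact and likelihood, can be sketched numerically. The product form and every numeric scale here are illustrative assumptions, not something the glossary mandates.

```python
def risk(impact: float, likelihood: float) -> float:
    """Risk as a function of adverse impact and likelihood of occurrence
    (the shape of the Table 1.1 definition; the simple product is a
    common simplification, assumed here for illustration)."""
    return impact * likelihood

def residual_risk(impact: float, likelihood: float, countermeasure_effect: float) -> float:
    """Risk remaining after a countermeasure reduces the likelihood of a
    successful attack. It stays above zero whenever the countermeasure is
    imperfect, matching the residual-vulnerability discussion above."""
    return risk(impact, likelihood * (1.0 - countermeasure_effect))

before = risk(impact=0.8, likelihood=0.5)
after = residual_risk(impact=0.8, likelihood=0.5, countermeasure_effect=0.9)
print(before, after)  # risk drops by the countermeasure's effect, but not to zero
```

Owners can then compare residual risk against the cost of further countermeasures, which is the minimization "given other constraints" that the text mentions.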
Means used to deal with security attacks
Prevent
Detect
Recover
May itself introduce new vulnerabilities
Residual vulnerabilities may remain
Goal is to minimize residual level of risk to the assets
(Table can be found on page 10 in the textbook)
Table 1.2
Threat Consequences, and the Types of Threat Actions That Cause Each Consequence (Based on RFC 4949)
13
Table 1.2, based on RFC 4949, describes four kinds of threat consequences and lists
the kinds of attacks that result in each consequence.
Unauthorized disclosure is a threat to confidentiality. The following types of
attacks can result in this threat consequence:
• Exposure: This can be deliberate, as when an insider intentionally releases
sensitive information, such as credit card numbers, to an outsider. It can also
be the result of a human, hardware, or software error, which results in an entity
gaining unauthorized knowledge of sensitive data. There have been numerous
instances of this, such as universities accidentally posting student confidential
information on the Web.
• Interception: Interception is a common attack in the context of communications.
On a shared local area network (LAN), such as a wireless LAN or a
broadcast Ethernet, any device attached to the LAN can receive a copy of
packets intended for another device. On the Internet, a determined hacker
can gain access to e-mail traffic and other data transfers. All of these situations
create the potential for unauthorized access to data.
• Inference: An example of inference is known as traffic analysis, in which an
adversary is able to gain information from observing the pattern of traffic on
a network, such as the amount of traffic between particular pairs of hosts on
the network. Another example is the inference of detailed information from
a database by a user who has only limited access; this is accomplished by
repeated queries whose combined results enable inference.
• Intrusion: An example of intrusion is an adversary gaining unauthorized
access to sensitive data by overcoming the system’s access control protections.
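Traffic analysis, the first example of inference above, can be illustrated without reading any message contents: observing only source/destination metadata already reveals which host pairs communicate most. The packet log below is invented.

```python
from collections import Counter

# Hypothetical observed metadata: one (source, destination) pair per packet.
# No payloads are read; the adversary sees only who talks to whom, and how often.
observed = [
    ("hostA", "hostB"), ("hostA", "hostB"), ("hostC", "hostD"),
    ("hostA", "hostB"), ("hostC", "hostD"), ("hostA", "hostB"),
]

def busiest_pairs(packets):
    """Rank host pairs by traffic volume: the inference an adversary can
    draw purely from traffic patterns."""
    return Counter(packets).most_common()

print(busiest_pairs(observed)[0])  # (('hostA', 'hostB'), 4)
```

This is why countermeasures against traffic analysis (such as traffic padding) target the pattern of communication, not just its contents.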
Deception is a threat to either system integrity or data integrity. The following
types of attacks can result in this threat consequence:
• Masquerade: One example of masquerade is an attempt by an unauthorized
user to gain access to a system by posing as an authorized user; this could
happen if the unauthorized user has learned another user’s logon ID and
password. Another example is malicious logic, such as a Trojan horse, that
appears to perform a useful or desirable function but actually gains unauthorized
access to system resources or tricks a user into executing other malicious
logic.
• Falsification: This refers to the altering or replacing of valid data or the introduction
of false data into a file or database. For example, a student may alter
his or her grades on a school database.
• Repudiation: In this case, a user either denies sending data or a user denies
receiving or possessing the data.
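Falsification, such as the altered-grade example, is commonly detected by storing a cryptographic digest of the authorized record and recomputing it later. This sketch uses SHA-256 from Python's standard library; the record format is invented.

```python
import hashlib

def fingerprint(record: str) -> str:
    """SHA-256 digest of a record, stored separately from the record itself."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

# Authorized record and its stored fingerprint (record contents invented).
record = "student=1234;course=CS101;grade=C"
stored = fingerprint(record)

# Later, an integrity check recomputes the digest and compares.
tampered = "student=1234;course=CS101;grade=A"
print(fingerprint(record) == stored)    # True: record unmodified
print(fingerprint(tampered) == stored)  # False: falsification detected
```

In practice the stored digest must itself be protected, for example with a keyed MAC, since an attacker who can alter the record may also recompute a plain hash.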
Disruption is a threat to availability or system integrity. The following types of
attacks can result in this threat consequence:
• Incapacitation: This is an attack on system availability. This could occur as a
result of physical destruction of or damage to system hardware. More typically,
malicious software, such as Trojan horses, viruses, or worms, could operate in
such a way as to disable a system or some of its services.
• Corruption: This is an attack on system integrity. Malicious software in this
context could operate in such a way that system resources or services function
in an unintended manner. Or a user could gain unauthorized access to a system
and modify some of its functions. An example of the latter is a user placing
backdoor logic in the system to provide subsequent access to a system and its
resources by other than the usual procedure.
• Obstruction: One way to obstruct system operation is to interfere with communications
by disabling communication links or altering communication
control information. Another way is to overload the system by placing excess
burden on communication traffic or processing resources.
Usurpation is a threat to system integrity. The following types of attacks can
result in this threat consequence:
• Misappropriation: This can include theft of service. An example is a distributed
denial of service attack, when malicious software is installed on a number of hosts
to be used as platforms to launch traffic at a target host. In this case, the malicious
software makes unauthorized use of processor and operating system resources.
• Misuse: Misuse can occur by means of either malicious logic or a hacker who has
gained unauthorized access to a system. In either case, security functions
can be disabled or thwarted.