CHAPTER 9
User Domain Policies
A TENET OF TELECOMMUNICATIONS SAYS the more people who access a network, the more valuable the network becomes. This is called Metcalfe's law. Consider a telephone system as an example. If only two telephones are on the system, the value of the system is limited. Only two people can talk at any given time. But add millions of phones and people, and suddenly the value of the network increases rapidly.
This same principle can also be applied to the introduction of technology.
As new technologies introduce new capabilities, the value of the network
increases yet again. However, it’s also true that the more users and
technology involved in a network, the more complex it becomes, and the
more potential security risks are introduced.
To illustrate these points, consider what happens when you bring home a
new laptop. Typically, a new computer has a new installation of the
operating system, pre-loaded applications, and games. The number of
users is one: you. The security risks are low. Then you add technology
such as an Internet connection, new social media software, and more
users, such as family and friends. The laptop now becomes far more
valuable. However, the value comes at a cost of increased security risks.
This increase in the number of people accessing your network, along with
the introduction of new and emerging technology (such as mobile devices),
has dramatically increased the number of security risks. As the user
population and the diversity of technology increase, so does the need to
access information. This need translates into complex security controls
that must be maintained. Inevitably, this complex jumble of controls leads
to gaps in protection and security risks.
This chapter examines different types of users on networks. It reviews
individuals' needs for access and how those needs lead to risks that must be
controlled. We will also discuss how security policies mitigate risks in the
User Domain. The last part of the chapter presents case studies to
illustrate the alignment between types of users, risks, and security
policies.
Chapter 9 Topics
This chapter covers the following topics and concepts:
• What the weakest link in the information security chain is
• What different types of users there are
• How to govern different types of users with policies
• What acceptable use policies (AUPs) are
• What the significance of a privileged-level access agreement
(PAA) is
• What security awareness policies (SAPs) are
• What best practices for User Domain policies are
• What the difference between least access privileges and best fit
access privileges is
• What some case studies and examples of User Domain policies
are
Chapter 9 Goals
When you complete this chapter, you will be able to:
• Understand why users are considered the weakest link in
implementing security policies and controls
• Understand the different users in a typical organization
• Explain how different users have different information needs
• Define an AUP
• Define a PAA
• Explain how a SAP can reduce risks
• Explain the importance of risk acceptance in understanding
security risks
• Identify several best practices related to User Domain policies
• Understand through case studies how security policies can reduce
risk
The Weakest Link in the Information Security Chain
Security experts consider people the weakest link in security. Unlike
automated security controls, different people have different skill levels.
People can also let their guard down. They get tired or distracted, and
may not have information security in mind when they do their jobs.
Automated controls have advantages over people. An automated control
never sleeps or takes a vacation. An automated control can work
relentlessly and execute flawlessly. The major advantage people have over
automated controls is the ability to deal with the unexpected. An
automated control is limited because it can mitigate only risks that it has
been designed for.
This section looks at different ways in which humans earn the distinction
of “the weakest link in the security chain.” As you’ll learn, social
engineering, human mistakes, and the actions of insiders account for
many security violations. However, lack of leadership support for security
policies is another reason security measures fail. As a future security
leader, keep in mind why employees at every level must accept and follow
security policies.
Social Engineering
People can be manipulated. Social engineering occurs when you
manipulate or trick a person into weakening the security of an
organization. Social engineering comes in many forms. One form is
simply having a hacker befriend an employee. The more intimate the
relationship, the more likely the employee may reveal knowledge that can
be used to compromise security.
Another method is pretending to be from the IT department. This is
sometimes called pretexting. A hacker might call an employee and convince him or her to reveal sensitive information. For example, a hacker
asks an employee to enter data the hacker knows won’t work. The hacker
then simply asks for the employee’s ID and password to “give it a try.”
Hackers who use pretexting are usually highly skilled in manipulating
people. They can present simple or elaborate stories that seem compelling
to an unsuspecting employee.
NOTE
There are many different techniques for social engineering. However, they all rely on a person revealing sensitive information. To be successful, they typically require the attacker to get one or more employees to violate company policy. That’s why security awareness training programs should address social engineering.
Another technique is to ask an employee to click a link to an internal Web page, supposedly to verify network performance. On that internal Web page, the user is prompted to enter an ID and password and is then shown some random number indicating that the response time on the network is good. What the user doesn't realize is that the internal Web page is a fake that has just captured the user's ID and password. As the methods and sophistication of hackers improve, so must the awareness training for users.
Social engineering accounted for 29 percent of data breaches in 2013,
according to a report published in 2014 by Verizon. Social engineering is
attractive because of the ease with which data can be obtained compared
with hacking. Breaking through automated controls like a firewall can
take weeks, months, or years. Hackers may never be able to bypass the
controls of a well-protected network. If they do, they still might not get
access to the information they want. Breaking through a firewall does not
necessarily provide access to data on a protected server. And even if
hackers access data, they might not be able to send it outside the network.
The bottom line for a hacker is that it may be easier to call employees and
pose as an IT department employee. This can be accomplished within a
short time and takes only one individual letting his or her guard down to
succeed.
Human Mistakes
One characteristic all humans share is that they all make mistakes.
Mistakes come from carelessness, fatigue, lack of knowledge, or
inadequate oversight or training. Humans may perceive a security threat
that does not exist. And someone might miss a real threat that is obvious
to an objective observer.
NOTE
A survey conducted by Help Net Security found that employee carelessness ranked fourth among the top 10 information security threats of 2010.
Carelessness can be as simple as writing your password on a sticky note
and leaving it on your keyboard. It can also be failing to read warning
messages but still clicking OK. Carelessness can occur because an
employee is untrained or does not perceive information security as
important. Careless employees are prime targets of hackers who develop
malicious code. These hackers count on individuals to be their point of
entry into the network.
Another form of carelessness occurs when people are intimidated into weakening security controls for the sake of convenience. This can happen when a supervisor
or an executive, for example, asks an employee to take shortcuts or to
bypass normal control procedures. The employee feels compelled to
follow the instructions of his or her superior.
NOTE
When employees feel compelled by management to violate their organization’s own established security policies and depart from normal processes, that’s a strong indication of the lack of a good risk culture within the organization. Neither employees nor managers have truly “bought in” to the importance of managing security risk, in other words.
Carelessness can also be a result of a lack of common computer
knowledge. Technology often outpaces an employee’s skills. Just as some
employees acquire solid understanding of a system or application, it’s
upgraded or replaced. Too much change in an organization is unsettling
and can lead to portions of your workforce being inadequately trained. An
untrained worker can create a security weakness inadvertently, such as by
failing to log off a system and leaving information on the screen exposed.
Programmers can also make mistakes. This is a particular concern when those programmers introduce a coding error into a product with millions of users. That's exactly what happened with OpenSSL, an open source product used by millions to encrypt Internet traffic. Code changes made to OpenSSL in late 2011 and released in early 2012 potentially allowed a hacker to read encrypted traffic by obtaining the secret encryption key. The bug was named “Heartbleed.” Why is this important? The ability of a hacker to read encrypted messages on the Internet fundamentally undermines the trust needed to conduct business over the Internet. The potential risk is that a hacker can suddenly see IDs, passwords, and the content of messages, such as credit card information. Fortunately, the fix in this case was easy to make. However, the fact that such a bug was introduced, affecting potentially millions of users, was a wake-up call. This was not a small event. The sites affected included Google, Yahoo, Facebook, Netflix, and many others.
FYI
Heartbleed is a security bug in OpenSSL discovered in 2014. OpenSSL allows Web sites to encrypt information from visitors so the data transferred back and forth (including usernames, passwords, and cookies) cannot be seen by others. When you access a Web site that uses OpenSSL, the site responds and actively listens for more input. This is known as the heartbeat routine. Normally, when the heartbeat routine receives input, the Web site sends back only the amount of data your computer sent. The Heartbleed bug exploits a coding error that allows a hacker to request more of the server's memory, which can include nonencrypted data such as login credentials. “Heartbleed” is a play on words referring to this coding bug, which allows memory to bleed out through the heartbeat code.
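To make the coding error concrete, here is a minimal, hypothetical Python sketch of the same class of mistake. It is not OpenSSL code; the buffer contents and function names are invented for illustration.

# Simulated server memory that happens to sit next to the request buffer.
ADJACENT_MEMORY = b"secret-password=hunter2; session-key=9f8e7d"

def heartbeat_vulnerable(payload: bytes, claimed_length: int) -> bytes:
    # BUG: trusts the length the client claims it sent instead of the payload's
    # real size, so extra bytes come from whatever sits next to it in memory.
    memory = payload + ADJACENT_MEMORY
    return memory[:claimed_length]

def heartbeat_fixed(payload: bytes, claimed_length: int) -> bytes:
    # FIX: never return more bytes than the client actually sent.
    return payload[:min(claimed_length, len(payload))]

if __name__ == "__main__":
    print(heartbeat_vulnerable(b"ping", 40))  # leaks b"ping" plus the "secret" bytes
    print(heartbeat_fixed(b"ping", 40))       # returns only b"ping"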
You can use security policies to help developers reduce vulnerabilities during application development. Security policies can establish secure coding standards. The policies can require penetration testing for high-risk applications. The best time to reduce risk is when an application is being written. Security policies can define how you perform vulnerability reviews during the development life cycle. Collectively, these policies can help you protect an application against attack.
Insiders
A significant threat to information security comes from the user who is an insider. The same Verizon report mentioned previously also noted that 31 percent of data breaches in 2013 were due to insider threats. The term insider refers to an employee, consultant, contractor, or vendor. The insider may even be one of the IT technical people who designed the system, application, or security that is being hacked. The insider knows the
organization and the applications. An IT department insider knows what
is logged and what is checked and not checked. This person may even
have access to local accounts shared between administrators. As a result,
the IT insider has an easier time bypassing security controls and hiding
his or her tracks. Insiders can hide their tracks by deleting or altering logs
and time stamps. Knowing where the logs are kept and how frequently
they are checked is a great advantage to an insider.
Application Code Errors
There are differing views on the average number of errors per line of code written. Some general rules of thumb use 10 to 20 defects per 1,000 lines of code, while others estimate as many as 50 defects per 1,000. Commercial software tends to have fewer errors than code written in-house, with rates as low as 0.5 to 3 defects per KLOC. A KLOC is a unit of measure that stands for 1,000 lines of code. For example:
• An application with 2 million lines of code and a rate of 20 defects per KLOC would be expected to have 40,000 coding errors.
• This is calculated as 20 (defects per KLOC) multiplied by 2 million (lines of code) divided by 1,000 (the KLOC), as the sketch following this sidebar shows.
Thousands of new vulnerabilities are discovered in code each year.
The number of new vulnerabilities recorded in 2009 by IBM was
6,601. You can safely assume that the vulnerabilities for new
products are not found immediately. The new vulnerabilities
discovered each year are a combination of errors in new and existing
systems and applications.
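The arithmetic in the sidebar reduces to a one-line calculation. The following Python sketch simply restates the rule-of-thumb figures quoted above; the rates are illustrative, not measured values.

def expected_defects(lines_of_code: int, defects_per_kloc: float) -> float:
    """Expected defect count: (lines of code / 1,000) x defects per KLOC."""
    return lines_of_code / 1_000 * defects_per_kloc

if __name__ == "__main__":
    print(expected_defects(2_000_000, 20))   # 40000.0 defects at 20 per KLOC
    print(expected_defects(2_000_000, 0.5))  # 1000.0 defects at commercial-quality rates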
Regular employees with a long history in the organization may also pose a
risk. These employees may be in a position of trust. These individuals
have a sense of how the organization responds to incidents and can tailor
their attack accordingly.
Insiders are not limited to regular employees; they can also be vendors
and suppliers. Suppose, for example, you work for a financial institution.
Suppose it has outsourced the processing of loan applications to India.
The workers there have access to detailed confidential financial
information of applicants. In March 2012 an undercover reporter in India
was able to buy confidential information, including names, addresses,
credit card account numbers, and CCV/CVV numbers (“card security
codes”). The reporter obtained everything needed for credit card fraud, in
other words. The cost of this information? In some cases just 3 cents per
name. It may not seem like a lot of money, but consider salaries in
overseas locations such as India and the Philippines. Companies have
outsourcing arrangements in those countries because of lower labor costs
there. According to JobStreet.com, a call center agent in the Philippines with four years' experience makes an average of $338 per month, or $4,056 per year. It's easy to see how
offering a few months’ worth of salary for information can be enticing.
NOTE
The motivation of an insider is not always greed. An individual may feel disgruntled for a variety of reasons—from feeling mistreated to being passed over for some promotion. The person may have some disappointment in life outside of work. The person may simply have a sense of entitlement, “taking” the rewards he or she feels have been earned.
One motivation is money. Consider someone trying to steal 100,000
credit card numbers. Some estimate stolen credit cards can be sold for $2
to $6 each on the black market. Assume a hacker offers $20,000 to an
employee for insider help. The employee copies the card information or
provides a way to get into the system. Paying for information becomes a
more economical approach than taking the time to hack through
automated defenses with uncertain results. The return in this example
would be $200,000 to $600,000 for a $20,000 investment.
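A quick back-of-the-envelope check of those numbers, using only the illustrative figures from the example above:

cards_stolen = 100_000
price_low, price_high = 2, 6      # assumed resale price per card, in dollars
insider_payment = 20_000          # amount offered to the insider

print(f"Low-end return:  ${cards_stolen * price_low:,}")    # $200,000
print(f"High-end return: ${cards_stolen * price_high:,}")   # $600,000
print(f"Cost of the insider: ${insider_payment:,}")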
Insiders breaching security can have a devastating effect on an
organization’s reputation and viability. For example, Jerome Kerviel was a
trader at a major European bank. He was blamed for losing $7.9 billion.
Kerviel was an insider who placed unauthorized trades, putting the bank
at serious risk. He covered up the trades by falsifying records and hacking
into company computers to hide the trades. This reportedly went on for
almost two years until he was caught in 2008.
Security policies and controls can help limit damages and threats.
Security policies ensure access is limited to individual roles and
responsibilities. This means the damage from using an insider’s
credentials is limited to that function. Additionally, a policy may require
that an individual’s access be removed immediately upon leaving the
organization. These types of user controls can reduce risk.
Seven Types of Users
The User Domain, one of seven domains of a typical IT infrastructure,
consists of a variety of users. Each user type has unique access needs. As
the different types of users in the domain grow, so does the security
complexity. At a minimum, each type of user has unique business needs
and thus requires unique rights to access certain information. Within each
of these major types of users, the rights are further refined into subtypes.
Each subtype might be further broken up, and so on. For example, your
organization might have many types of administrators. The number
depends on the size of the organization, complexity, and team
specializations. You may further separate rights between Oracle and
Microsoft SQL database administrators. Figure 9-1 is an example of types
and subtypes of users.
You can build better security policies and controls by understanding user
needs. There is no fixed number of user types possible on a network. For
example, a salaried employee may be a full-time experienced professional or a part-time college student. Depending on the business, though, there
may be different sets of security issues associated with those two types of
employee. To illustrate common user needs, this chapter focuses on seven
basic user types, as follows:
• Employees—Salaried or hourly staff members of the organization
• Systems administrators—Employees who work in the IT department to provide technical support to the systems
• Security personnel—Individuals responsible for designing and implementing a security program within an organization
• Contractors—Temporary workers who can be assigned to any role; contractors are directly managed by the company in the same manner as employees.
• Vendors—These are outside companies, or individuals working for such companies, hired to provide ongoing services to the organization, such as building cleaning. Unlike contractors, vendor employees are directly managed by the vendor company to perform specific services on the organization's network.
• Guests and general public—A class or group of users who access a specific set of applications
• Control partners—Individuals who evaluate controls for design and effectiveness

FIGURE 9-1 Types of users.
In addition to these (human) user types, all with different access needs,
you should also be aware of two other groups. They are really account
types, rather than user types. System accounts are non-human accounts
used by the system to support automated service. Contingent IDs are non-
human accounts until they are assigned to individuals who use them to
recover a system in the event of a major outage.
FYI
Contingent accounts, or contingent IDs, are an interesting type of account
because they do not truly become user accounts until they are assigned
to an individual. That may not happen until a disaster occurs.
However, some contingent IDs will be preassigned to individuals,
making them a type of user from conception. The point to remember
here is that at some point, contingent IDs become a type of user
account and must be managed appropriately.
Table 9-1 outlines each of these user types in context of their business and
access needs. The table focuses on nine basic user or account types. The
same approach can be applied to any user accessing information on the
network.
TABLE 9-1 Access needs of typical domain users and account types.

Employees
  Business need: Need to access specific applications in the production environment.
  Access need: Access is limited to specific applications and information.

Systems administrators
  Business need: Need to access systems and databases to support applications.
  Access need: Access is broad and unlimited in context of the role. For example, database administrators may have unlimited access to the database but not the operating system.

Security personnel
  Business need: Need to protect network, systems, applications, and information.
  Access need: Access to set permissions, review logs, monitor activity, and respond to incidents.

Contractors
  Business need: Temporary worker needing the same access as a full-time worker in the same role.
  Access need: Access is the same as for a full-time worker.

Vendors
  Business need: Need to access network, systems, and applications to perform contracted services.
  Access need: Access is limited to specific portions of the network, systems, and applications.

Guests and general public
  Business need: Need to access specific application functions.
  Access need: Access is assigned to a type of user and not to the individual.

Control partners
  Business need: Need to review and assess controls.
  Access need: Access often includes unlimited read access to logs and configuration settings.

Contingent IDs
  Business need: Need to recover systems and data during an outage.
  Access need: Access is unlimited across both operating systems and databases. Additionally, may also require broad access to network devices (such as firewalls) and data backups.

System accounts
  Business need: Need to start, stop, and perform automated system services.
  Access need: Access should be limited to the system function being performed.
User IDs must be managed so that you know who had access to the
account when it was used. Suppose a large amount of credit card data was
accessed and later found to have been used to commit fraud. Now
suppose the log indicates which user ID accessed and stole the data. It
would be helpful to know who was assigned to that ID. However, suppose
a hundred or even a thousand individuals had access to the user ID. It
might be impossible to find out who stole the credit card information.
When user IDs are assigned, reassigned, or deleted, records are typically
kept. This is sometimes referred to as a chain of custody. A chain of custody for a user ID typically refers to knowing at any given point in time
who had access to the user ID. Often chain of custody is enforced simply
by resetting the password. By resetting the password and giving the new
password to a different individual, the ID can be reassigned. This is useful when dealing with temporary access, such as a training ID or an emergency access ID.
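A minimal sketch of what such a chain-of-custody record might look like in code, assuming a shared or temporary ID that is handed off by resetting its password; the class and field names are invented for illustration.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional
import secrets

@dataclass
class CustodyEntry:
    assignee: str
    assigned_at: datetime

@dataclass
class ManagedUserId:
    user_id: str
    history: List[CustodyEntry] = field(default_factory=list)

    def reassign(self, new_assignee: str) -> str:
        """Reset the password and record the new holder of the ID."""
        new_password = secrets.token_urlsafe(16)   # the reset enforces the hand-off
        self.history.append(CustodyEntry(new_assignee, datetime.now(timezone.utc)))
        return new_password                        # delivered only to the new assignee

    def holder_at(self, moment: datetime) -> Optional[str]:
        """Return who held the ID at the given point in time, if anyone."""
        holder = None
        for entry in self.history:
            if entry.assigned_at <= moment:
                holder = entry.assignee
        return holder

if __name__ == "__main__":
    training_id = ManagedUserId("TRAIN01")
    training_id.reassign("alice")
    training_id.reassign("bob")
    print(training_id.holder_at(datetime.now(timezone.utc)))   # bob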
Employees
Employees represent the broadest category of users within an
organization. Organizations are composed of departments and lines of
business. An employee may be full-time or part-time. An employee may
be in a customer-facing role or a corporate function. Regardless of their
job in the organization, employees have unique information needs.
TIP
Before allowing employees to access information, be sure they understand their security responsibilities. Often organizations require formal information security awareness training before an ID is issued.
Successfully implementing security policies depends on knowing who has
access to the organization’s information. Security policies require users to
have unique identities to access systems, applications, and/or networks.
This is typically accomplished by the employee entering a unique user ID
and password.
Employees’ access must be managed through the life cycle of their career
with the organization. There is always pressure to grant and extend user
access to increase productivity. No one wants to wait weeks for a new hire
to be granted access. Additionally, when a change to the business occurs,
you might need to change employee access. Although there’s significant
pressure to grant employees new access rights, the same pressure may not
exist to remove access. Consider the following example:
An employee with many years of experience within an organization
worked her way to a role with a high level of trust with her management.
She entered the organization at an entry-level position. She was
eventually promoted to the role of supervisor and then manager. The
employee transferred within the department. Throughout the changes in
her role, the prior access was never removed. This is someone who
understands the inner workings of the department and has intimate
knowledge of the technology. She is often asked to train others.
Security policies require access to be removed when an individual changes
roles. Without good security policies you may find longtime employees
with excessive access rights. They collect new access as they change roles
and continue to retain access from their prior roles. Department
leadership might not perceive this as a problem, especially when an
employee uses this broad access to “save the day” during a crisis. Looking
ahead, people may believe there’s no time to ask for additional access
during an emergency. An individual who is able to execute transactions quickly might prevent the problem from escalating.
WARNING
As individuals move from job to job within an organization, their access privileges from previous jobs must be removed. If this is not done, the result may be what is sometimes referred to as “privilege creep.” This, in turn, may make it possible for someone to commit fraud, because there is no longer a separation of duties between jobs, such as executing and approving one's own transaction.
Excessive access rights represent a serious security risk. As individuals
change roles, their access rights must be adjusted. Prior access rights that
are no longer needed must be removed. New access rights must be
properly approved and granted. This is for the employee’s protection as
much as for the organization’s. When a security incident occurs, one of
the first steps is to identify who may have had access. This is
accomplished by reviewing individual access rights. Employees can avoid
suspicion if they have no access to the affected systems, applications, and
information. Removing unneeded access also reduces overall security
vulnerabilities. In the event an ID and password are compromised, a hacker's access rights would be contained within the employee's current
role. Consider the following example:
In a bank, a teller may be able to initiate the process of sending money
between banks from one account to another. This is an important service
provided to customers. Before the money is sent, however, the bank
manager must approve the transfer. This dual control creates a separation
of duties (SOD) to reduce fraud. If the manager was once a teller and
retained his access rights, the bank is at risk. The manager in this scenario
could start and approve the transfer of money. The ability to perform both
roles violates the SOD security policies for these types of transactions.
Additionally, having such access becomes an unnecessary temptation for
fraud in which employees could target rarely used accounts to wire
themselves funds.
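The SOD control in the wire-transfer example can be expressed in a few lines. The following Python sketch is a hypothetical illustration, not a banking system; the field and function names are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WireTransfer:
    amount: float
    initiated_by: str
    approved_by: Optional[str] = None

def approve_transfer(transfer: WireTransfer, approver: str) -> None:
    # SOD control: the initiator can never approve his or her own transfer,
    # even if leftover access rights would otherwise allow it.
    if approver == transfer.initiated_by:
        raise PermissionError("Separation of duties: initiator cannot approve")
    transfer.approved_by = approver

if __name__ == "__main__":
    transfer = WireTransfer(amount=25_000.00, initiated_by="teller_jones")
    approve_transfer(transfer, "manager_smith")       # allowed: different individual
    try:
        approve_transfer(transfer, "teller_jones")    # blocked by the SOD check
    except PermissionError as err:
        print(err)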
Good security policies make clear that individuals have only the access
needed for their jobs. Security policies outline how rights are assigned and
approved. This includes the removal of prior access that is no longer
needed. This accomplishes the following:
• Reduces the overall security risk to the organization
• Maintains separation of duties
• Simplifies investigation of incidents
Systems Administrators
Systems administrators may need unlimited rights to install, configure, and repair systems. With this elevated access comes enormous
responsibility to protect credentials. A systems administrator’s credentials
are a prime target for hackers. As a result, organizations should consider
additional layers of authentication for administrators when feasible, such
as certificates and two-factor authentication.
Security policies reduce risk by requiring monitoring of the systems
administrator’s activity. The systems administrator should only use broad
access to perform assigned duties. Let’s consider a database
administrator. She needs access to apply patches, resolve issues, and
configure applications. Yet she normally does not access customer
personal information stored within the database. Logging administrator
activity is one way to verify that access rights are not being abused. Logs
record if administrators granted themselves access beyond the scope of
their roles. Logs record the names of people who access customer
personal information. Although you may not be able to prevent a systems
administrator from accessing customer information, you can review the
logs to detect the event.
With elevated access, systems administrators could just turn off the logs.
However, the act of turning off or altering logs is also trackable. Many
systems write an entry when the log service starts and stops. Additionally,
logs can be sent to a log server. A log server is a separate platform used to collect logs from platforms throughout the network. Access to log
servers is highly restricted. Analyzing logs can help you detect gaps in
logs, which are an indication the log service was turned off. Analyzing logs
can also help detect if they have been altered. Knowing that your activity
is being monitored is a deterrent in itself. Security policies outline the
requirements of what is logged and how often the logs are reviewed.
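As a simple illustration of one detective control mentioned above, the following Python sketch scans log timestamps for unusually long gaps, which can indicate the logging service was stopped. The ten-minute threshold and the in-memory timestamp list are assumptions for the example.

from datetime import datetime, timedelta

def find_log_gaps(timestamps, max_gap=timedelta(minutes=10)):
    """Return (gap_start, gap_end) pairs where consecutive entries are too far apart."""
    gaps = []
    for previous, current in zip(timestamps, timestamps[1:]):
        if current - previous > max_gap:
            gaps.append((previous, current))
    return gaps

if __name__ == "__main__":
    entries = [
        datetime(2019, 7, 5, 9, 0),
        datetime(2019, 7, 5, 9, 5),
        datetime(2019, 7, 5, 11, 30),   # a 2.5-hour silence: was logging turned off?
        datetime(2019, 7, 5, 11, 35),
    ]
    for start, end in find_log_gaps(entries):
        print(f"Possible logging outage between {start} and {end}")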
There’s a widely accepted approach that states systems administrators’
access rights should be limited to their daily routine tasks. Through a
separate process, systems administrators' rights can be elevated when
they need to install, configure, or upgrade the system. The approach
assumes that tasks associated with the elevated rights occur infrequently.
Therefore, the additional process is not burdensome. Some systems
administrators resist this approach. Having unfettered access makes their
job easier in that they don’t have to request access before performing
certain tasks. It can be hard to predict what their access needs are. As a
result, they may feel asking for permission is cumbersome and creates
unnecessary delays.
You can consider limiting administrator rights a leading practice in
regulated industries. The approach is widely accepted in the financial
services industry. Some of the advantages of granting elevated rights to
administrators as needed include the following:
1. It reduces the overall security risk to the organization. In the event the systems administrator's credentials are compromised, access would be limited.
2. It dramatically reduces the volume of logs to be reviewed to detect when an administrator abuses his or her access rights.
3. It improves the alignment and understanding between technical tasks and business requirements.
a. This approach records the business reason for the elevated rights being
granted, which addresses why the security administrator accessed certain
files.
b. This information can also be used to identify patterns of control
weaknesses.
It’s not practical to log every access a busy systems administrator
performs daily. The volume of logs would be excessive. Any review of
these logs would lack context. It may be possible to see that an
administrator accessed a file but there’s no business context for the
action. In other words, given the volume of log files, it would be difficult at
best to determine which files the person should or should not have
accessed that day. The approach described above allows you to
understand why the access rights were used. By the nature of the request,
you know what files the administrator should access and the business
reason why. For example, if you found that a security administrator had
accessed a financial spreadsheet, was it to fix a corrupted file, or because
he or she wanted to illegally access information used to buy or sell stock?
When an administrator is fixing a problem, you have a record of the
reason why he or she accessed the file. Knowing the business reason gives
you the context.
The process for capturing business requirements and elevating privileges
is well established. Security policies outline the process of temporarily
granting elevated rights, which is often called a firecall-ID process. A
firecall-ID process provides temporary elevated access to unprivileged users. The name implies the urgency behind granting the access to resolve
a problem quickly. During a firecall-ID process, the issue or problem is
defined in a trouble ticket. The trouble ticket is a complete record of what access was granted and the business reason. The ticket is then
assigned to someone to fix the problem. When the problem is assigned,
the individual is granted elevated privileges. The individual completes the
work and closes the ticket. When the ticket closes, the individual’s
elevated rights are removed. Figure 9-2 depicts a basic firecall-ID process.
NOTE
A firecall-ID process, along with trouble tickets, can be an important source of information that can be used to detect patterns of problems. Access to this information is important to provide ongoing improvements in the system and application designs.
A firecall-ID process is an accepted way to grant temporary access for a
number of activities, such as one-time events like special financial
reporting. With this approach, you configure more detailed logging
without generating excessive volume. This is because you will record more
detail but only when the elevated rights are turned on.
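The essence of the process can be sketched in a few lines of Python. This is a hypothetical illustration, not a real ticketing product; the ticket fields and function names are invented, and a production system would also write each grant and revocation to a protected log.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

elevated_access = set()    # user IDs that currently hold temporary elevated rights

@dataclass
class FirecallTicket:
    ticket_id: str
    description: str                       # the business reason for the access
    assignee: Optional[str] = None
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    closed_at: Optional[datetime] = None

def assign_ticket(ticket: FirecallTicket, user_id: str) -> None:
    """Assigning the ticket is what grants the temporary elevated rights."""
    ticket.assignee = user_id
    elevated_access.add(user_id)

def close_ticket(ticket: FirecallTicket) -> None:
    """Closing the ticket removes the elevated rights and completes the record."""
    ticket.closed_at = datetime.now(timezone.utc)
    if ticket.assignee:
        elevated_access.discard(ticket.assignee)

if __name__ == "__main__":
    ticket = FirecallTicket("INC-1042", "Repair corrupted table in the loan database")
    assign_ticket(ticket, "dba_garcia")
    print("dba_garcia elevated?", "dba_garcia" in elevated_access)   # True
    close_ticket(ticket)
    print("dba_garcia elevated?", "dba_garcia" in elevated_access)   # False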
A number of variations to the firecall-ID process are illustrated in Figure
9-2. For example, requiring a help desk manager to approve a ticket may
be an optional control that some organizations feel is unnecessary.
Another common variation is to allow an administrator to open a trouble
ticket, make a repair, and then close a trouble ticket.
FIGURE 9-2 Basic firecall-ID process.
This self-service capability may be important when an administrator
wants to track unusual events. For example, suppose a database
administrator, or DBA, has a very stable environment, with few outages
and problems. To control the risk, the DBA’s normal access may not
include unlimited access to the database. However, when the DBA identifies a problem, he or she may choose to use the firecall process to quickly gain the additional access needed to repair
the database and note the problem.
Security Personnel
Security personnel are responsible for designing, implementing, and monitoring security programs. In larger organizations, the roles may be
separated between those who define and those who implement the
policies. Security personnel develop security awareness and training
programs. They also align security policies with those of other parts of the
organization such as legal and HR.
Security staff must understand and implement different types of controls,
such as management, operational, and technical. They have to wear many
hats. On any given day, security staff may handle a variety of tasks. One
day they may work with procurement to review a new software package
for vulnerabilities. They may be woken up in the middle of the night to
respond to a security breach. In many organizations, the security team is
understaffed, so it must carefully prioritize the workload to focus on
the greatest risks. The following are examples of the diversity of issues
that security teams deal with:
• Audit coordination and response, and regulator liaison
• Physical security and building operations
• Disaster recovery and contingency planning
• Procurement of new technologies, vendor management, and
outsourcing
• Security awareness training and security program maintenance
• Personnel issues, such as background checks for potential employees
and disciplinary actions for current employees
• Risk management and planning
• Systems management and reporting
• Telecommunications
• Penetration testing
• Help desk incident response
Security staff roles and their associated access must be well defined. This
includes limiting access to specific duties and, as appropriate, leveraging
the firecall-ID process to gain elevated privileges. With this broad access
come enormous responsibilities to protect credentials. The credentials of
these individuals are also prime targets for hackers.
Contractors
Contractors are temporary workers. They can be assigned roles like regular employees. The two major advantages of a contractor are in cost
and skills. These individuals comply with the same security policies as any
other employee. There may be additional policy requirements on a
contractor, such as a special nondisclosure agreement and deeper
background checks.
As short-term employees, contractors may not show the same loyalty to
the organization as a long-term employee. Many security experts consider
contractors a higher security risk than an employee. This is because the
organization often hires these individuals from consulting firms. Thus the
organization does not have full control over the consulting firms’ hiring
practices or full access to their contractors’ job histories or performance
reviews.
Contractors allow you to ramp up your workforce during peak periods.
Contractors generally save organizations money over the long term.
Although you pay contractors more than similar employees’ wages, you
usually need contractors for shorter periods of time. In addition,
contractors generally do not receive paid benefits, such as sick leave and
vacation time.
Contractors can bring a variety of special skills to an organization. These
skills can be valuable to a specific project or initiative. Maintaining these
skills within your full-time staff may not be cost effective. For example,
assume you are deploying a new technology to prevent data leakage. The
project is to install a leading vendor package designed to prevent sensitive
information from being e-mailed out of the organization. Your staff may
be unfamiliar with the package and the technology. You can hire a
contractor who has installed this product numerous times. The contractor
has knowledge based on prior installations that cannot be achieved
through training.
Contractors must be fast learners. Within a short time, they are expected
to know your security policies. They also have to adapt to the organization
culture. The firm placing the individual often completes background
checks for contractors. If that is the case, it’s important to verify that the
background checks are as thorough as those performed by the hiring
organization. The other challenge relates to security awareness.
Depending on the length of the engagement, there may be limited time to
conduct the same caliber of awareness training as you do with existing
employees.
Vendors
Vendor employees need to be managed the same as salaried employees.
Their access must be tied to their individual roles. Vendor employees
must follow all the same rules and policies as an organization’s own
employees. A vendor employee may be full-time or part-time. He or she
may be in a customer-facing role or a corporate function. Regardless of
their jobs in the organization, vendor employees have the same kinds of
information needs as an organization’s employees.
However, vendor employees are managed directly by the vendor
company. Consequently, that company will often manage their access.
This adds both complexity and risk. Processes must be in place to ensure
that the vendor company is managing its employees effectively. This
includes notifying the organization when staffing changes require access
changes. Here are some situations for which a vendor must provide
notification:
1. When individuals are hired or terminated.
2. When individuals change their roles.
3. When systems are added to or removed from the organization’s network.
4. When security configuration changes are made to the communications between the vendor and the organization, such as firewall rule changes.
A vendor can significantly impact the security readiness of an
organization. An organization is only as secure as the vendor systems
connected to the organization’s own network.
Guests and General Public
Guests and the general public are a special class of users. Unlike other
types of users who are assigned unique IDs and passwords, you might not
know the identity of an individual accessing a public-facing Web page.
This is common on the Internet. There are many applications on the
Internet that are freely accessible to the public. When an individual wants
access to one of these applications, an ID and password is not needed.
For example, let’s assume a Web site contains a Zip code lookup
application in the demilitarized zone (DMZ). You enter a Zip code to find
out which city is in the Zip code area. Assume the Web site is freely
available. The cost of the site is supported by advertisers placing ads on
the Web page. When someone keys in a Zip code, the corresponding city
name appears. This is accomplished through a query to a back-end
database that matches the entered Zip code with the appropriate city.
Credentials are exchanged between the Web site server and back-end
database server. Rather than seeing an individual accessing the database,
the security controls may only see the credentials of the Web site server.
This in itself does not create a security exposure if the application,
network, and database are hardened.
NOTE
To harden means to eliminate as many security risks as possible. You do this by reducing access rights to the minimum needed to perform any task, ensuring access is authenticated to unique individuals, removing all nonessential software, and taking other configuration steps that eliminate opportunities for unauthorized access.
The Zip code lookup application ensures that only a five-digit Zip code can be entered, which helps prevent a Structured Query Language (SQL) injection attack. SQL injection is a common form of hacker attack in which a SQL command is placed inside an input field. Hackers hope that when the input field is passed to the database query, they can execute their own commands on the database. Network controls ensure that only traffic from the DMZ application to the database server is permitted. These network controls (such as a firewall) would also ensure that the only traffic permitted is a SQL query from the DMZ application to the specific back-end database server. The back-end database server accepts a
connection only from the DMZ application. The back-end database server
also permits only one type of SQL query, which reads the Zip code entered
and returns the associated city to be displayed. Figure 9-3 depicts these
layers of controls at the DMZ application, network, and database layers.
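Here is a hedged sketch of the two application-level controls just described, input validation and parameterized queries, using Python's built-in sqlite3 module as a stand-in for the back-end database; the table and column names are invented for the example.

import re
import sqlite3

ZIP_PATTERN = re.compile(r"\d{5}")    # only a five-digit Zip code is accepted

def lookup_city(conn, zip_code):
    if not ZIP_PATTERN.fullmatch(zip_code):
        raise ValueError("Input rejected: not a five-digit Zip code")
    # Parameterized query: the Zip code is passed as data, never spliced into the
    # SQL string, so input like "90210; DROP TABLE zip_codes" cannot execute.
    row = conn.execute("SELECT city FROM zip_codes WHERE zip = ?", (zip_code,)).fetchone()
    return row[0] if row else None

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE zip_codes (zip TEXT PRIMARY KEY, city TEXT)")
    conn.execute("INSERT INTO zip_codes VALUES ('10001', 'New York')")
    print(lookup_city(conn, "10001"))                        # New York
    try:
        lookup_city(conn, "10001; DROP TABLE zip_codes")     # rejected by validation
    except ValueError as err:
        print(err)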
In this specific Web site example, you can see that the application,
network, and database would be well protected. It’s not so easy in the real
world. In the real world, applications share Web site space and back-end
database servers. A breach in one application can lead to a breach in
another. Not all applications effectively test and limit what a user can
enter. An application’s internal controls can be sound but the application
becomes compromised by a vulnerability in the operating system. Security
policies outline the type of controls and hardening methods used to
protect a server in the DMZ.
Public-facing Internet sites are prime targets for hackers. Hackers may
sign up for a legitimate account on your Web site. Then, using that
account, they may try to find ways to expand its authority or gain access to
other customers’ information. To protect against this, keep the number of
system accounts used in the application to a minimum. As much as
possible, the DMZ should be used simply to capture and clean customer
input and pass the user content to a back-end system. At a high level,
think of the DMZ as having two main purposes: cleansing user input and
managing secure communications. It's the back-end system behind the
firewall that performs additional content verification and the actual
processing of the transaction.
FIGURE 9-3 Example of a DMZ application connecting to back-end server.
When dealing with guests and the general public, you grant access rights
to a class of users rather than individuals. It’s important to remember that
guests and the general public have different skill sets than your employees
do. These individuals have not had the benefit of security awareness
training, either. Assigning guest and public access can be accomplished in
several ways. You can assign credentials to applications, servers, or types
of database connections. You can also assign rights to a generic user ID or
service account. Assigning credentials in the form of a hard-coded ID and
password stored within an application is less secure. Security policies
typically prohibit this approach to application credentialing. The problem
is that if an application is compromised, the ID is also compromised. It
also makes it very difficult to change the ID’s password without recoding
or reconfiguring the application. As a result, a password used in this
manner tends not to change very often. This creates a security
vulnerability. A much better method of assigning credentials is using a
method that does not rely on passwords, such as assigning a certificate.
Assigning a certificate to an account, application, or server is fairly easy.
The complexity and cost comes in setting up the environment to maintain
the certificates.
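To make the hard-coding problem concrete, here is a brief Python sketch contrasting the anti-pattern with reading the secret at run time; the environment variable name is invented, and the text's preferred approach, certificate-based authentication, would avoid the shared password altogether.

import os

# Anti-pattern: a hard-coded credential ships with the application, so anyone who
# obtains the code obtains the password, and rotating it means changing the code.
DB_PASSWORD = "S3cretPassw0rd!"    # don't do this

def get_db_password():
    """Read the credential from the runtime environment (or a secret store) instead."""
    password = os.environ.get("ZIP_LOOKUP_DB_PASSWORD")
    if password is None:
        raise RuntimeError("Database credential has not been provisioned for this environment")
    return password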
The following are some best practices when dealing with guest and
general public access:
• From a policy standpoint, it is important to have a well-defined risk
process that performs a detailed assessment of guest and general public
access
• Highly restrict access to specific functions
• Penetration test all public-facing Web sites to detect control
weaknesses
• Don’t hard-code access credentials within applications
• Limit network traffic to point-to-point communications
Control Partners
There are many different types of control partners. The individuals can be
auditors, operational risk or compliance professionals, or regulators.
Even within these broad types of control partners, there can be
subcategories. For example, financial auditors focus on financial
operations. They look at the completeness, fairness, and representation in
the organization’s financial statements. They look at the underlying
processes and operations that produced the financial data. Financial
auditors look at any potential control weaknesses that may call into
question the accuracy of the financial statements.
Technology auditors are often referred to as IT auditors. They look at an
organization’s technology controls and risks. They assess the controls for
design and effectiveness. They ask questions such as “Do the controls
address all the vulnerabilities and are they working well?” They also look
at how well an organization assesses technology risk in context of the
business processes.
Although financial and technology audit teams have distinct
responsibilities, they often collaborate. This is referred to as an integrated
audit. In an integrated audit, more than one audit discipline is combined for a single audit. For example, let’s assume a company
purchases large amounts of equipment for its manufacturing process. The
accuracy of the company’s financial statements depends in part on
properly reporting these expenditures. Financial auditors look at the
underlying financial data and accounting methods used to reflect these
investments. Financial auditors may look at the depreciation method
used. They might even challenge the completeness of the data. On the
other hand, IT auditors focus on the underlying technology that captures,
records, and calculates the financial results. IT auditors look at the
security controls and the integrity of the data.
FYI
The authority to conduct audits depends on the type of organization.
For example, government agencies are subject to audits through legal
statutes and directives. A private company may be subject to audit
requirements set by its board of directors. Many publicly traded
companies adopt an audit committee structure. This is a subcommittee
of the board of directors formed to focus on audit matters.
Operational risk and compliance teams often have the same need for
access as do auditors. The operational risk team reviews the controls to
ensure a business is operating within the acceptable risk appetite. This
means that the business is not taking on too much risk. Compliance teams
ensure that the business is following the law. Like auditors, these teams
are specialized. They look at a variety of controls.
In a public company, an auditor reports findings to the business unit
management and to the audit committee. Operational risk and
compliance reports are often sent to the business and the company’s risk
committee. This dual reporting serves several goals. It ensures that line
management knows about control weaknesses so immediate action can be
taken. It also ensures that risks get visibility at the highest level of the
organization.
Security policies detail the controls in granting control partners access.
For an IT audit this typically means access to security reports, logs, and
configuration information. Auditors primarily need read access. Auditors
have specialized tools that help analyze samples taken and record their
findings. Within these specialized applications they are granted
appropriate rights to capture the evidence and write audit reports.
Contingent
Contingent accounts need unlimited rights to install, configure, repair, and recover networks and applications, and to restore data. With this
elevated access comes enormous responsibility to protect credentials.
These credentials are prime targets for hackers. These IDs are not
assigned to individuals until a disaster recovery event is declared. As a
result, they must be protected until they are needed.
One challenge is to know whether these IDs will work during a disaster.
For example, assume a data center is hit by a hurricane and the systems
must be restored across town. That’s not the time to find out you don’t
have access to the backup data. It’s important that these contingent IDs
be tested at least annually. It’s equally important that, during these
recovery tests, IDs that are no longer needed be identified and deleted.
System
System accounts often need elevated privileges to start, stop, and manage
system services. These accounts can be interactive or non-interactive. The
word interactive typically refers here to the ability for a person to log on to the account. A system account that is non-interactive is one to which a
person cannot log on. An interactive system account has a password that,
if known, can be used by a person to log on to the account. System
accounts are also referred to as service accounts.
Why this distinction between interactive and non-interactive service
accounts? These accounts usually have elevated privileges. That makes
them targets for hackers. Ideally, you wouldn’t want any system accounts
to be interactive. That way there would be no passwords to steal, all
accounts would be tied to specific applications, and hackers would have
less opportunity to get in. But this is not an ideal world. Many system
accounts have passwords that can be stolen. Strict controls must be in
place to protect passwords for interactive system accounts. A firecall-ID
process, as previously described, can provide such controls to restrict
access to these sensitive accounts.
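As an example of a detective control for this concern, the following Linux-specific Python sketch lists local accounts that look like service accounts but still have an interactive login shell; the UID cutoff and shell names are common conventions and may differ on a given system.

# Accounts with a UID below 1000 are conventionally system (service) accounts on
# most Linux distributions; the cutoff and shell list below are common defaults,
# not universal rules.
NON_INTERACTIVE_SHELLS = {"/usr/sbin/nologin", "/sbin/nologin", "/bin/false"}

def interactive_service_accounts(passwd_path="/etc/passwd"):
    """Flag service accounts that still have an interactive login shell."""
    flagged = []
    with open(passwd_path) as passwd:
        for line in passwd:
            name, _pw, uid, _gid, _gecos, _home, shell = line.rstrip("\n").split(":")
            if int(uid) < 1000 and shell not in NON_INTERACTIVE_SHELLS:
                flagged.append(f"{name} (uid={uid}, shell={shell})")
    return flagged

if __name__ == "__main__":
    for account in interactive_service_accounts():
        print("Review:", account)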
Why Govern Users with Policies?
Organizations want a single view of risk. Decision-making becomes easier,
as does talking with regulators or shareholders. Security policies offer a
common way to view and control risks. In addition, regulations require
the implementation of security policies. A few examples include the
Sarbanes-Oxley (SOX) Act of 2002 and the Health Insurance Portability
and Accountability Act (HIPAA). This is not unique to the United States.
Global organizations face an array of similar laws and regulations, such as
the European Data Protection Directive.
Having well-defined policies that govern user behavior ensures key risks
are controlled in a consistent manner. These policies provide evidence of
compliance to regulators. Regulators are increasingly looking at how
security policies are applied. It’s not enough to have written policies.
Regulators also want to see evidence that these policies are enforced.
Acceptable Use Policy (AUP)
It is important to set clear expectations for what’s acceptable behavior for
those using an organization’s technology assets. An AUP defines the
intended uses of computers and networks, including unacceptable uses
and the consequences for violation of policy. An AUP also prohibits
accessing or storing offensive content. The following topics are typically
found in an AUP:
• Basics of protecting an organization’s computers and network
• Managing passwords
• Managing software licenses
• Managing intellectual property
• E-mail etiquette
• Level of privacy an individual should expect when using an
organization’s computer or network
• Noncompliance consequences
A good AUP should also be accompanied with awareness training. This
training should address realistic scenarios an individual might face. The
following situations are a few examples of what might show up in AUP
awareness training:
• A coworker asks you to log on to the network or an application because
he or she is waiting for access to be approved. What should you do?
• You receive a politically sensitive joke via e-mail. Should you forward
the e-mail?
• The person next to you spends many hours a day surfing the Internet
for stock tips. What should you do?
The Privileged-Level Access Agreement (PAA)
When administrative rights are breached or abused, the impact can be
catastrophic to the organization. A privileged-level access agreement (PAA) is designed to heighten the awareness and accountability of those users who have administrative rights. The PAA is a formal agreement
signed by an administrator acknowledging his or her responsibilities. The
agreement basically says the administrator will protect these sensitive
credentials and not abuse his or her authority. The PAA is an enhanced
form of security awareness specifically for administrators.
NOTE
The federal government uses PAAs in the defense industry. However, few organizations outside the defense industry have adopted PAA use.
The PAA is typically a one- to two-page document. It reads as a formal
agreement between the administrator and the organization. The PAA
generally contains the following from the administrator’s perspective:
1. Acknowledgment of the risk associated with elevated access in the event the credentials are breached or abused
2. Promise not to share the credentials entrusted to his or her care
3. Promise to use the access granted only for approved organization business
4. Promise not to attempt to “hack” or breach security
5. Promise to protect any output from these credentials such as reports, logs, files, and downloads
6. Promise to report any indication of a breach or intrusion promptly
7. Promise not to tamper with, modify, or remove any security controls without authorization
8. Promise not to install any backdoor, malicious code, or unauthorized hardware or software
9. Promise not to violate intellectual property rights, copyrights, or trade secrets
10. Promise not to access or store inflammatory material, such as pornographic or racist content
11. Promise not to browse data that is not directly related to assigned tasks
12. Promise to act in good faith and to be subject to penalties under breach of contract and criminal statutes
In many respects, these items are already covered by security policies and
awareness training. The PAA reinforces the importance of these terms
with administrators.
Security Awareness Policy (SAP)
Security awareness training is often the first view a typical user has into
information security. It’s often required for all new hires. Think of it as
the first impression of management’s view of information security. This is
management’s opportunity to set the tone. Most individuals want to do a
good job. But they need to know what the rules and expected behavior are.
A good security awareness policy has many benefits, including informing
workers of the following:
• Basic principles of information security
• Awareness of risk and threats
• How to deal with unexpected risk
• How to report suspicious activity, incidents, and breaches
• How to help build a culture that is security and risk aware
Security policy is not just a good idea—it’s the law! There are many
regulations that require security policies and a security awareness
program. Many state laws also require security awareness. Having a
security awareness program is considered in most industries a best
practice. The following list highlights a number of federal mandates that
require an organization to have a security awareness program:
• The Health Insurance Portability and Accountability Act (HIPAA)
• Gramm-Leach-Bliley Act
• Sarbanes-Oxley Act
• Federal Information Security Management Act (FISMA)
• National Institute of Standards and Technology (NIST) Special
Publications 800-53, “Recommended Security Controls for Federal
Information Systems”
• 5 Code of Federal Regulations (C.F.R.)
• The NIST Guide for Developing Security Plans for Information
Technology Systems
• Office of Management and Budget (OMB) Circular A-130, Appendix III
• The NIST Computer Security Handbook
Laws can outline the frequency and target audience of awareness training.
For example, 5 C.F.R. requires security awareness training before an
individual can access information. Also, a refresher course must be taken
annually. The following outlines the 5 C.F.R. requirements:
• All users—Security basics
• Executives—Policy level and governance
• Program and functional managers—Security management, planning, and implementation; also risk management and contingency
planning
• Chief information officers (CIOs)—Broad training in security planning, system and application security management, risk
management, and contingency planning
• IT security program managers—Broad training in security planning, system and application security management, risk
management, and contingency planning
• Auditors—Broad training in security planning, system and application security management, risk management, and contingency planning
• IT function management and operations personnel—Broad training in security planning and system/application security
management, system/application life cycle management, risk
management, and contingency planning
For information security policies to deliver value, they must explain how
to manage risk and proactively address threats. A well-planned security
awareness program can be a cornerstone to accomplish this objective.
Communication of security policy through a security awareness program
is vital. Even the best policy is of little use if no one is aware of it. Security
awareness changes behavior. Security awareness consists of a series of
campaigns aimed at improving understanding of security policies and
risks. Security awareness is not a one-time event. It’s a campaign that
strives to keep reinforcing the message in different ways.
Best Practices for User Domain Policies
A best practice is a leading technique, methodology, or technology that
through experience has proved to be very reliable. Best practices tend to
produce a consistent and quality result. The following short list of best
practices focuses on the user and is found in security policies. These best
practices go a long way toward protecting users and the organization.
Policies should require the following practices:
• Attachments—Never open an e-mail attachment from a source that is not trusted or known.
• Encryption—Always encrypt sensitive data that leaves the confines of a secure server; this includes encrypting laptops, backup tapes, e-mails,
and so on.
• Layered defense—Use an approach that establishes overlapping layers of security as the best way to mitigate threats.
• Least privilege—The principle of least privilege is that individuals should have only the access necessary to perform their responsibilities.
• Best fit privilege—The principle of best fit access privilege holds that individuals should have the limited access necessary to fulfill their
responsibilities and have their access managed efficiently.
• Patch management—Be sure all network devices have the latest security patches, including user desktop and laptop computers. Patch management is an essential part of a layered defense. Even when you do everything right, there may be a vulnerability in the vendor's system or application. An effective patch management program mitigates many of these risks (see the sketch after this list).
• Unique identity—All users accessing information must use unique credentials that identify who they are; the only exception is public access to a publicly facing Web site.
• Virus protection—Virus and malware prevention must be installed on every desktop and laptop computer.
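Several of these requirements, such as encryption, patch levels, and virus protection, lend themselves to automated compliance checks against an asset inventory. The Python sketch below is a minimal illustration of that idea; the inventory structure and field names are assumptions, not features of any particular endpoint-management product.

    # Hypothetical endpoint inventory; fields are illustrative assumptions.
    endpoints = [
        {"host": "lt-001", "disk_encrypted": True, "patched": True, "antivirus": True},
        {"host": "lt-002", "disk_encrypted": False, "patched": True, "antivirus": True},
        {"host": "dt-014", "disk_encrypted": True, "patched": False, "antivirus": False},
    ]

    REQUIRED = ["disk_encrypted", "patched", "antivirus"]  # policy-mandated controls

    for ep in endpoints:
        gaps = [control for control in REQUIRED if not ep[control]]
        if gaps:
            print(ep["host"], "violates policy; missing:", ", ".join(gaps))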
Understanding Least Access Privileges and Best Fit Privileges
The difference between least access privileges and best fit access privileges can be confusing and subtle. Both control risk by limiting access associated with a specific job or role. The difference is that least
privileges customize access to the individual, while best fit privileges
typically customize access to the group or class of users.
For example, suppose you have four accounts receivable specialists.
Accounts receivable teams typically collect on invoices due to a company.
Of the four specialists, two work on commercial accounts and two work on
individual accounts. Their access is the same, except that the commercial
receivables specialists also require access to market information about the
companies related to the commercial accounts. Under least privileges, you
might choose to limit access to the market information to just the
commercial receivables specialist. However, this decision comes at a cost.
You would have to maintain two sets of access rules for basically the same
job.
When you multiply these subtle differences across large populations of
users and technologies, these rule differences can be quite complex and
expensive to maintain. Best fit privileges would look at the risk of giving
access to market data to the two specialists working with noncommercial
accounts. If there is little to no risk of fraud or security exposure, then all
four specialists may get the same access. Typically, this means assigning
access to a receivables specialist role, and then assigning all four
individuals to the role. Using this best fit risk-based approach to assigning
access can lower support costs and simplify access rules.
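To make the cost difference concrete, the following sketch models the receivables example both ways. It is a simplified, hypothetical illustration: least privilege maintains entitlements per individual, while best fit assigns all four specialists to one role whose contents were set after the risk review.

    # Least privilege: entitlements customized per individual (two rule sets).
    least_privilege = {
        "alice": {"receivables_app"},                 # individual accounts
        "bob": {"receivables_app"},
        "carol": {"receivables_app", "market_data"},  # commercial accounts
        "dave": {"receivables_app", "market_data"},
    }

    # Best fit: one receivables-specialist role for the whole job, granted after
    # a risk review found little or no exposure in sharing market data access.
    receivables_role = {"receivables_app", "market_data"}
    best_fit = {user: receivables_role for user in ["alice", "bob", "carol", "dave"]}

    print("Rule sets to maintain (least privilege):",
          len({frozenset(v) for v in least_privilege.values()}))  # prints 2
    print("Rule sets to maintain (best fit):",
          len({frozenset(v) for v in best_fit.values()}))         # prints 1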
Case Studies and Examples of User Domain Policies
The case studies in this section reflect actual risks that were exploited in
the real world. Each case study examines potential root causes. By looking at these case studies in the context of security policies, you can identify how such incidents could have been avoided.
The case studies examined in this section relate to security policy
violations, a lack of separation of duties, and poor vendor management.
The studies involve the compromise of a government laptop, the collapse
of Barings Bank in 1995, and unauthorized access to government systems.
Government Laptop Compromised
On October 31, 2012, NASA notified its employees that a laptop
containing personal information on more than 10,000 employees was
stolen. The theft occurred when a laptop containing the information was
taken from a locked car. The laptop had a password, but the hard drive
was not encrypted. The NASA announcement included a statement that
the IT security policies and practices were under review. Additionally,
several immediate actions were undertaken, including requiring that all
laptops that leave NASA facilities be encrypted.
While the details of the theft are unclear, what is clear is that the laptop
was left unattended in a locked car. At many organizations, that would be
considered a violation of acceptable use policy. Leaving a laptop with
sensitive information unattended is not good practice. Typically, such
policies require someone to maintain physical possession of devices when
they are brought into public spaces, and to carry them into airline cabins
rather than leave them in checked bags.
Also, full disk encryption is commonplace in the industry. For NASA not
to require full disk encryption and to permit sensitive information to be
placed on a laptop is to be out of compliance with industry norms.
In this case, this was a failure of policy as much as individual actions. Had
the laptop been fully encrypted, the loss would have been limited to the
device itself. Although the theft itself probably indicated a violation of acceptable use policy, the actual damage, namely employees having their personal information exposed and the harm to NASA's reputation, could have been avoided.
The Collapse of Barings Bank, 1995
Barings Bank was one of the oldest investment banks in Britain. The bank
was founded in 1762. The bank was sold for less than $2 in 1995 after it
was discovered that an employee had lost over $1.3 billion of the bank’s
assets on the market. The bank could not cover its liabilities, having only
$615 million in reserve capital. The case continues to be used by security experts
today as a classic example of the need for user oversight and effective
separation of duties controls.
The bank’s losses resulted when an arbitrage trader was placed in
multiple roles to manage trades, as well as to ensure that trades were
properly settled and reported. The arbitrage trader was the floor manager
for the Barings trading desk in Singapore. This is a front-office role that
allowed the employee to make trades. The arbitrage trader was also the
head of settlement operations in Singapore. This is a back-office role that
allowed him to effectively review and approve his own trades daily. It also
allowed him to falsify records and alter reports being sent to the home
office in London. There was a special account he used to cover up his
trades. He would create trades that would show profit in other accounts
while effectively moving the losses to this one account. He then made sure
that this one account with the losses was not reported to the home office.
This meant that while the bank consistently lost millions, it actually looked profitable.
This shows a lack of separation of duties. The business should not have
permitted a single individual to hold both positions. The business should
have ensured the appropriate accounting and reporting systems were in
place. This was not just a failure of the business but also a failure of the
security policies on a number of levels. User Domain-level security
policies in this situation typically require:
• Risk assessment—A risk assessment is performed on major applications to identify risk of fraud.
• Controls design—Controls are designed based on the core principles in the security policies and weaknesses found during the risk assessment.
• Access management—Effective access management develops roles to ensure separation of duties and reports violations to leadership.
• Escalation—Risks and threats to the business are escalated to the business unit, senior leadership, and as needed, to the board.
A risk assessment should have been performed when the application was
first implemented and periodically thereafter. This was clearly a high-risk
application and business process that represented significant income and
risk to the bank. Security policies would require a threat assessment
looking at external and internal sources. The risk assessment would have
examined leading industry practices. This would have identified the clear
need to separate access rights between front and back offices. The result
would be a good understanding on the type of access and reporting
controls needed.
The controls design process required by security policies would
implement the needed security controls within the application and the
business process. These controls ensure separate security roles would be
implemented for front and back office. The controls also identify key
monitoring reports necessary to detect fraud. Had the home office
received monitoring reports on accounts, including the one hiding the
losses, the fraud would have been detected. The trader did falsify reports, which means there was either a controls design failure or at least a violation of the privilege principle. Either the controls design failed to produce the report, or the controls allowed the trader to remove key monitoring information from it.
Access management processes detect when controls on separation of duties are violated. At the time access was requested for the trader to both the front- and back-office applications, a separation of duties violation would have been detected. This assumes such a control was put in place.
Ideally, the request for access that violates separation of duties should be
immediately denied. Even if the trader was somehow granted the access, a
detective control flags the violation to be resolved by the business unit
leadership.
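The detective control described here is often implemented as a check of requested roles against a list of prohibited combinations. The sketch below is a hypothetical illustration built around the Barings example; the role names and the toxic-combination list are assumptions, not part of any specific access management product.

    # Pairs of roles a single person must never hold at the same time.
    TOXIC_COMBINATIONS = [
        {"front_office_trading", "back_office_settlement"},
    ]

    def separation_of_duties_violations(requested_roles):
        """Return any prohibited combinations present in the requested roles."""
        roles = set(requested_roles)
        return [combo for combo in TOXIC_COMBINATIONS if combo <= roles]

    # The Barings trader held both front- and back-office roles.
    violations = separation_of_duties_violations(
        ["front_office_trading", "back_office_settlement"])
    if violations:
        print("Deny the request or escalate: separation of duties violation")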
Escalation is used when risks are being addressed. In this case, escalation could have occurred during the performance of a risk
assessment, controls design, or access management process. Anywhere
within these security processes the risk should have been detected and
reported to management for resolution. In ordinary situations, the CISO
is required to escalate events and risks if a business unit is not responsive.
The escalation path varies depending on the organization. These types of
separation of duties risks are reported to executive leadership, the chief
risk officer, or both. The reporting of such risks is picked up by the auditors, who follow up to see how the risk was resolved. This provides the
auditor evidence that the escalation process is working effectively.
To further understand how the bank’s losses occurred, let’s examine what arbitrage trading is. Arbitrage is the simultaneous buying and selling of an asset in two different markets at two different prices. The profit or loss comes from the difference between the two market prices. Because that difference is typically very small, the trader must make very large trades to earn any meaningful profit and to cover the cost of the trade itself.
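A small worked example, using made-up numbers rather than figures from the Barings case, shows why the trades must be so large: with a price gap of a few hundredths of a percent, only a very large position produces a profit that covers trading costs.

    # Illustrative numbers only; not drawn from the Barings case.
    price_market_a = 100.00        # buy price in market A
    price_market_b = 100.05        # sell price in market B (0.05% higher)
    trading_cost_per_unit = 0.02   # commissions, fees, slippage

    profit_per_unit = (price_market_b - price_market_a) - trading_cost_per_unit
    for units in (1_000, 1_000_000):
        print(units, "units ->", round(units * profit_per_unit, 2))
    # 1,000 units earn about 30; 1,000,000 units earn about 30,000.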
In the case of Barings, the trader was trading in the billions of dollars,
betting that the Tokyo stock market (tracked by the Nikkei index) would rise.
Instead of buying and selling, this trader just bought positions to sell at a
later time. This was not an arbitrage trade. He was betting that he knew
what the market was going to do. The losses started small. He kept
increasing the trades in the hope of making up the losses. Eventually the
trades put billions at risk. This went on for a year, until he could no longer
sustain the fraud.
It’s not unusual for fraud to go on for years before it’s detected. That’s why security policies typically require an active risk assessment program. You should review high-risk applications in light of new and emerging threats.
Unauthorized Access to Defense Department Systems
The Wall Street Journal reported in April 2010 that spies had hacked into
the Pentagon’s $300 billion Joint Strike Fighter project computers.
Several terabytes of data was stolen. The Joint Strike Fighter, also known
as the F-35 Lightning II, relies on 7.5 million lines of computer code.
According to the U.S. Government Accountability Office, that’s more than
triple the number of lines used in the current top Air Force fighters.
The public may never know the details given the highly sensitive nature of
the breach. The article did note that the intruders entered through
vulnerabilities in the networks of contractors helping to build the jet. The
intrusion had been going on since at least 2008. The article went on to say that it’s difficult to protect against vendors with uneven security, and that the Pentagon was getting more engaged with its vendors to improve security.
Although the details of the breach are not public, we do know that there
was a vendor management problem. For whatever reasons, the vendor
networks did not meet the control requirements of the Pentagon, and as a result the system was breached.
The problems highlighted in the article apply equally to private and public
sectors. Vendors are critical to many organizations. Yet organizations depend on vendors to provide the same level of controls the organization would provide for itself. That’s why security policies require vendor assessments to gain that assurance. A vendor assessment is a specialized risk assessment that looks at the vendor’s environment to determine whether using its services puts the organization’s data at risk. For example, the organization acquiring a vendor’s services compares its own User Domain policies and controls to the vendor’s. Any differences are further examined to determine whether they represent a control weakness. These control weaknesses could be exploited to compromise the organization’s data.
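In practice, part of a vendor assessment can be as simple as comparing the controls the organization requires with the controls the vendor attests to, then examining the differences. The sketch below is a hypothetical illustration; the control names are examples, not a standard checklist.

    # Controls the acquiring organization requires in its User Domain policies.
    org_required_controls = {
        "unique_user_ids", "security_awareness_training",
        "full_disk_encryption", "least_privilege_access",
    }

    # Controls the vendor attests to in its questionnaire (hypothetical answers).
    vendor_attested_controls = {"unique_user_ids", "security_awareness_training"}

    for control in sorted(org_required_controls - vendor_attested_controls):
        print("Potential control weakness at the vendor:", control)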
CHAPTER SUMMARY
This chapter examined the risk associated with the User Domain, part of
the seven domains of an IT infrastructure. As the number of users grows
on the network, their diverse needs also grow. Security policies are a
structured way of managing the user-related risks in this complex
environment. The chapter reviewed the many different types of users and
discussed unique roles such as administrator, security, and auditor. With
these roles often comes elevated privilege and enormous responsibilities.
Security policies are an effective way to reduce risks and govern users.
They help identify higher-risk activities such as those performed by
systems administrators. The policies are based on principles that help
apply security consistently. These principles include core concepts such
as least access privileges and best fit privileges. The principles lay out
risk choices and must strike a balance between cost to maintain and risks
to control. In the end, security policies can educate users, reduce human
error, and be used to better understand how incidents occurred.
KEY CONCEPTS AND TERMS
Best fit access privileges
Chain of custody
Contingent accounts
Contractors
Escalation
Firecall-ID process
Harden
Insider
Integrated audit
Interactive
Least access privileges
Log server
Pretexting
Privileged-level access agreement (PAA)
Security personnel
Structured Query Language (SQL) injection
Systems administrators
Trouble ticket
CHAPTER 9 ASSESSMENT
1. Pretexting is what happens when a hacker breaks into a firewall.
A. True
B. False
2. You can use a _______ process to grant temporary elevated rights.
3. Security awareness is required by which of the following?
A. Law
B. Customers
C. Shareholders
D. All of the Above
4. A(n) _______ looks at risk and issues an independent opinion.
5. A privileged-level access agreement (PAA) prevents an administrator from abusing elevated rights.
A. True
B. False
6. Which of the following does an acceptable use policy relate to?
A. Server-to-server communication
B. Users accessing the Internet
C. Encryption when transmitting files
D. A and B
7. A(n) _______ has inside information on how an organization operates.
8. Social engineering occurs when a hacker posts her victories on a social Web site.
A. True
B. False
9. Typically in large organizations all administrators have the same level of authority.
A. True
B. False
10. A CISO must _______ risks if the business unit is not responsive.
11. What is the difference between least access privileges and best fit access privileges?
A. Least access privileges customize access to an individual
B. Best fit privileges customize access to a group based on risk
C. No difference
D. A and B
12. System accounts are also referred to as _______ accounts.
13. An interactive service account typically does not have a password.
A. True