Brief Contents
PREFACE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
CHAPTER 1 An Overview of Information Security and Risk Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
CHAPTER 2 Planning for Organizational Readiness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
CHAPTER 3 Contingency Strategies for IR/DR/BC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
CHAPTER 4 Incident Response: Planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
CHAPTER 5 Incident Response: Detection and Decision Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
CHAPTER 6 Incident Response: Organizing and Preparing the CSIRT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
CHAPTER 7 Incident Response: Response Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
CHAPTER 8 Incident Response: Recovery and Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
CHAPTER 9 Disaster Recovery: Preparation and Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
CHAPTER 10 Disaster Recovery: Operation and Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
CHAPTER 11 Business Continuity Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
CHAPTER 12 Crisis Management and International Standards in IR/DR/BC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
APPENDIX A Sample Business Continuity Plan for ABC Co. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
APPENDIX B Contingency Plan Template from the Computer Security Resource Center at the National Institute of Standards and Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
APPENDIX C Sample Crisis Management Plan for Hierarchical Access, Ltd. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
GLOSSARY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Copyright 2013 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part.
Table of Contents
PREFACE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
CHAPTER 1 An Overview of Information Security and Risk Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Opening Case Scenario: Pernicious Proxy Probing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Information Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
   Key Information Security Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Overview of Risk Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
   Know Yourself . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
   Know the Enemy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
   Risk Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
   Risk Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
   Risk Control Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Contingency Planning and Its Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
   Business Impact Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
   Incident Response Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
   Disaster Recovery Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
   Business Continuity Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
   Contingency Planning Timeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Role of Information Security Policy in Developing Contingency Plans . . . . . . . . . . . . . . . . . . . . . . . . 29
   Key Policy Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
   Enterprise Information Security Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
   Issue-Specific Security Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
   Systems-Specific Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
   Policy Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
   Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
   Ethical Considerations in the Use of Information Security Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Closing Case Scenario: Pondering People . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
CHAPTER 2 Planning for Organizational Readiness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Opening Case Scenario: Proper Planning Prevents Problems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Beginning the Contingency Planning Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
   Commitment and Support of Senior Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Elements Required to Begin Contingency Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Contingency Planning Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
A Sample Generic Policy and High-Level Procedures for Contingency Plans . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Business Impact Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
   Determine Mission/Business Processes and Recovery Criticality . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
   Identify Resource Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
   Identify System Resource Recovery Priorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
BIA Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
   Online Questionnaires . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
   Facilitated Data-Gathering Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
   Process Flows and Interdependency Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
   Risk Assessment Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
   IT Application or System Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
   Financial Reports and Departmental Budgets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
   Audit Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
   Production Schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Budgeting for Contingency Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
   Incident Response Budgeting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
   Disaster Recovery Budgeting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
   Business Continuity Budgeting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
   Crisis Management Budgeting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Closing Case Scenario: Outrageously Odd Outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
CHAPTER 3 Contingency Strategies for IR/DR/BC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Opening Scenario: Panicking over Powder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Data and Application Resumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
   Online Backups and the Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
   Disk to Disk to Other: Delayed Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
   Redundancy-Based Backup and Recovery Using RAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
   Database Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
   Application Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
   Backup and Recovery Plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
   Real-Time Protection, Server Recovery, and Application Recovery . . . . . . . . . . . . . . . . . . . . . . . . . 102
Site Resumption Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
   Exclusive Site Resumption Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
   Shared-Site Resumption Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
   Service Agreements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
   Hands-On Project 3-1: Command-line Backup Using rdiff-backup . . . . . . . . . . . . . . . . . . . . . . . . . 124
   Hands-On Project 3-2: Copying Virtual Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Closing Case Scenario: Disaster Denied . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
CHAPTER 4 Incident Response: Planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Opening Case Scenario: DDoS Dilemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
The IR Planning Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
   Forming the IR Planning Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Developing the Incident Response Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
   Building the Computer Security Incident Response Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Incident Response Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Information for attack success end case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
   Planning for the Response During the Incident . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
   Planning for “After the Incident” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Reaction! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
   Planning for “Before the Incident” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
The CCDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Assembling and Maintaining the Final IR Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Closing Case Scenario: The Never-Ending Story . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
CHAPTER 5 Incident Response: Detection and Decision Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Opening Case Scenario: Oodles of Open Source Opportunities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Detecting Incidents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
   Possible Indicators of an Incident . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
   Probable Indicators of an Incident . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Technical Details: Rootkits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
   Definite Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
   Identifying Real Incidents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Intrusion Detection and Prevention Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Technical Details: Processes and Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
   IDPS Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
   Why Use an IDPS? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
   IDPS Network Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Technical Details: Ports and Port Scanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
   IDPS Detection Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
   Automated Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Incident Decision Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
   Collection of Data to Aid in Detecting Incidents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
   Challenges in Intrusion Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Closing Case Scenario: Jokes with JJ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
CHAPTER 6 Incident Response: Organizing and Preparing the CSIRT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Opening Case Scenario: Trouble in Tuscaloosa . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Building the CSIRT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
   Step 1: Obtaining Management Support and Buy-In . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
   Step 2: Determining the CSIRT Strategic Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
   Step 3: Gathering Relevant Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
   Step 4: Designing the CSIRT Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
A Sample Generic Policy and High-Level Procedures for Contingency Plans . . . . . . . . . . . . . . . . . . . . 243
   Step 5: Communicating the CSIRT’s Vision and Operational Plan . . . . . . . . . . . . . . . . . . . . . . . . . 249
   Step 6: Beginning CSIRT Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
   Step 7: Announcing the Operational CSIRT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
   Step 8: Evaluating CSIRT Effectiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
   Final Thoughts on CSIRT Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Outsourcing Incident Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
   Current and Future Quality of Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
   Division of Responsibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
   Sensitive Information Revealed to the Contractor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
   Lack of Organization-Specific Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
   Lack of Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
   Handling Incidents at Multiple Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
   Maintaining IR Skills In-House . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Closing Case Scenario: Proud to Participate in Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
CHAPTER 7 Incident Response: Response Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Opening Case Scenario: Viral Vandal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
IR Response Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
   Response Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
   Incident Containment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
The Cuckoo’s Egg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
   Incident Eradication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
   Incident Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Incident Containment and Eradication Strategies for Specific Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Egghead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
   Handling Denial of Service (DoS) Incidents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
   Malware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
   Unauthorized Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
   Inappropriate Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
   Hybrid or Multicomponent Incidents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Automated IR Response Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Closing Case Scenario: Worrisome Worms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
CHAPTER 8 Incident Response: Recovery and Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Opening Case Scenario: Wily Worms Wake Workers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
   Identify and Resolve Vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
   Restore Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
   Restore Services and Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
   Restore Confidence across the Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
   After-Action Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
   Plan Review and Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
   Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
   Rehearsal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
   Law Enforcement Involvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
   Reporting to Upper Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
   Loss Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Sample Impact Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Incident Forensics . . . . . . . . . . . . . . . . . . . . . . . . . 322
   Legal Issues in Digital Forensics . . . . . . . . . . . . . . . . 323
   Digital Forensics Team . . . . . . . . . . . . . . . . . . . . . 324
Technical Details . . . . . . . . . . . . . . . . . . . . . . . . . . 325
   Digital Forensics Methodology . . . . . . . . . . . . . . . . . . 335
eDiscovery and Anti-Forensics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Closing Case Scenario: Bureaucratic Blamestorms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
CHAPTER 9 Disaster Recovery: Preparation and Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Opening Case Scenario: Flames Force Fan Fury . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Disaster Classifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Copyright 2013 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part.
Forming the Disaster Recovery Team . . . . . . . . . . . . . . . . . 373
   Organization of the DR Team . . . . . . . . . . . . . . . . . . . 373
   Special Documentation and Equipment . . . . . . . . . . . . . . . 376
Disaster Recovery Planning Functions . . . . . . . . . . . . . . . . 377
   Develop the DR Planning Policy Statement . . . . . . . . . . . . 378
   Review the Business Impact Analysis . . . . . . . . . . . . . . . 382
   Identify Preventive Controls . . . . . . . . . . . . . . . . . . 382
   Develop Recovery Strategies . . . . . . . . . . . . . . . . . . . 382
   Develop the DR Plan Document . . . . . . . . . . . . . . . . . . 383
   Plan Testing, Training, and Exercises . . . . . . . . . . . . . . 386
   Plan Maintenance . . . . . . . . . . . . . . . . . . . . . . . . 387
Information Technology Contingency Planning Considerations . . . . . 387
   Client/Server Systems . . . . . . . . . . . . . . . . . . . . . . 388
   Data Communications Systems . . . . . . . . . . . . . . . . . . . 389
   Mainframe Systems . . . . . . . . . . . . . . . . . . . . . . . . 390
   Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
Sample Disaster Recovery Plans . . . . . . . . . . . . . . . . . . . 391
   The Business Resumption Plan . . . . . . . . . . . . . . . . . . 393
The DR Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Closing Case Scenario: Proactively Pondering Potential Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
CHAPTER 10 Disaster Recovery: Operation and Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Opening Case Scenario: Dastardly Disaster Drives Dialing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Facing Key Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Preparation: Training the DR Team and the Users . . . . . . . . . . . 412
   Plan Distribution . . . . . . . . . . . . . . . . . . . . . . . . 413
   Plan Triggers and Notification . . . . . . . . . . . . . . . . . 414
   Disaster Recovery Planning as Preparation . . . . . . . . . . . . 414
   DR Training and Awareness . . . . . . . . . . . . . . . . . . . . 417
   DR Plan Testing and Rehearsal . . . . . . . . . . . . . . . . . . 421
   Rehearsal and Testing of the Alert Roster . . . . . . . . . . . . 422
Disaster Response Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Recovery Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Resumption Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Restoration Phase . . . . . . . . . . . . . . . . . . . . . . . . . . 425
   Repair or Replacement . . . . . . . . . . . . . . . . . . . . . . 425
   Restoration of the Primary Site . . . . . . . . . . . . . . . . . 426
   Relocation from Temporary Offices . . . . . . . . . . . . . . . . 426
   Resumption at the Primary Site . . . . . . . . . . . . . . . . . 427
   Standing Down and the After-Action Review . . . . . . . . . . . . 427
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Closing Case Scenario: Smart Susan Starts Studying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
CHAPTER 11 Business Continuity Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Opening Case Scenario: Lovely Local Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Business Continuity Team . . . . . . . . . . . . . . . . . . . . . . 440
   BC Team Organization . . . . . . . . . . . . . . . . . . . . . . 441
   Special Documentation and Equipment . . . . . . . . . . . . . . . 442
Business Continuity Policy and Plan Functions . . . . . . . . . . . . 443
   Develop the BC Planning Policy Statement . . . . . . . . . . . . 444
   Review the BIA . . . . . . . . . . . . . . . . . . . . . . . . . 448
   Identify Preventive Controls . . . . . . . . . . . . . . . . . . 448
   Create BC Contingency (Relocation) Strategies . . . . . . . . . . 448
   Develop the BC Plan . . . . . . . . . . . . . . . . . . . . . . . 449
   Ensure BC Plan Testing, Training, and Exercises . . . . . . . . . 453
   Ensure BC Plan Maintenance . . . . . . . . . . . . . . . . . . . 453
   Sample Business Continuity Plans . . . . . . . . . . . . . . . . 453
Implementing the BC Plan . . . . . . . . . . . . . . . . . . . . . . 453
   Preparation for BC Actions . . . . . . . . . . . . . . . . . . . 454
   Returning to a Primary Site . . . . . . . . . . . . . . . . . . . 457
   BC After-Action Review . . . . . . . . . . . . . . . . . . . . . 459
Continuous Improvement of the BC Process . . . . . . . . . . . . . . 459
   Improving the BC Plan . . . . . . . . . . . . . . . . . . . . . . 459
   Improving the BC Staff . . . . . . . . . . . . . . . . . . . . . 463
Maintaining the BC Plan . . . . . . . . . . . . . . . . . . . . . . . 465
   Periodic BC Review . . . . . . . . . . . . . . . . . . . . . . . 465
   BC Plan Archivist . . . . . . . . . . . . . . . . . . . . . . . . 466
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
Closing Case Scenario: Exciting Emergency Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
CHAPTER 12 Crisis Management and International Standards in IR/DR/BC . . . . . . . . . . . . . . . . . . 477
Opening Case Scenario: Terrible Tragedy Today . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
Crisis Management in the Organization . . . . . . . . . . . . . . . . 479
   Crisis Terms and Definitions . . . . . . . . . . . . . . . . . . 479
   Crisis Misconceptions . . . . . . . . . . . . . . . . . . . . . . 481
Preparing for Crisis Management . . . . . . . . . . . . . . . . . . . 482
   General Preparation Guidelines . . . . . . . . . . . . . . . . . 482
   Organizing the Crisis Management Team . . . . . . . . . . . . . . 483
   Crisis Management Critical Success Factors . . . . . . . . . . . 485
   Developing the Crisis Management Plan . . . . . . . . . . . . . . 487
   Crisis Management Training and Testing . . . . . . . . . . . . . 490
Ongoing Case: Alert Roster Test at HAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
Post-crisis Trauma . . . . . . . . . . . . . . . . . . . . . . . . . 494
   Posttraumatic Stress Disorder . . . . . . . . . . . . . . . . . . 494
   Employee Assistance Programs . . . . . . . . . . . . . . . . . . 495
   Immediately after the Crisis . . . . . . . . . . . . . . . . . . 495
Getting People Back to Work . . . . . . . . . . . . . . . . . . . . . 496
   Dealing with Loss . . . . . . . . . . . . . . . . . . . . . . . . 496
Law Enforcement Involvement . . . . . . . . . . . . . . . . . . . . . 497
   Federal Agencies . . . . . . . . . . . . . . . . . . . . . . . . 498
   Local Agencies . . . . . . . . . . . . . . . . . . . . . . . . . 502
Managing Crisis Communications . . . . . . . . . . . . . . . . . . . 502
   Crisis Communications . . . . . . . . . . . . . . . . . . . . . . 502
The 11 Steps of Crisis Communications . . . . . . . . . . . . . . . . 503
   Avoiding Unnecessary Blame . . . . . . . . . . . . . . . . . . . 508
Succession Planning . . . . . . . . . . . . . . . . . . . . . . . . . 509
   Elements of Succession Planning . . . . . . . . . . . . . . . . . 510
   Succession Planning Approaches for Crisis Management . . . . . . 512
International Standards in IR/DR/BC . . . . . . . . . . . . . . . . . 513
   NIST Standards and Publications in IR/DR/BC . . . . . . . . . . . 513
   ISO Standards and Publications in IR/DR/BC . . . . . . . . . . . 513
   Other Standards and Publications in IR/DR/BC . . . . . . . . . . 515
Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
Review Questions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Real-World Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Hands-On Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Closing Case Scenario: Boorish Board Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
APPENDIX A Sample Business Continuity Plan for ABC Co. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
APPENDIX B Contingency Plan Template from the Computer Security Resource Center at the National Institute of Standards and Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
APPENDIX C Sample Crisis Management Plan for Hierarchical Access, Ltd. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
GLOSSARY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Preface
As global networks expand the interconnection of the world's technically complex infrastructure, communication and computing systems gain added importance. Information security has grown in importance both as a professional practice and as an academic discipline. Recent events, such as malware attacks and successful hacking efforts, have pointed out the weaknesses inherent in unprotected systems and exposed the need for heightened security of those systems. Securing technologically advanced systems and networks requires both education and the infrastructure to deliver that education, so that the next generation of information technology and information security professionals is prepared to develop a more secure and ethical computing environment. Improved tools and more sophisticated techniques are therefore needed to prepare students to recognize the threats and vulnerabilities present in existing systems and to design and develop the secure systems needed in the near future. The need for improved information security education has been recognized for many years, and as Dr. Ernest McDuffie of NIST points out:
While there is no doubt that technology has changed the way we live, work, and play, there are very real threats associated with the increased use of technology and our growing dependence on cyberspace….
Education can prepare the general public to identify and avoid risks in cyberspace; education will ready the cybersecurity workforce of tomorrow; and
education can keep today’s cybersecurity professionals at the leading edge of the latest technology and mitigation strategies.
Source: NIST
The need for improvements in information security education is so great that the U.S. National Security Agency (NSA) has established Centers of Academic Excellence in Information Assurance, as described in Presidential Decision Directive 63, "The Policy on Critical Infrastructure Protection," May 1998:
The program goal is to reduce vulnerabilities in our National Information Infrastructure by promoting higher education in information assurance, and producing a growing number of professionals with IA expertise in various disciplines.
Source: National Security Agency
The technical nature of the dominant texts on the market does not meet the needs of students who have a major other than computer science, computer engineering, or electronic engineering. This is a key concern for academics who wish to focus on delivering skilled undergraduates to the commercial information technology (IT) sector. Specifically, there is a clear need for information security, information systems, criminal justice, political science, and accounting information systems students to gain a clear understanding of the foundations of information security.
Approach

This book provides an overview of contingency operations and their components, as well as a thorough treatment of the administration of the planning process for incident response, disaster recovery, and business continuity. It can be used to support course delivery for information-security-driven programs targeted at information technology students, as well as IT management and technology management curricula aimed at business or technical management students.
Learning Support—Each chapter includes a Chapter Summary and a set of open-ended Review Questions. These are used to reinforce learning of the subject matter presented in the chapter.
Chapter Scenarios—Each chapter opens and closes with a case scenario that follows the same fictional company as it encounters various contingency planning or operational issues. The closing scenario also includes a few discussion questions. These questions give the student and the instructor an opportunity to discuss the issues that underlie the content.
Hands-On Learning—At the end of each chapter, Real-World Exercises and Hands-On Projects are provided. These give students the opportunity to examine the contingency planning arena outside the classroom. Using these exercises, students can pursue the learning objectives listed at the beginning of each chapter and deepen their understanding of the text material.
Boxed Examples—These supplemental sections, which feature examples not associated with the ongoing case study, are included to illustrate key learning objectives or extend the coverage of plans and policies.
New to This Edition

This edition provides a greater level of detail than the previous edition, specifically in the examination of incident response activities. It incorporates new approaches and methods that have been developed at NIST. Although the material on disaster recovery, business continuity, and crisis management has not
been reduced, the text's focus now follows that of the IT industry in shifting to the prevention of, detection of, reaction to, and recovery from computer-based incidents, and the avoidance of threats to the security of information. We are fortunate to have had the assistance of a reviewer who worked as a contributing author for NIST, ensuring alignment between this text and the methods recommended by NIST.
Author Team

Long-time college professors and information security professionals Michael Whitman and Herbert Mattord have jointly developed this text to merge knowledge from the world of academic study with practical experience from the business world. Professor Andrew Green joins this proven team, bringing a new dimension of practical experience.
Michael Whitman, Ph.D., CISM, CISSP

Michael Whitman is a professor of information security and assurance in the Information Systems Department, Michael J. Coles College of Business at Kennesaw State University, Kennesaw, Georgia, where he is the director of the KSU Center for Information Security Education (infosec.kennesaw.edu). Dr. Whitman has over 20 years of experience in higher education, with over 12 years of experience in designing and teaching information security courses. He is an active researcher in information security, fair and responsible use policies, and computer-use ethics, and he currently teaches graduate and undergraduate courses in information security. He has published articles in the top journals in his field, including Information Systems Research, Communications of the ACM, Information and Management, Journal of International Business Studies, and Journal of Computer Information Systems. He is a member of the Association for Computing Machinery and the Association for Information Systems. Under Dr. Whitman's leadership, Kennesaw State University has been recognized by the National Security Agency and the Department of Homeland Security as a National Center of Academic Excellence in Information Assurance Education three times; the university's coursework has been reviewed by national-level information assurance subject matter experts and determined to meet the national training standard for information systems security professionals.

Dr. Whitman is also the coauthor of Principles of Information Security, 4th edition; Management of Information Security, 4th edition; Readings and Cases in the Management of Information Security; Readings and Cases in Information Security: Law and Ethics; The Hands-On Information Security Lab Manual, 3rd edition; Roadmap to the Management of Information Security for IT and Information Security Professionals; Guide to Firewalls and VPNs, 3rd edition; Guide to Firewalls and Network Security, 2nd edition; and Guide to Network Security, all published by Course Technology. In 2012, the Colloquium for Information Systems Security Education selected Dr. Whitman as its Information Assurance Educator of the Year.
Herbert Mattord, Ph.D., CISM, CISSP

Herbert Mattord completed 24 years of IT industry experience as an application developer, database administrator, project manager, and information security practitioner before joining the faculty of Kennesaw State University in 2002. Dr. Mattord is an assistant professor of information security and assurance and the coordinator for the Bachelor of Business Administration in Information Security and Assurance program. He is the operations manager of the KSU Center for Information Security Education and Awareness (infosec.kennesaw.edu) as well as the coordinator for the KSU certificate in Information Security and Assurance. During his career as an IT practitioner, Dr. Mattord has been an adjunct professor at Kennesaw State University; Southern Polytechnic State University in Marietta, Georgia; Austin Community College in Austin, Texas; and Texas State University-San Marcos. He currently teaches undergraduate courses in information security, data communications, local area networks, database technology, project management, systems analysis and design, and information resources management and policy. He
was formerly the manager of corporate information technology security at Georgia-Pacific Corporation, where much of the practical knowledge found in this textbook was acquired. Professor Mattord is also the coauthor of Principles of Information Security, 4th edition; Management of Information Security, 4th edition; Readings and Cases in the Management of Information Security; Readings and Cases in Information Security: Law and Ethics; The Hands-On Information Security Lab Manual, 3rd edition; Roadmap to the Management of Information Security for IT and Information Security Professionals; Guide to Firewalls and VPNs, 3rd edition; Guide to Firewalls and Network Security, 2nd edition; and Guide to Network Security, all published by Course Technology.
Andrew Green, MSIS

Andrew Green is a lecturer of information security and assurance in the Information Systems Department, Michael J. Coles College of Business at Kennesaw State University, Kennesaw, Georgia. Mr. Green has over a decade of experience in information security. Before entering academia full time, he worked as an information security consultant, focusing primarily on the needs of small and medium-sized businesses. Before that, he worked in the healthcare IT field, where he developed and supported transcription interfaces for medical facilities throughout the United States. Mr. Green is also a full-time Ph.D. student at Nova Southeastern University, where he is studying information systems with a concentration in information security. He is the coauthor of Guide to Firewalls and VPNs, 3rd edition and Guide to Network Security, both published by Course Technology.
Structure
The textbook is organized into 12 chapters and 3 appendices. Here are summaries of each chapter’s contents:
Chapter 1. An Overview of Information Security and Risk Management This chapter defines the concepts of information security and risk management and explains how they are integral to the management processes used for incident response and contingency planning.
Chapter 2. Planning for Organizational Readiness The focus of this chapter is on how an organization can plan for and develop the organizational processes and staffing appointments needed for successful incident response and contingency plans.
Chapter 3. Contingency Strategies for IR/DR/BC This chapter explores the relationships between contingency planning and the subordinate elements of incident response, business resumption, disaster recovery, and business continuity planning. It also explains the techniques used for data and application backup and recovery.
Chapter 4. Incident Response: Planning This chapter expands on the incident response planning process to include processes and activities that are needed as well as the skills and techniques used to develop such plans.
Chapter 5. Incident Response: Detection and Decision Making This chapter describes how incidents are detected and how decision making regarding incident escalation and plan activation occurs.
Chapter 6. Incident Response: Organizing and Preparing the CSIRT This chapter presents the details of the actions that the CSIRT performs and how they are designed and developed.
Chapter 7. Incident Response: Response Strategies This chapter describes IR reaction strategies and how they are applied to incidents.
Chapter 8. Incident Response: Recovery and Maintenance This chapter describes how an organization plans for and executes the recovery process when an incident occurs; it also expands on the steps involved in the ongoing maintenance of the IR plan.
Chapter 9. Disaster Recovery: Preparation and Implementation This chapter explores how organizations prepare for and recover from disasters.
Chapter 10. Disaster Recovery: Operation and Maintenance This chapter presents the challenges an organization faces when engaged in DR operations and how such challenges are met.
Chapter 11. Business Continuity Planning This chapter covers how organizations ensure continuous operations even when the primary facilities used by the organization are not available.
Chapter 12. Crisis Management and International Standards in IR/DR/BC This chapter covers the role of crisis management and recommends the elements of a plan to prepare for crisis response. The chapter also covers the key international standards that affect IR, DR, and BC.
Appendices. The three appendices present sample BC and crisis management plans and templates.
Text and Graphic Conventions
Wherever appropriate, additional information and exercises have been added to this book to help you better understand what is being discussed in the chapter. Icons throughout the text alert you to additional materials. The icons used in this textbook are described here:
Notes present additional helpful material related to the subject being described.
Offline boxes offer material that expands on the chapter’s contents but that may not be central to the learning objectives of the chapter.
Technical Details boxes provide additional technical information on information security topics.
Real World Exercises are structured activities to allow students to enrich their understanding of selected topics presented in the chapter by exploring Web-based or other widely available resources.
Hands-On Projects offer students the chance to explore the technical aspects of the theories presented in the chapter.
Instructor’s Materials
The following supplemental materials are available for use in a classroom setting. All the supplements available with this book are provided to the instructor on a single CD-ROM (ISBN: 9781111138066) and online at the textbook’s Web site.
Please visit login.cengage.com and log in to access instructor-specific resources.
To access additional course materials, please visit www.cengagebrain.com. At the CengageBrain.com home page, search for the ISBN of your title (from the back cover of your book) using the search box at the top of the page. This will take you to the product page, where these resources can be found.
Additional materials designed especially for you might be available for your course online. Go to www.cengage.com/coursetechnology and search for this book title periodically for more details.
Electronic Instructor’s Manual—The Instructor’s Manual that accompanies this textbook includes additional instructional material to assist in class preparation, including suggestions for classroom activities, discussion topics, and additional projects.
Solution Files—The Solution Files include answers to selected end-of-chapter materials, including the Review Questions and some of the Hands-On Projects.
ExamView—This textbook is accompanied by ExamView, a powerful testing software package that allows instructors to create and administer printed, computer (LAN-based), and Internet exams. ExamView includes hundreds of questions that correspond to the topics covered in this text, enabling students to generate detailed study guides that include page references for further review. The computer-based and Internet testing components allow students to take exams at their computers, and also save the instructor time by grading each exam automatically.
PowerPoint Presentations—This book comes with Microsoft PowerPoint slides for each chapter. These are included as a teaching aid for classroom presentation. They can also be made available to students on the network for chapter review, or they can be printed for classroom distribution. Instructors, feel free to add your own slides for additional topics you introduce to the class.
Information Security Community Site—Stay Secure with the Information Security Community Site! Connect with students, professors, and professionals from around the world, and stay on top of this ever-changing field.
● Visit www.cengage.com/community/infosec.
● Download resources such as instructional videos and labs.
● Ask authors, professors, and students the questions that are on your mind in our Discussion Forums.
● See up-to-date news, videos, and articles.
● Read author blogs.
● Listen to podcasts on the latest Information Security topics.
Acknowledgments
The authors would like to thank their families for their support and understanding for the many hours dedicated to this project, hours taken in many cases from family activities. Special thanks to Karen Scarfone, coauthor of several NIST SPs. Her reviews and suggestions resulted in a more readable manuscript. Additionally, the authors would like to thank Doug Burks, primary developer of the Security Onion project used in this textbook. Doug’s insight and suggestions for the Hands-On Projects helped make them more robust and practical for students to use.
Reviewers
We are indebted to the following individuals for their respective contributions of perceptive feedback on the initial proposal, the project outline, and the individual chapters of the text:
Karen Scarfone, Scarfone Cybersecurity Gary Kessler, Embry-Riddle Aeronautical University
Special Thanks
The authors wish to thank the editorial and production teams at Course Technology. Their diligent and professional efforts greatly enhanced the final product:
Michelle Ruelos Cannistraci, Senior Product Manager
Kent Williams, Developmental Editor
Nick Lombardi, Acquisitions Editor
Andrea Majot, Senior Content Project Manager
Nicole Ashton Spoto, Technical Editor
In addition, several professional and commercial organizations and individuals have aided the development of the textbook by providing information and inspiration, and the authors wish to acknowledge their contribution:
Bernstein Crisis Management
Continuity Central
Information Systems Security Association
Institute for Crisis Management
National Institute of Standards and Technology
Oracle, Inc.
Purdue University
Rothstein Associates, Inc.
SunGard
Our colleagues in the Department of Information Systems and the Michael J. Coles College of Business, Kennesaw State University
Dr. Amy Woszczynski, Interim Chair of the Department of Information Systems, Michael J. Coles College of Business, Kennesaw State University
Dr. Kathy Schwaig, Dean of the Michael J. Coles College of Business, Kennesaw State University
Our Commitment
The authors are committed to serving the needs of the adopters and readers. We would be pleased and honored to receive feedback on the textbook and its supporting materials. You can contact us through Course Technology.
Chapter 1
An Overview of Information Security and Risk Management
An ounce of prevention is worth a pound of cure. —Benjamin Franklin
Upon completion of this material, you should be able to:
● Define and explain information security
● Identify and explain the basic concepts of risk management
● List and discuss the components of contingency planning
● Describe the role of information security policy in the development of contingency plans
Introduction
This book is about being prepared for the unexpected, being ready for such events as incidents and disasters. We call this contingency planning, and the sad fact is that most organizations don’t incorporate it into their day-to-day business activities. Such organizations are often not well prepared to offer the proper response to a disaster or security incident. By July 2012, Internet World Stats estimated that there were over 2.4 billion people online,1 representing one third of the world’s 6.9 billion population. Each one of those online users is a potential threat to any online system. The vast majority of Internet users will not intentionally probe, monitor, attack, or attempt to access an organization’s information without authorization; however, that potential does exist. If even 1/10 of 1 percent of online users made the effort, the result would be almost two and a half million potential attackers.
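The back-of-the-envelope estimate above can be checked directly; the 2.4 billion figure comes from the source, and the rest is simple arithmetic:

```python
# Rough estimate of potential attackers among the online population.
online_users = 2_400_000_000   # Internet World Stats estimate, July 2012
hostile_fraction = 0.001       # 1/10 of 1 percent

potential_attackers = int(online_users * hostile_fraction)
print(potential_attackers)     # 2400000 -- "almost two and a half million"
```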
Opening Case Scenario: Pernicious Proxy Probing
Paul Alexander and his boss Amanda Wilson were sitting in Amanda’s office discussing the coming year’s budget when they heard a commotion in the hall. Hearing his name mentioned, Paul stuck his head out the door and saw Jonathon Jasper (“JJ” to his friends) walking quickly toward him.
“Paul!” JJ called again, relieved to see Paul waiting in Amanda’s office.
“Hi, Amanda,” JJ said, then, looking at Paul, he added, “We have a problem.” JJ was one of the systems administrators at Hierarchical Access LTD (HAL), a Georgia-based Internet service provider that serves the northwest region of metropolitan Atlanta.
Paul stepped out into the hall, closing Amanda’s door behind him. “What’s up, JJ?”
“I think we’ve got someone sniffing around the e-mail server,” JJ replied. “I just looked at the log files, and there is an unusual number of failed login attempts on accounts that normally just don’t have that many, like yours!”
Paul paused a moment.
“But the e-mail server’s proxied,” he finally said to JJ, “which means it must be an internal probe.”
“Yeah, that’s why it’s a problem,” JJ replied. “We haven’t gotten this kind of thing since we installed the proxy and moved the Web and e-mail servers inside the DMZ. It’s got to be someone in-house.”
JJ looked exasperated. “And after all that time I spent conducting awareness training!”
“Don’t worry just yet,” Paul told him. “Let’s make a few calls, and then we’ll go from there. Grab your incident response book and meet me in the conference room in 10 minutes. Grab Tina in network operations on the way.”
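JJ’s observation, an unusual number of failed logins on accounts that are normally quiet, is exactly the kind of check that can be automated. A minimal sketch, assuming a hypothetical log format of one `timestamp account result` entry per line; the field layout and threshold are illustrative, not from the book:

```python
from collections import Counter

def flag_suspicious_accounts(log_lines, threshold=5):
    """Count failed logins per account and flag accounts at or over threshold."""
    failures = Counter()
    for line in log_lines:
        # Hypothetical format: "<timestamp> <account> <LOGIN_OK|LOGIN_FAIL>"
        parts = line.split()
        if len(parts) == 3 and parts[2] == "LOGIN_FAIL":
            failures[parts[1]] += 1
    return {user: n for user, n in failures.items() if n >= threshold}

log = (["2012-07-01T03:1%d paul LOGIN_FAIL" % i for i in range(6)]
       + ["2012-07-01T09:00 amanda LOGIN_OK"])
print(flag_suspicious_accounts(log))   # {'paul': 6}
```

A real monitoring tool would also consider time windows and per-account baselines, but the core idea (count failures, compare against what is normal) is the same.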
In the weeks that followed the September 11, 2001 attacks in New York, Pennsylvania, and Washington, D.C., the media reported on the disastrous losses that various organizations were suffering. Still, many organizations were able to continue conducting business. Why? The reason is that those organizations were prepared for unexpected events. The cataclysm in 2001 was not the first attack on the World Trade Center (WTC). On February 26, 1993, a car bomb exploded beneath one of the WTC towers, killing 6 and injuring over 1000. The attack was limited in its devastation only because the attackers weren’t able to acquire all the components for a coordinated bomb and cyanide gas attack.2
Still, this attack was a wake-up call for the hundreds of organizations that conducted business in the WTC. Many began asking the question, “What would we have done if the attack had been more successful?” As a direct result, many of the organizations occupying the WTC on September 11, 2001 had developed contingency plans. Although thousands of people lost their lives in the attack, many were able to evacuate, and many organizations were prepared to resume their businesses in the aftermath of the devastation.
A 2008 Gartner report found that two out of three organizations surveyed had to invoke their disaster recovery or business continuity plans in the two years preceding the study.3 Given that nearly 80 percent of businesses affected by a disaster either never reopen or close within 18 months of the event, having a disaster recovery and business continuity plan is vital to sustaining operations when disasters strike.4 Considering the risks, it is imperative that management teams create, implement, and test effective plans to deal with incidents and disasters. For this reason, the field of information security has been steadily growing and is taken seriously by more and more organizations, not only in the United States but throughout the world.
Before we can discuss contingency planning in detail, we must introduce some critical concepts of which contingency planning is an integral part. The first of these, which serves as the overall disciplinary umbrella, is information security. This refers to many interlinked programs and activities that work together to ensure the confidentiality, integrity, and availability of the information used by organizations. This includes steps to ensure the protection of organizational information systems, specifically during incidents and disasters. Because information security is a complex subject, which includes risk management as well as information security policy, it is important to have an overview of that broad field and an understanding of these major components. Contingency planning is an important element of information security, but before management can plan for contingencies, it should have an overall strategic plan for information security in place, including risk management processes to guide the appropriate managerial and technical controls. This chapter serves as an overview of information security, with special consideration given to risk management and the role that contingency planning plays in (1) information security in general and (2) risk management in particular.
Information Security
The Committee on National Security Systems (CNSS) has defined information security as the protection of information and its critical elements, including the systems and hardware that use, store, and transmit that information. This definition is part of the CNSS model (see Figure 1-1), which serves as the conceptual framework for understanding information security. The model evolved from a similar model developed within the computer security industry, known as the C.I.A. triangle. An industry standard for computer security since the development of the mainframe, the C.I.A. triangle illustrates the three most critical characteristics of information used within information systems: confidentiality, integrity, and availability.
Information assets have the characteristic of confidentiality when only those persons or computer systems with the rights and privileges to access them are able to do so. Information assets have integrity when they are not exposed (while being stored, processed, or transmitted) to corruption, damage, destruction, or other disruption of their authentic states; in other words, the information is whole, complete, and uncorrupted. Finally, information assets have availability when authorized users—persons or computer systems—are able to access them in the specified format without interference or obstruction. In other words, the information is there when it is needed, from where it is supposed to be, and in the format expected.
In summary, information security (InfoSec) is the protection of the confidentiality, integrity, and availability of information, whether in storage, during processing, or in transmission. Such protection is achieved through the application of policy, education and training, and technology.
Key Information Security Concepts
In general, a threat is an object, person, or other entity that represents a potential risk of loss to an asset, which is the organizational resource being protected. An asset can be logical, such as a Web site, information, or data, or it can be physical, such as a person, computer system, or other tangible object. A threat can become the basis for an attack—an intentional or unintentional attempt to cause damage to or otherwise compromise the information or the systems that support it. A threat-agent is a specific and identifiable instance of a general threat that exploits vulnerabilities in the controls set up to protect the asset. NIST defines a vulnerability as “a flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or violation of the system’s security policy.”5 Vulnerabilities that have been examined, documented, and published are referred to as well-known vulnerabilities. Some vulnerabilities are latent and thus not revealed until they are discovered and made known.
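The vocabulary above fits together naturally as a small data model. The sketch below is illustrative only; the class names and fields are ours, not the book’s or NIST’s, and the example values echo the opening scenario:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    kind: str          # "logical" (Web site, data) or "physical" (server, person)

@dataclass
class Vulnerability:
    description: str
    well_known: bool   # examined, documented, and published?

@dataclass
class Attack:
    threat_agent: str  # a specific, identifiable instance of a general threat
    target: Asset
    exploits: Vulnerability

mail_server = Asset("HAL e-mail server", "physical")
weak_auth = Vulnerability("no lockout after repeated failed logins", well_known=True)
probe = Attack("insider password guesser", mail_server, weak_auth)
print(probe.target.name)   # HAL e-mail server
```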
Figure 1-1 The CNSS security model (© Cengage Learning 2014). The model is drawn as a cube relating the critical characteristics of information (confidentiality, integrity, availability), the states of information (storage, processing, transmission), and the categories of security measures (policy, education, technology).
There are two common uses of the term exploit in information security. First, threat-agents are said to exploit a system or information asset by using it illegally for personal gain. Second, threat-agents can create an exploit, or means to target a specific vulnerability, usually found in software, to formulate an attack. A defender tries to prevent attacks by applying a control, a safeguard, or a countermeasure; these terms, all synonymous with control, represent security mechanisms, policies, or procedures that can successfully counter attacks, reduce risk, resolve vulnerabilities, and generally improve the security within an organization.
The results of a 2012 study that collected, categorized, and ranked the identifiable threats to information security are shown in Table 1-1. The study compared its findings with a prior study conducted by one of its researchers.
The threat categories shown in Table 1-1 are explained in detail in the following sections.
Trespass Trespass is a broad category of electronic and human activities that can breach the confidentiality of information. When an unauthorized individual gains access to the information an organization is trying to protect, that act is categorized as a deliberate act of trespass. In the opening scenario of this chapter, the IT staff members at HAL were more disappointed than surprised to find someone poking around their mail server, looking for a way in. Acts of trespass can lead to unauthorized real or virtual actions that enable information gatherers to enter premises or systems they have not been authorized to enter.
Threat Category                                          2010 Ranking   Prior Ranking
Espionage or trespass                                          1              4
Software attacks                                               2              1
Human error or failure                                         3              3
Theft                                                          4              7
Compromises to intellectual property                           5              9
Sabotage or vandalism                                          6              5
Technical software failures or errors                          7              2
Technical hardware failures or errors                          8              6
Forces of nature                                               9              8
Deviations in quality of service from service providers       10             10
Technological obsolescence                                    11             11
Information extortion                                         12             12
Table 1-1 Threats to information security6
Source: 2003 Study © Communications of the ACM, used with permission
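The two ranking columns in Table 1-1 can be compared directly to see which threat categories moved the most between surveys. A quick sketch using the table’s own data (the comparison logic is ours):

```python
# (current ranking, prior ranking) per threat category, from Table 1-1
rankings = {
    "Espionage or trespass": (1, 4),
    "Software attacks": (2, 1),
    "Human error or failure": (3, 3),
    "Theft": (4, 7),
    "Compromises to intellectual property": (5, 9),
    "Sabotage or vandalism": (6, 5),
    "Technical software failures or errors": (7, 2),
    "Technical hardware failures or errors": (8, 6),
    "Forces of nature": (9, 8),
    "Deviations in quality of service": (10, 10),
    "Technological obsolescence": (11, 11),
    "Information extortion": (12, 12),
}

# Positive delta = the category rose (became a more serious threat).
deltas = {name: prior - current for name, (current, prior) in rankings.items()}
biggest_riser = max(deltas, key=deltas.get)
print(biggest_riser, deltas[biggest_riser])
# Compromises to intellectual property 4
```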
The classic perpetrator of deliberate acts of espionage or trespass is the hacker. In this text, hackers are people who bypass legitimate controls placed on information systems in order to gain access to data or information against the intent of the owner. More specifically, a hacker is someone who uses skill, guile, or fraud to attempt to bypass the controls placed around information that belongs to someone else.
Software Attacks Deliberate software attacks occur when an individual or group designs software to attack a system. This software is referred to as malicious code, malicious software, or malware. These software components or programs are designed to damage, destroy, or deny service to the target systems. Some of the more common instances of malicious code are viruses and worms, Trojan horses, logic bombs, bots, rootkits, and back doors. Equally prominent among the recent incidences of malicious code are the denial-of-service attacks conducted by attackers on popular e-commerce sites. A denial-of-service (DoS) attack seeks to deny legitimate users access to services by either tying up a server’s available resources or causing it to shut down. A variation on the DoS attack is the distributed DoS (DDoS) attack, in which an attacker compromises a number of systems, then uses these systems (called zombies or bots) to attack an unsuspecting target.
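One simple defensive heuristic against the flooding form of DoS described above is per-client rate limiting. A minimal sliding-window sketch; the window size and request limit are arbitrary illustrative values, and real defenses (especially against DDoS) involve far more than this:

```python
from collections import defaultdict, deque

class RateLimiter:
    """Reject a client that exceeds max_requests within window_seconds."""
    def __init__(self, max_requests=100, window_seconds=10):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)   # client -> recent request times

    def allow(self, client, now):
        q = self.history[client]
        while q and now - q[0] > self.window:   # drop requests outside the window
            q.popleft()
        if len(q) >= self.max_requests:
            return False                        # looks like a flood; reject
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=1)
results = [limiter.allow("10.0.0.99", t) for t in (0.0, 0.1, 0.2, 0.3)]
print(results)   # [True, True, True, False]
```

Note that a single-server limiter like this is of little help against a distributed attack, where each zombie stays under the per-client limit; that is precisely why DDoS is harder to defend against.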
A potential source of confusion when it comes to threats posed by malicious code is the distinction between the method of propagation (worm versus virus), the payload (what the malware does once it is in place, such as deny service or install a back door), and the vector of infection (how the code is transmitted from system to system, whether through social engineering or by technical means, such as an open network share). Various concepts related to the topic of malicious code are discussed in the following sections.
Viruses Computer viruses are segments of code that perform malicious actions. The code attaches itself to an existing program and takes control of that program’s access to the targeted computer. The virus-controlled target program then carries out the virus’s plan by replicating itself and inserting itself into additional targeted systems.
Opening an infected e-mail or some other seemingly trivial action can cause anything from random messages popping up on a user’s screen to the destruction of entire hard drives of data. Viruses are passed from machine to machine via physical media, e-mail, or other forms of computer data transmission. When these viruses infect a machine, they may immediately scan the local machine for e-mail applications; they may even send themselves to every user in the e-mail address book.
There are several types of viruses. One type is the macro virus, which is embedded in automatically executing macro code, common in word-processed documents, spreadsheets, and database applications. Another type, the boot virus, infects the key operating system files located in a computer’s boot sector.
Worms Named for the tapeworm in John Brunner’s novel The Shockwave Rider, worms are malicious programs that replicate themselves constantly without requiring another program to provide a safe environment for replication. Worms can continue replicating themselves until they completely fill available resources, such as memory, hard drive space, and network bandwidth. These complex behaviors can be invoked with or without the user downloading or executing the file. Once the worm has infected a computer, it can redistribute itself to all e-mail addresses found on the infected system. Further, a worm can deposit copies of itself onto all Web servers that the infected system can reach, so that users who subsequently visit those sites become infected themselves. Worms also take advantage of open shares found on the network in which an infected system is located, placing working copies of the worm code onto the server so that users of those shares are likely to become infected.
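The self-replicating behavior described above grows roughly exponentially until the reachable population is saturated. A toy simulation; the contact rate and population size are made up for illustration, and real worm propagation depends on scanning strategy, patch levels, and network topology:

```python
def worm_spread(population, contacts_per_step, steps):
    """Return infected-host counts per step for a toy worm model.

    Each infected host compromises `contacts_per_step` new hosts per step,
    so growth is exponential until it saturates the population.
    """
    infected = 1
    history = [infected]
    for _ in range(steps):
        infected = min(population, infected * (1 + contacts_per_step))
        history.append(infected)
    return history

print(worm_spread(population=10_000, contacts_per_step=2, steps=10))
# Triples each step (1, 3, 9, 27, ...) until all 10,000 hosts are hit.
```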
Back Doors and Trap Doors A virus or worm can have a payload that installs a back door or trap door component in a system, which allows the attacker to access a system, at will, with special privileges. Examples of these kinds of payloads are SubSeven, Back Orifice, and Flashfake.
Polymorphism One of the biggest ongoing problems in fighting viruses and worms is the polymorphic threat. A polymorphic threat is one that changes its apparent shape over time, making it undetectable by techniques that look for preconfigured signatures. These viruses and worms actually evolve, changing their size and appearance to elude detection by antivirus software programs. This means that an e-mail generated by the virus may not match previous examples, making detection more of a challenge.
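Signature-based detection typically matches a fingerprint of known malicious code, such as a cryptographic hash or byte pattern; a polymorphic specimen defeats it by re-encoding itself so that no stored fingerprint matches. A deliberately simplified sketch (real scanners match byte patterns and emulate code, not just whole-file hashes, and the "mutation" here is modeled as a plain XOR re-encoding):

```python
import hashlib

def signature(data: bytes) -> str:
    """Fingerprint a sample; real AV signatures are richer than a file hash."""
    return hashlib.sha256(data).hexdigest()

def matches_known(data: bytes, known_signatures: set) -> bool:
    return signature(data) in known_signatures

payload = b"malicious payload v1"
known = {signature(payload)}            # signature DB built from the known sample

# Polymorphic mutation: same behavior, different bytes (modeled here as
# re-encryption with a new XOR key plus an implied decryptor stub).
mutated = bytes(b ^ 0x5A for b in payload)

print(matches_known(payload, known))    # True  -- the original sample is caught
print(matches_known(mutated, known))    # False -- the mutated copy slips past
```

This is why modern scanners supplement signatures with heuristic and behavioral analysis: the mutated bytes differ, but the runtime behavior does not.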
Propagation Vectors The way that malicious code is spread from one system to another can vary widely. One common way is through a social engineering attack—that is, getting the computer user to perform an action that enables the infection. An example of this is the Trojan horse, often simply called a Trojan. A Trojan is something that looks like a desirable program or tool but is in fact a malicious entity. Other propagation vectors do not require human interaction, leveraging open network connections, file shares, or software vulnerabilities to spread themselves.
Malware Hoaxes As frustrating as viruses and worms are, perhaps more time and money are spent on resolving malware hoaxes. Well-meaning people can disrupt the harmony and flow of an organization when they send random e-mails warning of dangerous malware that is fictitious. While these individuals feel they are helping out by warning their coworkers of a threat, much time and energy is wasted as everyone forwards the message to everyone they know, posts the message on social media sites, and begins updating antivirus protection software. By teaching its employees how to verify whether a malware threat is real, an organization can reduce the impact of this type of threat.
Human Error or Failure This threat category includes acts performed by an authorized user, usually without malicious intent or purpose. When people use information systems, mistakes sometimes happen as a result of inexperience, improper training, incorrect assumptions, and so forth. Unfortunately, small mistakes can produce extensive damage with catastrophic results. This is what is meant by human error. Human failure, on the other hand, is the intentional refusal or unintentional inability to comply with policies, guidelines, and procedures, with a potential loss of information. An organization may be doing its part to protect information, but if an individual employee fails to follow established protocols, information can still be put at risk.
Theft The threat of theft—the illegal taking of another’s property—is a constant problem. Within an organization, property can be physical, electronic, or intellectual. The value of information assets suffers when they are copied and taken away without the owner’s knowledge. This threat category also includes acts of espionage, given that an attacker is often looking for information to steal. Any breach of confidentiality can be construed as an act of theft.
Attackers can use many different methods to access the information stored in an information system. Some information gathering is quite legal—for example, when doing research. Such techniques are collectively referred to as competitive intelligence. When information gathering employs techniques that cross the threshold of what is considered legal or ethical, it becomes known as industrial espionage.
Also of concern in this category is the theft or loss of mobile devices, including phones, tablets, and computers. Although the devices themselves are of value, perhaps even more valuable is the information stored within. Users who have been issued company equipment may establish (and save) VPN-connection information, passwords, access credentials, company records, customer information, and the like. This valuable information becomes a target for information thieves. In fact, it has become commonplace to find lost or stolen devices in the trash, with the hard drives or data cards (like phone SIMs) removed or the data having been copied and erased. The information is more valuable and easier to conceal than the device itself.
Users who travel or use their devices away from home should be extremely careful when leaving the device unattended at a restaurant table, conference room, or hotel room. Indeed, most globally engaged organizations now have explicit policy directives that prohibit taking these portable devices to certain countries and direct employees required to travel to take sanitized, almost disposable, devices that are not allowed contact with internal company networks or technology.
Compromises to Intellectual Property Many organizations create or support the development of intellectual property as part of their business operations. FOLDOC, an online dictionary of computing, defines intellectual property (IP) this way:
The ownership of ideas and control over the tangible or virtual representation of those ideas. Use of another person’s intellectual property may or may not involve royalty payments or permission but should always include proper credit to the source.7
Source: FOLDOC
IP includes trade secrets, copyrights, trademarks, and patents, all of which employees use to conduct day-to-day business. Once an organization has properly identified its IP, breaches of the controls placed on access to it constitute a threat to the security of this information.
Often, an organization purchases or leases the IP of other organizations and must therefore abide by the purchase or licensing agreement for its fair and responsible use.
8 Chapter 1 An Overview of Information Security and Risk Management
Copyright 2013 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part.
Of equal concern is exfiltration, the unauthorized removal of information from an organization. Most commonly associated with disgruntled employees, the unauthorized disclosure of intellectual property to third parties further illustrates the severity of this issue. Theft of organizational IP, such as trade secrets or trusted information like customer personal and financial records, is a commonplace problem. Data exfiltration is also becoming tougher to combat because of the increasing popularity of “bring your own device” (BYOD) programs, which allow employees to attach their own personal devices to the corporate network. These devices are frequently not as secure as the systems owned and maintained by the organization. If compromised by attackers before attaching to the corporate network, BYOD systems can easily be used as conduits for exfiltrating data. Additionally, unhappy employees can use these devices to copy data and then leave the organization with that valuable asset in hand and no one the wiser.
Among the most common IP breaches is the unlawful use or duplication of software-based intellectual property, more commonly known as software piracy. Because most software is licensed to a particular purchaser, its use is restricted to a single user or to a designated user in an organization. If the user copies the program to another computer without securing another license or transferring the license, he or she has violated the copyright. Software licenses are strictly enforced by a number of regulatory and private organizations, and software publishers use several control mechanisms to prevent copyright infringement. In addition to the laws surrounding software piracy, two watchdog organizations investigate allegations of software abuse: the Software & Information Industry Association (SIIA), whose Web site can be found at www.siia.net, and the Business Software Alliance (BSA), which can be found at www.bsa.org.
Sabotage or Vandalism This threat category involves the deliberate sabotage of a computer system or business or acts of vandalism to either destroy an asset or damage an organization’s image. The acts can range from petty vandalism by employees to organized sabotage by outsiders. A frequently encountered threat is the assault on an organization’s electronic profile—its Web site.
A much more sinister form of hacking is cyberterrorism. Cyberterrorists hack systems to conduct terrorist activities through network or Internet pathways. The United States and other governments are developing security measures intended to protect the critical computing and communications networks as well as the physical and power utility infrastructures.
Technical Software Failures or Errors This threat category stems from purchasing software with unknown hidden faults. Large quantities of computer code are written, published, and sold before all the significant security-related bugs are detected and resolved. Also, combinations of particular software and hardware may reveal new bugs. While most bugs are not a security threat, some may be exploitable and may result in potential loss or damage to information used by those programs. In addition to bugs, there may be untested failure conditions or purposeful subversions of the security controls built into systems. These may be oversights or intentional shortcuts left by programmers for benign or malign reasons. Collectively, shortcut access routes into programs that bypass security checks are called trap doors; they can cause serious security breaches.
Information Security 9
Software bugs are so commonplace that entire Web sites are dedicated to documenting them—for example, Bugtraq (www.securityfocus.com) and the National Vulnerability Database (http://nvd.nist.gov). These resources provide up-to-the-minute information on the latest security vulnerabilities and a very thorough archive of past bugs.
Technical Hardware Failures or Errors Technical hardware failures or errors occur when a manufacturer distributes equipment containing a known or unknown flaw. These defects can cause the system to perform outside of expected parameters, resulting in unreliable service or lack of availability. Some errors are terminal, in that they result in the unrecoverable loss of the equipment. Some errors are intermittent, in that they only periodically manifest themselves, resulting in faults that are not easily identified. For example, equipment can sometimes stop working or can work in unexpected ways. Murphy’s Law says that if something can possibly go wrong, it will. In other words, it’s not whether something will fail but when.
Forces of Nature Forces of nature, also known as force majeure, or acts of God, pose some of the most dangerous threats imaginable because they often occur with very little warning. Fire, flood, earthquake, lightning, volcanic eruptions, even animal or insect infestation—these threats disrupt not only the lives of individuals but also the storage, transmission, and use of information.
Deviations in Quality of Service by Service Providers This threat category covers situations in which a product or service is not delivered to the organization as expected. Utility companies, service providers, and other value-added organizations form a vast web of interconnected services. An organization’s information system depends on the successful operation of such interdependent support systems, including power grids, telecom networks, parts suppliers, service vendors, and even the janitorial staff and garbage haulers. Any one of these support systems can be interrupted by storms, employee illnesses, or other unforeseen events.
An example of this threat category occurs when a construction crew damages a fiber-optic link for an ISP. The backup provider may be online and in service but may only be able to supply a fraction of the bandwidth the organization needs for full service. This degradation of service is a form of availability disruption. Internet service, communications, and power irregularities can dramatically affect the availability of information and systems.
Technological Obsolescence This threat category involves antiquated or outdated infrastructure that leads to unreliable and untrustworthy systems. Management must recognize that when technology becomes outdated, there is a risk of a loss of data integrity from attacks. Strategic planning should always include an analysis of the technology currently in use. Ideally, proper planning will prevent the risks stemming from technological obsolescence, but when obsolescence is identified, management must take immediate action. IT professionals play a large role in the identification of obsolescence.
Information Extortion The threat of information extortion is the possibility that an attacker or trusted insider will steal information from a computer system and demand compensation for its return or for an agreement to not disclose the information. Extortion
is common in credit card number theft. Unfortunately, organized crime is increasingly involved in this area.
Other Threats Listings The Computer Security Institute conducts an annual study of computer crime, the results for which are shown in Table 1-2. Malware attacks continue to cause the most financial loss, and malware continues to be the most frequently cited attack (with a reported loss of over $42 million in 2009 alone). Nearly 70 percent of respondents noted that they had experienced one or more malware attacks in the 12-month reporting period—and that doesn’t include companies that are unwilling to report attacks. The fact is, almost every company has been attacked. Whether or not that attack was successful depends on the company’s security efforts.
Type of Attack or Misuse | 2010/11 | 2008 | 2006 | 2004 | 2002 | 2000
Malware infection (revised after 2008) | 67% | 50% | 65% | 78% | 85% | 85%
Being fraudulently represented as sender of phishing message | 39% | 31% | (new category)
Laptop/mobile hardware theft/loss | 34% | 42% | 47% | 49% | 55% | 60%
Bots/zombies in organization | 29% | 20% | (new category)
Insider abuse of Internet access or e-mail | 25% | 44% | 42% | 59% | 78% | 79%
Denial of service | 17% | 21% | 25% | 39% | 40% | 27%
Unauthorized access or privilege escalation by insider | 13% | 15% | (revised category)
Password sniffing | 11% | 9% | (new category)
System penetration by outsider | 11% | (revised category)
Exploit of client Web browser | 10% | (new category)
Other Attacks/Misuse categories with less than 10% responses not listed above include (listed in decreasing order of occurrence/reporting):
Financial fraud
Web site defacement
Exploit of wireless network
Other exploit of public-facing Web site
Theft of or unauthorized access to PII or PHI due to all other causes
Instant Messaging misuse
Theft of or unauthorized access to IP due to all other causes
Exploit of user’s social network profile
Theft of or unauthorized access to IP due to mobile device theft/loss
Theft of or unauthorized access to PII or PHI due to mobile device theft/loss
Exploit of DNS Server
Extortion or blackmail associated with threat of attack or release of stolen data
Table 1-2 Top ten CSI/FBI survey results for types of attack or misuse (2000–2011)8
Source: CSI/FBI surveys 2000 to 2010/11 (www.gocsi.com)
Overview of Risk Management One part of information security is risk management, which is the process of identifying and controlling the risks to an organization’s information assets. All managers are expected to play a role in the risk management process, but information security managers are expected to play the largest roles. Very often, the chief information officer (CIO) will delegate much of the responsibility for risk management to the chief information security officer (CISO).
Given that contingency planning is considered part of the risk management process, it is important to fully understand how risk management works and how contingency planning fits within that process. Risk management consists of two major undertakings: risk identification and risk control. Risk identification is the process of examining, documenting, and assessing the security posture of an organization’s information technology and the risks it faces. Risk control is the process of applying controls to reduce the risks to an organization’s data and information systems. The various components of risk management and their relationships to one another are shown in Figure 1-2.
As an aspiring information security professional, you will have a key role to play in risk management. As part of an organization’s management team, you may find yourself on the team that must structure the IT and information security functions to perform a successful defense of the organization’s information assets—the information and data, hardware, software, procedures, and people. The IT community must serve the information technology needs of the broader organization and, at the same
[The figure divides risk management into risk identification—inventorying assets, classifying assets, and identifying threats and vulnerabilities—and risk control—selecting a strategy and justifying controls. Risk assessment is the documented result of the risk identification process.]
© Cengage Learning 2014
Figure 1-2 Components of risk management
time, leverage the special skills and insights of the information security community. The information security team must lead the way with skill, professionalism, and flexibility as it works with the other communities of interest to appropriately balance the usefulness and security of the information system.
Looked at another way, risk management is the process of identifying vulnerabilities in an organization’s information systems and taking carefully reasoned steps to ensure the confidentiality, integrity, and availability of all the components of the organization’s information system. Each of the three elements in the C.I.A. triangle is an essential part of an organization’s ability to sustain long-term competitiveness. When the organization depends on IT-based systems to remain viable, information security and the discipline of risk management move beyond theoretical discussions and become an integral part of the economic basis for making business decisions. These decisions are based on trade-offs between the costs of applying information systems controls and the benefits realized from the operation of secured, available systems.
An observation made over 2400 years ago by Chinese General Sun Tzu is relevant to information security today:
If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.9
Source: Oxford University Press
Consider for a moment the similarities between information security and warfare. Information security managers and technicians are the defenders of information. The many threats mentioned earlier are constantly attacking the defenses surrounding information assets. Defenses are built in layers, by placing safeguard upon safeguard. You attempt to detect, prevent, and recover from attack after attack after attack. Moreover, organizations are legally prevented from switching to offense, and the attackers themselves have no need to expend their resources on defense. To be victorious, you must therefore know yourself and know the enemy.
Know Yourself First, you must identify, examine, and understand the information and systems currently in place within your organization. To protect assets, which are defined here as information and the systems that use, store, and transmit information, you must understand what they are, how they add value to the organization, and to which vulnerabilities they are susceptible. Once you know what you have, you can identify what you are already doing to protect it. Just because you have a control in place to protect an asset does not necessarily mean that the asset is protected. Frequently, organizations implement control mechanisms but then neglect to periodically perform the necessary review, revision, and maintenance of their own systems. The policies, education and training programs, and technologies that protect information must be carefully maintained and administered to ensure that they are still effective.
Overview of Risk Management 13
Know the Enemy Once you are informed of your organization’s assets and weaknesses, you can move on to the other part of Sun Tzu’s advice: know the enemy. This means identifying, examining, and understanding the threats facing the organization. You must determine those threat aspects that most directly affect the organization and the security of the organization’s information assets. You can then use your understanding of these aspects to create a list of threats prioritized by how important each asset is to the organization.
It is essential that all stakeholders conduct periodic management reviews. The first focus of management review is asset inventory. On a regular basis, management must verify the completeness and accuracy of the asset inventory. In addition, organizations must review and verify the threats and vulnerabilities that have been identified as dangerous to the asset inventory, as well as the current controls and mitigation strategies. The cost effectiveness of each control should be reviewed as well, and the decisions on deployment of controls revisited. Furthermore, managers at all levels must regularly verify the ongoing effectiveness of every control that’s been deployed. For example, a sales manager might assess control procedures by going through the office before the workday starts and picking up all the papers from every desk in the sales department. When the workers show up, the manager could inform them that a fire drill is underway—that all their papers have been destroyed and that each worker must now follow the disaster recovery procedures. The effectiveness of the procedures can then be assessed and corrections made.
Risk Identification A risk management strategy calls on information security professionals to identify, classify, and prioritize the organization’s information assets. Once that has been done, the threat identification process begins. Each information asset is examined to identify vulnerabilities, and when vulnerabilities are found, controls are identified and assessed regarding their capability to limit possible losses should an attack occur. The components of this process are shown in Figure 1-3.
Asset Identification and Value Assessment The iterative process of identifying assets and assessing their value begins with the identification of the elements of an organization’s systems: people, procedures, data/information, software, hardware, and networks. The assets are then classified and categorized, with details added as the analysis goes deeper.
Information Asset Classification In addition to identifying the assets, it is advisable to classify them with respect to their security needs. For example, data could be classified as confidential data, internal data, and public data. Likewise, the individuals authorized to view the data could be classified using a personnel security clearance structure.
No matter how an organization chooses to classify the components of its system, the components must be specific enough to allow the creation of various priority levels. The components then can be ranked according to criteria established by the categorization. The categories themselves should be comprehensive and mutually exclusive. Comprehensive means that all the information assets should fit in the list somewhere; mutually exclusive means that each information asset should fit in only one category. For example, when
using a purely technical standard to classify a certificate authority used in a PKI system, an analysis team could categorize the certificate authority in the asset list as software but within the software category as either an application or a security component. It is a matter of professional judgment. To add consistency and simplify the categorization of elements when there is ambiguity, it is essential to establish a clear and comprehensive set of categories.
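The two properties described above—comprehensive (every asset fits somewhere) and mutually exclusive (each asset fits in only one place)—can be checked mechanically. The following is a minimal sketch; the category names and assets are hypothetical, not from the text:

```python
# Sketch: validating that a classification scheme is comprehensive and
# mutually exclusive. Category names and assets here are illustrative only.

def check_classification(assets, categories):
    """Return (uncategorized, multiply_categorized) asset lists.

    The scheme is comprehensive when the first list is empty and
    mutually exclusive when the second list is empty.
    """
    uncategorized = []
    multiply_categorized = []
    for asset in assets:
        hits = [name for name, members in categories.items() if asset in members]
        if len(hits) == 0:
            uncategorized.append(asset)         # violates "comprehensive"
        elif len(hits) > 1:
            multiply_categorized.append(asset)  # violates "mutually exclusive"
    return uncategorized, multiply_categorized

assets = ["certificate authority", "payroll database", "edge router"]
categories = {
    "software: applications":        {"payroll database"},
    "software: security components": {"certificate authority"},
    "hardware: networking":          {"edge router", "certificate authority"},
}

missing, overlapping = check_classification(assets, categories)
print(missing)      # []
print(overlapping)  # ['certificate authority'] -- listed in two categories
```

The certificate authority example from the text shows up here as an exclusivity violation: until the analysis team settles on one category for it, the scheme is ambiguous.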
Information Asset Valuation As each asset is assigned to a category, the following questions should be asked:
● Is this asset the most critical to the organization’s success?
● Does it generate the most revenue?
● Does it generate the most profit?
● Would it be the most expensive to replace?
● Will it be the most expensive to protect?
● If revealed, would it cause the most embarrassment or greatest damage?
● Does the law or other regulation require us to protect this asset?
[The figure shows risk identification flowing into risk assessment: plan and organize the process; categorize system components; inventory and categorize assets; identify threats; specify vulnerable assets; assign value to attack on assets; assess likelihood of attack on vulnerabilities; calculate relative risk factor for assets; review possible controls; and document findings.]
© Cengage Learning 2014
Figure 1-3 Components of risk identification
The answers to these questions help determine the weighting criteria used for information asset valuation and information impact evaluation. Before beginning the inventory process, the organization should decide which criteria are best suited to establish the value of the information assets.
In addition to the criteria just listed, company-specific criteria should be identified, documented, and added to the process. To finalize this step of the information asset identification process, the organization should assign a weight to each asset based on the answers to the various questions.
Once the process of inventorying and assessing value is complete, you can calculate the relative importance of each asset using a straightforward process known as weighted factor analysis, which is shown in Table 1-3. In this process, each information asset is assigned a score for each critical factor. In the example shown, these scores range from 0.1 to 1.0. In addition, each criterion is assigned a weight (ranging from 1 to 100) to show its assigned importance for the organization.
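The arithmetic behind weighted factor analysis is simply a weighted sum. The sketch below reproduces the scores in Table 1-3 (the asset names are abbreviated here for brevity):

```python
# Sketch of weighted factor analysis: each asset gets a 0.1-1.0 score per
# criterion, each criterion a weight (weights total 100), and the weighted
# score is the sum of score x weight, as in Table 1-3.

weights = {"revenue": 30, "profitability": 40, "image": 30}
assert sum(weights.values()) == 100  # criterion weights must total 100

assets = {
    "EDI Document Set 1 (outbound BOL)":    {"revenue": 0.8, "profitability": 0.9, "image": 0.5},
    "EDI Document Set 2 (supplier orders)": {"revenue": 0.8, "profitability": 0.9, "image": 0.6},
    "Customer order via SSL (inbound)":     {"revenue": 1.0, "profitability": 1.0, "image": 1.0},
    "Customer service e-mail (inbound)":    {"revenue": 0.4, "profitability": 0.4, "image": 0.9},
}

def weighted_score(scores, weights):
    return sum(scores[c] * weights[c] for c in weights)

# Rank assets from most to least important:
for name, scores in sorted(assets.items(), key=lambda kv: -weighted_score(kv[1], weights)):
    print(f"{name}: {weighted_score(scores, weights):.0f}")
```

Running this reproduces the worksheet values: the inbound SSL customer order scores 100, supplier orders 78, the outbound BOL 75, and the customer service e-mail 55.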
Data Classification and Management Corporate and military organizations use a variety of data classification schemes, which are procedures that require organizational data to be classified into mutually exclusive categories based on the need to protect the confidentiality of each category of data. For example, at one time Georgia-Pacific, an American pulp and paper company, used a data classification scheme in which information owners throughout the company were expected to classify the information assets for which they were responsible. At least once a year, they would review these classifications to ensure that the information was still classified correctly and the appropriate access controls were in place.
The military has specialized classification ratings ranging from “Public” to “For Official Use Only” to “Confidential” to “Secret” to “Top Secret.” Most organizations do not need the detailed level of classification used by the military or federal agencies, but many may find it necessary to classify their data to provide protection. A simple classification scheme would allow an organization to protect such sensitive information as its marketing or
Information Asset | Criterion 1: Impact on Revenue | Criterion 2: Impact on Profitability | Criterion 3: Impact on Image | Weighted Score
Criterion Weight (1–100; must total 100) | 30 | 40 | 30 |
EDI Document Set 1—Logistics BOL to outsourcer (outbound) | 0.8 | 0.9 | 0.5 | 75
EDI Document Set 2—Supplier orders (outbound) | 0.8 | 0.9 | 0.6 | 78
EDI Document Set 2—Supplier fulfillment advice (inbound) | 0.4 | 0.5 | 0.3 | 41
Customer order via SSL (inbound) | 1.0 | 1.0 | 1.0 | 100
Customer service request via e-mail (inbound) | 0.4 | 0.4 | 0.9 | 55
Table 1-3 A weighted factor analysis worksheet
© Cengage Learning 2014
research data, its personnel data, its customer data, and its general internal communications. Alternatively, a scheme such as the following could be adopted:
● Public—Information for general public dissemination, such as an advertisement or public release
● For Official Use Only—Information that is not particularly sensitive but is not for public release, such as internal communications
● Sensitive—Information important to the business that could embarrass the company or cause loss of market share if revealed
● Classified—Information of the utmost secrecy to the organization, disclosure of which could severely affect the well-being of the organization
As mentioned earlier, personnel can also be classified with respect to information security, resulting in various levels of security clearance. In organizations that require security clearances, each user of data is assigned an authorization level that indicates the data he or she is authorized to view. This is usually accomplished by assigning each employee a named role—such as data entry clerk, development programmer, information security analyst, or even CIO—and a security clearance associated with that role. Overriding one’s security clearance, however, is the fundamental principle of need to know. Employees are not simply allowed to view any and all data that falls within their level of clearance. Before someone can access a specific set of data, the need-to-know requirement must be met. This extra level of protection ensures that the confidentiality of information is properly maintained.
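An access decision under this model has two independent parts: the clearance must dominate the data's classification, and the need-to-know requirement must be met. A minimal sketch, using the four-level scheme listed above (the project labels and roles are hypothetical):

```python
# Sketch: an access decision combining clearance level with need to know.
# The level ordering follows the simple scheme in the text; project labels
# are illustrative assumptions.

LEVELS = ["Public", "For Official Use Only", "Sensitive", "Classified"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def may_access(user_clearance, user_projects, data_label, data_project):
    """Grant access only if the clearance dominates the data's label AND
    the user has a documented need to know (membership in the data's project)."""
    cleared = RANK[user_clearance] >= RANK[data_label]
    need_to_know = data_project in user_projects
    return cleared and need_to_know

# Even a user cleared for "Classified" cannot read project data
# outside his or her need to know:
print(may_access("Classified", {"payroll"}, "Sensitive", "merger-plans"))      # False
print(may_access("Sensitive", {"merger-plans"}, "Sensitive", "merger-plans"))  # True
```

The first call illustrates the key point of the paragraph: clearance alone is never sufficient.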
Threat Identification After identifying and performing a preliminary classification of an organization’s information assets, the analysis phase moves to an examination of the threats facing the organization. An organization faces a wide variety of threats; the realistic ones need to be investigated further, while the unimportant threats are set aside. Otherwise, the project’s scope can overwhelm the organization’s ability to plan.
Each of the threat categories identified in Table 1-1 must be assessed regarding its potential to endanger the organization. This is known as a threat assessment. Each threat can be assessed using a few basic questions:
● Which threats present a danger to the organization’s assets in the given environment?
● Which threats represent the most danger to the organization’s information?
● Which threats would cost the most to recover from if there was an attack?
● Which threats require the greatest expenditure to prevent?
By answering these questions, you can establish a framework for discussing threat assessment. The list may not cover everything, however. If an organization has specific guidelines or policies, these may require the posing of additional questions. The list is easily expanded to include additional requirements.
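One simple way to turn the four questions above into a working prioritization is to rate each threat against each question and compute a weighted total. This is only an illustrative sketch; the threats chosen, the 1-5 ratings, and the weights are all hypothetical assumptions, not values from the text:

```python
# Sketch: scoring threats against the four threat-assessment questions.
# Ratings (1-5) and question weights below are invented for illustration.

QUESTIONS = ["danger_to_assets", "danger_to_information",
             "cost_to_recover", "cost_to_prevent"]
WEIGHTS = {"danger_to_assets": 0.3, "danger_to_information": 0.3,
           "cost_to_recover": 0.25, "cost_to_prevent": 0.15}

threats = {
    "Malware infection":          {"danger_to_assets": 4, "danger_to_information": 5,
                                   "cost_to_recover": 4, "cost_to_prevent": 2},
    "Forces of nature":           {"danger_to_assets": 5, "danger_to_information": 3,
                                   "cost_to_recover": 5, "cost_to_prevent": 4},
    "Technological obsolescence": {"danger_to_assets": 2, "danger_to_information": 2,
                                   "cost_to_recover": 3, "cost_to_prevent": 2},
}

def threat_score(ratings):
    return sum(ratings[q] * WEIGHTS[q] for q in QUESTIONS)

# Print the threats from most to least pressing under these assumptions:
for name in sorted(threats, key=lambda t: -threat_score(threats[t])):
    print(f"{name}: {threat_score(threats[name]):.2f}")
```

The output is a ranked list that can seed the prioritized threat discussion; organization-specific questions would simply become additional weighted columns.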
Vulnerability Identification Once you have identified the organization’s information assets and documented some criteria for assessing the threats they face, you should review each information asset and each threat it faces to create a list of vulnerabilities. You should then examine how each of the threats could be perpetrated. Finally, you should list the organization’s assets and its vulnerabilities. The list shows all the vulnerabilities of all the
information assets and can be quite long. Some threats manifest themselves in multiple ways, yielding multiple vulnerabilities for that threat. The process of listing vulnerabilities is somewhat subjective and draws on the experience and knowledge of the people creating the list. Therefore, it works best when groups of people with diverse backgrounds work iteratively in a series of brainstorming sessions. For instance, the team that reviews the vulnerabilities for networking equipment should include the networking specialists, the systems management team that operates the network, the information security risk specialist, and even technically proficient users of the system.
At the end of the risk identification process, you will have a list of all the information assets and their respective vulnerabilities. This list, along with any supporting documentation, is the starting point for the next step, risk assessment.
Risk Assessment Now that you have identified the organization’s information assets and the threats and vulnerabilities of those assets, it’s time to assess the relative risk for each vulnerability. This is accomplished through a process called risk assessment. Risk assessment assigns a risk rating or score to each information asset. Although this number does not mean anything in absolute terms, it is useful in gauging the relative risk to each vulnerable information asset and facilitates the development of comparative ratings later in the risk control process. Figure 1-4 shows the factors that go into the risk-rating estimate for each of the vulnerabilities.
The goal at this point is to create a method for evaluating the relative risk of each of the listed vulnerabilities. There are many detailed methods for determining accurate and detailed costs of each of the vulnerabilities. Likewise, there are models that can be used to estimate expenses for the variety of controls that can be used to reduce the risk for each vulnerability. However, it is often more useful to use a simpler risk model (such as the one shown in Figure 1-4) to evaluate the risk for each information asset. The following sections present the factors used to calculate the relative risk for each vulnerability.
Likelihood The probability that a specific vulnerability within an organization will be successfully attacked is referred to as likelihood.10 In risk assessment, you assign a numeric value to the likelihood of a vulnerability being successfully exploited. A likelihood
[The figure expresses risk as the likelihood of the occurrence of a vulnerability, multiplied by the value of the information asset, minus the percentage of risk mitigated by current controls, plus the uncertainty of current knowledge of the vulnerability.]
© Cengage Learning 2014
Figure 1-4 Factors of risk
could be assigned a number between 0.1 (for low) and 1.0 (for high), or it could be assigned a number between 1 and 100, but 0 is not used because vulnerabilities with a zero likelihood have been removed from the asset/vulnerability list. Whatever rating system is used, you should bring all your professionalism, experience, and judgment to bear, and you should use the rating model you selected consistently. Whenever possible, use external references for likelihood values that have been reviewed and adjusted for your specific circumstances.
Many asset/vulnerability combinations have sources for determining their likelihoods. For example, the likelihood of a fire has been actuarially estimated for each type of structure (such as a building). Likewise, the likelihood that a given e-mail contains a virus or worm has been researched. Finally, the number of network attacks can be forecast based on how many network addresses the organization has been assigned.
Valuation of Information Assets Using the information obtained during the information asset identification phases, you can assign weighted scores for the value to the organization of each information asset. The actual numbers used can vary with the needs of the organization. Some groups use a scale of 1 to 100, with “100” reserved for those information assets that, if lost, would cause the company to stop operations within a few minutes. Other scales assign weights in broad categories, assigning all critical assets a value of 100, all low-critical assets a value of 1, and all others a value of 50. Still other groups use a scale of 1 to 10 or assigned values of 1, 3, and 5 to represent low-valued, medium-valued, and high-valued assets. You can also create weight values for your specific needs. To be effective, the values must be assigned by asking the questions described in the “Threat Identification” section.
After re-asking these questions, you should use the background information from the risk identification process to pose one additional question: Which of these questions is most important to the protection of the organization’s information? This helps you set priorities in the assessment of vulnerabilities. Additional questions may also be asked. Again, you are looking at threats the organization faces in its current state; however, this information will be valuable in later stages as you begin to design the security solution. Once these questions are answered, you move to the next step in the process: examining how current controls can reduce the risk faced by specific vulnerabilities.
If a vulnerability is fully managed by an existing control, it no longer needs to be considered for additional controls and can be set aside. If it is partially controlled, you need to estimate what percentage of the vulnerability has been controlled.
It is impossible to know everything about each vulnerability, such as how likely it is to occur or how great an impact a successful attack would have. The degree to which a current control can reduce risk is also subject to estimation error. You must apply judgment when adding factors into the equation to allow for an estimation of the uncertainty of the information.
Risk Determination For the purpose of making relative risk assessments, we can say that risk equals the likelihood of a vulnerability occurring times the value (or impact) of that asset to the organization minus the percentage of risk that is already being controlled plus an element of uncertainty. For example, consider an information asset A that has a value of 50 and one vulnerability with a likelihood of 1.0 and no current controls; furthermore, it’s estimated that the assumptions and data are 90 percent
Copyright 2013 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part.
accurate (that is, there’s a 10 percent uncertainty). Therefore, asset A’s vulnerability is rated as 55, which is derived from the following calculation:
(50 [being the value] × 1.0 [being the likelihood of occurrence]) – 0 percent [being the percent of risk currently controlled] + 10 percent [being the uncertainty of our assumptions]
Or, using just numbers:
55 = (50 × 1.0) – ((50 × 1.0) × 0.0) + ((50 × 1.0) × 0.1)
55 = 50 – 0 + 5
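The calculation above can be sketched as a small function; this is an illustrative sketch, and the function name and signature are my own rather than anything prescribed by the text:

```python
def risk_rating(value, likelihood, pct_controlled, uncertainty):
    """Relative risk: (value * likelihood), minus the portion
    already controlled, plus an allowance for uncertainty."""
    base = value * likelihood
    return base - (base * pct_controlled) + (base * uncertainty)

# Asset A from the example: value 50, likelihood 1.0,
# no current controls (0.0), 10 percent uncertainty (0.1).
rating = risk_rating(50, 1.0, 0.0, 0.1)  # 55.0
```

Expressing the formula this way makes it easy to recompute the rating as estimates of control coverage or uncertainty are revised.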
Qualitative Risk Management Now that this formula has been carefully explained, you need to keep in mind that virtually every number used in it has been estimated by someone, somewhere. Insurance companies may have reliable values for physical disasters (fire, floods, etc.), but a different approach may be preferred when considering the substantial portion of an organization’s budget that goes to information security as well as the budget for IR, DR, and BC planning and preparation. Some organizations prefer more qualitative approaches in which more general categories and rankings are used to evaluate risk. One such approach—the Factor Analysis of Information Risk (FAIR) strategy promoted by CXOWARE, a company focusing on enterprise risk management (http://riskmanagementinsight.com)—is flexible yet robust.
For each threat and its associated vulnerabilities that have residual risk, you need to create a preliminary list of control ideas. Residual risk is the risk that remains to the information asset even after the existing control has been applied.
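Residual risk can be sketched with the value, likelihood, and percent-controlled terms defined earlier in the chapter; this is an illustrative sketch under those definitions, not a prescribed calculation:

```python
def residual_risk(value, likelihood, pct_controlled):
    """Risk remaining after an existing control is applied:
    (value * likelihood) reduced by the controlled percentage."""
    return (value * likelihood) * (1 - pct_controlled)

# A control estimated to cover 60% of a value-50, likelihood-1.0
# vulnerability leaves a residual risk of 20; that remainder is
# what the preliminary list of control ideas must address.
remaining = residual_risk(50, 1.0, 0.6)
```

Only vulnerabilities whose residual risk is above zero (or above whatever threshold the organization sets) need to carry forward into the control-identification step.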
Identify Possible Controls Controls, safeguards, and countermeasures are terms used to represent security mechanisms, policies, and procedures that reduce the risk of operating information systems. The three general categories of controls, according to the CNSS model discussed earlier, are policies, programs (education and training), and technologies.
Policies are documents that specify an organization’s approach to security. There are three types of security policies: the enterprise information security policy, issue-specific policies, and systems-specific policies. The enterprise information security policy is an executive-level document that outlines the organization’s approach and attitude toward information security and relates to the strategic value of information security within the organization. This document, typically created by the CIO in conjunction with the CEO and CISO, sets the tone for all subsequent security activities. Issue-specific policies address the specific implementations or applications of which users should be aware. These policies are typically developed to provide detailed instructions and restrictions associated with security issues. Examples include policies for Internet use, e-mail, and access to the building. Finally, systems-specific policies address the particular use of certain systems. This could include firewall configuration policies, systems access policies, and other technical configuration areas.
Programs are activities performed within the organization to improve security. These include security education, training, and awareness programs. Security technologies are implementations of the policies defined by the organization using technology-based mechanisms, such as firewalls or intrusion detection systems.
Risk Control Strategies When management has determined that the risks from information security threats are unacceptable, or when laws and regulations mandate such action, they empower the information technology and information security communities of interest to control the risks. Once the project team for information security development has created the ranked vulnerability worksheet, it must choose one of the following five approaches for controlling the risks that result from the vulnerabilities:
● Defense
● Transferal
● Mitigation
● Acceptance
● Termination
Defense The defense approach attempts to prevent the exploitation of the vulnerability. This is the preferred approach and is accomplished by means of countering threats, removing vulnerabilities in assets, limiting access to assets, and adding protective safeguards. This approach is sometimes referred to as avoidance.
There are three common methods of risk defense: defense through application of policy, defense through application of training and education programs, and defense through application of technology. The application of policy allows management to mandate that certain procedures are always followed. For example, if the organization needs to control password use more tightly, a policy requiring passwords on all IT systems can be implemented. Note that policy alone may not be enough and that effective management always couples changes in policy with training and education and/or the application of technology. Policy must be communicated to employees. In addition, new technology often requires training. Awareness, training, and education are essential if employees are to exhibit safe and controlled behavior.
In the real world of information security, technical solutions are usually required to assure that risk is reduced. To continue the earlier example, system administrators may not configure systems to use passwords unless required by policy. Without the policy to mandate the use of passwords, the system administrator may choose not to implement them.
Risks may be avoided by countering the threats facing an asset or by eliminating the exposure of a particular asset. Eliminating the risk posed by a threat is virtually impossible, but it is possible to reduce the risk to an acceptable level. Another method of risk management that falls under the defense category is the implementation of security controls and safeguards to deflect attacks on systems and therefore minimize the probability that an attack will be successful. An organization with an FTP access vulnerability, for example, may choose to implement a control or safeguard for that service, or the organization may choose to eliminate the FTP service to avoid the potential risk.
Transferal The transferal approach attempts to shift the risk to other assets, other processes, or other organizations. This may be accomplished through rethinking how services are offered, revising deployment models, outsourcing to other organizations, purchasing insurance, or implementing service contracts with providers.
When an organization does not have the correct balance of information security skills, it should consider hiring or making outsourcing arrangements with individuals or firms that provide such expertise. This allows the organization to transfer the risks associated with the management of these complex systems to another organization that has experience in dealing with those risks. A side benefit of specific contract arrangements is that the provider is responsible for disaster recovery and, through service-level agreements, can be made responsible for guaranteeing server and Web site availability.
However, outsourcing is not without its own risks. It is up to the owner of the information asset, IT management, and the information security team to ensure that the disaster recovery requirements of the outsourcing contract are sufficient and have been met before they are needed for recovery efforts. If the outsourcer fails to meet the contract terms, the consequences may be far worse than expected.
Mitigation The mitigation approach attempts to reduce the impact caused by the exploitation of a vulnerability through planning and preparation. This approach includes contingency planning and its four functional components: the business impact analysis, the incident response plan, the disaster recovery plan, and the business continuity plan. Each of these components of the contingency plan depends on the ability to detect and respond to an attack as quickly as possible and relies on the existence and quality of the other plans. Mitigation begins with the early detection that an attack is in progress and the ability of the organization to respond quickly, efficiently, and effectively. Each of these is described later in this chapter and explored in depth in later chapters of the book.
Acceptance Acceptance is the choice to do nothing to protect an information asset and to accept the outcome of its potential exploitation. This may or may not be a conscious business decision. The only industry-recognized valid use of this strategy occurs when the organization has done the following:
● Determined the level of risk
● Assessed the probability of attack
● Estimated the potential damage that could occur from an attack
● Performed a thorough cost-benefit analysis
● Evaluated controls using each appropriate type of feasibility
● Decided that the particular function, service, information, or asset did not justify the cost of protection
This control, or rather lack of control, is based on the conclusion that the cost of protecting an asset does not justify the security expenditure. In this case, management may be satisfied with taking its chances and saving the money that would normally be spent on protecting this asset. If every vulnerability identified in the organization is handled through acceptance, it may reflect an organization’s inability to conduct proactive security activities and an apathetic approach to security in general.
Termination Like acceptance, termination is based on the organization’s need or choice to leave an asset unprotected. Here, however, the organization does not wish the informa- tion asset to remain at risk and so removes it from the environment that represents risk. Sometimes, the cost of protecting an asset outweighs its value. In other cases, it may be too
difficult or expensive to protect an asset, compared to the value or advantage that asset offers the company. In either case, termination must be a conscious business decision, not simply the abandonment of an asset, which would technically qualify as acceptance.
Contingency Planning and Its Components A key role for all managers is planning. Managers in IT in general and information security in particular usually provide strategic planning for an organization to ensure the continuous availability of information systems. Unfortunately for managers, the probability that some form of damaging event will occur, whether it be from inside or outside, intentional or accidental, human or nonhuman, annoying or catastrophic, is very high. Thus, managers from each community of interest within the organization must be ready to act when a successful attack occurs.
There are various types of plans for events of this type, and they all fall under the general definition of contingency planning. A contingency plan is used to anticipate, react to, and recover from events that threaten the security of information and information assets in the organization; it is also used to restore the organization to normal modes of business operations.
Contingency planning (CP) typically involves four subordinate functions:
● Business impact analysis (BIA)
● Incident response planning (IRP)
● Disaster recovery planning (DRP)
● Business continuity planning (BCP)
Each of these is described in the following sections and discussed in greater detail in later chapters. You will notice that contingency planning has many similarities with the risk management process. The contingency plan is a microcosm of risk management activities, and it focuses on the specific steps required to return all information assets to the level at which they were functioning before the incident or disaster. As a result, the planning process closely emulates the process of risk management.
Business Impact Analysis The entire planning process begins with an assessment of the risks associated with these contingencies. The first function in the development of the CP process is the business impact analysis (BIA). A BIA is an investigation and assessment of the impact that various attacks can have on the organization. The BIA takes up where the risk assessment process leaves off. It begins with the prioritized list of threats and vulnerabilities identified in the risk management process and adds critical information. The BIA is a crucial component of the initial planning stages, as it provides detailed scenarios of the potential impact each attack could have on the organization.
Incident Response Plan The actions an organization can, and perhaps should, take while an incident is in progress are defined in a document referred to as the incident response plan (IR plan). An incident is any clearly identified attack on the organization’s information assets that would threaten the
assets’ confidentiality, integrity, or availability. The IR plan deals with identifying, classifying, responding to, and recovering from an incident. It provides answers to questions victims might pose in the midst of an incident, such as “What do I do now?” In this chapter’s opening scenario, the IT organization was ready to respond to the events that had alerted JJ to an unusual situation. There, a simple process was used, based on documented procedures that were prepared in advance. Another example would be a systems administrator who notices that someone is copying information from the server without authorization, signaling a violation of policy by a potential hacker or unauthorized employee. What should the administrator do first? Who should be contacted? What should be documented? The IR plan supplies the answers.
In the event of a serious virus or worm outbreak, the IR plan may be used to assess the likelihood of imminent damage and to inform key decision makers in the various communities of interest (IT, information security, organization management, and users). The IR plan also enables the organization to take coordinated action that is either predefined and specific or ad hoc and reactive. The intruders who, in some instances, cause these incidents constantly look for new weaknesses in operating systems, network services, and protocols.
According to a report released by the Software Engineering Institute at Carnegie Mellon University, “[Intruders] actively develop and use sophisticated programs to rapidly penetrate systems. As a result, intrusions, and the damage they cause, are often achieved in a matter of seconds.”11
Another report released by the Software Engineering Institute states that organizations “will not know what to do in the event of an intrusion if the necessary procedures, roles, and responsibilities have not been defined and exercised in advance.” The absence of such procedures, the report adds, can lead to the following:
● Extensive damage to data, systems, and networks due to not taking timely action to contain an intrusion. This can result in increased costs, loss of productivity, and loss of business.
● The possibility of an intrusion affecting multiple systems both inside and outside your organization because staff did not know who else to notify and what additional actions to take
● Negative exposure in the news media that can damage your organization’s stature and reputation with your shareholders, your customers, and the community at large
● Possible legal liability and prosecution for failure to exercise an adequate standard of due care when your systems are inadvertently or intentionally used to attack others.12
Source: Carnegie Mellon University