
FOURTH EDITION

ETHICS AND TECHNOLOGY
Controversies, Questions, and Strategies for Ethical Computing

HERMAN T. TAVANI
Rivier University


VP & Executive Publisher: Donald Fowley
Executive Editor: Beth Lang Golub
Editorial Assistant: Katherine Willis
Marketing Manager: Chris Ruel
Marketing Assistant: Marissa Carroll
Associate Production Manager: Joyce Poh
Production Editor: Jolene Ling
Designer: Kenji Ngieng
Cover Photo Credit: Bernhard Lang/Getty Images, Inc.
Production Management Services: Thomson Digital

This book was set in 10/12 TimesTenLTStd-Roman by Thomson Digital, and printed and bound by Edwards Brothers Malloy. The cover was printed by Edwards Brothers Malloy.

This book is printed on acid-free paper.

Founded in 1807, John Wiley & Sons, Inc. has been a valued source of knowledge and understanding for more than 200 years, helping people around the world meet their needs and fulfill their aspirations. Our company is built on a foundation of principles that include responsibility to the communities we serve and where we live and work. In 2008, we launched a Corporate Citizenship Initiative, a global effort to address the environmental, social, economic, and ethical challenges we face in our business. Among the issues we are addressing are carbon impact, paper specifications and procurement, ethical conduct within our business and among our vendors, and community and charitable support. For more information, please visit our website: www.wiley.com/go/citizenship.

Copyright © 2013, 2011, 2007, 2004 John Wiley & Sons, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, website www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, (201) 748-6011, fax (201) 748-6008, website http://www.wiley.com/go/permissions.

Evaluation copies are provided to qualified academics and professionals for review purposes only, for use in their courses during the next academic year. These copies are licensed and may not be sold or transferred to a third party. Upon completion of the review period, please return the evaluation copy to Wiley. Return instructions and a free of charge return mailing label are available at www.wiley.com/go/returnlabel. If you have chosen to adopt this textbook for use in your course, please accept this book as your complimentary desk copy. Outside of the United States, please contact your local sales representative.

Library of Congress Cataloging-in-Publication Data

Tavani, Herman T.
Ethics and technology : controversies, questions, and strategies for ethical computing / Herman T. Tavani, Rivier University.—Fourth edition.
pages cm
Includes bibliographical references and index.
ISBN 978-1-118-28172-7 (pbk.)
1. Computer networks—Moral and ethical aspects. I. Title.
TK5105.5.T385 2013
175—dc23
2012028589

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1


In memory of my grandparents, Leon and Marian (Roberts) Hutton,

and Antonio and Clelia (Giamberardino) Tavani


CONTENTS AT A GLANCE

PREFACE xvii

ACKNOWLEDGMENTS xxvii

FOREWORD xxix

CHAPTER 1. INTRODUCTION TO CYBERETHICS: CONCEPTS, PERSPECTIVES, AND METHODOLOGICAL FRAMEWORKS 1

CHAPTER 2. ETHICAL CONCEPTS AND ETHICAL THEORIES: ESTABLISHING AND JUSTIFYING A MORAL SYSTEM 33

CHAPTER 3. CRITICAL REASONING SKILLS FOR EVALUATING DISPUTES IN CYBERETHICS 74

CHAPTER 4. PROFESSIONAL ETHICS, CODES OF CONDUCT, AND MORAL RESPONSIBILITY 101

CHAPTER 5. PRIVACY AND CYBERSPACE 131

CHAPTER 6. SECURITY IN CYBERSPACE 174

CHAPTER 7. CYBERCRIME AND CYBER-RELATED CRIMES 201

CHAPTER 8. INTELLECTUAL PROPERTY DISPUTES IN CYBERSPACE 230

CHAPTER 9. REGULATING COMMERCE AND SPEECH IN CYBERSPACE 269

CHAPTER 10. THE DIGITAL DIVIDE, DEMOCRACY, AND WORK 303

CHAPTER 11. ONLINE COMMUNITIES, CYBER IDENTITIES, AND SOCIAL NETWORKS 337

CHAPTER 12. ETHICAL ASPECTS OF EMERGING AND CONVERGING TECHNOLOGIES 368

GLOSSARY 411

INDEX 417


TABLE OF CONTENTS

PREFACE xvii
New to the Fourth Edition xviii
Audience and Scope xix
Organization and Structure of the Book xxi
The Web Site for Ethics and Technology xxiii
A Note to Students xxiv
Note to Instructors: A Roadmap for Using This Book xxiv
A Note to Computer Science Instructors xxv

ACKNOWLEDGMENTS xxvii
FOREWORD xxix

CHAPTER 1

INTRODUCTION TO CYBERETHICS: CONCEPTS, PERSPECTIVES, AND METHODOLOGICAL FRAMEWORKS 1

Scenario 1–1: A Fatal Cyberbullying Incident on MySpace 1
Scenario 1–2: Contesting the Ownership of a Twitter Account 2
Scenario 1–3: "The Washingtonienne" Blogger 2
1.1 Defining Key Terms: Cyberethics and Cybertechnology 3
1.1.1 What Is Cybertechnology? 4
1.1.2 Why the Term Cyberethics? 5
1.2 The Cyberethics Evolution: Four Developmental Phases in Cybertechnology 6
1.3 Are Cyberethics Issues Unique Ethical Issues? 9
Scenario 1–4: Developing the Code for a Computerized Weapon System 10
Scenario 1–5: Digital Piracy 11
1.3.1 Distinguishing between Unique Technological Features and Unique Ethical Issues 11
1.3.2 An Alternative Strategy for Analyzing the Debate about the Uniqueness of Cyberethics Issues 12
1.3.3 A Policy Vacuum in Duplicating Computer Software 13
1.4 Cyberethics as a Branch of Applied Ethics: Three Distinct Perspectives 14
1.4.1 Perspective #1: Cyberethics as a Field of Professional Ethics 15
1.4.2 Perspective #2: Cyberethics as a Field of Philosophical Ethics 18
1.4.3 Perspective #3: Cyberethics as a Field of Sociological/Descriptive Ethics 21
Scenario 1–6: The Impact of Technology X on the Pleasantville Community 21
1.5 A Comprehensive Cyberethics Methodology 24
1.5.1 A "Disclosive" Method for Cyberethics 25
1.5.2 An Interdisciplinary and Multilevel Method for Analyzing Cyberethics Issues 26
1.6 A Comprehensive Strategy for Approaching Cyberethics Issues 27
1.7 Chapter Summary 28
Review Questions 28
Discussion Questions 29
Essay/Presentation Questions 29
Scenarios for Analysis 29
Endnotes 30
References 31
Further Readings 32
Online Resources 32

CHAPTER 2

ETHICAL CONCEPTS AND ETHICAL THEORIES: ESTABLISHING AND JUSTIFYING A MORAL SYSTEM 33

2.1 Ethics and Morality 33
Scenario 2–1: The "Runaway Trolley": A Classic Moral Dilemma 34
2.1.1 What Is Morality? 35
2.1.2 Deriving and Justifying the Rules and Principles of a Moral System 38
2.2 Discussion Stoppers as Roadblocks to Moral Discourse 42
2.2.1 Discussion Stopper #1: People Disagree on Solutions to Moral Issues 43
2.2.2 Discussion Stopper #2: Who Am I to Judge Others? 45
2.2.3 Discussion Stopper #3: Morality Is Simply a Private Matter 47
2.2.4 Discussion Stopper #4: Morality Is Simply a Matter for Individual Cultures to Decide 48
Scenario 2–2: The Perils of Moral Relativism 49
2.3 Why Do We Need Ethical Theories? 52
2.4 Consequence-Based Ethical Theories 53
2.4.1 Act Utilitarianism 55
Scenario 2–3: A Controversial Policy in Newmerica 55
2.4.2 Rule Utilitarianism 55
2.5 Duty-Based Ethical Theories 56
2.5.1 Rule Deontology 57
Scenario 2–4: Making an Exception for Oneself 58
2.5.2 Act Deontology 59
Scenario 2–5: A Dilemma Involving Conflicting Duties 60
2.6 Contract-Based Ethical Theories 61
2.6.1 Some Criticisms of Contract-Based Theories 62
2.6.2 Rights-Based Contract Theories 63
2.7 Character-Based Ethical Theories 64
2.7.1 Being a Moral Person vs. Following Moral Rules 64
2.7.2 Acquiring the "Correct" Habits 65
2.8 Integrating Aspects of Classical Ethical Theories into a Single Comprehensive Theory 66
2.8.1 Moor's Just-Consequentialist Theory and Its Application to Cybertechnology 67
2.8.2 Key Elements in Moor's Just-Consequentialist Framework 69
2.9 Chapter Summary 70
Review Questions 70
Discussion Questions 71
Essay/Presentation Questions 71
Scenarios for Analysis 72
Endnotes 72
References 73
Further Readings 73

CHAPTER 3

CRITICAL REASONING SKILLS FOR EVALUATING DISPUTES IN CYBERETHICS 74

3.1 Getting Started 74
Scenario 3–1: Reasoning About Whether to Download a File from "Sharester" 75
3.1.1 Defining Two Key Terms in Critical Reasoning: Claims and Arguments 75
3.1.2 The Role of Arguments in Defending Claims 76
3.1.3 The Basic Structure of an Argument 76
3.2 Constructing an Argument 78
3.3 Valid Arguments 80
3.4 Sound Arguments 83
3.5 Invalid Arguments 85
3.6 Inductive Arguments 86
3.7 Fallacious Arguments 87
3.8 A Seven-Step Strategy for Evaluating Arguments 89
3.9 Identifying Some Common Fallacies 91
3.9.1 Ad Hominem Argument 92
3.9.2 Slippery Slope Argument 92
3.9.3 Fallacy of Appeal to Authority 93
3.9.4 False Cause Fallacy 93
3.9.5 Begging the Question 94
3.9.6 Fallacy of Composition/Fallacy of Division 94
3.9.7 Fallacy of Ambiguity/Equivocation 95
3.9.8 Appeal to the People (Argumentum ad Populum) 95
3.9.9 The Many/Any Fallacy 96
3.9.10 The Virtuality Fallacy 97
3.10 Chapter Summary 98
Review Questions 98
Discussion Questions 98
Essay/Presentation Questions 99
Scenarios for Analysis 99
Endnotes 99
References 100
Further Readings 100

CHAPTER 4

PROFESSIONAL ETHICS, CODES OF CONDUCT, AND MORAL RESPONSIBILITY 101

4.1 Professional Ethics 102
4.1.1 What Is a Profession? 103
4.1.2 Who Is a Professional? 103
4.1.3 Who Is a Computer/IT Professional? 104
4.2 Do Computer/IT Professionals Have Any Special Moral Responsibilities? 105
4.2.1 Safety-Critical Software 105
4.3 Professional Codes of Ethics and Codes of Conduct 106
4.3.1 The Purpose of Professional Codes 107
4.3.2 Some Criticisms of Professional Codes 108
4.3.3 Defending Professional Codes 109
4.3.4 The IEEE-CS/ACM Software Engineering Code of Ethics and Professional Practice 110
4.4 Conflicts of Professional Responsibility: Employee Loyalty and Whistle-Blowing 112
4.4.1 Do Employees Have an Obligation of Loyalty to Employers? 112
4.4.2 Whistle-Blowing Issues 114
Scenario 4–1: Whistle-Blowing and the "Star Wars" Controversy 115
4.4.3 An Alternative Strategy for Understanding Professional Responsibility 117
4.5 Moral Responsibility, Legal Liability, and Accountability 117
4.5.1 Distinguishing Responsibility from Liability and Accountability 118
4.5.2 Accountability and the Problem of "Many Hands" 119
Scenario 4–2: The Therac-25 Machine 120
4.5.3 Legal Liability and Moral Accountability 120
4.6 Risk Assessment in the Software Development Process 121
Scenario 4–3: The Aegis Radar System 121
4.7 Do Some Computer Corporations Have Special Moral Obligations? 122
4.7.1 Special Responsibilities for Search Engine Companies 123
4.7.2 Special Responsibilities for Companies that Develop Autonomous Systems 124
4.8 Chapter Summary 125
Review Questions 126
Discussion Questions 126
Essay/Presentation Questions 126
Scenarios for Analysis 127
Endnotes 128
References 128
Further Readings 130

CHAPTER 5

PRIVACY AND CYBERSPACE 131

5.1 Are Privacy Concerns Associated with Cybertechnology Unique or Special? 132
5.2 What is Personal Privacy? 134
5.2.1 Accessibility Privacy: Freedom from Unwarranted Intrusion 135
5.2.2 Decisional Privacy: Freedom from Interference in One's Personal Affairs 135
5.2.3 Informational Privacy: Control over the Flow of Personal Information 136
5.2.4 A Comprehensive Account of Privacy 136
Scenario 5–1: Descriptive Privacy 137
Scenario 5–2: Normative Privacy 137
5.2.5 Privacy as "Contextual Integrity" 137
Scenario 5–3: Preserving Contextual Integrity in a University Seminar 138
5.3 Why is Privacy Important? 139
5.3.1 Is Privacy an Intrinsic Value? 140
5.3.2 Privacy as a Social Value 141
5.4 Gathering Personal Data: Monitoring, Recording, and Tracking Techniques 141
5.4.1 "Dataveillance" Techniques 141
5.4.2 Internet Cookies 142
5.4.3 RFID Technology 143
5.4.4 Cybertechnology and Government Surveillance 145
5.5 Exchanging Personal Data: Merging and Matching Electronic Records 146
5.5.1 Merging Computerized Records 146
Scenario 5–4: Merging Personal Information in Unrelated Computer Databases 147
5.5.2 Matching Computerized Records 148
Scenario 5–5: Using Biometric Technology at Super Bowl XXXV 149
5.6 Mining Personal Data 150
5.6.1 How Does Data Mining Threaten Personal Privacy? 150
Scenario 5–6: Data Mining at the XYZ Bank 151
5.6.2 Web Mining 154
Scenario 5–7: The Facebook Beacon Controversy 154
5.7 Protecting Personal Privacy in Public Space 156
Scenario 5–8: Shopping at SuperMart 157
Scenario 5–9: Shopping at Nile.com 157
5.7.1 Search Engines and the Disclosure of Personal Information 158
Scenario 5–10: Tracking Your Search Requests on Google 159
5.7.2 Accessing Online Public Records 160
Scenario 5–11: Accessing Online Public Records in Pleasantville 161
Scenario 5–12: Accessing a State's Motor Vehicle Records Online 162
5.8 Privacy-Enhancing Technologies 162
5.8.1 Educating Users about PETs 163
5.8.2 PETs and the Principle of Informed Consent 163
5.9 Privacy Legislation and Industry Self-Regulation 164
5.9.1 Industry Self-Regulation Initiatives Regarding Privacy 164
Scenario 5–13: Controversies Involving Google's Privacy Policy 166
5.9.2 Privacy Laws and Data Protection Principles 166
5.10 Chapter Summary 168
Review Questions 169
Discussion Questions 169
Essay/Presentation Questions 170
Scenarios for Analysis 170
Endnotes 171
References 171
Further Readings 173

CHAPTER 6

SECURITY IN CYBERSPACE 174

6.1 Security in the Context of Cybertechnology 174
6.1.1 Cybersecurity as Related to Cybercrime 175
6.1.2 Security and Privacy: Some Similarities and Some Differences 175
6.2 Three Categories of Cybersecurity 176
6.2.1 Data Security: Confidentiality, Integrity, and Availability of Information 177
6.2.2 System Security: Viruses, Worms, and Malware 178
Scenario 6–1: The Conficker Worm 178
6.2.3 Network Security: Protecting our Infrastructure 179
Scenario 6–2: The GhostNet Controversy 179
6.3 "Cloud Computing" and Security 180
6.3.1 Deployment and Service/Delivery Models for the Cloud 181
6.3.2 Securing User Data Residing in the Cloud 182
6.4 Hacking and "The Hacker Ethic" 183
6.4.1 What Is "The Hacker Ethic"? 184
6.4.2 Are Computer Break-ins Ever Ethically Justifiable? 186
6.5 Cyberterrorism 187
6.5.1 Cyberterrorism vs. Hacktivism 188
Scenario 6–3: Anonymous and the "Operation Payback" Attack 189
6.5.2 Cybertechnology and Terrorist Organizations 190
6.6 Information Warfare (IW) 191
6.6.1 Information Warfare vs. Conventional Warfare 191
Scenario 6–4: The Stuxnet Worm and the "Olympic Games" Operation 192
6.6.2 Potential Consequences for Nations that Engage in IW 192
6.7 Cybersecurity and Risk Analysis 194
6.7.1 The Risk Analysis Methodology 194
6.7.2 The Problem of "De-Perimeterization" of Information Security for Analyzing Risk 195
6.8 Chapter Summary 196
Review Questions 196
Discussion Questions 197
Essay/Presentation Questions 197
Scenarios for Analysis 197
Endnotes 198
References 198
Further Readings 200

CHAPTER 7

CYBERCRIME AND CYBER-RELATED CRIMES 201

7.1 Cybercrimes and Cybercriminals 201
7.1.1 Background Events: A Brief Sketch 202
7.1.2 A Typical Cybercriminal 203
7.2 Hacking, Cracking, and Counterhacking 203
7.2.1 Hacking vs. Cracking 204
7.2.2 Active Defense Hacking: Can Acts of "Hacking Back" or Counter Hacking Ever Be Morally Justified? 204
7.3 Defining Cybercrime 205
7.3.1 Determining the Criteria 206
7.3.2 A Preliminary Definition of Cybercrime 207
Scenario 7–1: Using a Computer to File a Fraudulent Tax Return 207
7.3.3 Framing a Coherent and Comprehensive Definition of Cybercrime 208
7.4 Three Categories of Cybercrime: Piracy, Trespass, and Vandalism in Cyberspace 208
7.5 Cyber-Related Crimes 209
7.5.1 Some Examples of Cyber-Exacerbated vs. Cyber-Assisted Crimes 209
7.5.2 Identity Theft 211
7.6 Technologies and Tools for Combating Cybercrime 213
Scenario 7–2: Intercepting Mail that Enters and Leaves Your Neighborhood 213
7.6.1 Biometric Technologies 214
7.6.2 Keystroke-Monitoring Software and Packet-Sniffing Programs 215
7.7 Programs and Techniques Designed to Combat Cybercrime in the United States 216
7.7.1 Entrapment and "Sting" Operations to Catch Internet Pedophiles 216
Scenario 7–3: Entrapment on the Internet 216
7.7.2 Enhanced Government Surveillance Techniques and the Patriot Act 217
7.8 National and International Laws to Combat Cybercrime 218
7.8.1 The Problem of Jurisdiction in Cyberspace 218
Scenario 7–4: A Virtual Casino 218
Scenario 7–5: Prosecuting a Computer Corporation in Multiple Countries 219
7.8.2 Some International Laws and Conventions Affecting Cybercrime 220
Scenario 7–6: The Pirate Bay Web Site 221
7.9 Cybercrime and the Free Press: The WikiLeaks Controversy 221
7.9.1 Are WikiLeaks' Practices Ethical? 222
7.9.2 Are WikiLeaks' Practices Criminal? 222
7.9.3 WikiLeaks and the Free Press 223
7.10 Chapter Summary 225
Review Questions 225
Discussion Questions 226
Essay/Presentation Questions 226
Scenarios for Analysis 226
Endnotes 227
References 228
Further Readings 229

CHAPTER 8

INTELLECTUAL PROPERTY DISPUTES IN CYBERSPACE 230

8.1 What is Intellectual Property? 230
8.1.1 Intellectual Objects 231
8.1.2 Why Protect Intellectual Objects? 232
8.1.3 Software as Intellectual Property 232
8.1.4 Evaluating an Argument for Why It is Wrong to Copy Proprietary Software 233
8.2 Copyright Law and Digital Media 235
8.2.1 The Evolution of Copyright Law in the United States 235
8.2.2 The Fair-Use and First-Sale Provisions of Copyright Law 236
Scenario 8–1: Making Classic Books Available Online 237
Scenario 8–2: Decrypting Security on an e-Book Reader 237
8.2.3 Software Piracy as Copyright Infringement 238
8.2.4 Napster and the Ongoing Battles over Sharing Digital Music 239
Scenario 8–3: The Case of MGM v. Grokster 241
8.3 Patents, Trademarks, and Trade Secrets 242
8.3.1 Patent Protections 242
8.3.2 Trademarks 243
8.3.3 Trade Secrets 243
8.4 Jurisdictional Issues Involving Intellectual Property Laws 244
8.5 Philosophical Foundations for Intellectual Property Rights 245
8.5.1 The Labor Theory of Property 245
Scenario 8–4: DEF Corporation vs. XYZ Inc. 246
8.5.2 The Utilitarian Theory of Property 247
Scenario 8–5: Sam's e-Book Reader Add-on Device 247
8.5.3 The Personality Theory of Property 248
Scenario 8–6: Angela's B++ Programming Tool 249
8.6 The Free Software and the Open Source Movements 250
8.6.1 GNU and the Free Software Foundation 250
8.6.2 The "Open Source Software" Movement: OSS vs. FSF 251
8.7 The "Common-Good" Approach: An Alternative Framework for Analyzing the Intellectual Property Debate 252
8.7.1 Information Wants to be Shared vs. Information Wants to be Free 254
8.7.2 Preserving the Information Commons 256
8.7.3 The Fate of the Information Commons: Could the Public Domain of Ideas Eventually Disappear? 257
8.7.4 The Creative Commons 259
8.8 PIPA, SOPA, and RWA Legislation: Current Battlegrounds in the Intellectual Property War 260
8.8.1 The PIPA and SOPA Battles 261
8.8.2 RWA and Public Access to Health-Related Information 261
Scenario 8–7: Elsevier Press and "The Cost of Knowledge" Boycott 262
8.8.3 Intellectual Property Battles in the Near Future 263
8.9 Chapter Summary 264
Review Questions 264
Discussion Questions 265
Essay/Presentation Questions 265
Scenarios for Analysis 265
Endnotes 266
References 267
Further Readings 268

CHAPTER 9

REGULATING COMMERCE AND SPEECH IN CYBERSPACE 269

9.1 Background Issues and Some Preliminary Distinctions 270
9.1.1 The Ontology of Cyberspace: Is the Internet a Medium or a Place? 270
9.1.2 Two Categories of Cyberspace Regulation 271
9.2 Four Modes of Regulation: The Lessig Model 273
9.3 Digital Rights Management and the Privatization of Information Policy 274
9.3.1 DRM Technology: Implications for Public Debate on Copyright Issues 274
Scenario 9–1: The Sony Rootkit Controversy 275
9.3.2 Privatizing Information Policy: Implications for the Internet 276
9.4 The Use and Misuse of (HTML) Metatags and Web Hyperlinks 278
9.4.1 Issues Surrounding the Use/Abuse of HTML Metatags 278
Scenario 9–2: A Deceptive Use of HTML Metatags 279
9.4.2 Hyperlinking and Deep Linking 279
Scenario 9–3: Deep Linking on the Ticketmaster Web Site 280
9.5 E-Mail Spam 281
9.5.1 Defining Spam 281
9.5.2 Why Is Spam Morally Objectionable? 282
9.6 Free Speech vs. Censorship and Content Control in Cyberspace 284
9.6.1 Protecting Free Speech 284
9.6.2 Defining Censorship 285
9.7 Pornography in Cyberspace 286
9.7.1 Interpreting "Community Standards" in Cyberspace 286
9.7.2 Internet Pornography Laws and Protecting Children Online 287
9.7.3 Virtual Child Pornography 288
Scenario 9–4: A Sexting Incident Involving Greensburg Salem High School 290
9.8 Hate Speech and Speech that can Cause Physical Harm to Others 292
9.8.1 Hate Speech on the Web 292
9.8.2 Online "Speech" that Can Cause Physical Harm to Others 294
9.9 "Network Neutrality" and the Future of Internet Regulation 294
9.9.1 Defining Network Neutrality 295
9.9.2 Some Arguments Advanced by Net Neutrality's Proponents and Opponents 296
9.9.3 Future Implications for the Net Neutrality Debate 296
9.10 Chapter Summary 297
Review Questions 298
Discussion Questions 298
Essay/Presentation Questions 299
Scenarios for Analysis 299
Endnotes 300
References 300
Further Readings 301

CHAPTER 10

THE DIGITAL DIVIDE, DEMOCRACY, AND WORK 303

10.1 The Digital Divide 304
10.1.1 The Global Digital Divide 304
10.1.2 The Digital Divide within Nations 305
Scenario 10–1: Providing In-Home Internet Service for Public School Students 306
10.1.3 Is the Digital Divide an Ethical Issue? 307
10.2 Cybertechnology and the Disabled 309
10.2.1 Disabled Persons and Remote Work 310
10.2.2 Arguments for Continued WAI Support 311
10.3 Cybertechnology and Race 312
10.3.1 Internet Usage Patterns 312
10.3.2 Racism and the Internet 313
10.4 Cybertechnology and Gender 314
10.4.1 Access to High-Technology Jobs 315
10.4.2 Gender Bias in Software Design and Video Games 317
10.5 Cybertechnology, Democracy, and Democratic Ideals 317
10.5.1 Has Cybertechnology Enhanced or Threatened Democracy? 318
10.5.2 How has Cybertechnology Affected Political Elections in Democratic Nations? 322
10.6 The Transformation and the Quality of Work 324
10.6.1 Job Displacement and the Transformed Workplace 324
10.6.2 The Quality of Work Life in the Digital Era 328
Scenario 10–2: Employee Monitoring and the Case of Ontario v. Quon 329
10.7 Chapter Summary 331
Review Questions 332
Discussion Questions 332
Essay/Presentation Questions 333
Scenarios for Analysis 333
Endnotes 334
References 335
Further Readings 336

CHAPTER 11

ONLINE COMMUNITIES, CYBER IDENTITIES, AND SOCIAL NETWORKS 337

11.1 Online Communities and Social Networking Services 337
11.1.1 Online Communities vs. Traditional Communities 337
11.1.2 Blogs in the Context of Online Communities 339
11.1.3 Assessing Pros and Cons of Online Communities 339
Scenario 11–1: A Virtual Rape in Cyberspace 342
11.2 Virtual Environments and Virtual Reality 343
11.2.1 What is Virtual Reality (VR)? 344
11.2.2 Ethical Controversies Involving Behavior in VR Applications and Games 345
11.2.3 Misrepresentation, Bias, and Indecent Representations in VR Applications 349
11.3 Cyber Identities and Cyber Selves: Personal Identity and Our Sense of Self in the Cyber Era 351
11.3.1 Cybertechnology as a "Medium of Self-Expression" 352
11.3.2 "MUD Selves" and Distributed Personal Identities 352
11.3.3 The Impact of Cybertechnology on Our Sense of Self 353
11.4 AI and its Implications for What it Means to be Human 355
11.4.1 What is AI? A Brief Overview 355
11.4.2 The Turing Test and John Searle's "Chinese Room" Argument 357
11.4.3 Cyborgs and Human-Machine Relationships 358
Scenario 11–2: Artificial Children 361
11.4.4 Do (At Least Some) AI Entities Warrant Moral Consideration? 361
11.5 Chapter Summary 363
Review Questions 363
Discussion Questions 364
Essay/Presentation Questions 364
Scenarios for Analysis 365
Endnotes 365
References 366
Further Readings 367

CHAPTER 12

ETHICAL ASPECTS OF EMERGING AND CONVERGING TECHNOLOGIES 368

12.1 Converging Technologies and Technological Convergence 368
12.2 Ambient Intelligence (AmI) and Ubiquitous Computing 369
12.2.1 Pervasive Computing 371
12.2.2 Ubiquitous Communication 371
12.2.3 Intelligent User Interfaces 371
12.2.4 Ethical and Social Issues in AmI 372
Scenario 12–1: E. M. Forster's Precautionary Tale 373
Scenario 12–2: Jeremy Bentham's Panopticon 375
12.3 Bioinformatics and Computational Genomics 376
12.3.1 Computing and Genetic "Machinery": Some Conceptual Connections 376
12.3.2 Ethical Issues and Controversies 376
Scenario 12–3: deCODE Genetics Inc. 377
12.3.3 ELSI Guidelines and Genetic-Specific Legislation 380
12.4 Nanotechnology and Nanocomputing 381
12.4.1 Nanotechnology: A Brief Overview 382
12.4.2 Optimistic vs. Pessimistic Views of Nanotechnology 383
12.4.3 Ethical Issues in Nanotechnology and Nanocomputing 386
12.5 Autonomous Machines and Machine Ethics 389
12.5.1 What is an Autonomous Machine (AM)? 390
12.5.2 Some Ethical and Philosophical Questions Involving AMs 393
12.5.3 Machine Ethics and Moral Machines 398
12.6 A "Dynamic" Ethical Framework for Guiding Research in New and Emerging Technologies 402
12.6.1 Is an ELSI-Like Model Adequate for New/Emerging Technologies? 402
12.6.2 A "Dynamic Ethics" Model 403
12.7 Chapter Summary 404
Review Questions 404
Discussion Questions 405
Essay/Presentation Questions 405
Scenarios for Analysis 405
Endnotes 406
References 407
Further Readings 409

GLOSSARY 411

INDEX 417


PREFACE

As the digital landscape continues to evolve at a rapid pace, new variations of moral, legal, and social concerns arise along with it. Not surprisingly, then, an additional cluster of cyberethics issues has emerged since the publication of the previous edition of Ethics and Technology in late 2009. Consider, for example, the ways in which Cloud-based storage threatens the privacy and security of our personal data. Also consider the increasing amount of personal data that social networking sites such as Facebook and major search engine companies such as Google now collect. Should we worry about how that information can be subsequently used? Should we also worry about the filtering techniques that leading search engines now use to tailor or "personalize" the results of our search queries based on profiles derived from information about our previous search requests? Some analysts note that the information-gathering and profiling practices currently used in the commercial sector can also be adopted by governments, and they point out that these practices could not only support the surveillance initiatives of totalitarian governments but also threaten the privacy of citizens in democratic countries.

Also consider the impact that recent cyberwarfare activities, including the clandestine cyberattacks allegedly launched by some nation states, could have on our national infrastructure. Additionally, consider the national-security-related concerns raised by the WikiLeaks controversy, which has also exacerbated an ongoing tension between free speech on the Internet and standards for "responsible reporting" on the part of investigative journalists. And the recent debate about "network neutrality" causes us to revisit questions about the extent to which the service providers responsible for delivering online content should also be able to control the content that they deliver.

Other kinds of concerns now arise because of developments in a relatively new subfield of cyberethics called "machine ethics" (sometimes referred to as "robo-ethics"). For example, should we develop autonomous machines that are capable of making decisions that have moral implications? Some semiautonomous robots, which serve as companions and caregivers for the elderly and as "babysitters" for young children, are already available. Recent and continued developments in robotics and autonomous machines may provide many conveniences and services, but they can also cause us to question our conventional notions of autonomy, moral agency, and trust. For example, can/should these machines be fully autonomous? Can they qualify as (artificial) moral agents? Also, will humans be able to trust machines that they will increasingly rely on to carry out critical tasks? If we do not yet know the answers to these questions, and if no clear and explicit policies are in place to guide research in this area, should we continue to develop autonomous machines? These and related questions in the emerging field of machine ethics are but a few of the many new questions we examine in the fourth edition of Ethics and Technology.

Although new technologies emerge, and existing technologies continue to mature and evolve, many of the ethical issues associated with them are basically variations of existing ethical problems. At bottom, these issues reduce to traditional ethical concerns having to do with dignity, respect, fairness, obligations to assist others in need, and so forth. So, we should not infer that the moral landscape itself has been altered because of behaviors made possible by these technologies. We will see that, for the most part, the new issues examined in this edition of Ethics and Technology are similar in relevant respects to the kinds of ethical issues we examined in the book’s previous editions. However, many emerging technologies present us with challenges that, initially at least, do not seem to fit easily into our conventional ethical categories. So, a major objective of this textbook is to show how those controversies can be analyzed from the perspective of standard ethical concepts and theories.

The purpose of Ethics and Technology, as stated in the prefaces to the three previous editions of this book, is to introduce students to issues and controversies that comprise the relatively new field of cyberethics. The term "cyberethics" is used in this textbook to refer to the field of study that examines moral, legal, and social issues involving cybertechnology. Cybertechnology, in turn, refers to a broad spectrum of computing/information and communication technologies that range from stand-alone computers to the current cluster of networked devices and technologies. Many of these technologies include devices and applications that are connected to privately owned computer networks as well as to the Internet itself.

This textbook examines a wide range of cyberethics issues—from specific issues of moral responsibility that directly affect computer and information technology (IT) professionals to broader social and ethical concerns that affect each of us in our day-to-day lives. Questions about the roles and responsibilities of computer/IT professionals in developing safe and reliable computer systems are examined under the category of professional ethics. Broader social and ethical concerns associated with cybertechnology are examined under topics such as privacy, security, crime, intellectual property, Internet regulation, and so forth.

NEW TO THE FOURTH EDITION

New pedagogical material includes

• a newly designed set of end-of-chapter exercises called "Scenarios for Analysis," which can be used for either in-class analysis or group projects;

• new and/or updated (in-chapter) scenarios, illustrating both actual cases and hypothetical situations, which enable students to apply methodological concepts/frameworks and ethical theories covered in Chapters 1 and 2;

• new sample arguments in some chapters, which enable students to apply the tools for argument analysis covered in Chapter 3;

• updated "review questions," "discussion questions," and "essay/presentation questions" at the end of chapters;


• an updated and revised glossary of key terms used in the book;
• an updated Ethics and Technology Companion Site with new resources and materials for students and instructors.

New issues examined and analyzed include

• ethical and social aspects of Cloud computing, including concerns about the privacy and security of users' data that is increasingly being stored in "the Cloud";

• concerns about the "personalization filters" that search engine companies use to tailor our search results to conform to their perceptions of what we want;

• questions about Google's (2012) privacy policy vis-à-vis the amount of user data that can be collected via the search engine company's suite of applications;

• concerns about cyberwarfare activities involving nation states and their alleged launching of the Stuxnet worm and Flame virus;

• controversies surrounding WikiLeaks and the tension it creates between free speech and responsible journalism, as well as the concerns it raises for national security;

• concerns affecting "network neutrality" and whether regulation may be required to ensure that Internet service providers do not gain too much control over the content they deliver;

• controversies in "machine ethics," including the development of autonomous machines capable of making decisions that have moral impacts;

• questions about whether we can trust artificial agents to act in ways that will always be in the best interests of humans.

In revising the book, I have also eliminated some older, now out-of-date, material. Additionally, I have streamlined some of the material that originally appeared in previous editions of the book but still needed to be carried over into the present edition.

AUDIENCE AND SCOPE

Because cyberethics is an interdisciplinary field, this textbook aims at reaching several audiences and thus easily runs the risk of failing to meet the needs of any one audience. I have nonetheless attempted to compose a textbook that addresses the needs of computer science, philosophy, social/behavioral science, and library/information science students. Computer science students need a clear understanding of the ethical challenges they will face as computer professionals when they enter the workforce. Philosophy students, for their part, should understand how moral issues affecting cybertechnology can be situated in the field of applied ethics in general and then analyzed from the perspective of ethical theory. Social science and behavioral science students will likely want to assess the sociological impact of cybertechnology on our social and political institutions (government, commerce, and education) and sociodemographic groups (affecting gender, race, ethnicity, and social class). And library science and information science students should be aware of the complexities and nuances of current intellectual property laws that threaten unfettered access to electronic information, and should be informed about recent regulatory schemes that threaten to censor certain forms of electronic speech.


Students from other academic disciplines should also find many issues covered in this textbook pertinent to their personal and professional lives; some undergraduates may elect to take a course in social and ethical aspects of technology to satisfy one of their general education requirements. Although Ethics and Technology is intended mainly for undergraduate students, it could be used, in conjunction with other texts, in graduate courses as well.

We examine ethical controversies using scenarios that include both actual cases and hypothetical examples, wherever appropriate. In some instances I have deliberately constructed provocative scenarios and selected controversial cases to convey the severity of the ethical issues we consider. Some readers may be uncomfortable with, and possibly even offended by, these scenarios and cases—for example, those illustrating unethical practices that negatively affect children and minorities. Although it might have been politically expedient to skip over issues and scenarios that could unintentionally offend certain individuals, I believe that no textbook in applied ethics would do justice to its topic if it failed to expose and examine issues that adversely affect vulnerable groups in society.

Also included in most chapters are sample arguments that are intended to illustrate some of the rationales that have been put forth by various interest groups to defend policies and laws affecting privacy, security, property, and so forth, in cyberspace. Instructors and students can evaluate these arguments via the rules and criteria established in Chapter 3 to see how well, or how poorly, the premises in these arguments succeed in establishing their conclusions.

Exercise questions are included at the end of each chapter. First, basic "review questions" quiz the reader's comprehension of key concepts, themes, issues, and scenarios covered in that chapter. These are followed by higher level "discussion questions" designed to encourage students to reflect more deeply on some of the controversial issues examined in the chapter. In addition to the "essay/presentation questions" that are also included in each chapter, a new set of "Scenarios for Analysis" has been added in response to instructors who requested some unanalyzed scenarios for classroom use. Building on the higher level nature of the discussion questions and essay/presentation questions, these scenarios are intended to provide students and instructors with additional resources for analyzing important controversies introduced in the various chapters. As such, these scenarios can function as in-class resources for group projects.

Some essay/presentation questions and end-of-chapter scenarios ask students to compare and contrast arguments and topics that span multiple chapters; for example, students are asked to relate arguments used to defend intellectual property rights, considered in Chapter 8, to arguments for protecting privacy rights, examined in Chapter 5. Other questions and scenarios ask students to apply foundational concepts and frameworks, such as ethical theory and critical thinking techniques introduced in Chapters 2 and 3, to the analysis of specific cyberethics issues examined in subsequent chapters. In some cases, these end-of-chapter questions and scenarios may generate lively debate in the classroom; in other cases, they can serve as a point of departure for various class assignments and group projects. Although no final "solutions" to the issues and dilemmas raised in these questions and scenarios are provided in the text, some "strategies" for analyzing them are included in the section of the book's Web site (www.wiley.com/college/tavani) entitled "Strategies for Discussion Questions."


ORGANIZATION AND STRUCTURE OF THE BOOK

Ethics and Technology is organized into 12 chapters. Chapter 1, "Introduction to Cyberethics: Concepts, Perspectives, and Methodological Frameworks," defines key concepts and terms that will appear throughout the book. For example, definitions of terms such as cyberethics and cybertechnology are introduced in this chapter. We then examine whether any ethical issues involving cybertechnology are unique ethical issues. We also consider how we can approach cyberethics issues from three different perspectives: professional ethics, philosophical ethics, and sociological/descriptive ethics, which represent the approaches generally taken by a computer scientist, a philosopher, and a social/behavioral scientist, respectively. Chapter 1 concludes with a proposal for a comprehensive and interdisciplinary methodological scheme for analyzing cyberethics issues from these perspectives.

In Chapter 2, "Ethical Concepts and Ethical Theories: Establishing and Justifying a Moral System," we examine some of the basic concepts that make up a moral system. We draw a distinction between "ethics" and "morality" by defining ethics as "the study of morality." "Morality," or a moral system, is defined as an informal, public system comprising rules of conduct and principles for evaluating those rules. We then examine consequence-based, duty-based, character-based, and contract-based ethical theories. Chapter 2 concludes with a model that integrates elements of competing ethical theories into one comprehensive and unified theory.

Chapter 3, "Critical Reasoning Skills for Evaluating Disputes in Cyberethics," includes a brief overview of basic concepts and strategies that are essential for debating moral issues in a structured and rational manner. We begin by describing the structure of a logical argument and show how arguments can be constructed and analyzed. Next, we examine a technique for distinguishing between arguments that are valid and invalid, sound and unsound, and inductive and fallacious. We illustrate examples of each type with topics affecting cybertechnology and cyberethics. Finally, we identify some strategies for spotting and labeling "informal" logical fallacies that frequently occur in everyday discourse.

Chapter 4, "Professional Ethics, Codes of Conduct, and Moral Responsibility," examines issues related to professional responsibility for computer/IT professionals. We consider whether there are any special moral responsibilities that computer/IT professionals have as professionals. We then examine some professional codes of conduct that have been adopted by computer organizations. We also ask: To what extent are software engineers responsible for the reliability of the computer systems they design and develop, especially applications that include "life-critical" and "safety-critical" software? Are computer/IT professionals ever permitted, or perhaps even required, to "blow the whistle" when they have reasonable evidence to suggest that a computer system is unreliable? Finally, we examine some schemes for analyzing risks associated with the development of safety-critical software.

We discuss privacy issues involving cybertechnology in Chapter 5. First, we examine the concept of privacy as well as some arguments for why privacy is considered an important human value. We then look at how personal privacy is threatened by the kinds of surveillance techniques and data-collection schemes made possible by cybertechnology. Specific data-gathering and data-exchanging techniques are examined in detail. We next consider some challenges that data mining and Web mining pose for protecting personal privacy in public space. In Chapter 5, we also consider whether technology itself, in the form of privacy-enhancing technologies (or PETs), can provide an adequate solution to some privacy issues generated by cybertechnology.

Chapter 6, “Security in Cyberspace,” examines security threats in the context of computers and cybertechnology. Initially, we differentiate three distinct senses of “security”: data security, system security, and network security. We then examine the concepts of “hacker” and “hacker ethic,” and we ask whether computer break-ins can ever be morally justified. Next, we differentiate acts of “hacktivism,” cyberterrorism, and information warfare. Chapter 6 concludes with a brief examination of risk analysis in the context of cybersecurity.

We begin our analysis of cybercrime, in Chapter 7, by considering whether we can construct a profile of a "typical" cybercriminal. We then propose a definition of cybercrime that enables us to distinguish between "cyberspecific" and "cyber-related" crimes to see whether such a distinction would aid in the formulation of more coherent cybercrime laws. We also consider the notion of legal jurisdiction in cyberspace, especially with respect to the prosecution of cybercrimes that involve interstate and international venues. In addition, we examine technological efforts to combat cybercrime, such as controversial uses of biometric technologies.

Chapters 8 and 9 examine legal issues involving intellectual property and free speech, respectively, as they relate to cyberspace. One objective of Chapter 8, "Intellectual Property Disputes in Cyberspace," is to show why an understanding of the concept of intellectual property is important in an era of digital information. We consider three theories of property rights and make important distinctions among legal concepts such as copyright law, patent protection, and trademarks. Additionally, we consider specific scenarios involving intellectual property disputes, including the original Napster controversy as well as some recent peer-to-peer (P2P) networks that have been used for file sharing. We also examine the Free Software and the Open Source Software initiatives. Finally, we consider a compromise solution that supports and encourages the sharing of digital information in an era when strong copyright legislation seems to discourage that practice.

Chapter 9, "Regulating Commerce and Speech in Cyberspace," looks at additional legal issues, especially as they involve regulatory concerns in cyberspace. We draw distinctions between two different senses of "regulation" as it applies to cyberspace, and we also consider whether the Internet should be understood as a medium or as a "place." We also examine controversies surrounding e-mail spam, which some believe can be viewed as a form of "speech" in cyberspace. We then ask whether all forms of online speech should be granted legal protection; for example, should child pornography, hate speech, and speech that can cause physical harm to others be tolerated in online forums?

Chapter 10 examines a wide range of equity-and-access issues from the perspective of cybertechnology's impact on sociodemographic groups (affecting class, race, and gender). The chapter begins with an analysis of global aspects of the "digital divide." We then examine specific equity-and-access issues affecting disabled persons, racial minorities, and women. Next, we explore the relationship between cybertechnology and democracy, and we consider whether the Internet facilitates democracy or threatens it. We then examine some social and ethical issues affecting employment in the contemporary workplace, and we ask whether the use of cybertechnology has transformed work and has affected the overall quality of work life.


In Chapter 11, we examine issues pertaining to online communities, virtual-reality (VR) environments, and artificial intelligence (AI) developments in terms of two broad themes: community and personal identity in cyberspace. We begin by analyzing the impact that cybertechnology has for our traditional understanding of the concept of community. In particular, we ask whether online communities, such as Facebook and Twitter, raise any special ethical or social issues. Next, we examine some implications that behaviors made possible by virtual environments and virtual-reality applications have for our conventional understanding of personal identity. The final section of Chapter 11 examines the impact that developments in AI have for our sense of self and for what it means to be human.

Chapter 12, the final chapter of Ethics and Technology, examines some ethical challenges that arise in connection with emerging and converging technologies. We note that cybertechnology is converging with noncybertechnologies, including biotechnology and nanotechnology, generating new fields such as bioinformatics and nanocomputing that, in turn, introduce ethical concerns. Chapter 12 also includes a brief examination of some issues in the emerging (sub)field of machine ethics. Among the questions considered are whether we should develop autonomous machines that are capable of making moral decisions and whether we could trust those machines to always act in our best interests.

A Glossary that defines terms commonly used in the context of computer ethics and cyberethics is also included. However, the glossary is by no means intended as an exhaustive list of such terms. Additional material for this text is available on the book's Web site: www.wiley.com/college/tavani.

THE WEB SITE FOR ETHICS AND TECHNOLOGY

Seven appendices for Ethics and Technology are available only in online format. Appendices A through E include the full text of five professional codes of ethics: the ACM Code of Ethics and Professional Conduct, the Australian Computer Society Code of Ethics, the British Computer Society Code of Conduct, the IEEE Code of Ethics, and the IEEE-CS/ACM Software Engineering Code of Ethics and Professional Practice, respectively. Specific sections of these codes are included in hardcopy format as well, in relevant sections of Chapter 4. Two appendices, F and G, are also available online. Appendix F contains the section of the IEEE-CS/ACM Computing Curricula 2001 Final Report that describes the social, professional, and ethical units of instruction mandated in their computer science curriculum. Appendix G provides some additional critical reasoning techniques that expand on the strategies introduced in Chapter 3.

The Web site for Ethics and Technology also contains additional resources for instructors and students. Presentation slides in PowerPoint format for Chapters 1–12, as well as graphics (for tables and figures in each chapter), are available in the "Instructor" and "Student" sections of the site. As noted earlier, a section on "Strategies," which includes some techniques for answering the discussion questions and unanalyzed scenarios included at the end of each of the book's 12 chapters, is also included on this site.

The book's Web site is intended as an additional resource for both instructors and students. It also enables me to "update the book," in between editions, with new issues and scenarios in cyberethics as they arise. For example, a section entitled "Recent Controversies" is included on the book's Web site. I invite your feedback as to how this site can be continually improved.

A NOTE TO STUDENTS

If you are taking an ethics course for the first time, you might feel uneasy about embarking on a study of moral issues and controversial topics: ethics is sometimes perceived to be preachy, and its subject matter is sometimes viewed as essentially personal and private in nature. Because these are common concerns, I address them early in the textbook. I draw a distinction between an ethicist, who studies morality or a "moral system," and a moralist, who presumes to have the correct answers to all of the questions; note that a primary objective of this book is to examine and analyze ethical issues, not to presume that any of us already has the correct answer to any of the questions I consider.

To accomplish this objective, I introduce three types of conceptual frameworks early in the textbook. In Chapter 1, I provide a methodological scheme that enables you to identify controversial problems and issues involving cybertechnology as ethical issues. The conceptual scheme included in Chapter 2, based on ethical theory, provides some general principles to guide your analysis of specific cases as well as your deliberations about which kinds of solutions to problems should be proposed. A third, and final, conceptual framework is introduced in Chapter 3 in the form of critical reasoning techniques, which provide rules and standards that you can use to evaluate the strengths of competing arguments and to defend the position you reach on a particular issue.

This textbook was designed and written for you, the student! Whether or not it succeeds in helping you to meet the objectives of a course in cyberethics is very important to me, so I welcome your feedback and would sincerely appreciate hearing your ideas on how this textbook could be improved. Please feel free to write to me with your suggestions, comments, and so forth. My email address is htavani@rivier.edu. I look forward to hearing from you!

NOTE TO INSTRUCTORS: A ROADMAP FOR USING THIS BOOK

The chapters that make up Ethics and Technology are sequenced so that readers are exposed to foundational issues and conceptual frameworks before they examine specific problems in cyberethics. In some cases, it may not be possible for instructors to cover all of the material in Chapters 1–3. It is strongly recommended, however, that before students are assigned material in Chapter 4, they at least read Sections 1.1, 1.4–1.5, 2.4–2.8, and 3.1. Instructors using this textbook can determine which chapters best accommodate their specific course objectives. Computer science instructors, for example, will likely want to assign Chapter 4, on professional ethics and responsibility, early in the term. Social science instructors, on the other hand, will likely examine issues discussed in Chapters 10 and 11 early in their course. Philosophy instructors may wish to structure their courses beginning with a thorough examination of the material on ethical concepts and ethical theory in Chapter 2 and techniques for evaluating logical arguments in Chapter 3. Issues discussed in Chapter 12 may be of particular interest to CS instructors teaching advanced undergraduate students.

Many textbooks in applied ethics include a requisite chapter on ethical concepts and theory at the beginning of the book. Unfortunately, they often treat these topics in a cursory manner; furthermore, the ethical concepts and theories are seldom developed and reinforced in the remaining chapters. Thus, readers often experience a "disconnect" between the material included in the book's opening chapter and the content of the specific cases and issues discussed in subsequent chapters. By incorporating elements of ethical theory into my discussion and analysis of the specific cyberethics issues I examine, I have tried to avoid the "disconnect" between theory and practice that is commonplace in many applied ethics textbooks.

A NOTE TO COMPUTER SCIENCE INSTRUCTORS

Ethics and Technology can be used as the main text in a course dedicated to ethical and social issues in computing, or it can be used as a supplementary textbook for computer science courses in which one or more ethics modules are included. As I suggested in the preceding section, instructors may find it difficult to cover all of the material included in this book in the course of a single semester. And as I also previously suggested, computer science instructors will likely want to ensure that they allocate sufficient course time to the professional ethical issues discussed in Chapter 4. Also of special interest to computer science instructors and their students will be the sections on computer security and risk analysis in Chapter 6; open source code and intellectual property issues in Chapter 8; and regulatory issues affecting software code in Chapter 9. Because computer science instructors may need to limit the amount of class time they devote to covering foundational concepts included in the earlier chapters, I recommend covering at least the critical sections of Chapters 1–3 described previously. This should provide computer science students with some of the tools they will need as professionals to deliberate on ethical issues and to justify the positions they reach.

In designing this textbook, I took into account the guidelines on ethical instruction included in the Computing Curricula 2001 Final Report, issued in December 2001 by the IEEE-CS/ACM Joint Task Force on Computing Curricula, which recommends the inclusion of 16 core hours of instruction on social, ethical, and professional topics in the curriculum for undergraduate computer science students. [See the online Appendix F at www.wiley.com/college.tavani for detailed information about the social/professional (SP) units in the Computing Curricula 2001.] Each topic prefaced with an SP designation defines one “knowledge area” within the CS “body of knowledge”; these areas are distributed among the following 10 units:

SP1: History of computing (e.g., history of computer hardware, software, and networking)

SP2: Social context of computing (e.g., social implications of networked computing, gender-related issues, and international issues)


SP3: Methods and tools of analysis (e.g., identifying assumptions and values, making and evaluating ethical arguments)

SP4: Professional and ethical responsibilities (e.g., the nature of professionalism, codes of ethics, ethical dissent, and whistle-blowing)

SP5: Risks and liabilities of computer-based systems (e.g., historical examples of software risks)

SP6: Intellectual property (e.g., foundations of intellectual property, copyrights, patents, and software piracy)

SP7: Privacy and civil liberties (e.g., ethical and legal basis for privacy protection, technological strategies for privacy protection)

SP8: Computer crime (e.g., history and examples of computer crime, hacking, viruses, and crime prevention strategies)

SP9: Economic issues in computing (e.g., monopolies and their economic implications; effect of skilled labor supply)

SP10: Philosophical frameworks (e.g., ethical theory, utilitarianism, relativism)

All 10 SP units are covered in this textbook. Topics described in SP1 are examined in Chapters 1 and 10, and topics included in SP2 are discussed in Chapters 1 and 11. The methods and analytical tools mentioned in SP3 are described at length in Chapters 2 and 3, whereas professional issues involving codes of conduct and professional responsibility described in SP4 are included in Chapters 4 and 12. Also discussed in Chapter 4, as well as in Chapter 6, are issues involving risks and liabilities (SP5). Intellectual property issues (SP6) are discussed in detail in Chapter 8 and in certain sections of Chapter 9, whereas privacy and civil liberty concerns (SP7) are discussed mainly in Chapters 5 and 12. Chapters 6 and 7 examine topics described in SP8. Economic issues (SP9) are considered in Chapters 9 and 10. And philosophical frameworks of ethics, including ethical theory (SP10), are discussed in Chapters 1 and 2.

Table 1 maps each SP unit to the chapters of this book in which it is covered.

TABLE 1 SP (“Knowledge”) Units and Corresponding Book Chapters

SP unit     Chapter(s)
SP1         1, 10
SP2         1, 11
SP3         2, 3
SP4         4, 12
SP5         4, 6
SP6         8, 9
SP7         5, 12
SP8         6, 7
SP9         9, 10
SP10        1, 2


c ACKNOWLEDGMENTS

In revising Ethics and Technology for a fourth edition, I have once again drawn from several of my previously published works. Chapters 1–4, on foundational and professional issues in cyberethics, incorporate material from four articles: “The State of Computer Ethics as a Philosophical Field of Inquiry,” Ethics and Information Technology 3, no. 2 (2001); “Applying an Interdisciplinary Approach to Teaching Computer Ethics,” IEEE Technology and Society Magazine 21, no. 3 (2002); “The Uniqueness Debate in Computer Ethics,” Ethics and Information Technology 4, no. 1 (2002); and “Search Engines and Ethics,” Stanford Encyclopedia of Philosophy (2012).

Chapter 5, on privacy in cyberspace, also draws from material in four works: “Computer Matching and Personal Privacy,” Proceedings of the Symposium on Computers and the Quality of Life (ACM Press, 1996); “Informational Privacy, Data Mining, and the Internet,” Ethics and Information Technology 1, no. 2 (1999); “Privacy Enhancing Technologies as a Panacea for Online Privacy Concerns: Some Ethical Considerations,” Journal of Information Ethics 9, no. 2 (2000); and “Applying the ‘Contextual Integrity’ Model of Privacy to Personal Blogs in the Blogosphere” (coauthored with Frances Grodzinsky), International Journal of Internet Research Ethics 3 (2010). Chapters 6 and 7, on security and crime in cyberspace, draw from material in three sources: “Privacy and Security” in Duncan Langford’s book Internet Ethics (Macmillan/St. Martins, 2000); “Defining the Boundaries of Computer Crime: Piracy, Trespass, and Vandalism in Cyberspace” in Readings in CyberEthics, 2nd ed. (Jones and Bartlett, 2004); and “Privacy in ‘the Cloud’” (coauthored with Frances Grodzinsky), Computers and Society 41, no. 1 (2011).

In Chapters 8 and 9, on intellectual property and Internet regulation, I drew from material in “Information Wants to be Shared: An Alternative Approach for Analyzing Intellectual Property Disputes in the Information Age,” Catholic Library World 73, no. 2 (2002); and two papers coauthored with Frances Grodzinsky: “P2P Networks and the Verizon v. RIAA Case,” Ethics and Information Technology 7, no. 4 (2005) and “Online File Sharing: Resolving the Tensions between Privacy and Property,” Computers and Society 38, no. 4 (2008). Chapters 10 and 11, on the digital divide, democracy, and online communities, draw from two papers: “Ethical Reflections on the Digital Divide,” Journal of Information, Communication and Ethics in Society 1, no. 2 (2003) and “Online Communities, Democratic Ideals, and the Digital Divide” (coauthored with Frances Grodzinsky) in Soraj Hongladarom and Charles Ess’s book Information Technology Ethics: Cultural Perspectives (IGI Global, 2007).

Chapter 12, on emerging and converging technologies, incorporates material from my book Ethics, Computing, and Genomics (Jones and Bartlett, 2006), and from three recently published papers: “Can We Develop Artificial Agents Capable of Making Good Moral Decisions?” Minds and Machines 21, no. 3 (2011); “Trust and Multi-Agent Systems” (coauthored with Jeff Buechner), Ethics and Information Technology 13, no. 1 (2011); and “Ethical Aspects of Autonomous Systems” in Michael Decker and Mathias Gutmann’s book Robo- and Information-Ethics (Berlin: Verlag LIT, 2012).

The fourth edition of Ethics and Technology has benefited from suggestions and comments I received from many anonymous reviewers, as well as from the following colleagues: Jeff Buechner, Lloyd Carr, Jerry Dolan, Frances Grodzinsky, Kenneth Himma, James Moor, Martin Menke, Wayne Pauley, Mark Rosenbaum, Regina Tavani, and John Weckert. I am especially grateful to Fran Grodzinsky (Sacred Heart University), with whom I have coauthored several papers, for permitting me to incorporate elements of our joint research into relevant sections of this book. And I am most grateful to Lloyd Carr (Rivier University) for his invaluable feedback on several chapters and sections of this edition of the book, which he was willing to review multiple times; his astute comments and suggestions have helped me to refine many of the positions I defend in this book.

The new edition of the book has also benefited from some helpful comments that I received from many students who have used previous editions of the text. I am also grateful to the numerous reviewers and colleagues who commented on the previous editions of this book; many of their helpful suggestions have been carried over to the present edition.

I also wish to thank the editorial and production staffs at Wiley and Thomson Digital, especially Beth Golub, Elizabeth Mills, Katherine Willis, Jolene Ling, and Sanchari Sil, for their support during the various stages of the revision process for the fourth edition of Ethics and Technology.

Finally, I must once again thank the two most important people in my life: my wife Joanne, and our daughter Regina. Without their continued support and extraordinary patience, the fourth edition of this book could not have been completed.

This edition of Ethics and Technology is dedicated to the memory of my grandparents: Leon and Marian (Roberts) Hutton, and Antonio and Clelia (Giamberardino) Tavani.

Herman T. Tavani
Nashua, NH


c FOREWORD

The computer/information revolution is shaping our world in ways it has been difficult to predict and to appreciate. When mainframe computers were developed in the 1940s and 1950s, some thought only a few computers would ever be needed in society. When personal computers were introduced in the 1980s, they were considered fascinating toys for hobbyists but not something serious businesses would ever use. When Web tools were initially created in the 1990s to enhance the Internet, they were a curiosity. Using the Web to observe the level of a coffee pot across an ocean was intriguing, at least for a few moments, but not of much practical use. Today, armed with the wisdom of hindsight, the impact of such computing advancements seems obvious, if not inevitable, to all of us. What government claims that it does not need computers? What major business does not have a Web address? How many people, even in the poorest of countries, are not aware of the use of cell phones?

The computer/information revolution has changed our lives and has brought with it significant ethical, social, and professional issues; consider the area of privacy as but one example. Today, surveillance cameras are abundant, and facial recognition systems are effective even under less than ideal observing conditions. Information about buying habits, medical conditions, and human movements can be mined and correlated relentlessly using powerful computers. Individuals’ DNA information can easily be collected, stored, and transmitted throughout the world in seconds. This computer/information revolution has brought about unexpected capabilities and possibilities. The revolution is not only technological but also ethical, social, and professional. Our computerized world is perhaps not the world we expected, and, even to the extent that we expected it, it is not a world for which we have well-analyzed policies about how to behave. Now more than ever we need to take cyberethics seriously.

Herman Tavani has written an excellent introduction to the field of cyberethics. His text differs from others in at least three important respects: First, the book is extraordinarily comprehensive and up to date in its subject matter. The text covers all of the standard topics such as codes of conduct, privacy, security, crime, intellectual property, and free speech, and also discusses sometimes overlooked subjects such as democracy, employment, access, and the digital divide. Tavani more than anyone else has tracked and published the bibliographical development of cyberethics over many years, and his expertise with this vast literature shines through in this volume. Second, the book approaches the subject matter of cyberethics from diverse points of view. Tavani examines issues from a social science perspective, from a philosophical perspective, and from a computing professional perspective, and then he suggests ways to integrate these diverse approaches. If the task of cyberethics is multidisciplinary, as many of us believe, then such a diverse but integrated methodology is crucial to accomplishing the task. His book is one of the few that constructs such a methodology. Third, the book is unusually helpful to students and teachers because it contains an entire chapter discussing critical thinking skills and is filled with review and discussion questions.

The cyberage is going to evolve. The future details and applications are, as always, difficult to predict. But it is likely that computing power and bandwidth will continue to grow while computing devices themselves will shrink in size to the nanometer scale. More and more information devices will be inserted into our environment, our cars, our houses, our clothing, and us. Computers will become smarter. They will be made out of new materials, possibly biological. They will operate in new ways, possibly using quantum properties. The distinction between the virtual world and the real world will blur more and more. We need a good book in cyberethics to deal with the present and prepare us for this uncertain future. Tavani’s Ethics and Technology is such a book.

James H. Moor
Dartmouth College


c CHAPTER 1

Introduction to Cyberethics: Concepts, Perspectives, and Methodological Frameworks

Our primary objective in Chapter 1 is to introduce some foundational concepts and methodological frameworks that will be used in our analysis of specific cyberethics issues in subsequent chapters of this textbook. To accomplish this objective, we

- define key terms such as cyberethics and cybertechnology;
- describe key developmental phases in cybertechnology that influenced the evolution of cyberethics as a distinct field of applied ethics;
- consider whether there is anything unique or special about cyberethics issues;
- examine three distinct perspectives for identifying and approaching cyberethics issues;
- propose a comprehensive methodological scheme for analyzing cyberethics issues.

We begin by reflecting briefly on three scenarios, each illustrating a cluster of ethical issues that will be examined in detail in later chapters of this book.

c SCENARIO 1–1: A Fatal Cyberbullying Incident on MySpace

Megan Meier, a 13-year-old resident of Dardenne Prairie, Missouri, had an account on MySpace where she received a “friend” request from a user named Josh Evans. Evans, who claimed to be a 16-year-old boy, told Meier that he lived near her and was being home-schooled by his parents. At first, Evans sent flattering e-mails to Meier, which also suggested that he might be romantically interested in her. Soon, however, Evans’s remarks turned from compliments to insults, and Evans informed Meier that he was no longer sure that he wanted to be friends with her because he heard that she “wasn’t very nice to her friends.” Next, Meier noticed that some highly derogatory posts about her—e.g., “Megan Meier is a slut” and “Megan Meier is fat”—began to appear on MySpace. Meier, who was reported to have suffered from low self-esteem and depression, became increasingly distressed by the online harassment (cyberbullying) being directed at her—i.e., from both the insulting MySpace postings and hurtful e-mail messages she continued to receive from Evans. On October 17, 2006, Meier decided to end her life by hanging herself in her bedroom. An investigation of this incident, following Meier’s death, revealed that Josh Evans was not a teenage boy but was, in fact, Lori Drew, the 49-year-old mother of a former friend of Meier’s.1 &



c SCENARIO 1–2: Contesting the Ownership of a Twitter Account

Noah Kravitz was employed by PhoneDog Media, a mobile phone company, for nearly four years. PhoneDog had two divisions: an e-commerce site (phonedog.com) that sold mobile phones, and a blog that enabled customers to interact with the company. Kravitz created a blog on Twitter (called Phonedog_Noah) while employed at PhoneDog, and his blog attracted 17,000 followers by the time he left the company in October 2010. However, Kravitz informed PhoneDog that he wanted to keep his Twitter blog, with all of his followers; in return, Kravitz agreed that he would still “tweet” occasionally on behalf of his former company, under a new (Twitter) “handle,” or account name, NoahKravitz. Initially, PhoneDog seemed to have no problem with this arrangement. In July 2011, however, PhoneDog sued Kravitz, arguing that his list of Twitter followers was, in fact, a company list. PhoneDog also argued that it had invested a substantial amount of money in growing its customer list, which it considered to be the property of PhoneDog Media. The company (as of early 2012) is seeking $340,000 in damages—the amount that PhoneDog estimated it had lost based on 17,000 customers valued at $2.50 per customer per month over an eight-month period (following Kravitz’s departure from the company).2 &
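(The damages figure can be checked arithmetically, assuming, as the scenario indicates, a valuation of $2.50 per follower per month: 17,000 followers × $2.50 × 8 months = $340,000.)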


c SCENARIO 1–3: “The Washingtonienne” Blogger

Jessica Cutler, a former staff assistant to U.S. Senator Michael DeWine (R-Ohio), authored an online diary (on blogger.com) under the pseudonym “The Washingtonienne.” In May 2004, she was fired when the contents of her diary appeared in Wonkette: The DC Gossip, a popular blog in the Washington D.C. area. Until her diary was discovered and published in Wonkette, Cutler assumed that it had been viewed by only a few of her fellow “staffers” (Washington D.C. staff assistants) who were interested in reading about the details of her romantic relationships and sexual encounters. In her diary, Cutler disclosed that she earned an annual salary of only $25,000 as a staffer and that most of her living expenses were “thankfully subsidized by a few generous older gentlemen.” She also described some details of her sexual relationships with these men, one of whom was married and an official in the George W. Bush administration. (Cutler did not use the real names of these men but instead referred to them via initials that could easily be linked to their actual identities.) Following her termination as a staffer, in response to the political fallout and the media attention resulting from the publication of her diary, Cutler was offered a book contract with a major publisher. She was also subsequently sued by one of the men implicated in her blog.3 &


First, consider some ethical concerns that arise in the Megan Meier cyberbullying scenario. These include worries affecting anonymity and pseudonymity, deception, crime, legal liability, and moral responsibility. Should Lori Drew, as well as any other MySpace user, have been permitted to open an account on that social networking site (SNS) under an alias or pseudonym that also included a fictitious profile? Should MySpace, or any SNS, tolerate members who deceive, intimidate, or harass other users? Should users who create accounts on SNSs with the intention to deceive or harass others be subject to criminal prosecution? Should MySpace have been held legally liable, at least in some contributory sense, for Meier’s death? Also, do ordinary users of an SNS who discover that someone is being bullied in that online forum have a moral responsibility to inform the SNS? Do they also have a moral responsibility to inform that SNS if they discover that someone has created a fraudulent account on their forum, which could be used to deceive and harass other members? These and similar questions are examined in detail in Chapters 7 and 11.

Next, consider the scenario involving Twitter. Here, several important ethical, legal, and policy issues also arise—especially with respect to intellectual property rights and ownership of information. For example, can an employer’s customer list constitute a “trade secret,” as PhoneDog claimed? Should an employee be authorized to create a single Twitter account in which the followers are simultaneously interested both in the employer’s product and in the employee’s (private) blog? Should employees be allowed to post to their private accounts on SNSs, such as Twitter or Facebook, during work hours or, for that matter, whenever/wherever they are using an employer’s computing resources? If so, who has legal ownership rights to that information? A different, but somewhat related, question has to do with whether ordinary users should be able to post on their private SNS accounts anything they wish to say about their current or former employers, without first getting explicit permission to do so. Questions pertaining to these and related issues are examined in Chapters 8 and 9.

Third, consider “The Washingtonienne” scenario, where a wide range of ethical and legal issues also arise. These include concerns affecting privacy, confidentiality, anonymity, free speech, defamation, and so forth. For example, did Cutler violate the privacy and confidentiality of her romantic partners through the remarks she made about them in her online diary? Should she be held liable for defamation because of the nature of her remarks about these individuals, or was she merely exercising her right to free speech? Was Cutler’s expectation of anonymity violated when she was eventually “outed” by Wonkette, or were the circumstances surrounding this incident no different from that of any author or journalist who writes under a pseudonym but whose real identity is eventually discovered and made public? Should Cutler’s online diary be considered a “public document” merely because it was on the Web, or did her diary also deserve some privacy protection because of the limited scope of its intended audience? Answers to these and related questions affecting blogs and the “blogosphere” are examined in Chapters 5, 9, and 11.

The Meier, Twitter, and Washingtonienne scenarios provide us with particular contexts in which we can begin to think about a cluster of ethical issues affecting the use of computers and cybertechnology. A number of alternative examples could also have been used to illustrate many of the moral and legal concerns that arise in connection with this technology. In fact, examples abound. One has only to read a daily newspaper or view regular television news programs to be informed about controversial issues involving computers and the Internet, including questions that pertain to property, privacy, security, anonymity, crime, and jurisdiction. Ethical aspects of these issues are examined in the chapters comprising this textbook. In the remainder of Chapter 1, we identify and examine some key foundational concepts and frameworks in cyberethics.

c 1.1 DEFINING KEY TERMS: CYBERETHICS AND CYBERTECHNOLOGY

Before we propose a definition of cyberethics, it is important to note that the field of cyberethics can be viewed as a branch of (applied) ethics. In Chapter 2, where we define ethics as “the study of morality,” we provide a detailed account of what is meant by morality and a moral system, and we also focus on some important aspects of theoretical, as opposed to applied, ethics. For example, both ethical concepts and ethical theories are also examined in detail in that chapter. There, we also include a “Getting Started” section on how to engage in ethical reasoning in general, as well as reasoning in the case of some specific moral dilemmas. In Chapter 1, however, our main focus is on clarifying some key cyber and cyber-related terms that will be used throughout the remaining chapters of this textbook.

For our purposes, cyberethics can be defined as the study of moral, legal, and social issues involving cybertechnology. Cyberethics examines the impact of cybertechnology on our social, legal, and moral systems, and it evaluates the social policies and laws that have been framed in response to issues generated by its development and use. To grasp the significance of these reciprocal relationships, it is important to understand what is meant by the term cybertechnology.

1.1.1 What Is Cybertechnology?

Cybertechnology, as used throughout this textbook, refers to a wide range of computing and communication devices, from stand-alone computers to connected, or networked, computing and communication technologies. These technologies include, but need not be limited to, devices such as “smart” phones, iPods, (electronic) “tablets,” personal computers (desktops and laptops), and large mainframe computers. Networked devices can be connected directly to the Internet, or they can be connected to other devices through one or more privately owned computer networks. Privately owned networks, in turn, include local area networks (LANs) and wide area networks (WANs). A LAN is a privately owned network of computers that spans a limited geographical area, such as an office building or a small college campus. WANs, on the other hand, are privately owned networks of computers that are interconnected throughout a much broader geographic region.

How exactly are LANs and WANs different from the Internet? In one sense, the Internet can be understood as the network of interconnected computer networks. A synthesis of contemporary information and communications technologies, the Internet evolved from an earlier United States Defense Department initiative (in the 1960s) known as the ARPANET. Unlike WANs and LANs, which are privately owned computer networks, the Internet is generally considered to be a public network, in the sense that much of the information available on the Internet resides in “public space” and is thus available to anyone. The Internet, which should be differentiated from the World Wide Web, includes several applications. The Web, based on hypertext transfer protocol (HTTP), is one application; other applications include file transfer protocol (FTP), Telnet, and e-mail. Because many users navigate the Internet by way of the Web, and because the majority of users conduct their online activities almost exclusively on the Web portion of the Internet, it is very easy to confuse the Web with the Internet.

The Internet and privately owned computer networks, such as WANs and LANs, are perhaps the most common and well-known examples of cybertechnology. However, “cybertechnology” is used in this book to represent the entire range of computing systems, from stand-alone computers to privately owned networks to the Internet itself. “Cyberethics” refers to the study of moral, legal, and social issues involving those technologies.


1.1.2 Why the Term Cyberethics?

Many authors have used the term “computer ethics” to describe the field that examines moral issues pertaining to computing and information technology (see, for example, Barger 2008; Johnson 2010). Others use the expression “information ethics” (e.g., Capurro 2007) to refer to a cluster of ethical concerns regarding the flow of information that is either enhanced or restricted by computer technology.4 Because of concerns about ethical issues involving the Internet in particular, some have also used the term “Internet ethics” (Langford 2000). Ethical issues examined in this textbook, however, are not limited to the Internet; they also include privately owned computer networks and interconnected communication technologies—i.e., technologies that we refer to collectively as cybertechnology. Hence, we use “cyberethics” to capture the wide range of moral issues involving cybertechnology.

For our purposes, “cyberethics” is more accurate than “computer ethics” for two reasons. First, the term “computer ethics” can connote ethical issues associated with computing machines, and thus could be construed as pertaining to stand-alone or “unconnected computers.” Because computing technologies and communication technologies have converged in recent years, resulting in networked systems, a computer system may now be thought of more accurately as a new kind of medium than as a machine. Second, the term “computer ethics” might also suggest a field of study that is concerned exclusively with ethical issues affecting computer professionals. Although these issues are very important, and are examined in detail in Chapter 4 as well as in relevant sections of Chapters 6 and 12, we should note that the field of cyberethics is not limited to an analysis of moral issues that affect only professionals.

“Cyberethics” is also more accurate, for our purposes, than “information ethics.” For one thing, “information ethics” is ambiguous because it can mean a specific methodological framework—i.e., Information Ethics (or IE)—for analyzing issues in cyberethics (Floridi 2007).5 Or it can connote a cluster of ethical issues of particular interest to professionals in the fields of library science and information science (Buchanan and Henderson 2009). In the latter sense, “information ethics” refers to ethical concerns affecting the free flow of, and unfettered access to, information, which include issues such as library censorship and intellectual freedom. (These issues are examined in Chapter 9.) Our analysis of cyberethics issues in this text, however, is not limited to controversies often considered under the heading “information ethics.”

Given the wide range of moral issues examined in this book, the term “cyberethics” is also more comprehensive, and more appropriate, than “Internet ethics.” Although many of the issues considered under the heading cyberethics often pertain to the Internet, some issues examined in this textbook do not involve the Internet per se—for example, issues associated with computerized monitoring in the workplace, with professional responsibility for designing reliable computer hardware and software systems, and with the implications of cybertechnology for gender and race. We examine ethical issues that cut across the spectrum of devices and networked communication systems comprising cybertechnology, from stand-alone computers to networked systems.

Finally, we should note that some issues in the emerging fields of “agent ethics,” “bot ethics,” “robo-ethics,” or what Wallach and Allen (2009) call “machine ethics,” overlap with a cluster of concerns examined under the heading of cyberethics. Wallach and Allen define machine ethics as a field that expands upon traditional computer ethics because it shifts the main area of focus away from “what people do with computers to questions about what machines do by themselves.” It also focuses on questions having to do with whether computers can be autonomous agents capable of making good moral decisions. Research in machine ethics overlaps with the work of interdisciplinary researchers in the field of artificial intelligence (AI).6 We examine some aspects of this emerging field (or subfield of cyberethics) in Chapters 11 and 12.

c 1.2 THE CYBERETHICS EVOLUTION: FOUR DEVELOPMENTAL PHASES IN CYBERTECHNOLOGY

In describing the key evolutionary phases of cybertechnology and cyberethics, we begin by noting that the meaning of “computer” has evolved significantly since the 1940s. If you were to look up the meaning of that word in a dictionary written before World War II, you would most likely discover that a computer was defined as a person who calculated numbers. In the time period immediately following World War II, the term “computer” came to be identified with a (calculating) machine as opposed to a person (who calculated).7 By the 1980s, however, computers had shrunk in size considerably and they were beginning to be understood more in terms of desktop machines (that manipulated symbols as well as numbers), or as a new kind of medium for communication, rather than simply as machines that crunch numbers. As computers became increasingly connected to one another, they came to be associated with metaphors such as the “information superhighway” and cyberspace; today, many ordinary users tend to think about computers in terms of various Internet- and Web-based applications made possible by cybertechnology.

In response to some social and ethical issues that were anticipated in connection with the use of electronic computers, the field that we now call cyberethics had its informal and humble beginnings in the late 1940s. It is interesting to note that during this period, when ENIAC (Electronic Numerical Integrator and Calculator), the first electronic computer, developed at the University of Pennsylvania, became operational in 1946, some analysts confidently predicted that no more than five or six computers would ever need to be built. It is also interesting to point out that during this same period, a few insightful thinkers had already begun to describe some social and ethical concerns that would likely arise in connection with computing and cybertechnology.8 Although still a relatively young academic field, cyberethics has now matured to a point where several articles about its historical development have appeared in books and scholarly journals. For our purposes, the evolution of cyberethics can be summarized in four distinct technological phases.9

Phase 1 (1950s and 1960s)

In Phase 1, computing technology consisted mainly of huge mainframe computers, such as ENIAC, that were “unconnected” and thus existed as stand-alone machines. One set of ethical and social questions raised during this phase had to do with the impact of computing machines as “giant brains.” Today, we might associate these kinds of questions with the field of artificial intelligence (or AI). The following kinds of questions were introduced in Phase 1: Can machines think? If so, should we invent thinking machines? If machines can be intelligent entities, what does this mean for our sense of self? What does it mean to be human?

Another set of ethical and social concerns that arose during Phase 1 could be catalogued under the heading of privacy threats and the fear of Big Brother. For example, some people in the United States feared that the federal government would set up a national database in which extensive amounts of personal information about its citizens would be stored as electronic records. A strong centralized government could then use that information to monitor and control the actions of ordinary citizens. Although networked computers had not yet come on to the scene, work on the ARPANET—the Internet’s predecessor, which was funded by an agency in the United States Defense Department—began during this phase, in the 1960s.

Phase 2 (1970s and 1980s)

In Phase 2, computing machines and communication devices in the commercial sector began to converge. This convergence, in turn, introduced an era of computer/communications networks. Mainframe computers, minicomputers, microcomputers, and personal computers could now be linked together by way of one or more privately owned computer networks such as LANs and WANs (see Section 1.1.1), and information could readily be exchanged between and among databases accessible to networked computers.

Ethical issues associated with this phase of computing included concerns about personal privacy, intellectual property, and computer crime. Privacy concerns, which had emerged during Phase 1 because of worries about the amount of personal information that could be collected by government agencies and stored in a centralized government-owned database, were exacerbated because electronic records containing personal and confidential information could now also easily be exchanged between two or more commercial databases in the private sector. Concerns affecting intellectual property and proprietary information also emerged during this phase because personal (desktop) computers could be used to duplicate proprietary software programs. And concerns associated with computer crime appeared during this phase because individuals could now use computing devices, including remote computer terminals, to break into and disrupt the computer systems of large organizations.

Phase 3 (1990–Present)

During Phase 3, the Internet era, availability of Internet access to the general public has increased significantly. This was facilitated, in no small part, by the development and phenomenal growth of the World Wide Web in the 1990s. The proliferation of Internet- and Web-based technologies has contributed to some additional ethical concerns involving computing technology; for example, issues of free speech, anonymity, jurisdiction, and trust have been hotly disputed during this phase. Should Internet users be free to post any messages they wish on publicly accessible Web sites or even on their own personal Web pages—i.e., is that a “right” that is protected by free speech or freedom of expression? Should users be permitted to post anonymous messages on Web pages, or even be allowed to navigate the Web anonymously or under the cover of a pseudonym?

Issues of jurisdiction also arose because there are no clear national or geographical boundaries in cyberspace; if a crime occurs on the Internet, it is not always clear where—i.e., in which legal jurisdiction—it took place and thus it is unclear where it should be prosecuted. And as e-commerce emerged during this phase, potential consumers initially had concerns about trusting online businesses with their financial and personal information. Other ethical and social concerns that arose during Phase 3 include disputes about the public vs. private aspects of personal information that has become increasingly available on the Internet. Concerns of this type have been exacerbated by the amount of personal information included on social networking sites, such as Facebook and Twitter, and on other kinds of interactive Web-based forums (made possible by “Web 2.0” technology).

We should note that during Phase 3, both the interfaces used to interact with computer technology and the devices used to “house” it were still much the same as in Phases 1 and 2. A computer was still essentially a “box,” i.e., a CPU, with one or more peripheral devices, such as a video screen, keyboard, and mouse, serving as interfaces to that box. And computers were still viewed as devices essentially external to humans, as things or objects “out there.” As cybertechnology continues to evolve, however, it may no longer make sense to try to understand computers simply in terms of objects or devices that are necessarily external to us. Instead, computers will likely become more and more a part of who or what we are as human beings. For example, James Moor (2005) notes that computing devices will soon be a part of our clothing and even our bodies. This brings us to Phase 4.

Phase 4 (Present–Near Future)

Presently we are on the threshold of Phase 4, a point at which we have begun to experience an unprecedented level of convergence of technologies. We have already witnessed aspects of technological convergence beginning in Phase 2, where the integration of computing and communication devices resulted in privately owned networked systems, as we noted above. And in Phase 3, the Internet era, we briefly described the convergence of text, video, and sound technologies on the Web, and we noted how the computer began to be viewed much more as a new kind of medium than as a conventional type of machine. The convergence of information technology and biotechnology in recent years has resulted in the emerging fields of bioinformatics and computational genomics; this has also caused some analysts to question whether computers of the future will still be silicon-based or whether some may also possibly be made of biological materials. Additionally, biochip implant technology, which has been enhanced by developments in AI research (described in Chapter 11), has led some to predict that in the not-too-distant future it may become difficult for us to separate certain aspects of our biology from our technology.

Today, computers are also becoming ubiquitous or pervasive; i.e., they are “everywhere” and they permeate both our workplace and our recreational environments. Many of the objects that we encounter in these environments are also beginning to exhibit what Philip Brey (2005) and others call “ambient intelligence,” which enables “smart objects” to be connected to one another via wireless technology. Some consider radio frequency identification (RFID) technology (described in detail in Chapter 5) to be the first step in what is now referred to as pervasive or ubiquitous computing (described in detail in Chapter 12).

What other kinds of technological changes should we anticipate as research and development continues in Phase 4? For one thing, computing devices will likely continue to become more and more indistinguishable from many kinds of noncomputing devices.


For another thing, a computer may no longer typically be conceived of as a distinct device or object with which users interact via an explicit interface such as a keyboard, mouse, and video display. We are now beginning to conceive of computers and cybertechnology in drastically different ways. Consider also that computers are becoming less visible—as computers and electronic devices continue to be miniaturized and integrated/embedded in objects, they are also beginning to “disappear” or to become “invisible” as distinct entities.

Many analysts predict that computers will become increasingly smaller in size, ultimately achieving the nano scale. (We examine some ethical implications of nanotechnology and nanocomputing in Chapter 12.) Many also predict that aspects of nanotechnology, biotechnology, and information technology will continue to converge. However, we will not speculate any further in this chapter about either the future of cybertechnology or the future of cyberethics. The purpose of our brief description of the four phases of cybertechnology mentioned here is to provide a historical context for understanding the origin and evolution of at least some of the ethical concerns affecting cybertechnology that we will examine in this book.

Table 1.1 summarizes key aspects of each phase in the development of cyberethics as a field of applied ethics.

c 1.3 ARE CYBERETHICS ISSUES UNIQUE ETHICAL ISSUES?

Few would dispute the claim that the use of cybertechnology has had a significant impact on our moral, legal, and social systems. Some also believe, however, that cybertechnology has introduced new and unique moral problems. Are any of these problems genuinely unique moral issues? There are two schools of thought regarding this question.

Consider once again the three scenarios included in the chapter’s opening section. Have any new ethical issues been introduced in these scenarios, or are the issues that arise in each merely examples of existing ethical issues that have been exacerbated in some sense by new technologies used to communicate and disseminate personal information (such as in blogs in the Washingtonienne and Twitter scenarios), or to harass and bully someone (as in the Meier scenario)? To see whether any new ethical issues arise because of cybertechnology in general, consider once again the cyberbullying scenario involving Megan Meier. Here, one could argue that there is nothing really new or unique in the bullying incident that led to Meier’s death, because in the final analysis “bullying is bullying” and “crime is crime.” According to this line of reasoning, whether someone happens to use cybertechnology to assist in carrying out a particular bullying incident is irrelevant. One might further argue that there is nothing special about cyberbullying incidents in general, regardless of whether or not they also result in a victim’s death. Proponents of this position could point to the fact that bullying activities are hardly new, since these kinds of activities have been carried out in the off-line world for quite some time. So, cybertechnology might be seen simply as the latest in a series of tools or techniques that are now available to aid bullies in carrying out their activities.

TABLE 1.1 Summary of Four Phases of Cyberethics

Phase 1 (1950s–1960s)
Technological features: stand-alone machines (large mainframe computers)
Associated issues: artificial intelligence (AI), database privacy (“Big Brother”)

Phase 2 (1970s–1980s)
Technological features: minicomputers and the ARPANET; desktop computers interconnected via privately owned networks
Associated issues: issues from Phase 1 plus concerns involving intellectual property and software piracy, computer crime, and communications privacy

Phase 3 (1990s–present)
Technological features: Internet, World Wide Web, and early “Web 2.0” applications, environments, and forums
Associated issues: issues from Phases 1 and 2 plus concerns about free speech, anonymity, legal jurisdiction, behavioral norms in virtual communities

Phase 4 (present to near future)
Technological features: convergence of information and communication technologies with nanotechnology and biotechnology; increasing use of autonomous systems
Associated issues: issues from Phases 1–3 plus concerns about artificial electronic agents (“bots”) with decision-making capabilities, and developments in nanocomputing, bioinformatics, and ambient intelligence


Alternatively, some argue that forms of behavior made possible by cybertechnology have indeed raised either new or special ethical problems. Using the example of cyberbullying to support this view, one might point out the relative ease with which bullying activities can now be carried out. Simply by using a computing device with Internet access, one can bully others without having to leave the comfort of his or her home. A cyberbully can, as Lori Drew did, also easily deceive her victim under the cloak of an alias, or pseudonym. The fact that a user can bully a victim with relative anonymity makes it much more difficult for law enforcement agents to track down a bully, either before or after that bully has caused harm to the victim(s).

Also consider issues having to do with scope and scale: an Internet user can bully multiple victims simultaneously via the use of multiple “windows” on his or her computer screen or electronic device. The bully can also harass victims who happen to live in states and nations that are geographically distant from the bully. Bullying activities can now occur on a scale or order of magnitude that could not have been realized in the pre-Internet era. More individuals can now engage in bullying behavior because cybertechnology has made it easy, and, as a result, significantly more people can now become the victims of bullies.

But do these factors support the claim that cybertechnology has introduced any new and unique ethical issues? Walter Maner (2004) argues that computer use has generated a series of ethical issues that (a) did not exist before the advent of computing, and (b) could not have existed if computer technology had not been invented.10 Is there any evidence to support Maner’s claim? Next we consider two scenarios that, initially at least, might suggest that some new ethical issues have been generated by the use of cybertechnology.

c SCENARIO 1–4: Developing the Code for a Computerized Weapon System

Sally Bright, a recent graduate from Technical University, has accepted a position as a software engineer for a company called CyberDefense, Inc. This company has a contract with the U.S. Defense Department to develop and deliver applications for the U.S. military. When Sally reports to work on her first day, she is assigned to a controversial project that is developing the software for a computer system designed to deliver chemical weapons to and from remote locations. Sally is conflicted about whether she can, given her personal values, agree to work on this kind of weapon-delivery system, which would not have been possible without computer technology. &


Is the conflict that Sally faces in this particular scenario one that is new or unique because of computers and cybertechnology? One might argue that the ethical concerns surrounding Sally’s choices are unique because they never would have arisen had it not been for the invention of computer technology. In one sense, it is true that ethical concerns having to do with whether or not one should participate in developing a certain kind of computer system did not exist before the advent of computing technology. However, it is true only in a trivial sense. Consider that long before computing technologies were available, engineers were confronted with ethical choices involving whether or not to participate in the design and development of certain kinds of controversial technological systems. Prior to the computer era, for example, they had to make decisions involving the design of aircraft intended to deliver conventional as well as nuclear bombs. So, is the fact that certain technological systems happen to include the use of computer software or computer hardware components morally relevant in this scenario? Have any new or unique ethical issues, in a nontrivial sense of “unique,” been generated here? Based on our brief analysis of this scenario, there does not seem to be sufficient evidence to substantiate the claim that one or more new ethical issues have been introduced.

c SCENARIO 1–5: Digital Piracy

Harry Flick is an undergraduate student at Pleasantville State College. In many ways, Harry’s interests are similar to those of typical students who attend his college. But Harry is also very fond of classic movies, especially films that were made before 1950. DVD copies of these movies are difficult to find; those that are available tend to be expensive to purchase, and very few are available for loan at libraries. One day, Harry discovers a Web site that has several classic films (in digital form) freely available for downloading. Since the movies are still protected by copyright, however, Harry has some concerns about whether it would be permissible for him to download any of these films (even if only for private use). &

Is Harry’s ethical conflict one that is unique to computers and cybertechnology? Are the ethical issues surrounding Harry’s situation new and thus unique to cybertechnology, because the practice of downloading digital media from the Internet—a practice that many in the movie and recording industries call “digital piracy”—would not have been possible if computer technology had not been invented in the first place? If so, this claim would, once again, seem to be true only in a trivial sense. The issue of piracy itself as a moral concern existed before the widespread use of computer technology. For example, people were able to “pirate” audio cassette tapes simply by using two or more analog tape recorders to make unauthorized copies of proprietary material. The important point to note here is that moral issues surrounding the pirating of audio cassette tapes are, at bottom, the same issues underlying the pirating of digital media. They arise in each case because, fundamentally, the behavior associated with unauthorized copying raises moral concerns about property, fairness, rights, and so forth. So, as in Scenario 1–4, there seems to be insufficient evidence to suggest that the ethical issues associated with digital piracy are either new or unique in some nontrivial sense.

1.3.1 Distinguishing between Unique Technological Features and Unique Ethical Issues

Based on our analysis of the two scenarios in the preceding section, we might conclude that there is nothing new or special about the kinds of moral issues associated with cybertechnology. In fact, some philosophers have argued that we have the same old ethical issues reappearing in a new guise. But is such a view accurate?




If we focus primarily on the moral issues themselves as moral issues, it would seem that perhaps there is nothing new. Cyber-related concerns involving privacy, property, free speech, etc., can be understood as specific expressions of core (traditional) moral notions, such as autonomy, fairness, justice, responsibility, and respect for persons. However, if instead we focus more closely on cybertechnology itself, we see that there are some interesting and possibly unique features that distinguish this technology from earlier technologies. Maner has argued that computing technology is “uniquely fast,” “uniquely complex,” and “uniquely coded.” But even if cybertechnology has these unique features, does it necessarily follow that any of the moral questions associated with that technology must also be unique? One would commit a logical fallacy if he or she concluded that cyberethics issues must be unique simply because certain features or aspects of cybertechnology are unique. The fallacy can be expressed in the following way:

PREMISE 1. Cybertechnology has some unique technological features.

PREMISE 2. Cybertechnology has generated some ethical concerns.

CONCLUSION. At least some ethical concerns generated by cybertechnology must be unique ethical concerns.

As we will see in Chapter 3, this reasoning is fallacious because it assumes that characteristics that apply to a certain technology must also apply to ethical issues generated by that technology.11

1.3.2 An Alternative Strategy for Analyzing the Debate about the Uniqueness of Cyberethics Issues

Although it may be difficult to prove conclusively whether or not cybertechnology has generated any new or unique ethical issues, we must not rule out the possibility that many of the controversies associated with this technology warrant special consideration from an ethical perspective. But what, exactly, is so different about issues involving computers and cybertechnology that makes them deserving of special moral consideration? James Moor (2007) points out that computer technology, unlike most previous technologies, is “logically malleable”; it can be shaped and molded to perform a variety of functions. Because noncomputer technologies are typically designed to perform some particular function or task, they lack the universal or general-purpose characteristics that computing technologies possess. For example, microwave ovens and DVD players are technological devices that have been designed to perform specific tasks. Microwave ovens cannot be used to view DVDs, and DVD players cannot be used to defrost, cook, or reheat food. However, a computer, depending on the software used, can perform a range of diverse tasks: it can be instructed to behave as a video game, a word processor, a spreadsheet, a medium to send and receive e-mail messages, or an interface to Web sites. Hence, cybertechnology is extremely malleable.


Moor points out that because of its logical malleability, cybertechnology can generate “new possibilities for human action” that appear to be limitless. Some of these possibilities for action generate what Moor calls “policy vacuums,” because we have no explicit policies or laws to guide new choices made possible by computer technology. These vacuums, in turn, need to be filled with either new or revised policies. But what, exactly, does Moor mean by “policy”? Moor (2004) defines policies as “rules of conduct, ranging from formal laws to informal, implicit guidelines for actions.”12 Viewing computer ethics issues in terms of policies is useful, Moor believes, because policies have the right level of generality to consider when we evaluate the morality of conduct. As noted, policies can range from formal laws to informal guidelines. Moor also notes that policies can have “justified exemptions” because they are not absolute; yet policies usually imply a certain “level of obligation” within their contexts.

What action is required to resolve a policy vacuum when it is discovered? Initially, a solution to this problem might seem quite simple and straightforward. We might assume that all we need to do is identify the vacuums that have been generated and then fill them with policies and laws. However, this will not always work, because sometimes the new possibilities for human action generated by cybertechnology also introduce “conceptual vacuums,” or what Moor calls “conceptual muddles.” In these cases, we must first eliminate the muddles by clearing up certain conceptual confusions before we can frame coherent policies and laws.

1.3.3 A Policy Vacuum in Duplicating Computer Software

A critical policy vacuum, which also involved a conceptual muddle, emerged with the advent of personal desktop computers (henceforth referred to generically as PCs). The particular vacuum arose because of the controversy surrounding the copying of software. When PCs became commercially available, many users discovered that they could easily duplicate software programs. They found that they could use their PCs to make copies of proprietary computer programs such as word processing programs, spreadsheets, and video games. Some users assumed that in making copies of these programs they were doing nothing wrong. At that time there were no explicit laws to regulate the subsequent use and distribution of software programs once they had been legally purchased by an individual or by an institution. Although it might be difficult to imagine today, at one time software was not clearly protected by either copyright law or the patent process.

Of course, there were clear laws and policies regarding the theft of physical property. Such laws and policies protected against the theft of personal computers as well as against the theft of a physical disk drive residing in a PC on which the proprietary software programs could easily be duplicated. However, this was not the case with laws and policies regarding the “theft,” or unauthorized copying, of software programs that run on computers. Although there were intellectual property laws in place, it had not been determined that software was or should be protected by intellectual property (IP) law: It was unclear whether software should be understood as an idea (which is not protected by IP law), as a form of writing protected by copyright law, or as a set of machine instructions protected by patents. Consequently, many entrepreneurs who designed and manufactured software programs argued for explicit legal protection for their products. A policy vacuum arose with respect to duplicating software: Could a user make a backup copy of a


program for herself? Could she share it with a friend? Could she give the original program to a friend? A clear policy was needed to fill this vacuum.

Before we can fill the vacuum regarding software duplication with a coherent policy or law, we first have to resolve a certain conceptual muddle by answering the question: what, exactly, is computer software? Until we can clarify the concept of software itself, we cannot frame a coherent policy as to whether or not we should allow the free duplication of software. Currently there is still much confusion, as well as considerable controversy, as to how laws concerning the exchange (and, in effect, duplication) of proprietary software over the Internet should be framed.

In Moor’s scheme, how one resolves the conceptual muddle (or decides the conceptual issue) can have a significant effect on which kinds of policies are acceptable. Getting clear about the conceptual issues is an important first step, but it is not a sufficient condition for being able to formulate a policy. Finally, the justification of a policy requires much factual knowledge, as well as an understanding of normative and ethical principles.

Consider the controversies surrounding the original Napster Web site and the Recording Industry Association of America (RIAA), in the late 1990s, regarding the free exchange of music over the Internet. Proponents on both sides of this dispute experienced difficulties in making convincing arguments for their respective positions due, in no small part, to confusion regarding the nature and the status of information (digitized music in the form of MP3 files) being exchanged between Internet users and the technology (P2P systems) that facilitated this exchange. Although cybertechnology has made it possible to exchange MP3 files, there is still debate, and arguably a great deal of confusion as well, about whether doing so should necessarily be illegal. Until the conceptual confusions or muddles underlying arguments used in the Napster vs. RIAA case in particular, and about the nature of P2P file-sharing systems in general, are resolved, it is difficult to frame an adequate policy regarding the exchange of MP3 files in P2P transactions.

How does Moor’s insight that cyberethics issues need to be analyzed in terms of potential policy vacuums and conceptual muddles contribute to our earlier question as to whether there is anything unique or special about cyberethics? First, we should note that Moor takes no explicit stance on the question as to whether any cyberethics issues are unique. However, he does argue that cyberethics issues deserve special consideration because of the nature of cybertechnology itself, which is significantly different from alternative technologies in terms of the vast number of policy vacuums it generates (Moor 2001). So, even though the ethical issues associated with cybertechnology—that is, issues involving privacy, intellectual property, and so forth—might not be new or unique, they nonetheless can put significant pressure on our conceptual frameworks and normative reasoning to a degree not found in other areas of applied ethics. Thus it would seem to follow, on Moor’s line of reasoning, that an independent field of applied ethics that focuses on ethical aspects of cybertechnology is indeed justified.

c 1.4 CYBERETHICS AS A BRANCH OF APPLIED ETHICS: THREE DISTINCT PERSPECTIVES

Cyberethics, as a field of study, can be understood as a branch of applied ethics. Applied ethics, as opposed to theoretical ethics, examines practical ethical issues. It does so by analyzing those issues from the vantage point of one or more ethical theories. Whereas


ethical theory is concerned with establishing logically coherent and consistent criteria in the form of standards and rules for evaluating moral problems, the principal aim of applied ethics is to analyze specific moral problems themselves through the application of ethical theory. As such, those working in fields of applied ethics, or practical ethics, are not inclined to debate some of the finer points of individual ethical theories. Instead, their interest in ethical theory is primarily with how one or more theories can be successfully applied to the analysis of specific moral problems that they happen to be investigating.

For an example of a practical ethics issue involving cybertechnology, consider again the original Napster controversy. Recall that at the heart of this dispute is the question: should proprietary information, in a digital format known as MP3 files, be allowed to be exchanged freely over the Internet? Those advocating the free exchange of MP3 files could appeal to one or more ethical theories to support their position. For example, they might appeal to utilitarianism, an ethical theory that is based on the principle that our policies and laws should be such that they produce the greatest good (happiness) for the greatest number of people. A utilitarian might argue that MP3 files should be distributed freely over the Internet because the consequences of allowing such a practice would make the majority of users happy and would thus contribute to the greatest good for the greatest number of persons affected.

Others might argue that allowing proprietary material to be exchanged freely over the Internet would violate the rights of those who created, and who legally own, the material. Proponents of this view could appeal to a nonutilitarian principle or theory that is grounded in the notion of respecting the rights of individuals. According to this view, an important consideration for an ethical policy is that it protects the rights of individuals—in this case, the rights of those who legally own the proprietary material in question—irrespective of the happiness that might or might not result for the majority of Internet users.

Notice that in our analysis of the dispute over the exchange of MP3 files on the Internet (in the Napster case), the application of two different ethical theories yielded two very different answers to the question of which policy or course of action ought to be adopted. Sometimes, however, the application of different ethical theories to a particular problem will yield similar solutions. We will examine in detail some standard ethical theories, including utilitarianism, in Chapter 2. Our main concern in this textbook is with applied, or practical, ethics issues, and not with ethical theory per se. Wherever appropriate, however, ethical theory will be used to inform our analysis of moral issues involving cybertechnology.

Understanding cyberethics as a field of applied ethics that examines moral issues pertaining to cybertechnology is an important first step. But much more needs to be said about the perspectives that interdisciplinary researchers bring to their analysis of the issues that make up this relatively new field. Most scholars and professionals conducting research in this field of applied ethics have proceeded from one of three different perspectives—professional ethics, philosophical ethics, or sociological/descriptive ethics.13 Gaining a clearer understanding of what is meant by each perspective is useful at this point.

1.4.1 Perspective #1: Cyberethics as a Field of Professional Ethics

According to those who view cyberethics primarily as a branch of professional ethics, the field can best be understood as identifying and analyzing issues of ethical responsibility


for computer and information-technology (IT) professionals. Among the cyberethics issues considered from this perspective are those having to do with the computer/IT professional’s role in designing, developing, and maintaining computer hardware and software systems. For example, suppose a programmer discovers that a software product she has been working on is about to be released for sale to the public even though that product is unreliable because it contains “buggy” software. Should she blow the whistle?

Those who see cyberethics essentially as a branch of professional ethics would likely draw on analogies from other professional fields, such as medicine and law. They would point out that in medical ethics and legal ethics, the principal focus of analysis is on issues of moral responsibility that affect individuals as members of these professions. By analogy, they would go on to argue that the same rationale should apply to the field of cyberethics—i.e., the primary, and possibly even exclusive, focus of cyberethics should be on issues of moral responsibility that affect computer/IT professionals. Don Gotterbarn (1995) can be interpreted as defending a version of this position when he asserts

The only way to make sense of ‘Computer Ethics’ is to narrow its focus to those actions that are within the control of the individual moral computer professional.14 [Italics Gotterbarn]

So, in this passage, Gotterbarn suggests that the principal focus of computer ethics should be on issues of professional responsibility and not on the broader moral and social implications of that technology.

The analogies Gotterbarn uses to defend his argument are instructive. He notes, for example, that in the past, certain technologies have profoundly altered our lives, especially in the ways that many of us conduct our day-to-day affairs. Consider three such technologies: the printing press, the automobile, and the airplane. Despite the significant and perhaps revolutionary effects of each of these technologies, we do not have “printing press ethics,” “automobile ethics,” or “airplane ethics.” So why, Gotterbarn asks, should we have a field of computer ethics apart from the study of those ethical issues that affect the professionals responsible for the design, development, and delivery of computer systems? In other words, Gotterbarn suggests that it is not the business of computer ethics to examine ethical issues other than those that affect computer professionals.

Professional Ethics and the Computer Science Practitioner

Gotterbarn’s view about what the proper focus of computer ethics research and inquiry should be is shared by other practitioners in the discipline of computer science. However, some of those practitioners, as well as many philosophers and social scientists, believe that Gotterbarn’s conception of computer ethics as simply a field of professional ethics is too narrow. In fact, some who identify themselves as computer professionals or as “information professionals,” and who are otherwise sympathetic to Gotterbarn’s overall attention to professional ethics issues, believe that a broader model is needed. For example, Elizabeth Buchanan (2004), in describing the importance of analyzing ethical issues in the “information professions,” suggests that some nonprofessional ethics issues must also be examined because of the significant impact they have on noninformation professionals, including ordinary computer users. Consider that these issues can also affect people who have never used a computer.

Of course, Buchanan’s category of “informational professional” is considerably broader in scope than Gotterbarn’s notion of computer professional. But the central


point of her argument still holds, especially in the era of the Internet and the World Wide Web. In the computing era preceding the Web, Gotterbarn’s conception of computer ethics as a field limited to the study of ethical issues affecting computer professionals seemed plausible. Now, computers are virtually everywhere, and the ethical issues generated by certain uses of computers and cybertechnology affect virtually everyone, professional and nonprofessional alike.

Despite the critiques leveled against Gotterbarn’s conception of the field, his position may turn out to be the most plausible of the three models we consider. Because of the social impact that computer and Internet technologies have had during the past three decades, we have tended to identify many of the ethical issues associated with these technologies, especially concerns affecting privacy and intellectual property, as computer ethics issues. But Deborah Johnson (2000) believes that in the future, computer-related ethical issues, such as privacy and property (that are currently associated with the field of computer ethics), may become part of what she calls “ordinary ethics.” In fact, Johnson has suggested that computer ethics, as a separate field of applied ethics, may eventually “go away.” However, even if Johnson’s prediction turns out to be correct, computer ethics as a field that examines ethical issues affecting responsibility for computer professionals will, in all likelihood, still be needed. In this sense, then, Gotterbarn’s original model of computer ethics might turn out to be the correct one in the long term.

Applying the Professional Ethics Model to Specific Scenarios

It is fairly easy to see how the professional ethics model can be used to analyze issues involving professional responsibility that directly impact computer/IT professionals. For example, issues concerned with the development and implementation of critical software would fit closely with the professional model. But can that model be extended to include cases that may only affect computer professionals indirectly?

We can ask how some of the issues in the scenarios described earlier in this chapter might be analyzed from the perspective of professional ethics. Consider the Washingtonienne scenario, which initially might seem to be outside the purview of computer ethics vis-à-vis professional ethics. However, some interesting and controversial questions arise that can have implications for computer/IT professionals as well as for professional bloggers. For example, should Internet service providers (ISPs) and SNSs hire programmers to design features that would support anonymity for individuals who post certain kinds of personal information (e.g., personal diaries such as Cutler’s) to blogs aimed at sharing that information with a limited audience, as opposed to blogs whose content is intended to be available to the entire “online world”? Also, should providers of online services be encouraged to include applications that enable bloggers to delete, permanently, some embarrassing information they had entered in a blog in the past—e.g., information that could threaten their current employment (as in the case of Jessica Cutler) or harm their chances of future employment (e.g., if information they had previously posted to a blog were discovered by a prospective employer)? Consider the example of information about oneself that a person might have carelessly posted on Facebook when he or she was a first-year college student; the question of whether an individual’s remarks entered in an online forum should remain there indefinitely (or should be stored in that forum’s database in perpetuity) is one that is now hotly debated.

Another aspect of the professional ethics model as applied to blogs is whether bloggers themselves should be expected to comply with a professional “blogger code of


ethics,” as some have proposed. For example, there are ethical codes of conduct that professional journalists are expected to observe. (We examine professional codes of ethics in detail in Chapter 4.)

Also, consider the Megan Meier scenario. From the vantage point of professional ethics, one might argue that cyberbullying in general and the death of Meier in particular are not the kinds of concerns that are the proper business of computer ethics. We saw that someone such as Gotterbarn might ask why a crime that happened to involve the use of a computer should necessarily be construed as an issue for computer ethics. For example, he notes that a murder that happened to be committed with a surgeon’s scalpel would not be considered an issue for medical ethics. While murders involving the use of a computer, like all murders, are serious moral and legal problems, Gotterbarn seems to imply that they are not examples of genuine computer ethics issues. However, Gotterbarn and the advocates for his position are acutely aware that software developed by engineers can have implications that extend far beyond the computing/IT profession itself.

Many of the ethical issues discussed in this book have implications for computer/IT professionals, either directly or indirectly. Issues that have a direct impact on computer professionals in general, and software engineers in particular, are examined in Chapter 4, which is dedicated to professional ethics. Computer science students and computer professionals will likely also want to assess some of the indirect implications that issues examined in Chapters 5 through 12 also have for the computing profession.

1.4.2 Perspective #2: Cyberethics as a Field of Philosophical Ethics

What, exactly, is philosophical ethics and how is it different from professional ethics? Since philosophical methods and tools are also used to analyze issues involving professional ethics, any attempt to distinguish between the two might seem arbitrary, perhaps even odd. For our purposes, however, a useful distinction can be drawn between the two fields because of the approach each takes in addressing ethical issues. Whereas professional ethics issues typically involve concerns of responsibility and obligation affecting individuals as members of a certain profession, philosophical ethics issues include broader concerns—social policies as well as individual behavior—that affect virtually everyone in society. Cybertechnology-related moral issues involving privacy, security, property, and free speech can affect everyone, including individuals who have never even used a computer.

To appreciate the perspective of cyberethics as a branch of philosophical ethics, consider James Moor’s classic definition of the field. According to Moor (2007), cyberethics, or what he calls “computer ethics,” is

the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology.15

Two points in Moor’s definition are worth examining more closely. First, computer ethics (i.e., what we call “cyberethics”) is concerned with the social impact of computers and cybertechnology in a broad sense, and not merely the impact of that technology for computer professionals. Secondly, this definition challenges us to reflect on the social impact of cybertechnology in a way that also requires a justification for our social policies.

Why is cyberethics, as a field of philosophical ethics dedicated to the study of ethical issues involving cybertechnology, warranted when there aren’t similar fields of applied ethics for other technologies? Recall our earlier discussion of Gotterbarn’s observation


that we do not have fields of applied ethics called “automobile ethics” or “airplane ethics,” even though automobile and airplane technologies have significantly affected our day-to-day lives. Moor could respond to Gotterbarn’s point by noting that the introduction of automobile and airplane technologies did not affect our social policies and norms in the same kinds of fundamental ways that computer technology has. Of course, we have had to modify and significantly revise certain laws and policies to accommodate the implementation of new kinds of transportation technologies. In the case of automobile technology, we had to extend, and in some cases modify, certain policies and laws previously used to regulate the flow of horse-drawn modes of transportation. And clearly, automobile and airplane technologies have revolutionized transportation, resulting in our ability to travel faster and farther than was possible in previous eras.

What has made the impact of computer technology significantly different from that of other modern technologies? We have already seen that for Moor, three factors contribute to this impact: logical malleability, policy vacuums, and conceptual muddles. Because cybertechnology is logically malleable, its uses often generate policy vacuums and conceptual muddles. In Section 1.3.2 we saw how certain kinds of conceptual muddles contributed to some of the confusion surrounding software piracy issues in general, and the Napster controversy in particular. What implications do these factors have for the standard methodology used by philosophers in the analysis of applied ethics issues?

Methodology and Philosophical Ethics

Philip Brey (2004) notes that the standard methodology used by philosophers to conduct research in applied ethics has three distinct stages in that an ethicist must

1. identify a particular controversial practice as a moral problem,

2. describe and analyze the problem by clarifying concepts and examining the factual data associated with that problem,

3. apply moral theories and principles in the deliberative process in order to reach a position about the particular moral issue.16

We have already noted (in Section 1.3) how the first two stages in this methodology can be applied to an analysis of ethical issues associated with digital piracy. We saw that, first, a practice involving the use of cybertechnology to “pirate” or make unauthorized copies of proprietary information was identified as morally controversial. At the second stage, the problem was analyzed in descriptive and contextual terms to clarify the practice and to situate it in a particular context. In the case of digital piracy, we saw that the concept of piracy could be analyzed in terms of moral issues involving theft and intellectual property theory. When we describe and analyze problems at this stage, we will want to be aware of and address any policy vacuums and conceptual muddles that are relevant.

At the third and final stage, the problem must be deliberated over in terms of moral principles (or theories) and logical arguments. Brey describes this stage in the method as the “deliberative process.” Here, various arguments are used to justify the application of particular moral principles to the issue under consideration. For example, issues involving digital piracy can be deliberated upon in terms of one or more standard ethical theories, such as utilitarianism (defined in Chapter 2).


Applying the Method of Philosophical Ethics to Specific Scenarios

To see how the philosophical ethics perspective of cyberethics can help us to analyze a cluster of moral issues affecting cybertechnology, we revisit the Washingtonienne, Twitter, and Meier scenarios introduced in the opening section of this chapter. In applying the philosophical ethics model to these scenarios, our first task is to identify one or more moral issues associated with each. We have already seen that these scenarios illustrate a wide range of ethical issues. For example, we saw that ethical issues associated with the Washingtonienne scenario include free speech, defamation, confidentiality, anonymity, and privacy with respect to blogs and blogging. But what kinds of policy vacuums and conceptual muddles, if any, arise in this case? For one thing, both the nature of a blog and the practices surrounding blogging are relatively new. Thus, not surprisingly, we have very few clear and explicit policies affecting the “blogosphere.” To consider why this is so, we begin by asking: what, exactly, is a blog? For example, is it similar to a newspaper or a periodical (such as the New York Times or TIME Magazine), which is held to standards of accuracy and truth? Or is a blog more like a tabloid (such as the National Enquirer), in which case there is little expectation of accuracy? Our answers to these questions may determine whether bloggers should be permitted to post anything they wish, without concern for the accuracy of any of their remarks that may be false or defamatory, or both. At present, there are no clear answers to these and related questions surrounding blogs. So, it would seem that explicit policies are needed to regulate blogs and bloggers. But it would also seem that before we can frame explicit policies for blogging, we first need to resolve some important conceptual muddles.17

Next, consider the Twitter scenario. Among the ethical issues identified in that scenario were concerns affecting intellectual property. We can now ask whether any policy vacuums and conceptual muddles were generated in that scenario. The answer would clearly seem to be “yes.” However, policy vacuums concerning intellectual property in the digital era are by no means new. For example, we noted earlier that the original Napster scenario introduced controversies with respect to sharing copyrighted information, in the form of proprietary MP3 files, online. The Twitter scenario, however, introduces issues that go beyond that kind of concern. Here, we have a dispute about who owns a list of names that can constitute both a company’s customer list and a blogger’s group of followers on a private SNS account. Should one party have exclusive ownership of this list of names? Can a list of names qualify as a “trade secret” for a corporation, as companies such as PhoneDog Media claim? If a blogger generates a list of followers while blogging for an employer, should that employer have exclusive rights to the list of names?

Finally, consider the Meier scenario. Did MySpace have a clear policy in place regarding the creation of user accounts on its forum when Lori Drew set up the “Josh Evans” account? Do/Should SNS users have an expectation of anonymity or pseudonymity when they set up an account? Should SNSs, such as MySpace and Facebook, bear some moral responsibility, or at least some legal liability, for harm that is caused to users of its service, especially when users of that SNS have set up accounts with fictitious names and profiles? It would appear that the Meier scenario illustrates how a clear and significant policy vacuum arose in the case of the rules governing acceptable behavior on SNSs. Fortunately, many SNSs now, following the tragic incident involving Meier, have clear and explicit policies that require one to disclose his or her true identity to the SNS before setting up an account on its forum.


1.4.3 Perspective #3: Cyberethics as a Field of Sociological/Descriptive Ethics

The two perspectives on cyberethics that we have examined thus far—professional ethics and philosophical ethics—can both be understood as normative inquiries into applied ethics issues. Normative inquiries or studies, which focus on evaluating and prescribing moral systems, can be contrasted with descriptive inquiries or studies. Descriptive ethics is, or aims to be, nonevaluative in approach; typically, it describes particular moral systems and sometimes also reports how members of various groups and cultures view particular moral issues. This kind of analysis of ethical and social issues is often used by sociologists and social scientists; hence, our use of the expression “sociological/descriptive perspective” to analyze this methodological framework.

Descriptive vs. Normative Inquiries

Whereas descriptive investigations provide us with information about what is the case, normative inquiries evaluate situations from the vantage point of questions having to do with what ought to be the case. Those who approach cyberethics from the perspective of descriptive ethics often describe sociological aspects of a particular moral issue, such as the social impact of a specific technology on a particular community or social group. For example, one way of analyzing moral issues surrounding the “digital divide” (examined in Chapter 10) is first to describe the problem in terms of its impact on various sociodemographic groups involving social class, race, and gender. We can investigate whether, in fact, fewer poor people, non-whites, and women have access to cybertechnology than wealthy and middle-class persons, whites, and men. In this case, the investigation is one that is basically descriptive in character. If we were then to inquire whether the lack of access to technology for some groups relative to others was unfair, we would be engaging in a normative inquiry. For example, a normative investigation of this issue would question whether certain groups should have more access to cybertechnology than they currently have. The following scenario illustrates an approach to a particular cyberethics issue via the perspective of sociological/descriptive ethics.

c SCENARIO 1–6: The Impact of Technology X on the Pleasantville Community

AEC Corporation, a company that employs 8,000 workers in Pleasantville, has decided to purchase and implement a new kind of computer/information technology, Technology X. The implementation of Technology X will likely have a significant impact for AEC’s employees in particular, as well as for Pleasantville in general. It is estimated that 3,000 jobs at AEC will be eliminated when the new technology is implemented during the next six months. &

Does the decision to implement Technology X pose a normative ethical problem for the AEC Corporation, as well as for Pleasantville? If we analyze the impact that Technology X has with respect to the number of jobs that are gained or lost, our investigation is essentially descriptive in nature. In reporting this phenomenon, we are simply describing or stating what is/is not at issue in this case. If, however, we argue that AEC either should or should not implement this new technology, then we make a claim that is normative (i.e., a claim about what ought/ought not to be the case). For example, one might argue that the new technology should not be implemented because it would displace workers and thus possibly violate certain contractual obligations that may exist between AEC and its employees. Alternatively, one might argue that


implementing Technology X would be acceptable provided that certain factors are taken into consideration in determining which workers would lose their jobs. For example, suppose that in the process of eliminating jobs, older workers and minority employees would stand to be disproportionately affected. In this case, critics might argue that a fairer system should be used.

Our initial account of the impact of Technology X’s implementation for Pleasantville simply reported some descriptive information about the number of jobs that would likely be lost by employees at AEC Corporation, which has sociological implications. As our analysis of this scenario continued, however, we did much more than merely describe what the impact was; we also evaluated the impact for AEC’s employees in terms of what we believed ought to have been done. In doing so, we shifted from an analysis based on claims that were merely descriptive to an analysis in which some claims were also normative.

Some Benefits of Using the Sociological/Descriptive Approach to Analyze Cyberethics Issues

Why is the examination of cyberethics issues from the sociological/descriptive ethics perspective useful? Huff and Finholt (1994) suggest that focusing on descriptive aspects of social issues can help us to better understand many of the normative features and implications. In other words, when we understand the descriptive features of the social effects of a particular technology, the normative ethical questions become clearer. So, Huff and Finholt believe that analyzing the social impact of cybertechnology from a sociological/descriptive perspective can better prepare us for our subsequent analysis of practical ethical issues affecting our system of policies and laws.

We have already noted that virtually all of our social institutions, from work to education to government to finance, have been affected by cybertechnology. This technology has also had significant impacts on different sociodemographic sectors and segments of our population. The descriptive information that we gather about these groups can provide important information that, in turn, can inform legislators and policy makers who are drafting and revising laws in response to the effects of cybertechnology.

From the perspective of sociological/descriptive ethics, we can also better examine the impact that cybertechnology has on our understanding of concepts such as community and individuality. We can ask, for instance, whether certain developments in social networking technologies used in Twitter and Facebook have affected the way that we conceive traditional notions such as “community” and “neighbor.” Is a community essentially a group of individuals with similar interests, or perhaps a similar ideology, irrespective of geographical limitations? Is national identity something that is, or may soon become, anachronistic? While these kinds of questions and issues in and of themselves are more correctly conceived as descriptive rather than normative concerns, they can have significant normative implications for our moral and legal systems as well. Much more will be said about the relationship between descriptive and normative approaches to analyzing ethical issues in Chapters 10 and 11, where we examine the impact of cybertechnology on sociodemographic groups and on some of our social and political institutions.

Applying the Sociological/Descriptive Ethics Approach to Specific Scenarios

Consider how someone approaching cyberethics issues from the perspective of sociological/descriptive ethics might analyze the Washingtonienne and Meier scenarios,


described above. In the Washingtonienne case, the focus might be on gathering sociodemographic and socioeconomic data pertaining to the kinds of individuals who are likely to view and interact in blogs. For example, some social scientists might consider the income and educational levels of bloggers, as compared to individuals who engage in alternative kinds of online activities or who do not use the Internet at all. Others might be interested in determining which kinds of users view their own blogs and online postings as simply an online outlet for the kind of personal information that traditionally was included only in one’s (physical) diary. (Jessica Cutler’s behavior seemed to fit this category.) Social and behavioral scientists might further inquire into why some individuals seem to display little-to-no concern about posting intimate details of their romantic and sexual encounters to online forums that could be read, potentially at least, by millions of people. They might also question why some bloggers (as well as ordinary users of SNSs such as Facebook and Twitter) are so eager to post personal information, including information about their location (at a given point in time) and about their recreational interests, to online forums, in an era when that kind of information is so easily tracked and recorded by individuals other than those for whom it is intended.

Next consider the Meier scenario with respect to how it might be analyzed by someone doing research from the point of view of sociological/descriptive ethics. For example, a researcher might inquire into whether there has been an increase in the number of bullying incidents. And if the answer to this question is “yes,” the researcher might next question whether such an increase is linked to the widespread availability of cybertechnology. Also, the researcher might consider whether certain groups in the population are now more at risk than others with respect to being bullied in cyberspace. The researcher could inquire whether there are any statistical patterns to suggest that late-adolescent/early-teenage females are more likely to be bullied via cybertechnology than are individuals in other groups. The researcher could also ask if women in general are typically more vulnerable than men to the kinds of harassment associated with cyberbullying.

Also, a researcher approaching the Meier scenario from the sociological/descriptive ethics perspective might set out to determine whether an individual who never would have thought of physically bullying a victim in geographical space might now be inclined to engage in cyberbullying, perhaps because of the relative ease of doing so with cybertechnology. Or is it the case that some of those same individuals might now be tempted to do so because they believe that they will not likely get caught? Also, has the fact that a potential cyberbully realizes that he or she can harass a victim on the Internet under the cloak of relative anonymity/pseudonymity contributed to the increase in bullying-related activities online? These are a few of the kinds of questions that could be examined from the sociological/descriptive perspective of cyberethics.

Table 1.2 summarizes some key characteristics that differentiate the three main perspectives for approaching cyberethics issues.

In Chapters 4–12, we examine specific cyberethics questions from the vantage points of our three perspectives. Issues considered from the perspective of professional ethics are examined in Chapters 4 and 12. Cyberethics issues considered from the perspective of philosophical ethics, such as those involving privacy, security, and intellectual property and free speech, are examined in Chapters 5–9. And several of the issues considered in Chapters 10 and 11 are examined from the perspective of sociological/descriptive ethics.


c 1.5 A COMPREHENSIVE CYBERETHICS METHODOLOGY

The three different perspectives of cyberethics described in the preceding section might suggest that three different kinds of methodologies are needed to analyze the range of issues examined in this textbook. The goal of this section, however, is to show that a single, comprehensive method can be constructed, and that this method will be adequate in guiding us in our analysis of cyberethics issues.

Recall the standard model used in applied ethics, which we briefly examined in Section 1.4.2. There we saw that the standard model includes three stages, i.e., where a researcher must (1) identify an ethical problem, (2) describe and analyze the problem in conceptual and factual terms, and (3) apply ethical theories and principles in the deliberative process. We also saw that Moor argued that the conventional model was not adequate for an analysis of at least some cyberethics issues. Moor believed that additional steps, which address concerns affecting “policy vacuums” and “conceptual muddles,” are sometimes needed before we can move from the second to the third stage of the methodological scheme. We must now consider whether the standard model, with Moor’s additional steps included, is complete. Brey (2004) suggests that it is not.

Brey believes that while the (revised) standard model might work well in many fields of applied ethics, such as medical ethics, business ethics, and bioethics, it does not always fare well in cyberethics. Brey argues that the standard method, when used to identify ethical aspects of cybertechnology, tends to focus almost exclusively on the uses of that technology. As such, the standard method fails to pay sufficient attention to certain features that may be embedded in the technology itself, such as design features that may also have moral implications.

We might be inclined to assume that technology itself is neutral and that only the uses to which a particular technology is put are morally controversial. However, Brey and others believe that it is a mistake to conceive of technology, independent of its uses, as something that is value-free, or unbiased. Instead, they argue, moral values are often embedded or implicit in features built into technologies at the design stage. For example, critics, including some feminists, have pointed out that in the past the ergonomic systems

TABLE 1.2 Summary of Cyberethics Perspectives

Type of Perspective: Professional
Associated Disciplines: Computer Science; Engineering; Library/Information Science
Issues Examined: Professional responsibility; system reliability/safety; codes of conduct

Type of Perspective: Philosophical
Associated Disciplines: Philosophy; Law
Issues Examined: Privacy and anonymity; intellectual property; free speech

Type of Perspective: Sociological/Descriptive
Associated Disciplines: Sociology/Behavioral Sciences
Issues Examined: Impact of cybertechnology on governmental/financial/educational institutions and sociodemographic groups


designed for drivers of automobiles were biased toward men and gave virtually no consideration to women. That is, considerations having to do with the average height and typical body dimensions of men were implicitly built into the design specification. These critics also note that decisions about how the ergonomic systems would be designed were all made by men, which likely accounts for the bias embedded in that particular technological system.

1.5.1 A “Disclosive” Method for Cyberethics

As noted earlier, Brey believes that the standard, or what he calls “mainstream,” applied ethics methodology is not always adequate for identifying moral issues involving cybertechnology. Brey worries that using the standard model we might fail to notice certain features embedded in the design of cybertechnology. He also worries about the standard method of applied ethics because it tends to focus on known moral controversies, and because it fails to identify certain practices involving the use of cybertechnology that have moral import but that are not yet known. Brey refers to such practices as having “morally opaque” (or morally nontransparent) features, which he contrasts with “morally transparent” features.

According to Brey, morally controversial features that are transparent tend to be easily recognized as morally problematic. For example, many people are aware that the practice of placing closed circuit video surveillance cameras in undisclosed locations is controversial from a moral point of view. Brey notes that it is, however, generally much more difficult to discern morally opaque features in technology. These features can be morally opaque for one of two reasons: either they are unknown, or they are known but perceived to be morally neutral.18

Consider an example of each type of morally opaque (or morally nontransparent) feature. Computerized practices involving data mining (defined in Chapter 5) would be unknown to those who have never heard of the concept of data mining and who are unfamiliar with data mining technology. However, this technology should not be assumed to be morally neutral merely because data mining techniques are unknown to nontechnical people, including some ethicists as well. Even if such techniques are opaque to many users, data mining practices raise certain moral issues pertaining to personal privacy.

Next consider an example of a morally opaque feature in which a technology is well known. Most Internet users are familiar with search engine technology. What users might fail to recognize, however, is that certain uses of search engines can be morally controversial with respect to personal privacy. Consequently, one of the features of search engine technology can be morally controversial in a sense that is not obvious or transparent to many people, including those who are very familiar with and who use search engine technology. So, while a well-known technology, such as search engine programs, might appear to be morally neutral, a closer analysis of practices involving this technology will disclose that it has moral implications.

Figure 1.1 illustrates some differences between morally opaque and morally transparent features.

Brey argues that an adequate methodology for computer ethics must first identify, or “disclose,” features that, without proper probing and analysis, would go unnoticed as having moral implications. Thus, an extremely important first step in Brey’s “disclosive


method” is to reveal moral values embedded in the various features and practices associated with cybertechnology itself.

1.5.2 An Interdisciplinary and Multilevel Method for Analyzing Cyberethics Issues

Brey’s disclosive model is interdisciplinary because it requires that computer scientists, philosophers, and social scientists collaborate. It is also multilevel because conducting computer ethics research requires three levels of analysis:

• disclosure level
• theoretical level
• application level

First of all, the moral values embedded in the design of computer systems must be disclosed. To do this, we need computer scientists because they understand computer technology much better than philosophers and social scientists do. However, social scientists are also needed to evaluate system design and make it more user-friendly. Then philosophers can determine whether existing ethical theories are adequate to test the newly disclosed moral issues or whether more theory is needed. Finally, computer scientists, philosophers, and social scientists must cooperate by applying ethical theory in deliberations about moral issues.19 In Chapter 2, we examine a range of ethical theories that can be used.

In the deliberations involved in applying ethical theory to a particular moral problem, one remaining methodological step also needs to be resolved. Jeroen van den Hoven (2000) has noted that methodological schemes must also address the “problem of justification of moral judgments.” For our purposes, we use the strategies of logical analysis included in Chapter 3 to justify the moral theories we apply to particular issues.

[Figure 1.1 Embedded technological features having moral implications. The figure contrasts morally transparent features with morally opaque (nontransparent) features. Morally opaque features are further divided into known features, which users are aware of but do not realize have moral implications (e.g., search engine tools), and unknown features, which users are not aware of (e.g., data mining tools).]


Table 1.3 describes the academic disciplines and the corresponding tasks and functions involved in Brey’s disclosive model.

It is in the interdisciplinary spirit of the disclosive methodology proposed by Brey that we will examine the range of cyberethics issues described in Chapters 4–12.

c 1.6 A COMPREHENSIVE STRATEGY FOR APPROACHING CYBERETHICS ISSUES

The following methodological scheme, which expands on the original three-step scheme introduced in Section 1.4.2, is intended as a strategy to assist you in identifying and analyzing the specific cyberethics issues examined in this book. Note, however, that this procedure is not intended as a precise algorithm for resolving those issues in some definitive manner. Rather, its purpose is to guide you in the identification, analysis, and deliberation processes by summarizing key points that we have examined in Chapter 1.

Step 1. Identify a practice involving cybertechnology, or a feature of that technology, that is controversial from a moral perspective.

1a. Disclose any hidden or opaque features.

1b. Assess any descriptive components of the ethical issue via the sociological implications it has for relevant social institutions and sociodemographic groups.

1c. In analyzing the normative elements of that issue, determine whether there are any specific guidelines, i.e., social policies or ethical codes, that can help resolve the issue (for example, see the relevant professional codes of conduct described in Chapter 4 and Appendixes A–E).

1d. If the normative ethical issue cannot be resolved through the application of existing policies, codes of conduct, etc., go to Step 2.

Step 2. Analyze the ethical issue by clarifying concepts and situating it in a context.

2a. If a policy vacuum exists, go to Step 2b; otherwise, go to Step 3.

2b. Clear up any conceptual muddles involving the policy vacuum and go to Step 3.

TABLE 1.3 Brey’s Disclosive Model

Level: Disclosure
Disciplines Involved: Computer Science; Social Science (optional)
Task/Function: Disclose embedded features in computer technology that have moral import

Level: Theoretical
Disciplines Involved: Philosophy
Task/Function: Test newly disclosed features against standard ethical theories

Level: Application
Disciplines Involved: Computer Science; Philosophy; Social Science
Task/Function: Apply standard or newly revised/formulated ethical theories to the issues


Step 3. Deliberate on the ethical issue. The deliberation process requires two stages.

3a. Apply one or more ethical theories (see Chapter 2) to the analysis of the moral issue, and then go to Step 3b.

3b. Justify the position you reached by evaluating it via the standards and criteria for successful logical argumentation (see Chapter 3).

Note that you are now in a position to carry out much of the work required in the first two steps of this methodological scheme. In order to satisfy the requirements in Step 1d, a step that is required in cases involving professional ethics issues, you will need to consult the relevant sections of Chapter 4. Upon completing Chapter 2, you will be able to execute Step 3a; and after completing Chapter 3, you will be able to satisfy the requirements for Step 3b.
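For readers who find it easier to follow a procedure when it is written out explicitly, the sketch below models the flow of Steps 1–3 in a few lines of Python. It is offered only as an illustration: the scheme above is a guide rather than a precise algorithm, and every name in the sketch (the CyberethicsIssue class, its fields, and the analyze function) is hypothetical rather than part of the chapter’s framework.

```python
# Illustrative sketch only: the comprehensive strategy is a guide, not a precise
# algorithm, and all names below are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class CyberethicsIssue:
    practice: str                                                  # Step 1: controversial practice or feature
    opaque_features: List[str] = field(default_factory=list)       # Step 1a: disclosed hidden features
    descriptive_findings: List[str] = field(default_factory=list)  # Step 1b: sociological impacts
    applicable_codes: List[str] = field(default_factory=list)      # Step 1c: existing policies or codes of conduct
    has_policy_vacuum: bool = False                                 # Step 2a
    conceptual_muddles: List[str] = field(default_factory=list)    # Step 2b
    ethical_theories: List[str] = field(default_factory=list)      # Step 3a (see Chapter 2)


def analyze(issue: CyberethicsIssue) -> str:
    # Steps 1c-1d: if an existing policy or professional code resolves the issue, stop here.
    if issue.applicable_codes:
        return f"Resolved by existing guideline: {issue.applicable_codes[0]}"

    # Step 2: before a policy can be framed, conceptual muddles tied to the
    # policy vacuum must be cleared up (per Moor); modeled here as simply "resolved."
    if issue.has_policy_vacuum and issue.conceptual_muddles:
        issue.conceptual_muddles = []

    # Step 3a: deliberate by applying one or more ethical theories.
    if not issue.ethical_theories:
        raise ValueError("Deliberation requires at least one ethical theory (Chapter 2).")

    # Step 3b: the resulting position must still be justified by logical
    # argumentation (Chapter 3); this sketch only records which theories informed it.
    return f"Position on '{issue.practice}' informed by: {', '.join(issue.ethical_theories)}"


# Example use with hypothetical data drawn from the Napster discussion:
mp3_case = CyberethicsIssue(
    practice="free exchange of MP3 files in P2P transactions",
    has_policy_vacuum=True,
    conceptual_muddles=["status of digitized music as intellectual property"],
    ethical_theories=["utilitarianism"],
)
print(analyze(mp3_case))
```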

c 1.7 CHAPTER SUMMARY

In this introductory chapter, we defined several key terms, including cyberethics and cybertechnology, used throughout this textbook. We also briefly described four evolutionary phases of cyberethics, from its origins as a loosely configured and informal field concerned with ethical and social issues involving stand-alone (mainframe) computers to a more fully developed field that is today concerned with ethical aspects of ubiquitous, networked computers. We then briefly considered whether any cyberethics issues are unique or special in a nontrivial sense. We next examined three different perspectives on cyberethics, showing how computer scientists, philosophers, and social scientists each tend to view the field and approach the issues that comprise it. Within that discussion, we also examined some ways in which embedded values and biases affecting cybertechnology can be disclosed and thus made explicit. Finally, we introduced a comprehensive methodological scheme that incorporates the expertise of computer scientists, philosophers, and social scientists who work in the field of cyberethics.

c REVIEW QUESTIONS

1. What, exactly, is cyberethics? How is it different from and similar to computer ethics, information ethics, and Internet ethics?

2. What is meant by the term cybertechnology? How is it similar to and different from computer technology?

3. Identify and briefly describe some key aspects of each of the “four phases” in the evolution of cyberethics as a field of applied ethics.

4. Why does Walter Maner believe that at least some cyberethics issues are unique? What arguments does he provide to support his view?

5. Why is it important to distinguish between unique technological features and unique ethical issues

when evaluating the question, Are cyberethics issues unique?

6. What alternative strategy does James Moor use to analyze the question whether cyberethics issues are unique ethical issues?

7. Why does Moor believe that cybertechnology poses special problems for identifying and analyzing ethical issues?

8. Explain what Moor means by the expression “logical malleability,” and why he believes that this technological feature of computers is significant.

9. What does Moor mean by the phrase “policy vacuum,” and what role do these vacuums play in understanding cyberethics?


10. Explain what Moor means by a “conceptual muddle.” How can these muddles sometimes complicate matters when trying to resolve policy vacuums?

11. What is applied ethics, and how is it different from theoretical ethics?

12. Summarize the principal aspects of the perspective of cyberethics as a field of professional ethics.

13. Describe the principal aspects of the perspective of cyberethics as a field of philosophical ethics.

14. Summarize the key elements of the perspective of cyberethics as a field of sociological/descriptive ethics.

15. Describe the kinds of criteria used to distinguish normative ethical inquiries from those that are essentially descriptive.

16. What are the three elements of the standard, or “mainstream,” method for conducting applied ethics research?

17. How is Philip Brey’s “disclosive method” of computer ethics different from what Brey calls “mainstream computer ethics”?

18. What does Brey mean by “morally opaque” or “morally nontransparent” features embedded in computer technology?

19. In which ways is Brey’s disclosive method “multilevel”? Briefly describe each level in his methodology.

20. In which ways is that method also “multidisciplinary” or interdisciplinary? Which disciplines does it take into consideration?

c DISCUSSION QUESTIONS

21. List and critically analyze some ethical concerns that arise in the Megan Meier cyberbullying incident on MySpace, which resulted in Meier’s suicide. Should SNSs allow users to create accounts with fake identities?

22. Describe and critically evaluate some ethical/policy issues that arise in the scenario involving the dispute between Noah Kravitz and PhoneDog Media regarding the ownership of a Twitter account. Was PhoneDog simply protecting what it believed to be its intellectual property interests, or did the company go too far in this case? Explain.

23. Identify and critically analyze some ethical issues that arise in the “Washingtonienne” scenario. Should Jessica Cutler’s anonymity have been protected, and should the contents of her online diary have been protected from their subsequent publication in Wonkette? Explain.

24. Assess Don Gotterbarn’s arguments for the claim that computer ethics is, at bottom, a field whose primary concern should focus on moral-responsibility issues for computer professionals. Do you agree with his position?

c ESSAY/PRESENTATION QUESTIONS

"25. Think of a controversial issue or practice involving cybertechnology that has not yet been identified as an ethical issue, but which might eventually be recognized as one that has moral implications. Apply Brey’s “disclosive method” to see whether you can isolate any embedded values or biases affecting that practice. Next apply the “compre- hensive strategy” for approaching cyberethics that we examined in Section 1.6.

"26. We identified three main perspectives from which cyberethics issues can be examined. Can you think of any additional perspectives from which cyber- ethics issues might also be analyzed? In addition to the Washingtonienne, Twitter, and Meier scenar- ios that we examined, can you think of other recent cases involving cyberethics issues that would ben- efit from being analyzed from all three perspec- tives considered in Chapter 1?

Scenarios for Analysis

1. We briefly considered the question whether some cyberethics issues are new or unique ethical issues. In the following scenario,

(a) identify the ethical issues that arise and (b) determine whether any of them are unique to cybertechnology. In which ways are the ethical issues in this scenario both similar to, and different from, those in the Megan Meier incident involving cyberbullying, which we analyzed earlier in this chapter (Scenario 1–1)?

In October 1999, twenty-year-old Amy Boyer was murdered by a young man who had stalked her via the Internet. The stalker, Liam Youens, was able to carry out most of the stalking activities that eventually led to Boyer’s death by using a variety of tools and resources generally available to any Internet user. Via standard online search facilities, for example, Youens was able to gather personal information about Boyer. And after paying a small fee to Docusearch.com, an online information company, Youens was able to find out where Boyer lived, where she worked, and so forth. Youens was also able to use another kind of online tool, available to Internet users, to construct two Web sites, both dedicated to his intended victim. On one site, he posted personal information about Boyer as well as a photograph of her; on the other Web site, Youens described, in explicit detail, his plans to murder Boyer.20

2. Identify and evaluate the ethical issues that arise in the following scenario. In which ways are the ethical issues in this scenario similar to, and different from, those in the incident involving Twitter and PhoneDog Media (Scenario 1–2), which we analyzed earlier in this chapter?

In January 2003, a United States district court in the District of Columbia ruled that Verizon (an Internet service provider or ISP) must comply with a subpoena by the RIAA—an organization that represents the interests of the recording industry. The RIAA, in an effort to stop the unauthorized sharing of music online, requested from Verizon the names of two of its subscribers who allegedly made available more than 600 copyrighted music files on the Internet. Although many ISPs, such as Comcast, and many universities complied with similar subpoenas issued on behalf of the RIAA, Verizon refused to release the names of any of its subscribers. Verizon argued that doing so would violate the privacy rights of its subscribers and would violate specific articles of the U.S. Constitution. So, Verizon appealed the district court’s decision. On December 19, 2003, the United States Court of Appeals for the District of Columbia overturned the lower court’s decision, ruling in favor of Verizon.21

c ENDNOTES

1. See “Parents: Cyber Bullying Led to Teen’s Suicide,” ABC News, Nov. 17, 2007. Available at http://abcnews.go.com/GMA/Story?id=3882520.

2. See J. Biggs, “A Dispute Over Who Owns a Twitter Account Goes to Court.” New York Times, Dec. 25, 2011. Available at http://www.nytimes.com/2011/12/26/technology/lawsuit-may-determine-who-owns-a-twitter-account.html?_r=3.

3. See Richard Leiby, “The Hill’s Sex Diarist Reveals All (Well Some),” The Washington Post, May 23, 2004, p. D03. Available at http://www.washingtonpost.com/wp-dyn/articles/A48909-2004May22.html.

4. Some have used a combination of these two expressions. For example, Ess (2009) uses “information and computer ethics” (ICE) to refer to ethical issues affecting “digital media.” And, Capurro (2007) uses the expression “Intercultural Information Ethics” (IIE).

5. Floridi (2007, p. 63) contrasts Information Ethics (IE) with computer ethics (CE), by noting that the former is the “philosophical foundational counterpart of CE.”

6. Anderson and Anderson (2011) also use the term “machine ethics” to refer to this new field, which they describe as one “concerned with giving machines ethical principles.” They contrast the development of ethics for people who use machines with the development of ethics for machines. Others, however, such as Lin, Abney, and Bekey (2012), use the expression “robot ethics” to describe this emerging field.

7. See the interview conducted with Paul Ceruzzi in the BBC/PBS video series, The Machine That Changed the World (1990).

8. For example, Bynum (2008) notes that Norbert Wiener, in his writings on cybernetics in the late 1940s, anticipated some of these concerns.

9. My analysis of the four phases in this section is adapted from and expands on Tavani (2001). Note that what I am calling a “technological phase” is not to be confused with something as precise as the expression “computer generation,” which is often used to describe specific stages in the evolution of computer hardware systems.

10. Maner (2004, p. 41) argues that computers have generated “entirely new ethical issues, unique to computing, that do not surface in other areas.”


11. My analysis of the uniqueness debate here is adapted from Tavani (2002a).

12. Moor (2004), p. 107.

13. My scheme for analyzing computer-ethics issues from these perspectives is adapted from Tavani (2002b).

14. Gotterbarn (1995), p. 21.

15. Moor (2007), p. 31.

16. Brey (2004), pp. 55–56.

17. See, for example, Grodzinsky and Tavani (2010) for an analysis of this case in terms of these issues.

18. For more details on this distinction, see Brey (2004), pp. 56–57.

19. See Brey, pp. 64–65.

20. See A. J. Hitchcock, “Cyberstalking and Law Enforcement: Keeping Up With the Web,” Link-UP, July/August 2000. Available at http://computeme.tripod.com/cyberstalk.html. Also see Grodzinsky and Tavani (2004).

21. See, for example, Grodzinsky and Tavani (2005).

c REFERENCES

Anderson, Michael, and Susan Leigh Anderson, eds. 2011. Machine Ethics. New York: Cambridge University Press.

Barger, Robert N. 2008. Computer Ethics: A Case-Based Approach. New York: Cambridge University Press.

Brey, Philip. 2004. “Disclosive Computer Ethics.” In R. A. Spinello and H. T. Tavani, eds. Readings in CyberEthics. 2nd ed. Sudbury, MA: Jones and Bartlett Publishers, pp. 55–66. Reprinted from Computers and Society 30, no. 4 (2000): 10–16.

Brey, Philip. 2005. “Freedom and Privacy in Ambient Intelligence.” Ethics and Information Technology 7, no. 4: 157–66.

Biggs, John. 2011. “A Dispute Over Who Owns a Twitter Account Goes to Court.” New York Times, Dec. 25. Available at http://www.nytimes.com/2011/12/26/technology/lawsuit-may-determine-who-owns-a-twitter-account.html?_r=3.

Buchanan, Elizabeth A. 2004. “Ethical Considerations for the Information Professions.” In R. A. Spinello and H. T. Tavani, eds. Readings in CyberEthics. 2nd ed. Sudbury, MA: Jones and Bartlett Publishers, pp. 613–24.

Buchanan, Elizabeth A., and Kathrine A. Henderson. 2009. Case Studies in Library and Information Science Ethics. Jefferson, NC: McFarland.

Bynum, Terrell Ward. 2008. “Milestones in the History of Information and Computer Ethics.” In K. E. Himma and H. T. Tavani, eds. The Handbook of Information and Computer Ethics. Hoboken, NJ: John Wiley and Sons, pp. 25–48.

Capurro, Rafael. 2007. “Intercultural Information Ethics.” In R. Capurro, J. Frühbauer, and T. Hausmanninger, eds. Localizing the Internet. Munich: Fink Verlag, pp. 21–38.

Ess, Charles. 2009. Digital Media Ethics. London, U.K.: Polity Press.

Floridi, Luciano. 2007. “Information Ethics: On the Philosophical Foundations of Computer Ethics.” In J. Weckert, ed. Computer Ethics. Aldershot, U.K.: Ashgate, pp. 63–82. Reprinted from Ethics and Information Technology 1, no. 1 (1999): 37–56.

Gotterbarn, Don. 1995. “Computer Ethics: Responsibility Regained.” In D. G. Johnson and H. Nissenbaum, eds. Computing, Ethics, and Social Values. Upper Saddle River, NJ: Prentice Hall.

Grodzinsky, Frances S., and Herman T. Tavani. 2004. “Ethical Reflections on Cyberstalking.” In R. A. Spinello and H. T. Tavani, eds. Readings in CyberEthics. 2nd ed. Sudbury, MA: Jones and Bartlett Publishers, pp. 561–70.

Grodzinsky, Frances S., and Herman T. Tavani. 2005. “P2P Networks and the Verizon v. RIAA Case.” Ethics and Information Technology 7, no. 4: 243–50.

Grodzinsky, Frances S., and Herman T. Tavani. 2010. “Applying the ‘Contextual Integrity’ Model of Privacy to Personal Blogs in the Blogosphere.” International Journal of Internet Research Ethics 3, no. 1: 38–47.

Huff, Chuck, and Thomas Finholt, eds. 1994. Social Issues in Computing: Putting Computing in its Place. New York: McGraw-Hill.

Johnson, Deborah G. 2000. “The Future of Computer Ethics.” In G. Collste, ed. Ethics in the Age of Information Technology. Linköping, Sweden: Centre for Applied Ethics, pp. 17–31.

Johnson, Deborah G. 2010. Computer Ethics. 4th ed. Upper Saddle River, NJ: Prentice Hall.

Langford, Duncan. ed. 2000. Internet Ethics. New York: St. Martin’s Press.

Lin, Patrick, Keith Abney, and George A. Bekey, eds. 2012. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.

Maner, Walter. 2004. “Unique Ethical Problems in Information Technology.” In T. W. Bynum and S. Rogerson, eds. Computer Ethics and Professional Responsibility. Malden, MA: Blackwell, pp. 39–59. Reprinted from Science and Engineering Ethics 2, no. 2 (1996): 137–54.

Moor, James H. 2001. “The Future of Computer Ethics: You Ain’t Seen Nothing Yet.” Ethics and Information Technology 3, no. 2: 89–91.

Moor, James H. 2004. “Just Consequentialism and Computing.” In R. A. Spinello and H. T. Tavani, eds. Readings in CyberEthics. 2nd ed. Sudbury, MA: Jones and Bartlett Publishers, pp. 407–17. Reprinted from Ethics and Information Technology 1, no. 1 (1999): 65–69.

Moor, James H. 2005. “Should We Let Computers Get Under Our Skin?” In R. Cavalier, ed. The Impact of the Internet on Our Moral Lives. Albany, NY: State University of New York Press, pp. 121–38.


Moor, James H. 2007. “What Is Computer Ethics?” In J. Weckert, ed. Computer Ethics. Aldershot, UK: Ashgate, pp. 31–40. Reprinted from Metaphilosophy 16, no. 4 (1985): 266–75.

Tavani, Herman T. 2001. “The State of Computer Ethics as a Philosophical Field of Inquiry.” Ethics and Information Technology 3, no. 2: 97–108.

Tavani, Herman T. 2002a. “The Uniqueness Debate in Computer Ethics: What Exactly Is at Issue, and Why Does it Matter?” Ethics and Information Technology 4, no. 1: 37–54.

Tavani, Herman T. 2002b. “Applying an Interdisciplinary Approach to Teaching Computer Ethics.” IEEE Technology and Society Magazine 21, no. 3: 32–38.

van den Hoven, Jeroen. 2000. “Computer Ethics and Moral Methodology.” In R. Baird, R. Ramsower, and S. Rosenbaum, eds. Cyberethics: Social and Moral Issues in the Computer Age. Amherst, NY: Prometheus Books, pp. 80–94. Reprinted from Metaphilosophy 28, no. 3 (1997): 234–48.

Wallach, Wendell, and Colin Allen. 2009. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press.

c FURTHER READINGS

Abelson, Hal, Ken Ledeen, and Harry Lewis. Blown to Bits: Your Life, Liberty and Happiness after the Digital Explo- sion. Upper Saddle River, NJ: Addison-Wesley, 2008.

Capurro, Rafael, and Michael Nagenborg, eds. Ethics and Robotics. Heidelberg, Germany: IOS Press, 2009.

De Palma, Paul, ed. Computers in Society 11/12. 14th ed. New York: McGraw-Hill, 2011.

Floridi, Luciano. ed. The Cambridge Handbook of Information and Computer Ethics. Cambridge, MA: MIT Press, 2010.

Mitcham, Carl. ed. Encyclopedia of Science, Technology, and Ethics. 4 Vols. New York: Macmillan, 2005.

Moor, James H. “Why We Need Better Ethics for Emerging Technologies.” In J. van den Hoven and J. Weckert, eds. Information Technology and Moral Philosophy. New York: Cambridge University Press, 2008, pp. 26–39.

van den Hoven, Jeroen. “Moral Methodology and Information Technology.” In K. E. Himma and H. T. Tavani, eds. The Handbook of Information and Computer Ethics. Hoboken, NJ: John Wiley and Sons, 2008, pp. 49–67.

c ONLINE RESOURCES

Association for Computing—Special Interest Group on Computers and Society. http://www.sigcas.org/.

Bibliography on Computing, Ethics, and Social Responsibility. http://cyberethics.cbi.msstate.edu/biblio/index.htm.

Computer Professionals for Social Responsibility (CPSR). http://cpsr.org/.

Heuristic Methods for Computer Ethics. http://csweb.cs.bgsu.edu/maner/heuristics/maner.pdf.

International Center for Information Ethics (ICIE). http://icie.zkm.de/.

International Society for Ethics and Information Technology. http://www4.uwm.edu/cipr/collaborations/inseit/.

Research Center for Computing and Society. http://www.southernct.edu/organizations/rccs/.

Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/.


c CHAPTER 2

Ethical Concepts and Ethical Theories: Establishing and Justifying a Moral System

In Chapter 1, we defined cyberethics as the study of moral issues involving cybertechnology. However, we have not yet defined what is meant by ethics, morality, and the study of moral issues. In Chapter 2, we define these terms as well as other foundational concepts, and we examine a set of ethical theories that will guide us in our deliberation on the specific cyberethics issues we confront in Chapters 4–12. To accomplish the objectives of Chapter 2, we provide answers to the following questions:

• What is ethics, and how is it different from morality or a moral system?
• What are the elements that make up a moral system?
• Where do the rules in a moral system come from, and how are they justified?
• How is a philosophical study of morality different from studying morality from the perspectives of religion and law?
• Is morality essentially a personal, or private, matter, or is it a public phenomenon?
• Is morality simply relative to particular cultures and thus culturally determined?
• How is meaningful dialogue about cyberethics issues that are global in scope possible in a world with diverse cultures and belief systems?
• What roles do classic and contemporary ethical theories play in the analysis of moral issues involving cybertechnology?
• Are traditional ethical theories adequate to handle the wide range of moral controversies affecting cybertechnology?

c 2.1 ETHICS AND MORALITY

Ethics is derived from the Greek ethos, and the term morality has its roots in the Latin mores. Both the Greek and the Latin terms refer to notions of custom, habit, behavior, and character. Although “ethics” and “morality” are often used interchangeably in everyday discourse, we draw some important distinctions between the two terms as we will use them in this textbook. First, we define ethics as the study of morality.1 This definition, of course, raises two further questions:

a. What is morality?

b. What is the study of morality?

We began to answer question (b) in Chapter 1, where we described three approaches to cyberethics issues. You may want to review Section 1.4, which describes how moral issues can be studied from the perspectives of professional ethics, philosophical ethics, and sociological/descriptive ethics. We will say more about the study of morality from a philosophical perspective in Section 2.1.2. Before we examine the concepts and theories that comprise morality or a moral system, however, we briefly consider a classic example of a moral dilemma.

First, we should note that the phrase “moral dilemma” is often misused to describe a “moral issue.” We will see that not every moral issue is a moral dilemma, and not every dilemma is necessarily moral in nature. A dilemma describes a situation where one is confronted with two choices, neither of which is desirable. Sometimes it may mean choosing between (what one may perceive to be) the lesser of two evils. But our primary interest in this chapter is not so much with the specific choices one makes; instead it is with (i) the principle that one uses in making his or her choice, and (ii) whether that principle can be applied systematically and consistently in making moral decisions in similar kinds of cases. We next consider a dilemma that has become a classic in the ethics literature.

c SCENARIO 2–1: The Runaway Trolley: A Classic Moral Dilemma

Imagine that you are driving a trolley and that all of a sudden you realize that the trolley’s brake system has failed. Further imagine that approximately 80 meters ahead of you on the trolley track (a short distance from the trolley’s station) five crew men are working on a section of the track on which your trolley is traveling. You realize that you cannot stop the trolley and that you will probably not be able to prevent the deaths of the five workers. But then you suddenly realize that you could “throw a switch” that would cause the trolley to go on to a different track. You also happen to notice that one person is working on that track. You then realize that if you do nothing, five people will likely die, whereas if you engage the switch to change tracks, only one person would likely die.2

What would you do in this situation—let the trolley take its “natural” course, expecting that five people will likely die, or intentionally change the direction of the trolley, likely causing the death of one person who otherwise would have lived? If you use what some call a “cost-benefits” approach in this particular situation, you might reason in the following way: throwing the switch will have a better outcome, overall, because more human lives would be saved than lost. So, in this case you conclude that throwing the switch is the right thing to do because the net result is that four more people will live. If the reasoning process that you used in this particular case is extended to a general principle, you have embraced a type of consequentialist or utilitarian ethical theory (described later in this chapter). But can this principle/theory be consistently extended to cover similar cases?


Next consider a variation of this dilemma, which also involves a runaway trolley, but this time you are a spectator. Imagine that you are standing on a bridge overlooking the track on which a runaway trolley is traveling. You observe that the trolley is heading for the station where there are many people gathered outside. Standing next to you on the bridge is a very large and obese person (weighing approximately 500 pounds), who is leaning forward over the rail of the bridge to view the runaway trolley. You realize that if you gently pushed the obese person forward as the trolley approaches, he would fall off the bridge and land in front of the trolley; the impact would be sufficient to stop the trolley. Thus you could save the lives of many people who otherwise would die.

Would you be willing to push the obese person off the bridge? If not, why not? What has changed in the two scenarios? After all, if you are reasoning from the standpoint of a utilitarian/consequentialist theory, the same outcome would be realized—one person dies, while many others live. But studies have shown that most people find it far more difficult to push (intentionally) one person to his death, even though doing so would mean that several persons will live as a result. However, in this case, you might reason that intentionally causing someone’s death (especially by having a “direct hand” in it) is morally wrong. You may also reason that actively and deliberately causing one person’s death (as opposed to another’s) is unjust and unfair, and that it would be a dangerous moral principle to generalize. In this case, your reasoning would be nonutilitarian or nonconsequentialist.
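Readers with a programming background may find it helpful to see how easily the “cost-benefits” line of reasoning described above can be mechanized. The following minimal sketch (in Python) is purely illustrative and is not drawn from this textbook or from any particular author’s account: the option names, casualty figures, and the choose_action function are hypothetical assumptions, and the decision rule simply selects whichever action minimizes the expected number of deaths.

```python
# Illustrative sketch only: a naive utilitarian ("cost-benefits") decision rule.
# The options and casualty estimates below are hypothetical assumptions for the
# two trolley cases discussed in the text; they are not taken from the textbook.

def choose_action(options):
    """Return the option whose expected number of deaths is lowest.

    `options` maps an action name (for example, "throw the switch") to the
    number of people expected to die if that action is taken.
    """
    return min(options, key=options.get)

# Scenario 2-1: do nothing (five workers die) vs. throw the switch (one worker dies).
trolley_options = {"let the trolley continue": 5, "throw the switch": 1}
print(choose_action(trolley_options))     # prints: throw the switch

# The footbridge variation: the same rule recommends pushing the bystander,
# the very result most people find morally troubling.
footbridge_options = {"do nothing": 5, "push the bystander": 1}
print(choose_action(footbridge_options))  # prints: push the bystander
```

The point of the sketch is not to endorse such a rule; rather, it shows that a purely outcome-counting principle treats the switch case and the footbridge case identically, which is exactly the tension with our moral intuitions discussed next.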

Perhaps you see the inconsistency in the means used to make decisions in the two similar scenarios. However, you might react initially by saying that it is permissible to flip-flop on moral principles, depending on the particular circumstances you face. But we will see that it is difficult to have a coherent moral system where the ethical theories used to frame policies are inherently inconsistent. Fortunately, there is no need for us to resolve these questions at this point in the chapter. Rather, the purpose of posing this dilemma now is to get us to begin thinking about how we can respond to dilemmas that we will invariably face in our professional as well as personal lives. Later in this chapter, we revisit this dilemma and we complicate it somewhat by replacing the trolley’s human driver with an autonomous computer system. We then examine in detail some specific ethical theories that can be applied in our analyses of this and other moral dilemmas. First, however, we examine some basic concepts that comprise morality and a moral system.

2.1.1 What Is Morality?

As noted above, we defined ethics as the study of morality. There is, however, no universally agreed upon definition of “morality” among ethicists and philosophers. For our purposes, morality can be defined as a system of rules for guiding human conduct, and principles for evaluating those rules. Note that (i) morality is a system, and (ii) it is a system comprised of moral rules and principles. Moral rules can be understood as rules of conduct, which are very similar to the notion of policies, described in Chapter 1. There, “policies” were defined as rules of conduct that have a wide range of application. According to James Moor (2004), policies range from formal laws to informal, implicit guidelines for actions.


There are two kinds of rules of conduct:

1. Directives that guide our conduct as individuals (at the microlevel)

2. Social policies framed at the macrolevel

Directives are rules that guide our individual actions and direct us in our moral choices at the “microethical” level (i.e., the level of individual behavior). “Do not steal” and “Do not harm others” are examples of directives. Other kinds of rules guide our conduct at the “macrolevel” (i.e., at the level of social policies and social norms).

Rules of conduct that operate at the macroethical level guide us in both framing and adhering to social policies. For example, rules such as “Proprietary software should not be duplicated without proper authorization,” or “Software that can be used to invade the privacy of users should not be developed,” are instances of social policies. Notice the correlation between the directive “Do not steal” (a rule of conduct at the microlevel), and the social policy “Unauthorized duplication of software should not be allowed” (a rule of conduct at the macrolevel). In Section 2.1.2 we will see that both types of rules of conduct are derived from a set of “core values” in a moral system.

The rules of conduct in a moral system are evaluated against standards called principles. For example, the principle of social utility, which is concerned with promoting the greatest good for the greatest number, can be used as a “litmus test” for determining whether the policy “Proprietary software should not be copied without permission” can be justified on moral grounds. In this case, the policy in question could be justified by showing that not allowing the unauthorized copying of software will produce more overall social good than will a policy that permits software to be duplicated freely.

Similarly, the policy “Users should not have their privacy violated” might be justified by appealing to the same principle of social utility. Or a different principle such as “respect for persons,” or possibly a principle based on the notion of fairness, might be used to justify the social policy in question. Figure 2.1 illustrates the different kinds of rules and principles that comprise a moral system.

What Kind of a System Is a Moral System? According to Bernard Gert (2005, 2007), morality is a “system whose purpose is to prevent harm and evils.” In addition to preventing harm, a moral system aims at promoting human flourishing. Although there is some disagreement regarding the extent to which the promotion of human flourishing is required of a moral system, virtually all ethicists believe that, at a minimum, the fundamental purpose of a moral system is to prevent or alleviate harm and suffering. We have already seen that at the heart of a moral system are rules of conduct and principles of evaluation. We next consider some other characteristics that define a moral system.

Gert describes a moral system as one that is both public and informal. The system is public, he argues, because everyone must know what the rules are that define it. Gert uses the analogy of a game, which has a goal and a corresponding set of rules. The rules are understood by all of the players, and the players use the rules to guide their behavior in legitimately achieving the goal of the game. The players can also use the rules to evaluate or judge the behavior of other players in the game. However, there is one important difference between a moral system and a game: Not everyone is required to participate in a game, but we are all obligated to participate in a moral system.


Morality is also informal because, Gert notes, a moral system has no formal authoritative judges presiding over it. Unlike games in professional sports that have rules enforced by referees in a manner that approaches a legal system, morality is less formal. A moral system is more like a game of cards or a “pickup game” in baseball or basketball. Here the players are aware of the rules, but even in the absence of a formal official or referee to enforce the game’s rules, players generally adhere to them.

Gert’s model of a moral system includes two additional features: rationality and impartiality. A moral system is rational in that it is based on principles of logical reason accessible to ordinary persons. Morality cannot involve special knowledge that can be understood only by privileged individuals or groups. The rules in a moral system must be available to all rational persons who, in turn, are (what ethicists call) moral agents, bound by the system of moral rules. We do not hold nonmoral agents (such as young children, mentally challenged persons, and pets) morally responsible for their own actions, but moral agents often have responsibilities to nonmoral agents. (We examine the concepts of “agency” and “moral agency” in detail in Chapter 12.)

A moral system is impartial in the sense that the moral rules are ideally designed to apply equitably to all participants in the system. In an ideal moral system, all rational persons are willing to accept the rules of the system, even if they do not know in advance what their particular place in that system will be. To ensure that impartiality will be built into a moral system, and that its members will be treated as fairly as possible, Gert invokes his “blindfold of justice” principle. Imagine that you are blindfolded while deciding what the rules of a moral system will be. Since you do not know in advance what position you will occupy in that system, it is in your own best interest to design a system in which everyone will be treated fairly. As an impartial observer who is also rational, you will want to ensure against the prospect of ending up in a group that is treated unfairly.3

Figure 2.1 Basic components of a moral system. A moral system comprises rules of conduct (action-guiding rules, in the form of either directives or social policies) and principles of evaluation (evaluative standards, such as social utility and justice as fairness, used to justify rules of conduct). Rules of conduct are of two types: rules for guiding the actions of individuals (microlevel ethical rules, such as “Do not steal” and “Do not harm others”) and rules for establishing social policies (macrolevel ethical rules, such as “Software should be protected” and “Privacy should be respected”).

Table 2.1 summarizes four key features in Gert’s model of a moral system.

TABLE 2.1 Four Features of Gert’s Moral System

Public: The rules are known to all of the members.
Informal: The rules are informal, not like formal laws in a legal system.
Rational: The system is based on principles of logical reason accessible to all its members.
Impartial: The system is not partial to any one group or individual.

2.1.2 Deriving and Justifying the Rules and Principles of a Moral System

So far, we have defined morality as a system that is public, informal, rational, and impartial. We have also seen that at the heart of a moral system are rules for guiding the conduct of the members of the system. But where, exactly, do these rules come from? And what criteria can be used to ground or justify these rules? Arguably, the rules of conduct involving individual directives and social policies are justified by the system’s evaluative standards, or principles. But how are those principles in turn justified?

On the one hand, rules of conduct for guiding action in the moral system, whether individual directives or social policies, are ultimately derived from certain core values. Principles for evaluating rules of conduct, on the other hand, are typically grounded in one of three systems or sources: religion, law, or (philosophical) ethics.

We next describe the core values in a society from which the rules of conduct are derived.

Core Values and Their Role in a Moral System The term value comes from the Latin valere, which means having worth or being of worth. Values are objects of our desires or interests; examples include happiness, love, and freedom. Some philosophers suggest that the moral rules and principles comprising a society’s moral system are ultimately derived from that society’s framework of values.4

Philosophers often distinguish between two types of values, intrinsic and instrumental. Any value that serves some further end or good is called an instrumental value because it is tied to some external standard. Automobiles, computers, and money are examples of goods that have instrumental value. Values such as life and happiness, on the other hand, are intrinsic because they are valued for their own sake. Later in this chapter, we will see that utilitarians argue that happiness is an intrinsic value. And in Chapter 5, we will see that some ethicists believe personal privacy is a value that has both intrinsic and instrumental attributes.

Another approach to cataloguing values is to distinguish core values, some of which may or may not also be intrinsic values, from other kinds of values. James Moor (2004), for example, believes that life, happiness, and autonomy are core values because they are basic to a society’s thriving and perhaps even to its survival. Autonomy, Moor argues, is essentially a cluster of values that includes ability, security, knowledge, freedom, opportunity, and resources. Although core values might be basic to a society’s flourishing, and possibly to that society’s survival, it does not follow that each core value is also a moral value.

Sometimes descriptions of morals and values suggest that morals are identical to values. Values, however, can be either moral or nonmoral, and moral values need to be distinguished from the broader set of nonmoral values. Consider again the roles that rationality and impartiality play in a moral system. Rationality informs us that it is in our interest to promote values consistent with our own survival, happiness, and flourishing as individuals. When used to further only our own self-interests, these values are not necessarily moral values. Once we bring in the notion of impartiality, however, we begin to take the moral point of view. When we frame the rules of conduct in a moral system, we articulate one or more core moral values, such as autonomy, fairness, and justice. For example, the rule of conduct “Treat people fairly” is derived from the moral value of impartiality. Figure 2.2 illustrates how the rules and principles that comprise a moral system are both derived from core values and justified on grounds that tend to be either religious, legal, or philosophical in nature.

Figure 2.2 Components of a moral system. Rules of conduct are derived from a society’s core values, which are the source of moral rules; the principles of evaluation used to justify those rules are grounded in one of three systems: a religious system, a legal system, or a philosophical system.

Three Approaches for Grounding the Principles in a Moral System We have seen how the rules of conduct in a moral system can be derived from a society’s core values. Now we will consider how the principles that are used to justify the rules of conduct are grounded. As we suggested in Section 2.1.2, the principles are grounded in one of three sources: religion, law, and philosophical ethics. We now consider how a particular moral principle can be justified from the vantage point of each scheme. As an illustration, we can use the rule of conduct “Do not steal,” since it underpins many cyberethics controversies involving software piracy and intellectual property. Virtually every moral system includes at least one rule that explicitly condemns stealing. But why, exactly, is stealing morally wrong? This particular rule of conduct is evaluated against one or more principles such as “We should respect persons” or “We should not cause harm to others”; but how are these principles, in turn, justified? The answer depends on whether we take the religious, the legal, or the philosophical/ethical point of view.

Approach #1: Grounding Moral Principles in a Religious System Consider the following rationale for why stealing is morally wrong:

Stealing is wrong because it offends God or because it violates one of God’s Ten Commandments.

Here the “moral wrongness” in the act of stealing is grounded in religion; stealing, in the Judeo-Christian tradition, is explicitly forbidden by one of the Ten Commandments. From the point of view of these particular institutionalized religions, then, stealing is wrong because it offends God or because it violates the commands of a divine authority. Furthermore, Christians generally believe that those who steal will be punished in the next life even if they are not caught and punished for their sins in the present life.

One difficulty in applying this rationale in the United States is that American society is pluralistic. While the United States was once a relatively homogeneous culture with roots in the Judeo-Christian tradition, American culture has in recent years become increasingly heterogeneous. So people with different religious beliefs, or with no religious beliefs at all, can disagree with those whose moral beliefs are grounded solely on religious convictions that are Judeo-Christian based. Because of these differences, many argue that we need to ground the rules and principles of a moral system on criteria other than those provided by any particular organized religion. Some suggest that civil law can provide the foundation needed for a moral system to work.

Approach #2: Grounding Moral Principles in a Legal System An alternative rationale to the one proposed in the preceding section is as follows:

Stealing is wrong because it violates the law.

One advantage of using law instead of religion as the ground for determining why stealing is wrong is that it eliminates certain kinds of disputes between religious and nonreligious persons and groups. If stealing violates the law of a particular jurisdiction, then the act of stealing can be declared wrong independent of any religious beliefs or disbeliefs—Christian, Muslim, or even agnostic or atheist. And since legal enforcement of rules can be carried out independent of religious beliefs, there is a pragmatic advantage to grounding moral principles (and their corresponding rules) in law rather than in religion: those breaking a civil law can be punished, for example, by either a fine or imprisonment, or both.

But laws are not uniform across political boundaries: Laws vary from nation to nation and state to state within a given nation. In the United States, the unauthorized copying and distribution of proprietary software is explicitly illegal. However, in certain Asian countries, the practice of copying proprietary software is not considered criminal (or even if it is technically viewed as a crime, actual cases of piracy may not be criminally prosecuted). So there can be a diversity of legal systems just as there is a diversity of religious systems.


Perhaps a more serious flaw in using a legal approach is that history has shown that certain laws, although widely accepted, institutionalized, and practiced within a society, have nonetheless been morally wrong. For example, slavery was legally valid in the United States until 1865. And in South Africa, apartheid was legally valid until 1991. So if we attempt to ground moral principles in law, we are still faced with serious challenges. We can also ask whether it is possible, or even desirable, to institutionalize morality in such a way that we would require specific laws for every possible moral issue.

Approach #3: Grounding Moral Principles in a Philosophical System of Ethics A third way to approach the problem of how to ground moral systems is to say:

Stealing is wrong because it is wrong.

Notice what this statement implies. The moral rightness or wrongness of stealing is not grounded in any external authority, theological or legal. So regardless of whether God condemns stealing or whether stealing violates existing civil laws, stealing is held to be wrong in itself. On what grounds can such a claim be made? Many philosophers and ethicists argue that reason alone is sufficient to show that stealing is wrong—reason informs us that there is something either in the very act of stealing or in the consequences of the act that makes stealing morally wrong.

In the case of both religion and law, sanctions in the form of punishments can be applied to deter individuals from stealing. In the first case, punishment for immoral behavior is relegated to the afterlife. And in the second case, punishment can be meted out here and now. In the case of philosophical ethics, sanctions take the form of social disapprobation (disapproval) and, possibly, social ostracism, but there is no punishment in a formal sense.

According to the system of philosophical ethics, stealing is morally wrong by criteria that reason alone is sufficient to determine. Of course, we need to specify what these criteria are; we will do this in Sections 2.4–2.7, where we discuss four kinds of ethical theories.

The Method of Philosophical Ethics: Logical Argumentation and Ethical Theory In Chapter 1, we briefly described the philosophical method and saw how it could be used to analyze cyberethics issues. We also saw that the method philosophers use to analyze moral issues is normative, in contrast to the descriptive method that is used by many social scientists. We saw that sociological and anthropological studies are descriptive because they describe or report how people in various cultures and groups behave with respect to the rules of a moral system. For example, a sociologist might report that people who live in nations along the Pacific Rim believe that it is morally permissible to make copies of proprietary software for personal use. However, it is one thing simply to report or describe what the members of a particular culture believe about a practice such as duplicating proprietary software, and it is something altogether different to say that people ought to be permitted to make copies of that proprietary material. When we inquire into moral issues from the latter perspective, we engage in a normative investigation.

We have seen that normative analyses of morality can involve religion and law as well as philosophy. We have also seen, however, that what separates philosophy from the other two perspectives of normative analysis is the methodology used to study the moral issues. To approach these issues from the perspective of philosophical ethics is, in effect, to engage in a philosophical study of morality.

If you are taking a course in ethics for the first time, you might wonder what is meant by the phrase “philosophical study.” We have already described what is meant by a descriptive study, which is essentially a type of scientific study. Philosophical studies and scientific studies are similar in that they both require that a consistent methodological scheme be used to verify hypotheses and theories; and these verification schemes must satisfy the criteria of rationality and impartiality. But philosophical studies differ from scientific studies in one important respect: Whereas scientists typically conduct experiments in a laboratory to confirm or refute one or more hypotheses, philosophers do not have a physical laboratory to test ethical theories and claims. Instead, philosophers confirm or reject the plausibility of the evidence for a certain claim or thesis via the rules of logical argumentation (which we will examine in Chapter 3); these rules are both rational and impartial. Another important feature that distinguishes a philosophical study of morality from other kinds of normative investigation into morality is the use of ethical theory in the analysis and deliberation of the issues.

Ethicists vs. Moralists We note that ethicists who study morality from the perspective of philosophical methodology, and who thus appeal to logical arguments to justify claims and positions involving morality, are very different from moralists. Moralists often claim to have all of the answers regarding moral questions and issues. Many moralists have been described as “preachy” and “judgmental.” And some moralists may have a particular moral agenda to advance. Ethicists, on the other hand, use the philosophical method in analyzing and attempting to resolve moral issues; they must remain open to different sides of a dispute, and their primary focus is on the study of morality and the application of moral theories. As such, they approach moral issues and controversies by way of standards that are both rational (based on logic) and impartial (open to others to verify). We also examine some of these important distinctions in our analysis of key differences between moral absolutism and moral objectivism, later in this chapter.

c 2.2 DISCUSSION STOPPERS AS ROADBLOCKS TO MORAL DISCOURSE

We have suggested that impartial and objective standards, such as those provided by ethical theory and the rules of logical argumentation, can be used in our analysis of moral issues. However, many people might be surprised that tests and standards of any kind can be applied to disputes about morality and moral issues. So before beginning our examination of the ethical theory, perhaps we should first acknowledge and try to address some concerns that many people frequently encounter when either they willingly engage in, or find themselves involuntarily drawn into, discussions involving moral issues. We will see why these concerns are often based on some conceptual confusions about the nature of morality itself.

Have you ever been engaged in a serious conversation about a moral issue when, all of a sudden, one party in the discussion interjects with a remark to the effect, “But who’s to say what is right or wrong anyway?” Or perhaps someone might interject, “Who are we to impose our values and ideas on others?” Such clichés are just two examples of the kinds of simplistic or nonreflective questions that we are likely to hear in discussions involving moral issues. I call remarks of this type “discussion stoppers” because often they close down prematurely what otherwise might be a useful discussion. These stoppers can take many different forms, and some are more common than others, but we can analyze them in terms of four distinct questions:

1. People disagree about morality, so how can we reach an agreement on moral issues?

2. Who am I/who are we to judge others and to impose my/our values on them?

3. Isn’t morality simply a private matter?

4. Isn’t morality simply a matter that different cultures and groups should determine for themselves?

2.2.1 Discussion Stopper #1: People Disagree on Solutions to Moral Issues

Because different people often have different beliefs as to the correct answer to many moral questions, some infer that there is no hope of reaching any kind of agreement on answers to any moral question. And from this inference, some conclude that any meaningful discourse about morality is impossible. Three crucial points that people who draw these and similar inferences about morality fail to recognize, however, are as follows:

I. Experts in other fields of study, such as science and mathematics, also disagree as to the correct answers to certain questions.

II. There is common agreement as to answers to some moral questions.

III. People do not always distinguish between disagreements about general principles and disagreements about factual matters in disputes involving morality.

We briefly examine each of these points.

Experts in Many Fields Disagree on Fundamental Issues First, we should note that morality is not the only area in which intelligent people have disagreements. Scientists and mathematicians disagree among themselves about core issues in their disciplines, yet we do not dismiss the possibility of meaningful discourse in science and mathematics merely because there is some disagreement among experts in those fields. Consider also that computer scientists disagree among themselves whether open source code is better than proprietary code, whether Linux is a better operating system than Windows 7, or whether C++ is a better programming language than Java.

One example of how natural scientists can disagree among themselves is apparent in the classic and contemporary debate in physics regarding the nature of light. Some physicists argue that light is ultimately composed of particles, whereas others claim that light is essentially composed of waves. Because physicists can disagree with each other, should we conclude that physics itself must be a totally arbitrary enterprise? Or, alternatively, is it not possible that certain kinds of disagreements among scientists might indeed be healthy for science? The debate about the nature of light has actually contributed to moving the field of physics forward in ways that it otherwise might not have progressed. In this sense, then, a certain level of disagreement and dispute among scientists serves a positive and constructive function in the overall enterprise of scientific discovery. Similarly, why not assume that certain kinds of disagreements in ethics—that is, those that are based on points aimed at achieving constructive resolutions—actually contribute to progress in the field of ethics?

Also note that disagreement exists among contemporary mathematicians as to whether or not numbers are constructed (as opposed to having an independent existence). Because mathematicians disagree about the truth of certain claims pertaining to foundational issues in mathematics, does it follow that the field of mathematics itself is arbitrary? Does it also follow that we should give up any hope of eventually reaching an agreement about basic truths in mathematics? And should we dismiss as arbitrary the theories of mathematics as well as the theories of physics, simply because there is some level of disagreement among scholars in both academic fields? Would it be reasonable to do so? If not, then why should one dismiss ethics merely because there is some disagreement among ethicists and among ordinary persons as to the correct answers to some moral issues?

Note that certain conditions (parameters, rules, etc.) must be satisfied in order for a particular claim or a particular theory to qualify as acceptable in debates among scientists and among mathematicians. We will see that certain rules and parameters must also be satisfied in order for a particular claim or theory to qualify as acceptable in debates among ethicists. Just as there are claims and theories in physics and in mathematics that are not considered plausible by the scientific and mathematical communities, similarly, not every claim or theory involving morality is considered reasonable by ethicists. Like mathematicians and scientists, ethicists continue to disagree with one another; for example, they will likely continue to debate about which ethical theories should be applied in the case of cloning and genomic research. But like scientists and mathematicians, ethicists will continue to work within the constraints of certain acceptable rules and parameters in advancing their various theories.

Common Agreement on Some Moral Issues We can now turn to our second point: People have demonstrated considerable agreement on answers to some moral questions, at least with respect to moral principles. We might be inclined to overlook the significant level of agreement regarding ethical principles, however, because, as Gert (2005, 2007) notes, we tend to associate moral issues with highly controversial concerns such as the death penalty, euthanasia, abortion, and cloning, all involving life and death decisions. We tend to forget that there are also many basic moral principles on which we do agree; for instance, nearly everyone believes that people should tell the truth, keep promises, respect their parents, and refrain from activities involving stealing and cheating. And most people agree that “Murder is wrong.” It would be prudent for us to pay closer attention to our beliefs regarding these core moral principles in order to find out why there is such agreement.

So if we agree on many basic moral principles, such as our commonly held beliefs that murder is wrong and stealing is wrong, then why do many people also believe that disputes about moral issues are impossible to resolve? Beliefs and assumptions regarding morality may be based on certain conceptual confusions, and one source of confusion may be our failure to distinguish between the alleged factual matters and the general principles that constitute moral issues. This brings us to our third point.


Disagreements about Principles vs. Disagreements about Facts Richard De George (1999) has pointed out that in analyzing moral issues we need to be very careful in distinguishing our disagreements about moral principles from our disagreements about certain facts, or empirical data, associated with a particular moral issue. For example, in the current debate over intellectual property rights in cyberspace, the dispute is not so much about whether we should accept the moral principle that stealing is wrong, for parties on both sides of the debate would acknowledge that stealing is indeed morally wrong. What they disagree about is whether an activity that involves either the unauthorized copying of proprietary software or the unauthorized exchange of proprietary information over a computer network is itself a form of stealing. In other words, the debate is not about a moral principle, but rather has to do with certain empirical matters, or factual claims.

Recall our discussion of the original Napster controversy in Chapter 1. It might turn out that this particular controversy is not a moral dispute but rather a debate over factual claims. And once the factual questions are resolved, the Napster controversy might be understood as one that is, at bottom, nonmoral in nature. Being able to recognize these distinctions will help us to eliminate some of the confusion surrounding issues that initially are perceived to be moral but ultimately may turn out to be nonmoral, or descriptive.

2.2.2 Discussion Stopper #2: Who Am I to Judge Others?

People are often uncomfortable with the prospect of having to evaluate the moral beliefs and practices of others. We generally feel that it is appropriate to describe the different moral beliefs that others have but that it is inappropriate to make judgments about the moral beliefs held by others. This assumption is problematic at two levels: First, as a matter of descriptive fact, we constantly judge others in the sense that we make certain evaluations about them. And second, from a normative perspective, in certain cases we should make judgments (evaluations) about the beliefs and actions of others. We briefly examine both points.

Persons Making Judgments vs. Persons Being Judgmental First, we need to make an important distinction between “making a judgment” about someone or something and “being a judgmental person.” Because someone makes a judgment, or evaluation, about X, it does not follow that he or she is also necessarily being a judgmental person. For example, a person can make the judgment “Linux is a better operating system than Vista” and yet not be a judgmental person. One can also judge that “Mary is a better computer programmer than Harry” without necessarily being judgmental about either Mary or Harry. Being judgmental is a behavioral trait that is sometimes exhibited by those who are strongly opinionated or who tend to speak disparagingly of anyone who holds a position on some topic that is different from their own. “Judging” in the sense of evaluating something, however, does not require that the person making the judgment be a judgmental person.

We routinely judge, or evaluate, others. We judge others whenever we decide whom we will pursue as friends, as lovers, or as colleagues. Judging is an integral part of social interaction. Without judgment at this level, we would not be able to form close friendships, which we distinguish from mere acquaintances. And it would be difficult for us to make meaningful decisions about where we wish to live, work, recreate, and so forth.

Judgments Involving Condemnations vs. Judgments Involving Evaluations Why do we tend to be so uncomfortable with the notion of judging others? Part of our discomfort may have to do with how we currently understand the term “judge.” As we saw above, we need to be careful to separate the cognitive act of judging (i.e., making judgments about someone or something) from the behavioral trait of “being judgmental.” Consider the biblical injunction that instructs us to refrain from judging others in the sense of condemning them. In that sense of “judge” there would seem to be much wisdom in the biblical injunction.

However, there is also another sense of “judge” that means “evaluate,” which is something we are often required to do in our everyday lives. Consider some of the routine judgments, or evaluations, you make when deciding between competing options available to you in your day-to-day life. When you change jobs or purchase a house or an automobile, you make a judgment about which job, house, or automobile you believe is best for your purposes. When you chose the particular college or university that you are attending, you evaluated that particular institution relative to others.

There are also people employed in professions that require them to make judgments. For example, professional sporting associations employ referees and field judges who make decisions or judgments concerning controversial plays. Judges evaluate contest entries to determine which entries are better than others. Think, for example, about the judging that typically occurs in selecting the winning photographs in a camera club contest. Or consider that when a supervisor writes a performance review for an employee, she is making a judgment about the employee's performance.

Are We Ever Required to Make Judgments about Others? It could be argued that just because we happen to make judgments about others, it doesn’t necessarily follow that we ought to judge persons. However, there are certain occasions when we are not only justified in making judgments about others, but we are also morally obligated to do so. Consider, for instance, that in many societies an individual selects the person that he or she will marry, judging (evaluating) whether the person he or she is considering will be a suitable life-long partner in terms of plans, goals, aspirations, etc. In this case, failing to make such a judgment would be not only imprudent but also, arguably, immoral. It would be immoral because, in failing to make the appropriate judgments, one would not be granting his or her prospective spouse the kind of consideration that he or she deserves.

Next, consider an example involving child abuse. If you see an adult physically abusing a child in a public place by repeatedly kicking the child, can you not at least judge that the adult’s behavior is morally wrong even if you are uncomfortable with making a negative judgment about that particular adult?

Also consider a basic human-rights violation. If you witness members of a community being denied basic human rights, should you not judge that community's practice as morally wrong? For example, if women in Afghanistan are denied education, medical treatment, and jobs solely on the grounds that they are women, is it wrong to make the judgment that such practices, as well as the system that permits those practices, are immoral?

So it would seem that some serious confusions exist with respect to two distinct situations: (1) someone making a judgment about X, and (2) someone being a judgmental person. With that distinction in mind, we can avoid being judgmental and yet still make moral judgments when appropriate, and especially when we are obligated to do so.

2.2.3 Discussion Stopper #3: Morality Is Simply a Private Matter

Many people assume that morality is essentially personal in nature and must, therefore, be simply a private matter. Initially, such a view might seem reasonable, but it is actually both confused and problematic. In fact, “private morality” is essentially an oxymoron, or contradictory notion. For one thing, morality is a public phenomenon—recall our discussion of Gert’s account of morality as a “public system” in Section 2.1.1, where we saw that a moral system includes a set of public rules that apply to all of the members of that system. Thus morality cannot be reduced to something that is simply private or personal.

We have already seen that morality is a system of normative rules and standards whose content is studied by ethicists in the same way that mathematicians study the content of the field of mathematics. Would it make sense to speak of personal mathematics, personal chemistry, or personal biology? Such notions sound absurd because each discipline has a content area and a set of standards and criteria, all of which are open and available to all to examine. Since public rules make up the content of a moral system, which itself can be studied, we can reasonably ask how it would make sense to speak of private morality.

If morality were simply a private matter, then it would follow that a study of morality could be reduced to a series of descriptive reports about the personal preferences or personal tastes of individuals and groups. But is such an account of morality adequate? Are the moral choices that we make nothing more than mere personal choices? If you happen to prefer chocolate ice cream and I prefer vanilla, or if you prefer to own a laptop computer and I prefer to own a desktop computer, we will probably not choose to debate these preferences. You may have strong personal beliefs as to why chocolate ice cream is better than vanilla and why laptop computers are superior to desktop computers; however, you will most likely respect my preferences for vanilla ice cream and desktop computers, and, in turn, I will respect your preferences.

Do moral choices fit this same kind of model? Suppose you happen to believe that stealing is morally wrong, but I believe that stealing is okay (i.e., morally permissible). One day, I decide to steal your laptop computer. Do you have a right to complain? You would not, if morality is simply a private matter that reflects an individual's personal choices. Your personal preference may be not to steal, whereas my personal preference is for stealing. If morality is grounded simply in terms of the preferences that individuals happen to have, then it would follow that stealing is morally permissible for me but is not for you. But why stop with stealing? What if I happen to believe that killing human beings is okay? So, you can probably see the dangerous implications for a system in which moral rules and standards are reducible to personal preferences and personal beliefs.

The view that morality is private and personal can quickly lead to a position that some ethicists describe as moral subjectivism. According to this position, what is morally right or wrong can be determined by individuals themselves, so that morality would seem to be in the "eye of the beholder." Moral subjectivism makes pointless any attempt to engage in meaningful ethical dialogue.

2.2.4 Discussion Stopper #4: Morality Is Simply a Matter for Individual Cultures to Decide

Some might assume that morality can best be understood not so much as a private or a personal matter but as something for individual cultures or specific groups to determine. According to this view, a moral system is dependent on, or relative to, a particular culture or group. Again, this view might initially seem quite reasonable; it is a position that many social scientists have found attractive. To understand some of the serious problems inherent in this position, it is useful to distinguish between cultural relativism and moral relativism.

Cultural Relativism Cultures play a crucial role in the transmission of the values and principles that constitute a moral system. It is through culture that initial beliefs involving morality are transmitted to an individual. In this sense cultures provide their members with what ethicists often refer to as "customary morality," or conventional morality, where one's moral beliefs are typically nonreflective (or perhaps prereflective). For example, if asked whether you believe that acts such as pirating software or invading someone's privacy are wrong, you might simply reply that both kinds of behavior are wrong because your society taught you that they are wrong. However, is it sufficient for one to believe that these actions are morally wrong merely because his or her culture says they are wrong? Imagine, for example, a culture in which the principle "Murder is wrong" is not transmitted to its members. Does it follow that murdering people would be morally permissible for the members of that culture?

The belief that morality is simply a matter for individual cultures to decide is widespread in our contemporary popular culture. This view is often referred to as cultural relativism, and at its base is the following assumption:

A. Different cultures have different beliefs about what constitutes morally right and wrong behavior.

Note that this assumption is essentially descriptive in nature, because it makes no normative judgment about either the belief systems of cultures or the behavior of people in those cultures. Although it is generally accepted that different cultures have different conceptions about what is morally right and morally wrong behavior, this position has been challenged by some social scientists who argue that some of the reported differences between cultures have been greatly exaggerated. Other social scientists suggest that all cultures may possess some universal core moral values.5

However, let us assume that claim (A) is true and ask whether it logically implies (B).

B. We should not morally evaluate the behavior of people in cultures other than our own (because different cultures have different belief systems about what constitutes morally right and wrong behavior).


Note that (B) is a different kind of claim than (A). Also note that to move from (A) to (B) is to move from cultural relativism to moral relativism.

Moral Relativism What are the differences between the two forms of relativism? We saw that cultural relativism is essentially a descriptive thesis, merely reporting that people’s moral beliefs vary from culture to culture. Moral relativism, on the contrary, is a normative thesis because it asserts that one should not make moral judgments about the behavior of people who live in cultures other than one’s own. However, critics point out that if moral relativists are correct, then any kind of behavior can be morally acceptable— provided that such behavior is approved by the majority of people in a particular culture.

Critics also note that the moral relativist’s reasoning is flawed. For example, they point out that sometimes it is appropriate for people to question certain kinds of behavioral practices, regardless of where those practices are carried out. Consider a specific case involving a practice in some cultures and tribes in West Africa, where a ritual of female circumcision is performed. Is it wrong for those living outside these cultures to question this practice from the perspective of morality or human rights? Although this practice has been a tradition for generations, some females living in tribes that still perform it on teenage girls have objected. Let us assume, however, that the majority of members of cultures that practice female circumcision approve it. Would it be inappropriate for those who lived outside of West Africa to question whether it is morally wrong to force some women to experience this ritual against their wishes? And if so, is it inappropriate (perhaps even morally wrong) to question the practice simply because the persons raising such questions are not members of the particular culture?

If we embrace that line of reasoning used by the moral relativist, does it follow that a culture can devise any moral scheme it wishes as long as the majority of its members approve it? If so, is moral relativism a plausible thesis? Perhaps the following scenario can help us to understand further the flawed reasoning in moral relativism.

c SCENARIO 2–2: The Perils of Moral Relativism

Two cultures, Culture A and Culture B, adjoin each other geographically. The members of Culture A are fairly peaceful people, tolerant of the diverse beliefs found in all other cultures. And they believe that all cultures should essentially mind their own business when it comes to matters involving morality. Those in Culture B, on the contrary, dislike and are hostile to those outside their culture. Culture B has recently developed a new computer system for delivering chemical weapons that it plans to use in military attacks on other cultures, including Culture A. Since Culture A subscribes to the view of moral relativism, and thus must respect the views of all cultures with regard to their systems of moral beliefs, can it condemn, in a logically consistent manner, Culture B's actions as immoral? &

Because Culture A embraces moral relativism, it must be tolerant of all of Culture B's practices and actions, as it would in the case of all cultures. Furthermore, Culture A cannot condemn the actions of Culture B, since, in the relativist's view, moral judgments about Culture B can be made only by those who reside in that culture. So, Culture A cannot say that Culture B's actions are morally wrong.


Moral relativists can only say that Cultures A and B are different. They cannot say that one is better than another, or that the behavior in one is morally permissible while the other is morally impermissible. Consider that while the systems for treating Jews used by the Nazis and by the British in the 1940s were clearly different, moral relativists could not say, with any sense of logical consistency, that one system was morally superior to the other. In the same way, Culture B cannot be judged by Culture A to be engaging in morally wrong conduct even though Culture B wishes to destroy A and to kill all of its members. Perhaps you can see that there is a price to pay for being a moral relativist. Is that price worth paying?

Although moral relativism might initially seem attractive as an ethical position, we can now see why it is conceptually flawed. To debate moral issues, we need a conceptual and methodological framework that can provide us with impartial and objective criteria to guide us in our deliberations. Otherwise, ethical debate might quickly reduce to a shouting match in which those with the loudest voices or, perhaps worse yet, those with the “biggest sticks” win the day.

Moral Absolutism and Moral Objectivism Why is moral relativism so attractive to so many people, despite its logical flaws? Pojman (2006) notes that many people tend to assume that if they reject moral relativism, they must automatically endorse some form of moral absolutism. But do they necessarily need to make an either/or choice here? Pojman and others believe that it is possible to hold a view called ethical objectivism, which is between the two extremes.6 Recall our earlier distinction between ethicists and moralists at the end of Section 2.2; the group that we identified there as moralists are similar to moral absolutists in that both believe they have all of the correct answers for every moral question. Whereas absolutists argue that there is only one uniquely correct answer to every moral question, moral relativists assume that there are no universally correct answers to any moral questions. Moral objectivists disagree with both positions; they disagree with absolutists by pointing out that there can be more than one acceptable answer to some moral questions, despite the fact that most cultures agree on the answers to many moral issues. For example, we saw that there is considerable agreement across cultures on principles such as "murder is morally wrong" and "stealing is morally wrong." However, objectivists also acknowledge that reasonable people can nonetheless disagree on what the correct answers are to some moral questions.

Objectivists also differ from relativists in at least one important respect. Relativists suggest that any answer to a moral question can be appropriate, as long as the majority in a culture hold that view. Objectivists such as Gert (2005, 2007) counter by arguing that even if there is no uniquely correct answer to every moral question, there are nonetheless many incorrect answers to some of these questions.7 To illustrate this point, consider an analogy involving a normative dispute that happens to be nonmoral in nature—viz., a debate about who was the greatest baseball player of all time. Reasonable people could disagree on the correct answer to this normative question. For example, some might argue that it was Babe Ruth or Hank Aaron; others could reasonably claim that it was Ty Cobb or Joe DiMaggio. All four answers are objectively plausible. But someone could not reasonably defend the claim that the best baseball player was Danny Ainge or Stan Papi, since those answers are clearly unacceptable (even if we, as individuals, happen to like these former baseball players). So, there are definitely some wrong answers to this normative question, and thus we cannot endorse the "anything goes" view of relativists in defending a rational answer to the question concerning the greatest baseball player of all time. The rationale used in this scenario can be extended to the analysis of normative questions that are moral in nature.

We can now see how moral objectivism offers an alternative to the extreme views of moral relativism and moral absolutism. Unlike moral absolutism, objectivism allows for a plurality of plausible answers to some controversial moral questions, provided that certain rational criteria are satisfied. But unlike relativists, objectivists would not find every answer acceptable, because some answers would fall outside the criteria of (rationally defensible) moral behavior, in the same way that some answers fell outside the criteria for rationally acceptable answers to the normative question about the greatest baseball player. Because moral objectivism allows for the possibility that there may be more than one (rationally) acceptable answer to at least some moral questions, it is compatible with a view that some call "ethical pluralism" (Ess 2006). Although objectivism and pluralism do not entail moral relativism, they allow for multiple ethical theories—provided, of course, that those theories satisfy objective criteria. Because relativism fails to satisfy such criteria, however, it cannot be included in the list of "objective" ethical theories we will examine (such as utilitarianism, deontology, etc.) in the remaining sections of this chapter.

Fortunately, ethical theory can provide us with criteria for objectively analyzing moral issues so that we can avoid the problems of moral relativism without having to endorse moral absolutism. Before proceeding directly to our discussion of ethical theories, however, it would be useful to summarize some of the key points in our analysis of the four discussion stoppers. Table 2.2 summarizes these points.

TABLE 2.2 Summary of Logical Flaws in the Discussion Stoppers

Stopper #1: People disagree on solutions to moral issues.
1. Fails to recognize that experts in many areas disagree on key issues in their fields.
2. Fails to recognize that there are many moral issues on which people agree.
3. Fails to distinguish between disagreements about principles and disagreements about facts.

Stopper #2: Who am I to judge others?
1. Fails to distinguish between the act of judging and being a judgmental person.
2. Fails to distinguish between judging as condemning and judging as evaluating.
3. Fails to recognize that sometimes we are required to make judgments.

Stopper #3: Ethics is simply a private matter.
1. Fails to recognize that morality is essentially a public system.
2. Fails to note that personally based morality can cause major harm to others.
3. Confuses moral choices with individual or personal preferences.

Stopper #4: Morality is simply a matter for individual cultures to decide.
1. Fails to distinguish between descriptive and normative claims about morality.
2. Assumes that people can never reach common agreement on some moral principles.
3. Assumes that a system is moral because a majority in a culture decides it is moral.


c 2.3 WHY DO WE NEED ETHICAL THEORIES?

In our analysis of the four discussion stoppers, we saw some of the obstacles that we encounter when we debate moral issues. Fortunately, there are ethical theories that can guide us in our analysis of moral issues involving cybertechnology. But why do we need something as formal as ethical theory? An essential feature of theories in general is that they guide us in our investigations and analyses. Science uses theory to provide us with general principles and structures with which we can analyze our data. Ethical theory, like scientific theory, provides us with a framework for analyzing moral issues via a scheme that is internally coherent and consistent as well as comprehensive and systematic. To be coherent, a theory's individual elements must fit together to form a unified whole. To be consistent, a theory's component parts cannot contradict each other. To be comprehensive, a theory must be able to be applied broadly to a wide range of actions. And to be systematic, a theory cannot simply address individual symptoms peculiar to specific cases while ignoring general principles that would apply in similar cases.

Recall our brief analysis of the moral dilemma involving the runaway trolley (Scenario 2–1) in the opening section of this chapter. There we saw how easy it might be for a person to use two different, and seemingly inconsistent, forms of reasoning in resolving the dilemma, depending on whether that person was driving the trolley or merely observing it as a bystander on a bridge. Of course, we might be inclined to think that it is fine to flip-flop on moral decisions, since many people seem to do this much of the time. But philosophers and logicians in general, and ethicists in particular, point out many of the problems that can arise with inconsistent reasoning about moral issues.

Some critics, however, might be inclined to respond that philosophers and ethicists often dream up preposterous moral dilemmas, such as the trolley case, to complicate our decision-making process. Yet, the trolley scenario may not be as far-fetched as some critics might assume. Consider that classic dilemmas involving humans in general, and human drivers of vehicles in particular, will likely take on even more significance in the near future when human drivers of commercial vehicles are replaced by computer systems, which are typically referred to as “autonomous systems.” In fact, the transport systems connecting terminal buildings in some large airports are now operated by (“driverless”) autonomous systems. (In Chapter 12, we examine some specific challenges we will need to face as autonomous systems replace more and more humans who currently drive commercial vehicles.)

Next consider a slight variation or twist in Scenario 2–1. Imagine that a "driverless" trolley—i.e., a trolley being "driven" by an autonomous computer system—is in the same predicament as the one facing the human driver described in that scenario.8 If you were a software engineer or a member of the team developing the computer system designed to "drive" this trolley, what kind of "ethical-decision-making" instructions would you recommend be built into the autonomous system? Should the autonomous computer system be instructed (i.e., programmed) to reason in a way that it would likely reach a decision to "throw the switch" to save five humans who otherwise would die (as a result of the failed braking system), thus steering the trolley instead in a direction that will intentionally kill one human? In other words, should the "computerized driver" be embedded mainly (or perhaps even exclusively) with programming code that would influence (what we earlier called) consequentialist- or utilitarian-like moral-decision making? Alternatively, should programming code that would support non-consequentialist decision-making considerations also be built into this autonomous system? We postpone our analysis of these kinds of questions (involving "machine ethics") until Chapter 12; for now, we focus on challenges that ordinary humans have in determining how to apply ethical theories in their deliberations.
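Although the substantive discussion of machine ethics is deferred to Chapter 12, it may help to see, in rough outline, what it could even mean to "build in" one kind of moral reasoning rather than another. The short Python sketch below is purely illustrative and is not drawn from the text: the names Outcome, choose_consequentialist, and choose_deontological, and the numbers used, are invented assumptions. It simply shows that a consequentialist rule and a non-consequentialist constraint can yield different verdicts on the same dilemma, and that even labeling an option as one that "intentionally harms" a person already embeds a contested moral judgment.

# A hypothetical sketch, not an actual design: all names and values here are
# invented for illustration only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Outcome:
    action: str                 # e.g., "stay on course" or "throw the switch"
    lives_lost: int             # expected fatalities if this action is taken
    intentionally_harms: bool   # does the action deliberately target a person?

def choose_consequentialist(options: List[Outcome]) -> Outcome:
    # Pick whichever action minimizes overall harm (fewest lives lost).
    return min(options, key=lambda o: o.lives_lost)

def choose_deontological(options: List[Outcome]) -> Optional[Outcome]:
    # Rule out any action that deliberately uses a person merely as a means;
    # return a permissible option if one remains, otherwise None.
    permissible = [o for o in options if not o.intentionally_harms]
    return permissible[0] if permissible else None

options = [
    Outcome("stay on course", lives_lost=5, intentionally_harms=False),
    Outcome("throw the switch", lives_lost=1, intentionally_harms=True),
]
print(choose_consequentialist(options).action)   # "throw the switch"
print(choose_deontological(options).action)      # "stay on course"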

Next imagine that as a result of an accident (involving a runaway trolley), five people are rushed to the hospital. Each patient, whose condition is "critical," is in need of a vital human organ to live, and there is not sufficient time to get these organs from a transplant-donor bank located outside the hospital. Also, the hospital happens to be understaffed with surgeons at the time the accident victims are admitted to the emergency ward. So a medical physician (Dr. Smith) on duty at the hospital, who is administering a post-surgery physical exam to a patient in one room, is suddenly called into the emergency room. Dr. Smith determines that one patient needs a heart, and another a kidney; a third patient needs a liver; a fourth, a pancreas; and a fifth, a pair of lungs. Smith also determines that unless the victims receive the organ transplants immediately, each will die. Then it suddenly occurs to Dr. Smith that the hospital patient on whom he had been conducting the physical exam is in excellent health. If the healthy patient's organs were removed and immediately given to each accident victim, all five would live. Of course, the healthy patient would die as a result. But the net effect would be that four more humans would live. What should Smith do in this case? What would you do if you were in the doctor's shoes?

As you have probably determined at this point, it is helpful to have in place a systematic, comprehensive, coherent, and consistent set of principles or rules to guide us in our moral decisions. To that end, various kinds of ethical theories have been developed. We next examine four standard types of ethical theories: consequence-based, duty-based, contract-based, and character-based.

c 2.4 CONSEQUENCE-BASED ETHICAL THEORIES

Some have argued that the primary goal of a moral system is to produce desirable consequences or outcomes for its members. For these ethicists, the consequences (i.e., the ends achieved) of actions and policies provide the ultimate standard against which moral decisions must be evaluated. So if one must choose between two courses of action— that is, either “Act A” or “Act B”—the morally correct action will be the one that produces the most desirable outcome. Of course, we can further ask the question, “Whose outcome” (i.e., “the most desirable outcome for whom”)? Utilitarians argue that the outcome or consequences for the greatest number of individuals, or the majority, in a given society is paramount in moral deliberation. According to the utilitarian theory,

An individual act (X) or a social policy (Y) is morally permissible if the consequences that result from (X) or (Y) produce the greatest amount of good for the greatest number of persons affected by the act or policy.

Utilitarians stress the “social utility” or social usefulness of particular actions and policies by focusing on the consequences that result from those actions and policies. Jeremy Bentham (1748–1832), who was among the first philosophers to formulate utilitarian ethical theory in a systematic manner, defended this theory via two claims:

I. Social utility is superior to alternative criteria for evaluating moral systems.

II. Social utility can be measured by the amount of happiness produced.


According to (I), the moral value of actions and policies ought to be measured in terms of their social usefulness (rather than via abstract criteria such as individual rights or social justice). The more utility that specific actions and policies have, the more they can be defended as morally permissible actions and policies. In other words, if Policy Y encourages the development of a certain kind of computer software, which in turn would produce more jobs and higher incomes for those living in Community X, then Policy Y would be considered more socially useful and thus the morally correct policy. But how do we measure overall social utility? That is, which criterion can we use to determine the social usefulness of an act or a policy? The answer to this question can be found in (II), which has to do with happiness.

Bentham argued that nature has placed us under two masters, or sovereigns: pleasure and pain. We naturally desire to avoid pain and to seek pleasure or happiness. However, Bentham believed that it is not the maximization of individual pleasure or happiness that is important, but rather generating the greatest amount of happiness for society in general. Since it is assumed that all humans, as individuals, desire happiness, it would follow on utilitarian grounds that those actions and policies that generate the most happiness for the most people are most desirable. Of course, this reasoning assumes:

a. All people desire happiness.

b. Happiness is an intrinsic good that is desired for its own sake.

We can ask utilitarians what proof they have for either (a) or (b). John Stuart Mill (1806–1873) offered the following argument for (a):

The only possible proof showing that something is audible is that people actually hear it; the only possible proof that something is visible is that people actually see it; and the only possible proof that something is desirable is that people actually desire it.

From the fact that people desire happiness, Mill inferred that promoting happiness ought to be the criterion for justifying a moral system. Unlike other goods that humans desire as means to one or more ends, Mill argued that people desire happiness for its own sake. Thus, he concluded that happiness is an intrinsic good. (Recall our earlier discussion of intrinsic values in Section 2.1.2.)

You might consider applying Mill's line of reasoning to some of your own goals and desires. For example, if someone asked why you are taking a particular college course (such as a course in cyberethics), you might respond that you need to satisfy three credit hours of course work in your major field of study or in your general education requirements. If you were then asked why you need to satisfy those credit hours, you might respond that you would like to earn a college degree. If next someone asks you why you wish to graduate from college, you might reply that you wish to get a good-paying job. If you are then asked why you want a good-paying job, your response might be that you wish to purchase a home and that you would like to be able to save some money. If asked why again, you might reply that saving money would contribute to your long-term financial and emotional security. And if further asked why you want to be financially and emotionally secure, you might respond that ultimately you want to be happy. So, following this line of reasoning, utilitarians conclude that happiness is an intrinsic good—that is, something that is good in and of itself, for its own sake, and not merely a means to some further end or ends.


2.4.1 Act Utilitarianism

We noted above that utilitarians look at the expected outcomes or consequences of an act to determine whether or not that act is morally permissible. However, some critics point out that because utilitarianism tends to focus simply on the roles that individual acts and policies play in producing the overall social good (the greatest good for the greatest number), it is conceptually flawed. Consider a hypothetical scenario in which a new controversial policy is being debated.

c SCENARIO 2–3: A Controversial Policy in Newmerica

A policy is under consideration in a legislative body in the nation of Newmerica, where 1% of the population would be forced to work as slaves in a manufacturing facility to produce computer chips. Proponents of this policy argue that, if enacted into law, it would result in lower prices for electronic devices for consumers in Newmerica. They argue that it would also likely result in more overall happiness for the nation's citizens because the remaining 99% of the population, who are not enslaved, would be able to purchase electronic devices and other computer-based products at a much lower price. Hence, 99% of Newmerica's population benefit at the expense of the remaining 1%. This policy clearly seems consistent with the principle of producing the greatest good for the greatest number of Newmerica's population, but should it be enacted into law? &

The above scenario illustrates a major flaw in at least one version of utilitarianism, viz., act utilitarianism. According to act utilitarians,

An act, X, is morally permissible if the consequences produced by doing X result in the greatest good for the greatest number of persons affected by Act X.

All things being equal, actions that produce the greatest good (happiness) for the greatest number of people seem desirable. However, policies and practices based solely on this principle can also have significant negative implications for those who are not in the majority (i.e., the greatest number). Consider the plight of the unfortunate few who are enslaved in the computer chip-processing plant in the above scenario. Because of the possibility that such bizarre cases could occur, some critics who embrace the goals of utilitarianism in general reject act utilitarianism.

Critics who reject the emphasis on the consequences of individual acts point out that in our day-to-day activities we tend not to deliberate on each individual action as if that action were unique. Rather, we are inclined to deliberate on the basis of certain principles or general rules that guide our behavior. For example, consider some principles that may guide your behavior as a consumer. Each time that you enter a computer store, do you ask yourself, “Shall I steal this particular software game in this particular store at this particular time?” Or have you already formulated certain general principles that guide your individual actions, such as: it is never morally permissible to steal? In the latter case, you are operating at the level of a rule or principle rather than deliberating at the level of individual actions.

2.4.2 Rule Utilitarianism

Some utilitarians argue that the consequences that result from following rules or principles, not the consequences of individual acts, ultimately matter in determining whether or not a certain practice is morally permissible. This version of utilitarian theory, called rule utilitarianism, can be formulated in the following way:

An act, X, is morally permissible if the consequences of following the general rule, Y, of which act X is an instance, would bring about the greatest good for the greatest number.

Note that here we are looking at the consequences that result from following certain kinds of rules as opposed to consequences resulting from performing individual acts. Rule utilitarianism eliminates as morally permissible those cases in which 1% of the population is enslaved so that the majority (the remaining 99%) can prosper. Rule utilitarians believe that policies that permit the unjust exploitation of the minority by the majority will also likely have overall negative social consequences and thus will not be consistent with the principal criterion of utilitarian ethical theory.
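To make the contrast between the two formulations a bit more concrete, the following toy sketch (not part of the text; the utility numbers and the function names act_is_permissible and rule_is_permissible are invented assumptions) shows how an act-level calculation can appear to endorse the Newmerica policy, while a rule-level calculation, which scores the consequences of everyone living under the general rule, can reject it.

# A deliberately simplified, hypothetical model; all scores are stipulated.
from typing import List

def act_is_permissible(happiness_if_done: List[float],
                       happiness_if_not_done: List[float]) -> bool:
    # Act utilitarianism: compare total happiness for this one act.
    return sum(happiness_if_done) > sum(happiness_if_not_done)

def rule_is_permissible(happiness_under_rule: List[float],
                        happiness_without_rule: List[float]) -> bool:
    # Rule utilitarianism: compare total happiness for everyone living under
    # the general rule versus living without it.
    return sum(happiness_under_rule) > sum(happiness_without_rule)

# Newmerica-style case with made-up utilities: 99 consumers each gain a little
# from cheaper chips; the one enslaved worker suffers a loss that, in this toy
# scoring, is numerically outweighed.
done     = [2.0] * 99 + [-50.0]
not_done = [0.0] * 100
print(act_is_permissible(done, not_done))            # True: the flaw noted above

# Under the general rule "a minority may be enslaved whenever the majority
# benefits," everyone lives with insecurity, so (by assumption) totals drop.
under_rule   = [-1.0] * 100
without_rule = [0.0] * 100
print(rule_is_permissible(under_rule, without_rule))  # False: the rule fails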

How would a rule utilitarian reason in the case of the trolley accident involving five victims (described in the preceding section) each of whom needs an organ transplant to survive? For an (extreme) act utilitarian, the decision might be quite simple: remove the five organs from the one healthy patient (even though he will die) so that five humans who otherwise would die could now live. But would a rule utilitarian see this particular action as justifiable on rule-utilitarian grounds—i.e., could it form the basis for an acceptable policy (in general) for hospitals and medical facilities?

Imagine a society in which it is possible for a person to report to a medical center for a routine physical exam only to discover that his or her vital organs could be removed in order to save a greater number of people. Would anyone be willing to submit to a routine physical exam in such a society? Of course, a rule utilitarian could easily reject such a practice on the following grounds: Policies that can intentionally cause the death of an innocent individual ought not to be allowed, even if the net result of following such policies meant that more human lives would be saved. For one thing, such a policy would seem unfair to all who are adversely affected. But perhaps more importantly from a rule utilitarian's perspective, adopting such a policy would not result in the greatest good for society.

Rule utilitarianism would seem to be a more plausible ethical theory than act utilitarianism. However, some critics reject all versions of utilitarianism because they believe that no matter how this theory is expressed, utilitarianism is fundamentally flawed. These critics tend to attack one or both of the following aspects of utilitarian theory:

I. Morality is basically tied to the production of happiness or pleasure.

II. Morality can ultimately be decided by consequences (of either acts or policies).

Critics of utilitarianism argue that morality can be grounded neither in consequences nor in happiness. Hence, they argue that some alternative criterion or standard is needed.

c 2.5 DUTY-BASED ETHICAL THEORIES

Immanuel Kant (1724–1804) argued that morality must ultimately be grounded in the concept of duty, or obligations that humans have to one another, and never in the consequences of human actions. As such, morality has nothing to do with the promotion of happiness or the achievement of desirable consequences. Thus Kant rejects utilitarianism in particular, and all consequentialist ethical theories in general. He points out that, in some instances, performing our duties may result in our being unhappy and may not necessarily lead to consequences that are considered desirable. Theories in which the notion of duty, or obligation, serves as the foundation for morality are called deontological theories because they derive their meaning from the Greek root deon, which means duty. How can a deontological theory avoid the problems that plague consequentialist theories such as utilitarianism? Kant provides two answers to this question, one based on our nature as rational creatures, and the other based on the notion that human beings are ends-in-themselves. We briefly consider each of Kant's arguments.

What does Kant mean when he says that humans have a rational nature? Kant argues that what separates us from other kinds of creatures, and what binds us morally, is our rational capacity. Unlike animals who may be motivated only by sensory pleasure, humans have the ability to reason and deliberate. So Kant reasons that if our primary nature were such that we merely seek happiness or pleasure, as utilitarians suggest, then we would not be distinguishable from other creatures in morally relevant ways. But because we have a rational capacity, we are able to reflect upon situations and make moral choices in a way that other kinds of (nonrational) creatures cannot. Kant argues that our rational nature reveals to us that we have certain duties or obligations to each other as “rational beings” in a moral community.

We can next examine Kant's second argument, which concerns the roles of human beings as ends-in-themselves. We have seen that in focusing on criteria involving the happiness of the majority, utilitarians allow, even if unintentionally, that the interests and well-being of some humans can be sacrificed for the ends of the greatest number. Kant argues that a genuinely moral system would never permit some humans to be treated simply as means to the ends of others. He also believes that if we are willing to use a standard based on consequences (such as social utility) to ground our moral system, then that system will ultimately fail to be a moral system. Kant argues that each individual, regardless of his or her wealth, intelligence, privilege, or circumstance, has the same moral worth. From this, Kant infers that each individual is an end in him- or herself and, therefore, should never be treated merely as a means to some end. Thus we have a duty to treat fellow humans as ends.

2.5.1 Rule Deontology

Is there a rule or principle that can be used in an objective and impartial way to determine the basis for our moral obligations? For Kant, there is such a standard or objective test, which can be formulated in a principle that he calls the categorical imperative. Kant’s imperative has a number of variations, and we will briefly examine two of them. One variation of his imperative directs us to

Act always on that maxim or principle (or rule) that ensures that all individuals will be treated as ends-in-themselves and never merely as a means to an end.

Another variation of the categorical imperative can be expressed in the following way:

Act always on that maxim or principle (or rule) that can be universally binding, without exception, for all human beings.9


Kant believed that if everyone followed the categorical imperative, we would have a genuinely moral system. It would be a system based on two essential principles: universality and impartiality. In such a system, every individual would be treated fairly since the same rules would apply universally to all persons. And because Kant’s imperative observes the principle of impartiality, it does not allow for one individual or group to be privileged or favored over another. In other words, if it is morally wrong for you to engage in a certain action, then it is also morally wrong for all persons like you—that is, all rational creatures (or moral agents)—to engage in that action. And if you are obligated to perform a certain action, then every moral agent is likewise obligated to perform that action. To illustrate Kant’s points about the role that universal principles play in a moral system, consider the following scenario.

c SCENARIO 2–4: Making an Exception for Oneself

Bill, a student at Technical University, approaches his philosophy instructor, Professor Kanting, after class one day to turn in a paper that is past due. Professor Kanting informs Bill that since the paper is late, he is not sure that he will accept it. But Bill replies to Professor Kanting in a way that suggests that he is actually doing his professor a favor by turning in the paper late. Bill reasons that if he had turned in the paper when it was due, Professor Kanting would have been swamped with papers. Now, however, Kanting will be able to read Bill's paper in a much more leisurely manner, without having the stress of so many papers to grade at once. Professor Kanting then tells Bill that he appreciates his concern about his professor's well-being, but he asks Bill to reflect a bit on his rationale in this incident. Specifically, Kanting asks Bill to imagine a case in which all of the students in his class, fearing that their professor would be overwhelmed with papers arriving at the same time, decided to turn their papers in one week late. &

On deontological grounds, Bill can only make an exception for himself if everyone else (in this case, every other student in Bill's class) had the right to make exceptions for him- or herself as well. But if everyone did that, then what would happen to the very notion of following rules in a society? Kant believed that if everyone decided that he or she could make an exception for him- or herself whenever it was convenient to do so, we couldn't even have practices such as promise keeping and truth telling. For those practices to work, they must be universalizable (i.e., apply to all persons equally) and impartial. When we make exceptions for ourselves, we violate the principle of impartiality, and we treat others as means to our ends.

In Kant’s deontological scheme, we do not consider the potential consequences of a certain action or of a certain rule to determine whether that act is morally permissible. Rather, the objective rule to be followed—that is, the litmus test for determining when an action will have moral worth—is whether the act complies with the categorical imperative.
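The following schematic sketch is not from the text: the Maxim class and its predicate names are invented, and the hard philosophical work of deciding how to set those flags is precisely what no program can do for us. It simply illustrates what such a "litmus test" looks like when the two variations of the categorical imperative are treated as a pair of conditions that a maxim must satisfy.

# A hypothetical illustration only; whether a maxim is universalizable, or
# treats persons merely as means, is a judgment no program can make for us.
from dataclasses import dataclass

@dataclass
class Maxim:
    description: str
    universalizable: bool                # could everyone act on it without contradiction?
    treats_persons_merely_as_means: bool

def complies_with_categorical_imperative(m: Maxim) -> bool:
    # On this simplified reading, an act has moral worth only if its maxim
    # passes both formulations of the imperative.
    return m.universalizable and not m.treats_persons_merely_as_means

late_paper = Maxim(
    description="Hand my paper in late whenever it suits me",
    universalizable=False,               # if every student did this, the practice collapses
    treats_persons_merely_as_means=True,
)
print(complies_with_categorical_imperative(late_paper))   # False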

For a deontologist such as Kant, enslaving humans would always be immoral, regardless of whether the practice of having slaves might result in greater social utility for the majority (e.g., being able to purchase consumer products at a lower price) than the practice of not allowing slavery. The practice of slavery is immoral, not because it might have negative social consequences in the long term, but because

a. it allows some humans to be used only as a means to an end; and

b. a practice such as slavery could not be consistently applied in an objective, impartial, and universally binding way.



Kant would ask, for example, whether we could consistently impose a universal maxim that would allow slavery. He believed that we could not consistently (in a logically coherent sense) formulate such a principle that would apply to all humans, unless we also were willing to be subject to slavery. If we allow for the practice that some individuals can be enslaved but not others, then we would be allowing for exceptions to the moral rule. We would also allow some individuals to be used merely as a means to the ends of others rather than having a system in which all humans are treated as ends-in-themselves.

Although Kant’s version of deontological ethics avoids many of the difficulties of utilitarianism, it, too, has been criticized as an inadequate ethical theory. Critics point out, for example, that even if Kant’s categorical imperative provides us with the ultimate test for determining when some particular course of action is our duty, it will not help us in cases where we have two or more conflicting duties. Consider that, in Kant’s system, we have duties both to keep promises and tell the truth. Thus, acts such as telling a lie or breaking a promise can never be morally permissible. However, Kant’s critics point out that sometimes we encounter situations in which we are required either to tell the truth and break a promise or to keep a promise and tell a lie. In these cases, we encounter genuine moral dilemmas. Kant’s deontological theory does not provide us with a mechanism for resolving such conflicts.

2.5.2 Act Deontology

Although Kant's version of deontology has at least one significant flaw, some philosophers believe that a deontological account of morality is nonetheless the correct kind of ethical theory. They also believe that a deontological ethical theory can be formulated in a way that avoids the charges of Kant's critics. One attempt at reformulating this theory was made by David Ross (1930). Ross rejects utilitarianism for many of the same reasons that Kant does. However, Ross also believes that Kant's version of deontology is not fully adequate.

Ross argues that when two or more moral duties clash, we have to look at individual situations in order to determine which duty will override another. Like act utilitarians, then, Ross stresses the importance of analyzing individual situations to determine the morally appropriate course of action to take. Unlike utilitarians, however, Ross believes that we must not consider the consequences of those actions in deliberating over which course of action morally trumps, or outweighs, another. Like Kant, Ross believes that the notion of duty is the ultimate criterion for determining morality. But unlike Kant, Ross does not believe that blind adherence to certain maxims or rules can work in every case for determining which duties we must ultimately carry out.

Ross believes that we have certain prima facie (or self-evident) duties, which, all things being equal, we must follow. He provides a list of prima facie duties such as honesty, benevolence, justice, and so forth. For example, each of us has a prima facie duty not to lie and a prima facie duty to keep a promise. And if there are no conflicts in a given situation, then each prima facie duty is also what he calls an actual duty. But how are we to determine what our actual duty is in situations where two or more prima facie duties conflict with one another? Ross believes that our ability to determine what our actual duty will be in a particular situation is made possible through a process of “rational intuitionism” (similar to the one used in mathematics).10


We saw that for Kant, every prima facie duty is, in effect, an absolute duty because it applies to every human being without exception. We also saw that Kant's scheme does not provide a procedure for deciding what we should do when two or more duties conflict. However, Ross believes that we can determine what our overriding duty is in such situations by using a deliberative process that requires two steps (a simple illustrative sketch follows the two steps below):

a. Reflect on the competing prima facie duties.

b. Weigh the evidence at hand to determine which course of action would be required in a particular circumstance.
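The sketch below is not from Ross: the duty labels, the numeric "weights," and the function resolve_conflict are all invented assumptions, and it offers only a rough analogy for the two-step procedure, since Ross appeals to rational intuition rather than to anything like numerical scoring.

# A toy illustration of weighing prima facie duties; purely hypothetical.
from typing import Dict

def resolve_conflict(prima_facie_duties: Dict[str, float]) -> str:
    # Step (a): list the competing prima facie duties.
    # Step (b): weigh them in the particular situation and return the one
    # judged to override the others (the "actual duty").
    return max(prima_facie_duties, key=prima_facie_duties.get)

# Example: a promise to meet a classmate conflicts with visiting a sick
# grandmother (compare the scenario that follows).
situation = {
    "keep the promise to study with a classmate": 0.4,
    "go to the hospital to be with your grandmother": 0.9,
}
print(resolve_conflict(situation))   # the hospital visit overrides the promise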

The following scenario illustrates how Ross’s procedure can be carried out.

c SCENARIO 2–5: A Dilemma Involving Conflicting Duties

You promise to meet a classmate one evening at 7:00 in the college library to study together for a midterm exam for a computer science course you are taking. While driving in your car to the library, you receive a call on your cell phone informing you that your grandmother has been taken to the hospital and that you should go immediately to the hospital. You consider calling your classmate from your car, but you realize that you don't have his phone number. You also realize that you don't have time to try to reach your classmate by e-mail. What should you do in this case? &

All things being equal, you have a moral obligation to keep your promise to your friend. You also have a moral obligation to visit your grandmother in the hospital. On both counts, Kant and Ross are in agreement. But what should we do when the two obligations conflict? For a rule deontologist like Kant, the answer is unclear as to what you should do in this scenario, since you have two absolute duties. For Ross, however, the following procedure for deliberation is used. You would have to weigh between the two prima facie duties in question to determine which will be your actual duty in this particular circumstance. In weighing between the two conflicting duties, your actual duty in this situation would be to visit your grandmother, which means, of course, that you would have to break your promise to your friend. However, in a different kind of situation involving a conflict of the same two duties, your actual duty might be to keep the promise made to your friend and not visit your grandmother in the hospital.

Notice that in cases of weighing between conflicting duties, Ross places the emphasis of deliberation on certain aspects of the particular situation or context, rather than on mere deliberation about the general rules themselves. Unlike utilitarians, however, Ross does not appeal to the consequences of either actions or rules in determining whether a particular course of action is morally acceptable. For one thing, Ross argues that he would have to be omniscient to know what consequences would result from his actions. So, like all deontologists, Ross rejects consequences as a viable criterion for resolving ethical dilemmas.

One difficulty for Ross's position is that, as noted above, it uses a process called "rational intuitionism." Appealing to the intuitive process used in mathematics to justify certain basic mathematical concepts and axioms, Ross believes that the same process can be used in morality. However, his position on moral intuitionism is controversial and has not been widely accepted by contemporary ethicists. And since intuitionism is an important component in Ross's theory of act deontology, many ethicists who otherwise might be inclined to adopt Ross's theory have been skeptical of it. Nevertheless, variations of that theory have been adopted by contemporary deontologists.

Figure 2.3 summarizes key features that differentiate act and rule utilitarianism and act and rule deontology.

c 2.6 CONTRACT-BASED ETHICAL THEORIES

During the past two centuries, consequence-based and duty-based ethical theories have tended to receive the most attention from philosophers and ethicists. However, other kinds of ethical theories, such as those that emphasize criteria involving social contracts and individual rights, have recently begun to receive some serious attention as well.

From the perspective of some social contract theories, a moral system comes into being by virtue of certain contractual agreements between individuals. One of the earliest formal versions of a contract-based ethical theory can be found in the writings of Thomas Hobbes (1588–1679). In his classic work Leviathan, Hobbes describes an original "premoral" state that he calls the "state of nature." It is premoral because there are no moral (or legal) rules yet in existence. In this state, each individual is free to act in ways that satisfy his or her own natural desires. According to Hobbes, our natural (or physical) constitution is such that in the state of nature we act in ways that will enable us to satisfy our desires (or appetites) and to avoid what Hobbes calls our "aversions." While there is a sense of freedom in this natural state, the condition of our day-to-day existence is hardly ideal. In this state, each person must continually fend for herself, and, as a result, each must also avoid the constant threats of others, who are inclined to pursue their own interests and desires.
