What Is Cyberethics?
Cyberethics is the study of moral, legal, and social issues involving cybertechnology.
As a field of applied ethics, it:
examines the impact that cybertechnology has on our social, legal, and moral systems.
evaluates the social policies and laws that we frame in response to issues generated by the development and use of cybertechnology.
What Is Cybertechnology?
Cybertechnology refers to a wide range of computing and communications devices
– from standalone computers, to "connected" or networked computing and communications technologies, to the Internet itself.
Cybertechnologies include (but are not limited to): digital hand-held devices (including PDAs);
networked computers (desktops and laptops);
stand-alone computers.
Cybertechnology (Continued)
Networked devices can be connected directly to the Internet.
They also can be connected to other devices through one or more privately owned computer networks.
Privately owned networks include both: Local Area Networks (LANs),
Wide Area Networks (WANs).
Why the term cyberethics?
Cyberethics is a more accurate label than computer ethics, which can suggest the study of ethical issues limited either to:
computing machines,
computing professionals.
Cyberethics is also more accurate than Internet ethics, which is limited only to ethical issues affecting computer networks.
The Evolution of Cybertechnology and Cyberethics: Four Phases
Computer technology emerged in the late 1940s, when some analysts confidently predicted that no more than six computers would ever need to be built.
The first phase of computing technology (1950s and 1960s) consisted mainly of huge mainframe computers that were unconnected (i.e., stand-alone machines).
One ethical/social question that arose during Phase 1 dealt with the impact of computing machines as “giant brains” and what that meant for being human.
Another question raised during this phase concerned privacy threats and the fear of Big Brother.
The Evolution of Cybertechnology and Cyberethics (Continued)
In Phase 2 (1970s and 1980s), computing machines and communications devices began to converge.
Mainframe computers and personal computers could be linked together via privately owned networks such as LANs and WANs.
Privacy concerns arose because confidential information could easily be exchanged between networked databases.
Intellectual property issues emerged because personal computers could easily duplicate proprietary software programs.
Computer crime became possible because people could break into the computers of large organizations.
The Evolution of Cybertechnology and Cyberethics (Continued)
During Phase 3 (1990s-present), the availability of Internet access to the general public has increased significantly.
This has been facilitated by the phenomenal growth of the World Wide Web.
The proliferation of Internet- and Web-based technologies in this phase has raised ethical and social concerns affecting:
free speech,
anonymity,
jurisdiction.
The Evolution of Cybertechnology and Cyberethics (Continued)
As cybertechnology evolves in Phase 4, computers will likely become more and more a part of who or what we are as human beings.
James Moor (2005) notes that computing devices will soon be a part of our clothing, and even our bodies.
Computers are already becoming ubiquitous, and are beginning to “pervade” both our work and recreational environments.
Objects in these environments already exhibit what Philip Brey (2005) calls “ambient intelligence,” which enables “smart objects” to be connected to one another via wireless technology.
The Evolution of Cybertechnology and Cyberethics (Continued)
In Phase 4, computers are becoming less visible as distinct entities, as they: (a) continue to be miniaturized and integrated into ordinary objects, and (b) blend unobtrusively into our surroundings.
Cybertechnology is also becoming less distinguishable from other technologies as boundaries that have previously separated them begin to blur because of convergence.
Table 1-1: Summary of Four Phases of Cyberethics

Phase 1 (1950s-1960s)
Technological features: stand-alone machines (large mainframe computers).
Associated issues: artificial intelligence (AI), database privacy ("Big Brother").

Phase 2 (1970s-1980s)
Technological features: minicomputers and PCs interconnected via privately owned networks.
Associated issues: issues from Phase 1 plus concerns involving intellectual property and software piracy, computer crime, and privacy and the exchange of records.

Phase 3 (1990s-present)
Technological features: Internet and World Wide Web.
Associated issues: issues from Phases 1 and 2 plus concerns about free speech, anonymity, legal jurisdiction, virtual communities, etc.

Phase 4 (present to near future)
Technological features: convergence of information and communication technologies with nanotechnology research, bioinformatics research, etc.
Associated issues: issues from Phases 1-3 plus concerns about artificial electronic agents ("bots") with decision-making capabilities, bionic chip implants, nanocomputing research, etc.
Are Any Cyberethics Issues Unique Ethical Issues?
Consider the “Washingtonienne” scenario (in the textbook) involving Jessica Cutler.
The scenario raises several interesting ethical issues – from anonymity expectations to privacy concerns to free speech, etc.
But are any ethical issues raised in this scenario, or in blogging cases in general, unique ethical issues?
Are Any Cyberethics Issues Unique (Continued)?
Review the Verizon v. RIAA scenario (described in the textbook) in light of the ethical issues that arise.
The ethical issues in this scenario include concerns about privacy, anonymity, surveillance, and intellectual property rights.
Are any of these issues new or unique ethical issues?
Are Any Cyberethics Issues Unique (Continued)?
Review the Amy Boyer cyberstalking scenario (described in the textbook).
Is there anything new or unique, from an ethical point of view, about the ethical issues that emerge in this scenario?
On the one hand, Boyer was stalked in ways that were not possible in the pre-Internet era.
But are any new or any unique ethical issues generated in this scenario?
Debate about the Uniqueness of Cyberethics Issues
There are two points of view on whether cybertechnology has generated any new or unique ethical issues: (1) Traditionalists argue that nothing is new: crime is crime, and murder is murder.
(2) Uniqueness Proponents argue that cybertechnology has introduced (at least some) new and unique ethical issues that could not have existed before computers.
The Uniqueness Debate (Continued)
Both sides seem correct on some claims, and both seem to be wrong on others.
Traditionalists underestimate the role that issues of scale and scope play in the impact of computer technology. E.g., cyberstalkers can stalk multiple victims simultaneously (scale) and globally (because of the scope or reach of the Internet).
Cyberstalkers can also operate without ever having to leave the comfort of their homes.
The Uniqueness Debate (Continued)
Those who defend the Uniqueness thesis tend to overstate the effect that cybertechnology has on ethics per se.
Walter Maner (2004) correctly points out that computers are uniquely fast, uniquely malleable, etc.
So, there may indeed be some unique aspects of computer technology.
The Uniqueness Debate (Continued)
Proponents of the uniqueness thesis tend to confuse unique features of computer technology with unique ethical issues.
Their argument is based on a logical fallacy:
Premise 1. Cybertechnology has some unique technological features.
Premise 2. Cybertechnology generates some ethical issues.
Conclusion. (At least some of the) ethical issues generated by cybertechnology must be unique.
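To make the gap in this inference explicit, the argument can be set out schematically. This is only a rough sketch; the predicate symbols (UniqueTech, Issue, UniqueIssue) are illustrative shorthand, not notation from the textbook.

```latex
% Schematic form of the invalid inference (illustrative notation only).
% UniqueTech(T):  technology T has some unique technological features.
% Issue(i, T):    i is an ethical issue generated by T.
% UniqueIssue(i): i is a unique ethical issue.
\[
  \frac{\mathit{UniqueTech}(T) \qquad \exists i\, \mathit{Issue}(i, T)}
       {\exists i\, \bigl(\mathit{Issue}(i, T) \wedge \mathit{UniqueIssue}(i)\bigr)}
\]
```

The conclusion does not follow from the premises: uniqueness attaches to the technological features in Premise 1, and nothing licenses transferring it to the ethical issues mentioned in Premise 2.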
The Uniqueness Debate (Continued)
Traditionalists and uniqueness advocates are each partly correct. Traditionalists correctly point out that no new ethical issues have been introduced by computers.
Uniqueness proponents are correct in that cybertechnology has complicated our analysis of traditional ethical issues.
The Uniqueness Debate (Continued)
So we must distinguish between: (a) unique technological features;
(b) (alleged) unique ethical issues.
Consider two scenarios described in the textbook: (1) computer professionals designing the software code for a controversial computer system;
(2) users making unauthorized copies of software.
Alternative Strategy for Analyzing the Uniqueness Issue
James Moor (2000) argues that computer technology generates “new possibilities for human action” because computers are logically malleable. Logical malleability in computers makes possible new kinds of behavior for humans that introduce policy vacuums.
Policy vacuums cannot easily be filled because of conceptual muddles.
Conceptual muddles need to be clarified before clear policies can be formulated and justified.
A Policy Vacuum in Duplicating Software
Review the scenario (in the textbook) involving a policy vacuum for laws affecting the duplication of software.
In the early 1980s, there were no clear laws regarding the duplication of software programs, which was made easy because of personal computers. Because there were no clear rules for copying programs, a policy vacuum arose.
Before the policy vacuum could be filled, a conceptual muddle had to be elucidated: What exactly is software?
Cyberethics as a Branch of Applied Ethics
Applied ethics, unlike theoretical ethics, examines "practical" ethical issues.
It analyzes moral issues from the vantage-point of one or more ethical theories.
Ethicists working in fields of applied ethics are more interested in applying ethical theories to the analysis of specific moral problems than in debating the ethical theories themselves.
Cyberethics as a Branch of Applied Ethics (continued)
Three distinct perspectives of applied ethics (as applied to cyberethics):
Professional Ethics;
Philosophical Ethics;
Sociological/Descriptive Ethics.
Perspective # 1: Cyberethics as a Branch of Professional Ethics
According to this view, the purpose of cyberethics is to identify and analyze issues of ethical responsibility for computer/information technology (IT) professionals.
Consider a computer professional's role in designing, developing, and maintaining computer hardware and software systems.
Suppose a programmer discovers that a software product she has been working on is about to be released for sale to the public, even though it is unreliable because it contains “buggy” code.
Should she “blow the whistle”?
Professional Ethics
Don Gotterbarn (1995) has suggested that computer ethics issues are professional ethics issues.
Computer ethics, for Gotterbarn, is similar to medical ethics and legal ethics, which are tied to issues involving specific professions.
He notes that computer ethics issues aren’t about technology per se; e.g., we don’t have automobile ethics, airplane ethics, etc.
Some Criticisms of the Professional Ethics Perspective
Is Gotterbarn’s model for computer ethics too narrow for cyberethics?
Consider that cyberethics issues affect not only computer professionals; they affect virtually everyone.
Before the widespread use of the Internet, Gotterbarn’s professional- ethics model may have been adequate.
Perspective # 2: Philosophical Ethics
From this perspective, cyberethics is a field of philosophical analysis and inquiry that goes beyond professional ethics.
Moor (2000) defines computer ethics as:
...the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology. [Italics Added.]
Philosophical Ethics Perspective (continued)
Moor argues that automobile and airplane technologies did not affect our social policies and norms in the same kinds of fundamental ways that computer technology has.
Automobile and airplane technologies have revolutionized transportation, resulting in our ability to travel faster and farther than was possible in previous eras.
But they did not have the same impact on our legal and moral systems as cybertechnology.
Philosophical Ethics: Standard Model of Applied Ethics
Philip Brey (2004) describes the “standard methodology” used by philosophers in applied ethics research as having three stages:
1) Identify a particular controversial practice as a moral problem.
2) Describe and analyze the problem by clarifying concepts and examining the factual data associated with that problem.
3) Apply moral theories and principles to reach a position about the particular moral issue.
Perspective #3: Cyberethics as a Field of Sociological/Descriptive Ethics
The professional and philosophical perspectives both illustrate normative inquiries into applied ethics issues.
Normative inquiries or studies are contrasted with descriptive studies. Descriptive investigations report about “what is the case.”
Normative inquiries evaluate situations from the vantage point of the question “What ought to be the case?”
Sociological/Descriptive Ethics Perspective (continued)
Review the scenario in the textbook regarding the impact of the introduction of a new technology on a community’s workforce. Suppose that a new technology, Technology X, displaces 8,000 workers in Community Y.
If we analyze the issues solely in terms of the number of jobs that were gained or lost in that community, our investigation is essentially descriptive in nature.
Figure 1-1: Descriptive vs. Normative Approaches

Descriptive: report or describe what is the case.
Normative: prescribe what ought to be the case.
Non-moral normative claims prescribe or evaluate in matters involving standards such as art and sports (e.g., criteria for a good painting or an outstanding athlete).
Moral normative claims prescribe or evaluate in matters having to do with fairness and obligation (e.g., criteria for just and unjust actions and policies).
Some Benefits of Using the Sociological/Descriptive Approach
Huff and Finholt (1994) claim that when we understand the descriptive aspect of social effects of technology, the normative ethical issues become clearer.
The descriptive perspective prepares us for our subsequent analysis of ethical issues that affect our system of policies and laws.
Table 1-2: Summary of Cyberethics Perspectives

Professional perspective
Associated disciplines: Computer Science, Engineering, Library/Information Science.
Issues examined: professional responsibility, system reliability/safety, codes of conduct.

Philosophical perspective
Associated disciplines: Philosophy, Law.
Issues examined: privacy and anonymity, intellectual property, free speech.

Sociological/Descriptive perspective
Associated disciplines: Sociology, Behavioral Sciences.
Issues examined: impact of cybertechnology on governmental/financial/educational institutions and socio-demographic groups.
Is Cybertechnology Neutral?
Technology seems neutral, at least initially. Consider, for example, the cliché: “Guns don’t kill people, people kill people.”
Corlann Gee Bush (2006) argues that gun technology, like all technologies, is biased in certain directions.
She points out that certain features inherent in gun technology itself cause guns to be biased in a direction towards violence in ways that other technologies are not.
Is Technology Neutral (continued)?
Bush uses an analogy from physics to illustrate the bias inherent in technology. E.g., she notes that when an atom either loses or gains electrons through the ionization process, it becomes charged or valenced in a certain direction.
Bush notes that all technologies, including guns, are valenced in that they tend to "favor" certain directions rather than others.
Thus technology is biased and is not neutral.
A "Disclosive" Method for Cyberethics
Brey (2004) believes that because of embedded biases in cybertechnology, the standard applied-ethics methodology is not adequate for identifying cyberethics issues. E.g., he notes that we might fail to notice certain features embedded in the design of cybertechnology.
Using the standard model, we might also fail to recognize that certain practices involving cybertechnology can have moral implications.
Disclosive Method (Continued)
Brey points out that one weakness of the “standard method of applied ethics” is that it tends to focus on known moral controversies. So, that model fails to identify practices involving cybertechnology that have moral implications but that are not yet known.
Brey refers to these practices as having morally opaque (or morally non-transparent) features, which he contrasts with "morally transparent” features.
Figure 1-2: Embedded Technological Features Having Moral Implications

Morally transparent features are contrasted with morally opaque features, which can be either:
Known features: users are aware of these features but do not realize they have moral implications. Examples can include Web forms and search-engine tools.
Unknown features: users are not even aware of the technological features that have moral implications. Examples might include data-mining technology and Internet cookies.
A Multi-Disciplinary and Multi-Level Method for Cyberethics
Brey’s disclosive method is multidisciplinary because it requires the collaboration of: computer scientists,
philosophers,
social scientists.
A Multi-Disciplinary & Multi-Level Method for Cyberethics (Continued)
Brey’s scheme is also multi-level because the method for conducting computer ethics research requires three levels of analysis: a disclosure level,
theoretical level,
application level.
Table 1-3: Three Levels in Brey’s Model of Computer Ethics

Disclosive level
Disciplines involved: Computer Science, Social Science (optional).
Task/function: disclose embedded features in computer technology that have moral import.

Theoretical level
Disciplines involved: Philosophy.
Task/function: test newly disclosed features against standard ethical theories.

Application level
Disciplines involved: Computer Science, Philosophy, Social Science.
Task/function: apply standard or newly revised/formulated ethical theories to the issues.
A Three-Step Strategy for Approaching Cyberethics Issues
Step 1. Identify a practice involving cybertechnology, or a feature in that technology, that is controversial from a moral perspective.
1a. Disclose any hidden (or opaque) features or issues that have moral implications.
1b. If the issue is descriptive, assess the sociological implications for relevant social institutions and socio-demographic populations.
1c. If the ethical issue is also normative, determine whether there are any specific guidelines, that is, professional codes, that can help you resolve the issue (see Appendixes A-E).
1d. If the normative ethical issues remain, go to Step 2.
Step 2. Analyze the ethical issue by clarifying concepts and situating it in a context.
2a. If a policy vacuum exists, go to Step 2b; otherwise go to Step 3.
2b. Clear up any conceptual muddles involving the policy vacuum and go to Step 3.
Step 3. Deliberate on the ethical issue. The deliberation process requires two stages:
3a. Apply one or more ethical theories (see Chapter 2) to the analysis of the moral issue, and then go to Step 3b.
3b. Justify the position you reached by evaluating it against the rules for logic/critical thinking (see Chapter 3).
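Because the strategy is essentially a branching checklist, readers who think in code may find it easier to follow as a decision flow. The sketch below is only an illustration of that flow under stated assumptions: every name in it (CyberethicsIssue, the boolean flags, approach) is hypothetical and stands in for a human judgment described in the steps above, not an automated test or a published tool.

```python
# Minimal, purely illustrative sketch of the three-step strategy as a decision flow.
# All names are hypothetical stand-ins for the human judgments in Steps 1-3.

from dataclasses import dataclass


@dataclass
class CyberethicsIssue:
    description: str
    has_opaque_features: bool   # 1a: hidden features with moral import?
    is_descriptive: bool        # 1b: sociological impact to assess?
    resolvable_by_codes: bool   # 1c: do professional codes settle it?
    has_policy_vacuum: bool     # 2a: no clear policy or law yet?


def approach(issue: CyberethicsIssue) -> str:
    steps = []
    # Step 1: identify the morally controversial practice or feature.
    if issue.has_opaque_features:
        steps.append("1a: disclose hidden/opaque features")
    if issue.is_descriptive:
        steps.append("1b: assess sociological implications")
    if issue.resolvable_by_codes:
        steps.append("1c: resolve via professional codes (Appendixes A-E)")
        return "; ".join(steps)
    # Step 2: analyze by clarifying concepts and situating the issue in context.
    if issue.has_policy_vacuum:
        steps.append("2b: clear up conceptual muddles in the policy vacuum")
    # Step 3: deliberate and justify.
    steps.append("3a: apply one or more ethical theories (Chapter 2)")
    steps.append("3b: justify the position via logic/critical thinking (Chapter 3)")
    return "; ".join(steps)


# Example: the early-1980s software-duplication case sketched earlier.
print(approach(CyberethicsIssue(
    description="unauthorized duplication of proprietary software",
    has_opaque_features=False,
    is_descriptive=False,
    resolvable_by_codes=False,
    has_policy_vacuum=True,
)))
```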