Discussion: Making Difficult Ethical Decisions
Karen, a Latina lesbian, decided to come out to her family and classmates when she started the second year of her psychology doctoral program. Now that she can be her "true self" around her family and friends, she expected things to be easier. However, several people—including her parents and four siblings—are not as accepting of her as she had hoped. As a result, she struggles to keep up with schoolwork, is tearful during class, and becomes upset during conferences with professors. Her faculty members are talking about requiring her to begin psychological treatment for these issues and are even considering removing her from the program.
For this discussion, respond to the following:
What does "The Code" say about the suggestions of the faculty?
What are the cultural implications of doing what they are suggesting?
How does Fisher's ethical decision-making model, discussed in Chapter 3 of Decoding the Ethics Code: A Practical Guide, apply to this situation?
How should Karen's situation be handled?
Your initial discussion post should be at least 200 words.
Response Guidelines
Examine the work of at least two other learners. Please try to choose posts that have not yet had responses. Comment on the following:
Their perspectives on what the ethical code says about the scenario.
Their analyses of the cultural implications of the actions the faculty members are considering.
Their applications of all eight steps of the ethical decision-making model discussed in Chapter 3 of Decoding the Ethics Code: A Practical Guide.
Their resulting decisions on how to handle Karen's situation.
Your responses may compare and contrast elements of their analyses, or their conclusions, with those of your initial post.
Learning Components
This activity will help you achieve the following learning components:
Describe ethical dilemmas in psychology practice.
Identify types of cultural conflict.
Identify types of cultural differences.
Investigate professional ethical standards.
Apply ethical theory to practice.
Employ ethical decision-making models to practice.
Resources
Discussion Participation Scoring Guide.
Chapter 3: The APA Ethics Code and Ethical Decision Making
The APA’s Ethics Code provides a set of aspirational principles and behavioral rules written broadly to apply to psychologists’ varied roles and the diverse contexts in which the science and practice of psychology are conducted. The five aspirational principles described in Chapter 2 represent the core values of the discipline of psychology that guide members in recognizing, in broad terms, the moral rightness or wrongness of an act. As an articulation of the universal moral values intrinsic to the discipline, the aspirational principles are intended to inspire right actions but do not specify what those actions might be. The ethical standards that will be discussed in later chapters of this book are concerned with specific behaviors that reflect the application of these moral principles to the work of psychologists in specific settings and with specific populations. In their everyday activities, psychologists will find many instances in which familiarity with and adherence to specific Ethical Standards provide an adequate foundation for ethical action. There will also be many instances in which (a) the means by which to comply with a standard are not readily apparent, (b) two seemingly competing standards appear equally appropriate, (c) application of a single standard or set of standards appears consistent with one aspirational principle but inconsistent with another, or (d) a judgment is required to determine whether exemption criteria for a particular standard are met.
The Ethics Code is not a formula for solving these ethical challenges. Psychologists are not moral technocrats simply working their way through a decision tree of ethical rules. Rather, the Ethics Code provides psychologists with a set of aspirations and broad general rules of conduct that psychologists must interpret and apply as a function of the unique scientific and professional roles and relationships in which they are embedded. Successful application of the principles and standards of the Ethics Code involves a conception of psychologists as active moral agents committed to the good and just practice and science of psychology. Ethical decision making thus involves a commitment to applying the Ethics Code and other legal and professional standards to construct rather than simply discover solutions to ethical quandaries (APA, 2012f).
This chapter discusses the ethical attitudes and decision-making strategies that can help psychologists prepare for, identify, and resolve ethical challenges as they continuously emerge and evolve in the dynamic discipline of psychology. An opportunity to apply these strategies is provided in the cases at the end of each chapter and the 10 case studies presented in Appendix A.
Ethical Commitment and Virtues
The development of a dynamic set of ethical standards for psychologists’ work-related conduct requires a personal commitment and lifelong effort to act ethically; to encourage ethical behavior by students, supervisees, employees, and colleagues; and to consult with others concerning ethical problems.
—APA (2010b, Preamble)
Ethical commitment refers to a strong desire to do what is right because it is right (Josephson Institute of Ethics, 1999). In psychology, this commitment reflects a moral disposition and emotional responsiveness that move psychologists to creatively apply the APA’s Ethics Code principles and standards to the unique ethical demands of the scientific or professional context.
The desire to do the right thing has often been associated with moral virtues or moral character, defined as a disposition to act and feel in accordance with moral principles, obligations, and ideals—a disposition that is neither principle bound nor situation specific (Beauchamp & Childress, 2001; MacIntyre, 1984). Virtues are dispositional habits acquired through social nurturance and professional education that provide psychologists with the motivation and skills necessary to apply the ideals and standards of the profession (see, e.g., Hauerwas, 1981; Jordan & Meara, 1990; May, 1984; National Academy of Sciences, 1995; Pellegrino, 1995). Fowers (2012) described virtues as the cognitive, emotional, dispositional, behavioral, and wisdom aspects of character strength, which motivates and enables us to act ethically out of an attachment to what is good.
Focal Virtues for Psychology
Virtue ethics can provide psychologists a more personal and therefore more effective foundation from which to approach ethical issues, and it helps offset an overreliance on conformity to rules that may be inconsistent with the aspirational principles of the discipline (Anderson & Handelsman, 2013; Kitchener & Anderson, 2011). Many moral dispositions have been proposed for the virtuous professional (Beauchamp & Childress, 2001; Keenan, 1995; MacIntyre, 1984; May, 1984). For disciplines such as psychology, in which codes of conduct dictate the general parameters but not the context-specific nature of ethical conduct, conscientiousness, discernment, and prudence are requisite virtues.
· A conscientious psychologist is motivated to do what is right because it is right, diligently tries to determine what is right, makes reasonable attempts to do the right thing, and is committed to lifelong professional growth.
· A discerning psychologist brings contextually and relationally sensitive insight, good judgment, and appropriately detached understanding to determine what is right.
· A prudent psychologist applies practical wisdom to ethical challenges, leading to right solutions that can be realized given the nature of the problem and the individuals involved.
The virtues considered most salient by members of a profession will vary with differences in role responsibilities. The asymmetrical power relationship and the client’s/patient’s vulnerability in the provision of mental health services require virtues of benevolence, care, empathy, emotional self-restraint and monitoring, and compassion (Ivey, 2014). Prudence, discretion, and trustworthiness have been considered salient in scientific decision making. Scientists who willingly and consistently report procedures and findings accurately are enacting the virtue of honesty (Fowers, 2012). Fidelity, integrity, and wisdom are moral characteristics frequently associated with teaching and consultation. The Standards for Forensic Psychology (APA, 2013e) encourage forensic practitioners to act with reasonable diligence and promptness in managing their workloads so they can provide agreed-upon and reasonably anticipated services across all work activities. The virtue of self-care enables psychologists to maintain appropriate competencies under stressful work conditions (see the Hot Topic “The Ethical Component of Self-Care” at the end of this chapter).
Openness to Others
“Openness to the other” has been identified as a core virtue for the practice of multiculturalism (Fowers & Davidov, 2006). Openness is characterized by a personal and professional commitment to applying a multicultural lens to our work motivated by a genuine interest in understanding others rather than reacting to a new wave of multicultural “shoulds” (Gallardo, Johnson, Parham, & Carter, 2009). It reflects a strong desire to understand how culture is relevant to the identification and resolution of ethical challenges in research and practice, to explore cultural differences, to respond to fluid definitions of group characteristics, to recognize the realities of institutional racism and other forms of discrimination on personal identity and life opportunities, and to creatively apply the profession’s ethical principles and standards to each cultural context (Aronson, 2006; Fisher, 2015; Fowers & Davidov, 2006; Hamilton & Mahalik, 2009; Neumark, 2009; Riggle, Rostosky, & Horne, 2010; Sue & Sue, 2003; Trimble, 2009; Trimble & Fisher, 2006).
Openness may also be a core virtue for practicing in the primary care interprofessional organizations created by the Affordable Care Act, where the psychologist’s role extends beyond providing patient services to include making contributions to integrated teams of health care professionals. Nash et al. (2013) have proposed a “primary care ethic” that reflects a guiding philosophy or set of values characterized by openness, appreciation, and willingness to engage as a psychologist in the interprofessional primary care environment. It reflects (a) a respect and appreciation for contributions by professionals from other disciplines; (b) a desire to integrate disciplinary perspectives; (c) a valuing of collaborative relationships and a willingness to cultivate and maintain them; and (d) a willingness to initiate clear, open, and constructive interprofessional communication.
Can Virtues Be Taught?
No course could automatically close the gap between knowing what is right and doing it.
—Pellegrino (1989, p. 492)
Some have argued that psychology professors cannot change graduate students’ moral character through classroom teaching and that ethics education should therefore focus on understanding the Ethics Code rather than on instilling moral dispositions to right action. Without question, however, senior members of the discipline, through teaching and through their own examples, can enhance the ability of students and young professionals to understand the centrality of ethical commitment to ethical practice. At the same time, the goal of developing professional moral character is not simply to know about virtue but to become good (Scott, 2003). Beyond the intellectual virtues transmitted in the classroom and modeled through mentoring and supervision, excellence of character can be acquired through habitual practice (Begley, 2006). One such habit for the virtuous graduate student and seasoned psychologist alike is a commitment to lifelong learning and practice in the continued development of moral excellence.
Some moral dispositions can be understood as derivative of their corresponding principles (Beauchamp & Childress, 2001). Drawing on the five APA General Principles, Table 3.1 lists corresponding virtues.
Ethical Awareness and Moral Principles
In the process of making decisions regarding their professional behavior, psychologists must consider this Ethics Code, in addition to applicable laws and psychology board regulations.
—APA (2010b, Introduction)
Lack of awareness or misunderstanding of an ethical standard is not itself a defense to a charge of unethical conduct.
—APA (2010b, Introduction)
Ethical commitment is just the first step in effective ethical decision making. Good intentions are insufficient if psychologists fail to identify the ethical situations to which they should be applied. Psychologists found to have violated Ethical Standards or licensure regulations have too often harmed others or damaged their own careers or the careers of others because of ethical ignorance. Conscientious psychologists understand that identifying situations requiring ethical attention depends on familiarity with and understanding of the APA Ethics Code, relevant scientific and professional guidelines, and the laws and regulations applicable to their specific work-related activities, as well as an awareness of the relational obligations embedded within each context.
Moral Principles and Ethical Awareness
To identify a situation as warranting ethical consideration, psychologists must be aware of the moral values of the discipline. Although the Ethics Code’s General Principles are not exhaustive, they do identify the major moral ideals of psychology as a field. Familiarity with the General Principles, however, is not sufficient for good ethical decision making. Psychologists also need the knowledge, motivation, and coping skills to detect when situations call for consideration of these principles and, when possible, to address these issues before they arise (Crowley & Gottlieb, 2012; Tjeltveit & Gottlieb, 2010; see also the Hot Topic “The Ethical Component of Self-Care” at the end of this chapter). Table 3.1 identifies types of ethical awareness corresponding to each General Principle.
Ethical Awareness and Ethical Theories
Ethical theories provide a moral framework to reflect on conflicting obligations. Unfortunately, ethical theories tend to emphasize one idea as the foundation for moral decision making, and illustrative problems are often reduced to that one idea. Given the complexity of moral reality, these frameworks are probably not mutually exclusive in their claims to moral truth (Steinbock, Arras, & London, 2003). However, awareness of the moral frameworks that might help address an ethical concern can also help clarify the values and available ethical choices (Beauchamp & Childress, 2001; Fisher, 1999; Kitchener, 1984).
Deception Research: A Case Example for the Application of Different Ethical Theories
Since Stanley Milgram (1963) published his well-known obedience experiments, the use of deception has become normative practice in some fields of psychological research and a frequent source of ethical debate (Baumrind, 1964, 1985; Fisher & Fyrberg, 1994). Researchers using deceptive techniques intentionally withhold information or misinform participants about the purpose of the study, the methodology, or roles of research confederates (Sieber, 1982). Deception is still widely practiced within experimental social psychology and in sexual health behavior and health care research (Kirschner et al., 2010; Miller, Gluck, & Wendler, 2008; Wong et al., 2012). By its very nature, the use of deception in research creates what Fisher (2005a) has termed the consent paradox: obtaining ‘informed consent’ under conditions in which participants are not truly informed.
On the one hand, intentionally deceiving participants about the nature and purpose of a study conflicts with Principle C: Integrity and Principle E: Respect for People’s Rights and Dignity and with enforceable standards requiring psychologists to obtain fully informed consent of research participants prior to study initiation (Standards 3.10, Informed Consent; 8.02, Informed Consent to Research; 9.03, Informed Consent in Assessments; 10.01, Informed Consent to Therapy).
On the other hand, the methodological rationale for the use of deception is that some psychological phenomena cannot be adequately understood if research participants are aware of the purpose of the study. Thus by approximating the naturalistic contexts in which everyday behaviors take place, deception research can reflect Principle A: Beneficence and Nonmaleficence and Principle B: Fidelity and Responsibility by enhancing the ability of psychologists to generate scientifically and socially useful knowledge that might not otherwise be obtained. For example, deception has been used to study the phenomenon of “bystander apathy effect,” the tendency for people in the presence of others to observe but not help a person who is a victim of an attack, medical emergency, or other dangerous condition (Latane & Darley, 1970). In such experiments, false emergency situations are staged without the knowledge of the research participants, whose reactions to the “emergency” are recorded and analyzed.
Standard 8.07, Deception in Research (as well as federal regulations governing participant protections) permits deception under limited conditions. However, its use remains ethically controversial. Below we present a case example of a deception study with discussion of how different ethical theories might lead to different conclusions about the moral acceptability of deceptive research. Readers should refer to Chapter 11 for a more in-depth discussion of Standard 8.07, Deception in Research.
Case Example
The Gaffe Study (Gonzales, Pederson, Manning, & Wetter, 1990)
This experiment was conducted to examine whether undergraduate males and females differ in their explanations for an embarrassing incident and whether the severity of their mistake would influence their explanations. Undergraduate students were “invited” to help researchers develop a video for a future study on how people form impressions. Each student participated in a taped discussion with another student in which they either were interviewed or were the interviewer. They were not told the true purpose of the study or that the other “student” was actually a confederate of the research team. Participants were then told to place their belongings on a table. As they did so, the experimenter pulled a hidden string attached to a strategically placed cup of colored water, which spilled onto what appeared to be the confederate’s tote bag. For half the participants, only papers were in the tote bag (low-severity incident), while for the other half an expensive camera was in the tote bag (high-severity incident). Immediately after the cup spilled, the confederate exclaimed, “Oh no, my stuff!” followed by “What happened?” The experimenter had turned on the video camera so that participants’ nonverbal responses (e.g., hand to face, head shaking), instrumental behaviors (e.g., attempts to empty the bag), and verbal responses (e.g., “I’m sorry” or “I didn’t do it”) could be analyzed. See Fisher and Fyrberg (1994) to learn how introductory students evaluated the ethics of this study.
Ethical Theories
Deontology or Kantian Ethics
Deontology has been described as “absolutist,” “universal,” and “impersonal” (Kant, 1785/1959). It prioritizes absolute obligations over consequences. In this moral framework, ethical decision making is the rational act of applying universal principles to all situations irrespective of specific relationships, contexts, or consequences. This approach reflects Immanuel Kant’s conviction that ethical decisions cannot vary or be influenced by special circumstances or relationships. Rather, Kant stipulated that an ethical decision is only morally justified if a rational person believes the act resulting from the decision should be universally followed in all situations. This is called the categorical imperative. For Kant, respect for the worth of all persons was one such universal principle. A course of action that results in a person being used simply as a means for others’ gains would be ethically unacceptable.
With respect to deception in research, from a deontological perspective, since we would not believe it moral to intentionally deceive individuals across a variety of other contexts, neither the potential benefits to society nor the effectiveness of participant debriefing (informing participants about the true nature of the study after their participation is completed) for a particular deception study can morally justify intentionally deceiving persons about the purpose or nature of the study. Further, from a Kantian perspective, deception in research is not ethically permissible, since intentionally disguising the nature of the study for the goals of research violates the moral obligation to respect each participant’s intrinsic worth by undermining that individual’s right to make rational and autonomous informed consent decisions regarding participation (Fisher & Fyrberg, 1994).
Utilitarianism or Consequentialism
Utilitarian theory prioritizes the consequences (or utility) of an act over the application of universal principles (Mill, 1861/1957). From this perspective, an ethical decision is situation specific and must be governed by a risk–benefit calculus that determines which act will produce the greatest possible balance of good over bad consequences. An “act utilitarian” makes an ethical decision by evaluating the consequences of an act for a given situation. A “rule utilitarian” makes an ethical decision by evaluating whether following a general rule in all similar situations would create the greater good. Like deontology, utilitarianism is impersonal: It does not take into account interpersonal and relational features of ethical responsibility. From this perspective, psychologists’ obligations to those with whom they work can be superseded by an action that would produce a greater good for others.
A psychologist adhering to act utilitarianism might decide that the potential knowledge about social behavior during an embarrassing situation generated by this deception study could produce benefits for many members of society, thereby justifying the minimal risk of harm that the embarrassment might cause and the violation of autonomy rights based on the absence of true informed consent for only a few research participants. A rule utilitarian might decide against the use of deception in all research studies because the unknown benefits to society do not outweigh the potential harm to the discipline of psychology if society began to see it as an untrustworthy science.
Communitarianism
Communitarian theory assumes that right actions derive from the common good: community values, goals, traditions, and cooperative virtues are fundamental to ethical decision making (MacIntyre, 1989; Melchert, 2015; Walzer, 1983). Communitarianism is often contrasted with liberal individualism, an ethical theory that privileges the individual over the group and identifies individual autonomy, privacy, property, free speech, and freedom of religion as the cornerstones of a civil society, thus elevating individual over group rights (Beauchamp & Childress, 2001; Dworkin, 1977). Although all forms of communitarianism support ethical decisions that improve the health and welfare of members of the community, some forms value group welfare over individual rights and reject the deontological categorical imperative that ethical decisions have universal application across different communities.
Whereas utilitarianism asks whether a policy will produce the greatest good for all individuals in society, communitarianism asks whether a policy will promote the kind of community we want to live in (Steinbock et al., 2003). For example, from a communitarian perspective, the competent practice of psychology cannot be defined simply in terms of individual interpretations of ethical standards but rather must be consistently evaluated and affirmed through interdependent and communal dialogue and support among members of the field (Johnson, Barnett, Elman, Forrest, & Kaslow, 2013).
The challenge to a communitarian perspective is the question of which community values should be represented in ethical decision making. Drawing on the principle of justice, Fisher and her colleagues have argued that the values of a majority may not reflect the needs or values of a more vulnerable minority within a community. For this reason, scientific, intervention, or policy decisions made in response to majority values may result in or perpetuate health disparities and other inequities suffered by marginal groups (Fisher, 1999, 2011; Fisher et al., 2002; Fisher & Wallace, 2000). For example, sensitivity to “who is the community” is particularly important when psychologists are consulting with community “representatives” in the design and evaluation of social or educational programs. Restricting consultation to community leaders and program administrators may result in programs that fail to adequately serve the members most in need.
Research psychologists who believe deception research is ethically justified can be conceived as members of a scientific community of shared values that has traditionally assumed (a) the pursuit of knowledge is a universal good, (b) the results of deception research are intrinsically valuable, and (c) consideration for the practical consequences of research will inhibit scientific progress (Fisher, 1999; Sarason, 1984; Scarr, 1988). The historical salience of these shared values may be implicitly reflected, at least in part, in the acceptance of deception research in the APA Ethics Code (Standard 8.07, Deception in Research) and in current federal regulations (Department of Health and Human Services [DHHS], 2009). However, little is known about the extent to which the “community of research participants” shares the scientific community’s valuing of deception methods. The participant community may instead place greater value on their right to determine whether they will be exposed to specific research risks and benefits and on society’s need to perceive scientists as members of a trustworthy profession.
Relational Ethics
Relational ethics, originating out of feminist ethics or an ethics of care, sees a commitment to act on behalf of persons with whom one has a significant relationship as central to ethical decision making. This moral theory rejects the primacy of universal values of deontology and the cost–benefit calculus of utilitarianism in favor of relationally specific obligations (Baier, 1985; Brabeck, 2000; Fisher, 1999, 2000, 2004). It also rejects communitarianism’s emphasis on group norms and instead stresses the importance of the uniqueness of individuals embedded in relationships. Relational ethics focuses our attention on power imbalances and supports efforts to promote equality of power and opportunity for women and other marginalized groups (Brabeck & Brabeck, 2012; Sechzer & Rabinowitz, 2008). It underscores the value of understanding the point of view, needs, and expectations of clients/patients, research participants, and others as a means of enhancing psychologists’ own moral development and ethical decision making (Fisher, 2000; Noddings, 1984).
In relational ethics, responsiveness to research participants and psychologists’ awareness of their own boundaries, competencies, and obligations are the foundation of ethics-in-science decision-making (Fisher, 1999, 2002a, 2004, 2011). From a relational perspective, in the absence of dialogue with prospective participants, the psychologists designing the “Gaffe” study, by virtue of their training and institutional positions, may have overestimated the scientific validity and value of the study and underestimated undergraduates’ stress, discomfort, and sense of disempowerment during the study and following debriefing (Fisher & Fyrberg, 1994). Thus, relational ethics would view this study as a violation of investigators’ obligations of interpersonal trust to participants and as reinforcing power inequities by permitting faculty members to deprive undergraduates of information that might affect their decision to participate.
Ethical Absolutism, Ethical Relativism, and Ethical Contextualism
Psychologists with high levels of ethical commitment and awareness are often stymied by moral complexities that surface when individuals or cultural communities with whom they work hold values that are or appear to be distinctly different from the Ethics Code aspirational principles, contrary to evidence-based “right” clinical outcomes, or inconsistent with federal regulations and professional guidelines for protecting the rights and welfare of research participants. Such dilemmas can be framed in three different ways.
The first, termed “ethical absolutism,” adopts the universal perspective of the deontic position and rejects the influence of culture on the identification and resolution of ethical problems in a manner that can lead to a one-size-fits-all form of ethical problem solving. However, psychologists who adopt an absolutist stance misconceive the discipline of psychology as an impartial helping or scientific profession whose values and techniques are universally related to the essential humanity of those with whom we work (Fisher, 1999; Koenig & Richeson, 2010). For example, drawing on Principle C, Integrity, a psychologist who has learned that a child client has a genetic marker for a serious adult onset disorder may believe it is his ethical duty to share this information with the child, without considering other moral positions, including the child’s right to have one’s future options kept open until one is old enough to make one’s own life choices (Millum, 2014).
In sharp contrast, “ethical relativism,” often associated with some forms of utilitarianism and communitarianism, denies the existence of universal or common moral values characterizing the whole of human relationships, proposing instead that how ethical problems are identified and resolved is unique to each particular culture or community. This can result in confusing what “is” for what “ought” to be (Melchert, 2015). For example, this stance runs the risk of condoning client or organizational behaviors, beliefs, and attitudes that reflect systemic cultural injustices or cultural values such as racism, heterosexism, or misogyny that are iatrogenic to a client’s mental health or the well-being of employees or those whom organizations serve (Cassidy, 2013; Fisher, 2014; Knapp & VandeCreek, 2007).
Ethical contextualism, variously known as cross-cultural ethics or moral realism, blends the two approaches and assumes that moral principles such as beneficence, integrity, social justice, and respect for people’s rights and dignity are or should be universally valued across diverse contexts and cultures, but the expression of an ethical problem and the right actions to resolve it can be unique to the cultural context (Fisher, 1999, 2000, 2014; Korchin, 1980; Macklin, 1999; Melchert, 2015). This position is reflected in the Universal Declaration of Ethical Principles for Psychologists (International Union of Psychological Science, 2008), which includes an articulation of ethical contextualism in its recognition that these value principles may be expressed in different ways in different communities and cultures and that respect for different customs and beliefs should be limited only when they seriously contravene “the dignity of persons or peoples or causes serious harm to their well-being.” Consistent with the relational or feminist ethics framework, psychologists taking a contextual stance are motivated to understand how ethical values may be differentially expressed across different cultural contexts and to identify when group acceptance of a norm is inconsistent with a basic universal morality.
Case Example