Chapter 5
ETHICAL EGOISM

Ethical egoism is the theory that the right action is the one that advances one’s own best interests. It is a provocative doctrine, in part because it forces us to consider two opposing attitudes in ourselves. On the one hand, we tend to view selfish or flagrantly self-interested behavior as wicked, or at least troubling. Self-love is bad love. We frown on people who trample others in life to get to the head of the line. On the other hand, sometimes we want to look out for number one, to give priority to our own needs and desires. We think, If we do not help ourselves, who will? Self-love is good love.

Ethical egoism says that one’s only moral duty is to promote the most favorable balance of good over evil for oneself. Each person must put his or her own welfare first. Advancing the interests of others is part of this moral equation only if it helps promote one’s own good. Yet this extreme self-interest is not necessarily selfishness. Selfish acts advance one’s own interests regardless of how others are affected. Self-interested acts promote one’s own interests but not necessarily to the detriment of others. To further your own interests you may actually find yourself helping others. To gain some advantage, you may perform actions that are decidedly unselfish.

Just as we cannot equate ethical egoism with selfishness, neither can we assume it is synonymous with self-indulgence or recklessness. An ethical egoist does not necessarily do whatever she desires to do or whatever gives her the most immediate pleasure. She does what is in her best interests, and instant gratification may not be in her best interests. She may want to spend all her money at the casino or work eighteen hours a day, but over the long haul doing so may be disastrous for her. Even ethical egoists have to consider the long-term effects of their actions. They also have to take into account their interactions with others. At least most of the time, egoists are probably better off if they cooperate with others, develop reciprocal relationships, and avoid actions that antagonize people in their community or society.

Ethical egoism comes in two forms—one applying the doctrine to individual acts and one to relevant rules. Act-egoism says that to determine right action, you must apply the egoistic principle to individual acts. Act A is preferable to Act B because it promotes your self-interest better. Rule-egoism says that to determine right action, you must see if an act falls under a rule that if consistently followed would maximize your self-interest. Act A is preferable to Act B because it falls under a rule that maximizes your self-interest better than any other relevant rule applying to Act B.

An ethical egoist can define self-interest in various ways. The Greek philosopher Epicurus (341–270 B.C.E.), a famous ethical egoist from whose name we derive the words epicure and epicurean, gave a hedonist answer: The greatest good is pleasure, and the greatest evil, pain. The duty of a good ethical egoist is to maximize pleasure for oneself. (Contrary to legend, Epicurus thought that wanton overindulgence in the delights of the senses was not in one’s best interests. He insisted that the best pleasures were those of the contemplative life and that extravagant pleasures such as drunkenness and gluttony eventually lead to misery.)
Other egoistic notions of the greatest good include self-actualization (fulfilling one’s potential), security and material success, satisfaction of desires, acquisition of power, and the experience of happiness.

To many people, ethical egoism may sound alien, especially if they have heard all their lives about the noble virtue of altruism and the evils of self-centeredness. But consider that self-interest is a pillar on which the economic system of capitalism is built. In a capitalist system, self-interest is supposed to drive people to seek advantages for themselves in the marketplace, compelling them to compete against each other to build a better mousetrap at a lower price. Economists argue that the result of this clash of self-interests is a better, more prosperous society.

Applying the Theory

Suppose Rosa is a successful executive at a large media corporation, and she has her eye on a vice president’s position, which has just become vacant. Vincent, another successful executive in the company, also wants the VP job. Management wants to fill the vacancy as soon as possible, and they are trying to decide between the two most qualified candidates—Rosa and Vincent. One day Rosa discovers some documents left near a photocopier and quickly realizes that they belong to Vincent. One of them is an old memo from the president of a company where Vincent used to work. In it, the president lambastes Vincent for botching an important company project. Rosa knows that despite what she reads in the memo, Vincent has had an exemplary professional career in which he has managed most of his projects extremely well. In fact, she believes that the two of them are about equal in professional skills and accomplishments. She also knows that if management saw the memo, they would almost certainly choose her over Vincent for the VP position. She figures that Vincent probably left the documents there by mistake and would soon return to retrieve them. Impulsively, she makes a copy of the memo for herself.

Now she is confronted with a moral choice. Let us suppose that she has only three options. First, she can destroy her copy of the memo and forget about the whole incident. Second, she can discredit Vincent by showing it to management, thereby securing the VP slot for herself. Third, she can achieve the same result by discrediting Vincent surreptitiously: she can simply leave a copy where management is sure to discover it. Let us also assume that she is an act-egoist who defines her self-interest as self-actualization. Self-actualization for her means developing into the most powerful, most highly respected executive in her profession while maximizing the virtues of loyalty and honesty.

So by the lights of her act-egoism, what should Rosa do? Which choice is in her best interests? Option one is neutral regarding her self-interest. If she destroys her copy of the memo, she will neither gain nor lose an advantage for herself. Option two is more complicated. If she overtly discredits Vincent, she will probably land the VP spot—a feat that fits nicely with her desire to become a powerful executive. But such a barefaced sabotaging of someone else’s career would likely trouble management, and their loss of some respect for Rosa would impede future advancement in her career. They may also come to distrust her. Rosa’s backstabbing would also probably erode the trust and respect of her subordinates (those who report to her).
If so, their performance may suffer, and any deficiencies in Rosa’s subordinates would reflect on her leadership skills. Over time, she may be able to regain the respect of management through dazzling successes in her field, but the respect and trust of others may be much harder to regain. Option two involves the unauthorized, deceitful use of personal information against another person—not an action that encourages the virtue of honesty in Rosa. In fact, her dishonesty may weaken her moral resolve and make similar acts of deceit more probable.

Like option two, option three would likely secure the VP job for Rosa. But because the deed is surreptitious, it would probably not diminish the respect and trust of others. There is a chance, however, that Rosa’s secret would eventually be uncovered—especially if Vincent suspects Rosa, which is likely. If she is found out, the damage done to her reputation (and possibly her career) might be greater than that caused by the more up-front tactic of option two. Also like option two, option three might weaken the virtue of honesty in Rosa’s character.

Given this situation and Rosa’s brand of act-egoism, she should probably go with option three—but only if the risk of being found out is extremely low. Option three promotes her self-interest dramatically by securing the coveted job at a relatively low cost (a possible erosion of virtue). Option two would also land the job but at very high cost—a loss of other people’s trust and respect, a possible decrease in her chances for career advancement, damage to her professional reputation, and a likely lessening of a virtue critical to Rosa’s self-actualization (honesty). If Rosa believes that the risks to her career and character involved in options two and three are too high, she should probably choose option one. This choice would not promote her best interests, but it would not diminish them either.

Would Rosa’s action be any different if judged from the perspective of rule-egoism? Suppose Rosa, like many other ethical egoists, thinks that her actions should be guided by this rule (or something like it): People should be honest in their dealings with others—that is, except in insignificant matters (white lies), they should not lie to others or mislead them. She believes that adhering to this prohibition against dishonesty is in her best interests. The rule, however, would disallow both options two and three, for they involve significant deception. Only option one would be left. But if obeying the rule would lead to a major setback for her interests, Rosa might decide to ignore it in this case (or reject it altogether as contrary to the spirit of ethical egoism). If so, she might have to fall back on act-egoism and decide in favor of option three.

Evaluating the Theory

Is ethical egoism a plausible moral theory? Let us find out by examining arguments in its favor and applying the moral criteria of adequacy.

The primary argument for ethical egoism depends heavily on a scientific theory known as psychological egoism, the view that the motive for all our actions is self-interest. Whatever we do, we do because we want to promote our own welfare. Psychological egoism, we are told, is simply a description of the true nature of our motivations. We are, in short, born to look out for number one.

Putting psychological egoism to good use, the ethical egoist reasons as follows: We can never be morally obligated to perform an action that we cannot possibly do. This is just an obvious fact about morality.
Since we are not able to prevent a hurricane from blasting across a coastal city, we are not morally obligated to prevent it. Likewise, since we are not able to perform an action except out of self-interest (the claim of psychological egoism), we are not morally obligated to perform an action unless motivated by self-interest. That is, we are morally obligated to do only what our self-interest motivates us to do. Here is the argument stated more formally:

1. We are not able to perform an action except out of self-interest (psychological egoism).
2. We are not morally obligated to perform an action unless motivated by self-interest.
3. Therefore, we are morally obligated to do only what our self-interest motivates us to do.

Notice that even if psychological egoism is true, this argument does not establish that an action is right if and only if it promotes one’s self-interest (the claim of ethical egoism). But it does demonstrate that an action cannot be right unless it at least promotes one’s self-interest. To put it another way, an action that does not advance one’s own welfare cannot be right.

Is psychological egoism true? Many people think it is and offer several arguments in its favor. One line of reasoning is that psychological egoism is true because experience shows that all our actions are in fact motivated by self-interest. All our actions—including seemingly altruistic ones—are performed to gain some benefit for ourselves. This argument, however, is far from conclusive. Sometimes people do perform altruistic acts because doing so is in their best interests. Smith may contribute to charity because such generosity furthers his political ambitions. Jones may do volunteer work for the Red Cross because it looks good on her résumé. But people also seem to do things that are not motivated by self-interest. They sometimes risk their lives by rushing into a burning building to rescue a complete stranger. They may impair their health by donating a kidney to prevent one of their children from dying. Explanations that appeal to self-interest in such cases seem implausible. Moreover, people often have self-destructive habits (for example, drinking excessively and driving recklessly)—habits that are unlikely to be in anyone’s best interests.

Some ethical egoists may argue in a slightly different vein: People get satisfaction (or happiness or pleasure) from what they do, including their so-called unselfish or altruistic acts. Therefore, they perform unselfish or altruistic actions because doing so gives them satisfaction. A man saves a child from a burning building because he wants the emotional satisfaction that comes from saving a life. Our actions, no matter how we characterize them, are all about self-interest.

This argument is based on a conceptual confusion. It says that we perform selfless acts to achieve satisfaction. Satisfaction is the object of the whole exercise. But if we experience satisfaction in performing an action, that does not show that our goal in performing the action is satisfaction. A much more plausible account is that we desire something other than satisfaction and then experience satisfaction as a result of getting what we desired. Consider, for example, our man who saves the child from a fire. He rescues the child and feels satisfaction—but he could not have experienced that satisfaction unless he already had a desire to save the child or cared what happened to her.
If he did not have such a desire or care about her, how could he have derived any satisfaction from his actions? To experience satisfaction he had to have a desire for something other than his own satisfaction. The moral of the story is that satisfaction is the result of getting what we want—not the object of our desires. This view fits well with our own experience. Most often when we act according to some purpose, we are not focused on, or aware of, our satisfaction. We concentrate on obtaining the real object of our efforts, and when we succeed, we then feel satisfaction.

The philosopher Joel Feinberg makes a similar point about the pursuit of happiness. He asks us to imagine a person, Jones, who has no desire for much of anything—except happiness. Jones has no interest in knowledge for its own sake, the beauty of nature, art and literature, sports, crafts, or business. But Jones does have “an overwhelming passion for, a complete preoccupation with, his own happiness. The one desire of his life is to be happy.”1 The irony is that using this approach, Jones will not find happiness. He cannot pursue happiness directly and expect to find it. To achieve happiness, he must pursue other aims whose pursuit yields happiness as a by-product. We must conclude that it is not the case that our only motivation for our actions is the desire for happiness (or satisfaction or pleasure).

Can Ethical Egoism Be Advocated?

Some critics of ethical egoism say that it is a very strange theory because its adherents cannot urge others to become ethical egoists! The philosopher Theodore Schick Jr. makes the point:

Even if ethical egoism did provide necessary and sufficient conditions for an action’s being right, it would be a peculiar sort of ethical theory, for its adherents couldn’t consistently advocate it. Suppose that someone came to an ethical egoist for moral advice. If the ethical egoist wanted to do what is in his best interest, he would not tell his client to do what is in her best interest because her interests might conflict with his. Rather, he would tell her to do what is in his best interest. Such advice has been satirized on national TV. Al Franken, a former writer for Saturday Night Live and author of Rush Limbaugh Is a Big Fat Idiot and Other Observations, proclaimed on a number of Saturday Night Live shows in the early 1980s that whereas the 1970s were known as the “me” decade, the 1980s were going to be known as the “Al Franken” decade. So whenever anyone was faced with a difficult decision, the individual should ask herself, “How can I most benefit Al Franken?”*

*Theodore Schick Jr., in Doing Philosophy: An Introduction through Thought Experiments, by Schick and Lewis Vaughn, 2nd ed. (Boston: McGraw-Hill, 2003), 327.

These reflections show that psychological egoism is a dubious theory, and if we construe self-interest as satisfaction, pleasure, or happiness, the theory seems false. Still, some may not give up the argument from experience (mentioned earlier), insisting that when properly interpreted, all our actions (including those that seem purely altruistic or unselfish) can be shown to be motivated by self-interest. All the counterexamples that seem to suggest that psychological egoism is false actually are evidence that it is true. Smith’s contributing to charity may look altruistic, but he is really trying to impress a woman he would like to date. Jones’s volunteer work at the Red Cross may seem unselfish, but she is just trying to cultivate some business contacts.
Every counterexample can be reinterpreted to support the theory. Critics have been quick to charge that this way of defending psychological egoism is a mistake. It renders the theory untestable and useless. It ensures that no evidence could possibly count against it, and therefore it does not tell us anything about self-interested actions. Anything we say about such actions would be consistent with the theory. Any theory that is so uninformative could not be used to support another theory—including ethical egoism.

So far we have found the arguments for ethical egoism ineffective. Now we can ask another question: Are there any good arguments against ethical egoism? This is where the moral criteria of adequacy come in. Recall that an important first step in evaluating a moral theory (or any other kind of theory) is to determine if it meets the minimum requirement of coherence, or internal consistency. As it turns out, some critics of ethical egoism have brought the charge of logical or practical inconsistency against the theory. But in general these criticisms seem to fall short of a knockout blow to ethical egoism. Devising counterarguments that can undercut the criticisms seems to be a straightforward business. Let us assume, then, that ethical egoism is in fact eligible for evaluation using the criteria of adequacy.

We begin with Criterion 1, consistency with considered judgments. A major criticism of ethical egoism is that it is not consistent with many of our considered moral judgments—judgments that seem highly plausible and commonsensical. Specifically, ethical egoism seems to sanction actions that we would surely regard as abominable. Suppose a young man visits his elderly, bedridden father. When he sees that no one else is around, he uses a pillow to smother the old man in order to collect on his life insurance. Suppose also that the action is in the son’s best interests; it will cause not the least bit of unpleasant feelings in him; and the crime will remain his own terrible secret. According to ethical egoism, this heinous act is morally right. The son did his duty.

An ethical egoist might object to this line of argument by saying that refraining from committing evil acts is actually endorsed by ethical egoism—one’s best interests are served by refraining. You should not murder or steal, for example, because it might encourage others to do the same to you, or it might undermine trust, security, or cooperation in society, which would not be in your best interests. For these reasons, you should obey the law or the rules of conventional morality (as the rule-egoist might do). But following the rules is clearly not always in one’s best interests. Sometimes committing a wicked act really does promote one’s own welfare. In the case of the murdering son, no one will seek revenge for the secret murder, cooperation and trust in society will not be affected, and the murderer will suffer no psychological torments. There seems to be no downside here—but the son’s rewards for committing the deed will be great. Consistently looking out for one’s own welfare sometimes requires rule violations and exceptions. In fact, some argue that the interests of ethical egoists may be best served when they urge everyone else to obey the rules while they themselves secretly break them.

If ethical egoism does conflict with our considered judgments, it is questionable at best. But it has been accused of another defect as well: it fails Criterion 2, consistency with our moral experiences.
One aspect of morality is so fundamental that we may plausibly view it as a basic fact of the moral life: moral impartiality, or treating equals equally. We know that in our dealings with the world, we are supposed to take into account the treatment of others as well as that of ourselves. The moral life is lived with the wider world in mind. We must give all persons their due and treat all equals equally, for in the moral sense we are all equals. Each person is presumed to have the same rights—and to have interests that are just as important—as everyone else, unless we have good reason for thinking otherwise. If one person is qualified for a job, and another person is equally qualified, we would be guilty of discrimination if we hired one and not the other based solely on race, sex, skin color, or ancestry. These factors are not morally relevant. People who do treat equals unequally in such ways are known as racists, sexists, bigots, and the like. Probably the most serious charge against ethical egoism is that it discriminates against people in the same fashion. It arbitrarily treats the interests of some people (oneself) as more important than the interests of all others (the rest of the world)—even though there is no morally relevant difference between the two. The failure of ethical egoism to treat equals equally seems a serious defect in the theory. It conflicts with a major component of our moral existence. For many critics, this single defect is enough reason to reject the theory.
UTILITARIANISM

Are you a utilitarian? To find out, consider the following scenario: After years of research, a medical scientist—Dr. X—realizes that she is just one step away from developing a cure for all known forms of heart disease. Such a breakthrough would save hundreds of thousands of lives—perhaps millions. The world could finally be rid of heart attacks, strokes, heart failure, and the like, a feat as monumental as the eradication of deadly smallpox. That one last step in her research, however, is technologically feasible but morally problematic. It involves the killing of a single healthy human being to microscopically examine the person’s heart tissue just seconds after the heart stops beating. The crucial piece of information needed to perfect the cure can be acquired only as just described; it cannot be extracted from the heart of a cadaver, an accident victim, someone suffering from a disease, or a person who has been dead for more than sixty seconds.

Dr. X decides that the benefits to humanity from the cure are just too great to ignore. She locates a suitable candidate for the operation: a homeless man with no living relatives and no friends—someone who would not be missed. Through some elaborate subterfuge she manages to secretly do what needs to be done, killing the man and successfully performing the operation. She formulates the cure and saves countless lives. No one ever discovers how she obtained the last bit of information she needed to devise the cure, and she feels not the slightest guilt for her actions.

Did Dr. X do right? If you think so, then you may be a utilitarian. A utilitarian is more likely to believe that what Dr. X did was right, because it brought about consequences that were more good than bad. One man died, but countless others were saved. If you think that Dr. X did wrong, you may be a nonconsequentialist. A nonconsequentialist is likely to believe that Dr. X did wrong, because of the nature of her action: it was murder. The consequences are beside the point.

In this example, we get a hint of some of the elements that have made utilitarianism so attractive (and often controversial) to so many. First, whether or not we agree with the utilitarian view in this case, we can see that it has some plausibility. We tend to think it entirely natural to judge the morality of an action by the effects that it has on the people involved. To decide if we do right or wrong, we want to know whether the consequences of our actions are good or bad, whether they bring pleasure or pain, whether they enhance or diminish the welfare of ourselves and others. Second, the utilitarian formula for distinguishing right and wrong actions seems exceptionally straightforward. We simply calculate which action among several possible actions has the best balance of good over evil, everyone considered—and act accordingly. Moral choice is apparently reduced to a single moral principle and simple math. Third, at least sometimes, we all seem to be utilitarians. We may tell a white lie because the truth would hurt someone’s feelings. We may break a promise because keeping it causes more harm than good. We may want a criminal punished not because he broke the law but because the punishment may deter him from future crimes. We justify such departures from conventional morality on the grounds that they produce better consequences.

Utilitarianism is one of the most influential moral theories in history.
The English philosopher Jeremy Bentham (1748–1832) was the first to fill out the theory in detail, and the English philosopher and economist John Stuart Mill (1806–73) developed it further. In their hands utilitarianism became a powerful instrument of social reform. It provided a rationale for promoting women’s rights, improving the treatment of prisoners, advocating animal rights, and aiding the poor—all radical ideas in Bentham’s and Mill’s day. In the twenty-first century, the theory still has a strong effect on moral and policy decision making in many areas, including health care, criminal justice, and government.

Classic utilitarianism—the kind of act-utilitarianism formulated by Bentham—is the simplest form of the theory. It affirms the principle that the right action is the one that directly produces the best balance of happiness over unhappiness for all concerned. Happiness is an intrinsic good—the only intrinsic good. What matters most is how much net happiness comes directly from performing an action (as opposed to following a rule that applies to such actions). To determine the right action, we need only compute the amount of happiness that each possible action generates and choose the one that generates the most. There are no rules to take into account—just the single, simple utilitarian principle. Each set of circumstances calling for a moral choice is unique, requiring a new calculation of the varying consequences of possible actions.

Bentham called the utilitarian principle the principle of utility and asserted that all our actions can be judged by it. (Mill called it the greatest happiness principle.) As Bentham says,

By the principle of utility is meant that principle which approves or disapproves of every action whatsoever, according to the tendency which it appears to have to augment or diminish the happiness of the party whose interest is in question: or, what is the same thing in other words, to promote or to oppose that happiness. . . . By utility is meant that property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness, (all this in the present case comes to the same thing) or (what comes again to the same thing) to prevent the happening of mischief, pain, evil, or unhappiness to the party whose interest is considered[.]2

The principle of utility, of course, makes the theory consequentialist. The emphasis on happiness or pleasure makes it hedonistic, for happiness is the only intrinsic good.

As you can see, there is a world of difference between the moral focus of utilitarianism (in all its forms) and that of ethical egoism. The point of ethical egoism is to promote one’s own good. An underlying tenet of utilitarianism is that you should promote the good of everyone concerned and that everyone counts equally. When deliberating about which action to perform, you must take into account your own happiness as well as that of everyone else who will be affected by your decision—and no one is to be given privileged status. Such evenhandedness requires a large measure of impartiality, a quality that plays a role in every plausible moral theory. Mill says it best:

[T]he happiness which forms the utilitarian standard of what is right in conduct, is not the agent’s own happiness, but that of all concerned. As between his own happiness and that of others, utilitarianism requires him to be as strictly impartial as a disinterested and benevolent spectator.3
In classic act-utilitarianism, knowing how to tote up the amount of utility, or happiness, generated by various actions is essential. Bentham’s answer to this requirement is the hedonic calculus, which quantifies happiness and handles the necessary calculations. His approach is straightforward in conception but complicated in the details: For each possible action in a particular situation, determine the total amount of happiness or unhappiness produced by it for one individual (that is, the net happiness—happiness minus unhappiness). Gauge the level of happiness with seven basic characteristics such as intensity, duration, and fecundity (how likely the pleasure or pain is to be followed by more pleasure or pain). Repeat this process for all individuals involved and sum their happiness or unhappiness to arrive at an overall net happiness for that particular action. Repeat for each possible action. The action with the best score (the most happiness or least unhappiness) is the morally right one.

Notice that in this arrangement, only the total amount of net happiness for each action matters. How the happiness is distributed among the persons involved does not figure into the calculations. This means that an action that affects ten people and produces one hundred units of happiness is to be preferred over an action that affects those same ten people but generates only fifty units of happiness—even if most of the one hundred units go to just one individual, and the fifty units divide equally among the ten. The aggregate of happiness is decisive; its distribution is not. Classic utilitarianism, though, does ask that any given amount of happiness be spread among as many people as possible—thus the utilitarian slogan “The greatest happiness for the greatest number.”

Both Bentham and Mill define happiness as pleasure. In Mill’s words,

The creed which accepts as the foundation of morals utility, or the greatest happiness principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. By “happiness” is intended pleasure, and the absence of pain; by “unhappiness,” pain, and the privation of pleasure.4

They differ, though, on the nature of happiness and how it should be measured. Bentham thinks that happiness varies only in quantity—different actions produce different amounts of happiness. To judge the intensity, duration, or fecundity of happiness is to calculate its quantity. Mill contends that happiness can vary in quantity and quality. There are lower pleasures, such as eating, drinking, and having sex, and there are higher pleasures, such as pursuing knowledge, appreciating beauty, and creating art. The higher pleasures are superior to the lower ones. The lower ones can be intense and enjoyable, but the higher ones are qualitatively better and more fulfilling. In this scheme, a person enjoying a mere taste of a higher pleasure may be closer to the moral ideal than a hedonistic glutton who gorges on lower pleasures. Thus Mill declared, “It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied.”5 In Bentham’s view, the glutton—who acquires a larger quantity of pleasure—would be closer to the ideal.

The problem for Mill is to justify his hierarchical ranking of the various pleasures. He tries to do so by appealing to what the majority prefers—that is, the majority of people who have experienced both the lower and higher pleasures.
But this approach probably will not help, because people can differ drastically in how they rank pleasures. It is possible, for example, that a majority of people who have experienced a range of pleasures would actually disagree with Mill’s rankings. In fact, any effort to devise such rankings using the principle of utility seems unlikely to succeed.

Many critics have argued that the idea of defining right action in terms of some intrinsic nonmoral good (whether pleasure, happiness, or anything else) is seriously problematic. Attempts to devise such a definition have been fraught with complications—a major one being that people have different ideas about what things are intrinsically valuable. Some utilitarians have tried to sidestep these difficulties by insisting that maximizing utility means maximizing people’s preferences, whatever they are. This formulation seems to avoid some of the difficulties just mentioned but falls prey to another: some people’s preferences may be clearly objectionable when judged by almost any moral standard, whether utilitarian or nonconsequentialist. Some people, after all, have ghastly preferences—preferences, say, for torturing children or killing innocent people for fun. Some critics say that repairing this preference utilitarianism to avoid sanctioning objectionable actions seems unlikely without introducing some nonutilitarian moral principles such as justice, rights, and obligations.

Like act-utilitarianism, rule-utilitarianism aims at the greatest good for all affected individuals, but it maintains that we travel an indirect route to that goal. In rule-utilitarianism, the morally right action is not the one that directly brings about the greatest good but the one covered by a rule that, if followed consistently, produces the greatest good for all. In act-utilitarianism, we must examine each action to see how much good (or evil) it generates. Rule-utilitarianism would have us first determine what rule an action falls under, then see if that rule would likely maximize utility if everyone followed it. In effect, the rule-utilitarian asks, “What if everyone followed this rule?” An act-utilitarian tries to judge the rightness of actions by the consequences they produce, occasionally relying on “rules of thumb” (such as “Usually we should not harm innocents”) merely to save time. A rule-utilitarian, however, tries to follow every valid rule—even if doing so may not maximize utility in a specific situation.

In our example featuring Dr. X and the cure for heart disease, an act-utilitarian might compare the net happiness produced by performing the lethal operation and by not performing it, opting finally for the former because it maximizes happiness. A rule-utilitarian, on the other hand, would consider what moral rules seem to apply to the situation. One rule might be “It is permissible to conduct medical procedures or experiments on people without their full knowledge and consent in order to substantially advance medical science.” Another one might say “Do not conduct medical procedures or experiments on people without their full knowledge and consent.” If the first rule is generally followed, happiness is not likely to be maximized in the long run. Widespread adherence to this rule would encourage medical scientists and physicians to murder patients for the good of science.
Such practices would outrage people and cause them to fear and distrust science and the medical profession, leading to the breakdown of the entire health care system and most medical research. But if the second rule is consistently adhered to, happiness is likely to be maximized over the long haul. Trust in physicians and medical scientists would be maintained, and promising research could continue as long as it was conducted with the patient’s consent. The right action, then, is for Dr. X not to perform the gruesome operation.

Applying the Theory

Let us apply utilitarianism to another type of case. Imagine that for more than a year a terrorist has been carrying out devastating attacks in a developing country, killing hundreds of innocent men, women, and children. He seems unstoppable. He always manages to elude capture. In fact, because of his stealth, the expert assistance of a few accomplices, and his support among the general population, he will most likely never be captured or killed. The authorities have no idea where he hides or where he will strike next. But they are sure that he will go on killing indefinitely. They have tried every tactic they know to put an end to the slaughter, but it goes on and on.

Finally, as a last resort, the chief of the nation’s antiterrorist police orders the arrest of the terrorist’s family—a wife and seven children. The chief intends to kill the wife and three of the children right away (to show that he is serious), then threaten to kill the remaining four unless the terrorist turns himself in. There is no doubt that the chief will make good on his intentions, and there is excellent reason to believe that the terrorist will indeed turn himself in rather than allow his remaining children to be executed. Suppose that the chief has only two options: (1) refrain from murdering the terrorist’s family and continue with the usual antiterrorist tactics (which have only a tiny chance of being successful); or (2) kill the wife and three of the children and threaten to kill the rest (a strategy with a very high chance of success). According to utilitarianism, which action is right?

Peter Singer, Utilitarian

The distinguished philosopher Peter Singer is arguably the most famous (and controversial) utilitarian of recent years. Many newspaper and magazine articles have been written about him, and many people have declared their agreement with, or vociferous opposition to, his views. This is how one magazine characterizes Singer and his ideas:

The New Yorker calls him “the most influential living philosopher.” His critics call him “the most dangerous man in the world.” Peter Singer, the De Camp Professor of Bioethics at Princeton University’s Center for Human Values, is most widely and controversially known for his view that animals have the same moral status as humans. . . . Singer is perhaps the most thoroughgoing philosophical utilitarian since Jeremy Bentham. As such, he believes animals have rights because the relevant moral consideration is not whether a being can reason or talk but whether it can suffer. Jettisoning the traditional distinction between humans and nonhumans, Singer distinguishes instead between persons and non-persons. Persons are beings that feel, reason, have self-awareness, and look forward to a future. Thus, fetuses and some very impaired human beings are not persons in his view and have a lesser moral status than, say, adult gorillas and chimpanzees.
Given such views, it was no surprise that antiabortion activists and disability rights advocates loudly decried the Australian-born Singer’s appointment at Princeton last year. Indeed, his language regarding the treatment of disabled human beings is at times appallingly similar to the eugenic arguments used by Nazi theorists concerning “life unworthy of life.” Singer, however, believes that only parents, not the state, should have the power to make decisions about the fates of disabled infants.*

*Ronald Bailey, excerpts from “The Pursuit of Happiness, Peter Singer Interviewed by Ronald Bailey.” Reason Magazine, December 2000. Reprinted with permission from Reason Magazine and Reason.com.

As an act-utilitarian, the chief might reason like this: Action 2 would probably result in a net gain of happiness, everyone considered. Forcing the terrorist to turn himself in would save hundreds of lives. His killing spree would be over. The general level of fear and apprehension in the country might subside, and even the economy—which has slowed because of terrorism—might improve. The prestige of the terrorism chief and his agents might increase. On the downside, performing Action 2 would guarantee that four innocent people (and perhaps eight) would lose their lives, and the terrorist (whose welfare must also be included in the calculations) would be imprisoned for life or executed. In addition, many citizens would be disturbed by the killing of innocent people and the flouting of the law by the police, believing that these actions are wrong and likely to set a dangerous precedent. Over time, though, these misgivings may diminish. All things considered, then, Action 2 would probably produce more happiness than unhappiness. Action 1, on the other hand, maintains the status quo. It would allow the terrorist to continue murdering innocent people and spreading fear throughout the land—a decidedly unhappy result. It clearly would produce more unhappiness than happiness. Action 2, then, would produce the most happiness and would therefore be the morally right option.

As a rule-utilitarian, the chief might make a different choice. He would have to decide what rules would apply to the situation and then determine which one, if consistently followed, would yield the most utility. Suppose he must decide between Rule 1 and Rule 2. Rule 1 says, “Do not kill innocent people in order to prevent terrorists from killing other innocent people.” Rule 2 says, “Killing innocent people is permissible if it helps to stop terrorist attacks.” The chief might deliberate as follows: We can be confident that consistently following Rule 2 would have some dire consequences for society. Innocent people would be subject to arbitrary execution, civil rights would be regularly violated, the rule of law would be severely compromised, and trust in government would be degraded. In fact, adhering to Rule 2 might make people more fearful and less secure than terrorist attacks would; it would undermine the very foundations of a free society. In a particular case, killing innocent people to fight terror could possibly have more utility than not killing them. But whether such a strategy would be advantageous to society over the long haul is not at all certain. Consistently following Rule 1 would have none of these unfortunate consequences. If so, a society living according to Rule 1 would be better off than one adhering to Rule 2, and therefore the innocent should not be killed to stop the terrorist.
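The chief’s two deliberations are, in effect, informal versions of the two calculation procedures described earlier: the act-utilitarian scores each available action directly, while the rule-utilitarian scores the long-run consequences of everyone consistently following each candidate rule, then acts on the best rule. The following sketch only makes that structure explicit; the happiness figures are invented stand-ins for the qualitative judgments in the text, not values Bentham or Mill supply, so nothing hangs on the particular numbers.

```python
# Illustrative only: the "units of happiness" below are hypothetical numbers
# standing in for the qualitative judgments made in the chief's deliberation.

def net_utility(effects):
    """Sum happiness (positive) and unhappiness (negative) over everyone affected."""
    return sum(effects.values())

# Act-utilitarian procedure: score each available action directly.
actions = {
    "Action 1: usual tactics only": {
        "future victims of continued attacks": -500,
        "general public living in fear": -200,
    },
    "Action 2: kill family members, coerce surrender": {
        "executed wife and children": -400,
        "terrorist imprisoned or executed": -50,
        "lives saved by ending the attacks": 500,
        "public relief and economic recovery": 200,
        "public unease at police lawbreaking": -100,
    },
}
act_verdict = max(actions, key=lambda a: net_utility(actions[a]))

# Rule-utilitarian procedure: score the long-run effects of everyone
# consistently following each candidate rule, then act on the best rule.
rules = {
    "Rule 1: never kill innocents to stop terrorists": {
        "some attacks not prevented": -300,
        "rule of law, trust, and security preserved": 600,
    },
    "Rule 2: killing innocents is permissible to stop attacks": {
        "some attacks prevented": 300,
        "arbitrary executions and eroded trust in government": -700,
    },
}
rule_verdict = max(rules, key=lambda r: net_utility(rules[r]))

print("Act-utilitarian choice: ", act_verdict)   # Action 2 wins on these numbers
print("Rule-utilitarian choice:", rule_verdict)  # Rule 1 wins on these numbers
```

On these made-up numbers the two procedures reach the opposing verdicts described above, which is the structural point: what gets scored differs (individual actions versus general rules), so the same situation can come out differently.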
QUICK REVIEW

principle of utility—Bentham’s definition: “that principle which approves or disapproves of every action whatsoever, according to the tendency which it appears to have to augment or diminish the happiness of the party whose interest is in question.”

greatest happiness principle—Mill’s definition: the principle that “holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.”

Evaluating the Theory

Bentham and Mill do not offer ironclad arguments demonstrating that utilitarianism is the best moral theory. Mill, however, does try to show that the principle of utility is at least a plausible basis for the theory. After all, he says, humans by nature desire happiness and nothing but happiness. If so, happiness is the standard by which we should judge human conduct, and therefore the principle of utility is the heart of morality. But this kind of moral argument is controversial, because it reasons from what is to what should be. In addition, as pointed out in the discussion of psychological egoism, the notion that happiness is our sole motivation is dubious.

What can we learn about utilitarianism by applying the moral criteria of adequacy? Let us begin with classic act-utilitarianism and deal with rule-utilitarianism later. We can also postpone discussion of the minimum requirement of coherence, because critics have been more inclined to charge rule-utilitarianism than act-utilitarianism with having significant internal inconsistencies.

If we begin with Criterion 1 (consistency with considered judgments), we run into what some have called act-utilitarianism’s most serious problem: It conflicts with commonsense views about justice. Justice requires equal treatment of persons. It demands, for example, that goods such as happiness be distributed fairly, that we not harm one person to make several other persons happy. Utilitarianism says that everyone should be included in utility calculations, but it does not require that everyone get an equal share.

Consider this famous scenario from the philosopher H. J. McCloskey: While a utilitarian is visiting an area plagued by racial tension, a black man rapes a white woman. Race riots ensue, and white mobs roam the streets, beating and lynching black people as the police secretly condone the violence and do nothing to stop it. The utilitarian realizes that by giving false testimony, he could bring about the quick arrest and conviction of a black man whom he picks at random. As a result of this lie, the riots and the lynchings would stop, and innocent lives would be spared. As a utilitarian, he believes he has a duty to bear false witness to punish an innocent person.

If right actions are those that maximize happiness, then it seems that the utilitarian would be doing right by framing the innocent person. The innocent person, of course, would experience unhappiness (he might be sent to prison or even executed), but framing him would halt the riots and prevent many other innocent people from being killed, resulting in a net gain in overall happiness. Framing the innocent is unjust, though, and our considered moral judgments would be at odds with such an action. Here the commonsense idea of justice and the principle of utility collide. The conflict raises doubts about act-utilitarianism as a moral theory.

Here is another famous example:

This time you are to imagine yourself to be a surgeon, a truly great surgeon.
Among other things you do, you transplant organs, and you are such a great surgeon that the organs you transplant always take. At the moment you have five patients who need organs. Two need one lung each, two need a kidney each, and the fifth needs a heart. If they do not get those organs today, they will all die; if you find organs for them today, you can transplant the organs and they will all live. But where to find the lungs, the kidneys, and the heart? The time is almost up when a report is brought to you that a young man who has just come into your clinic for his yearly check-up has exactly the right blood type, and is in excellent health. Lo, you have a possible donor. All you need do is cut him up and distribute his parts among the five who need them. You ask, but he says, “Sorry. I deeply sympathize, but no.” Would it be morally permissible for you to operate anyway?6

This scenario involves the possible killing of an innocent person for the good of others. There seems little doubt that carrying out the murder and transplanting the victim’s organs into five other people (and thus saving their lives) would maximize utility (assuming, of course, that the surgeon’s deed would not become public, he or she suffered no untoward psychological effects, etc.). Compared with the happiness produced by doing the transplants, the unhappiness of the one unlucky donor seems minor. Therefore, according to act-utilitarianism, you (the surgeon) should commit the murder and do the transplants. But this choice appears to conflict with our considered moral judgments. Killing the healthy young man to benefit the five unhealthy ones seems unjust.

Look at one final case. Suppose a tsunami devastates a coastal area of Singapore. Relief agencies arrive on the scene to distribute food, shelter, and medical care to 100 tsunami victims—disaster aid that amounts to, say, 1,000 units of happiness. There are only two options for the distribution of the 1,000 units. Option A is to divide the 1,000 units equally among all 100 victims, supplying 10 units to each person. Option B is to give 901 units to one victim (who happens to be the richest man in the area) and 99 units to the remaining 99 victims, providing 1 unit per person. Both options distribute the same amount of happiness to the victims—1,000 units. Following the dictates of act-utilitarianism, we would have to say that the two actions (options) have equal utility and so are equally right. But this seems wrong. It seems unjust to distribute the units of happiness so unevenly when all recipients are equals in all morally relevant respects. Like the other examples, this one suggests that act-utilitarianism may be an inadequate theory.

Detractors also make parallel arguments against the theory in many cases besides those involving injustice. A familiar charge is that act-utilitarianism conflicts with our commonsense judgments both about people’s rights and about their obligations to one another. Consider first this scenario about rights: Mr. Y is a nurse in a care facility for the elderly. He tends to many bedridden patients who are in pain most of the time, are financial and emotional burdens to their families, and are not expected to live more than a few weeks. Despite their misery, they do not wish for death; they want only to be free of pain. Mr. Y, an act-utilitarian, sees that there would be a lot more happiness in the world and less pain if these patients died sooner rather than later.
He decides to take matters into his own hands, so he secretly gives them a drug that kills them quietly and painlessly. Their families and the facility staff feel enormous relief. No one will ever know what Mr. Y has done, and no one suspects foul play. He feels no guilt—only immense satisfaction knowing that he has helped make the world a better place.

If Mr. Y does indeed maximize happiness in this situation, then his action is right, according to act-utilitarianism. Yet most people would probably say that he violated the rights of his patients. The commonsense view is that people have certain rights that should not be violated merely to create a better balance of happiness over unhappiness.

Another typical criticism of act-utilitarianism is that it appears to fly in the face of our considered judgments about our obligations to other people. Suppose Ms. Z must decide between two actions: Action A will produce 1,001 units of happiness; Action B, 1,000 units. The only other significant difference between them is that Action A entails the breaking of a promise. By act-utilitarian lights, Ms. Z should choose Action A because it yields more happiness than Action B does. But we tend to think that keeping a promise is more important than a tiny gain in happiness. We often try to keep our promises even when we know that doing so will result in a decrease in utility. Some say that if our obligations to others sometimes outweigh considerations of overall happiness, then act-utilitarianism must be problematic.7

What can an act-utilitarian say to rebut these charges about justice, rights, and obligations? One frequent response goes like this: The scenarios put forth by critics (such as the cases just cited) are misleading and implausible. They are always set up so that actions regarded as immoral produce the greatest happiness, leading to the conclusion that utilitarianism conflicts with commonsense morality and therefore cannot be an adequate moral theory. But in real life these kinds of actions almost never maximize happiness. In the case of Dr. X, her crime would almost certainly be discovered by physicians or other scientists, and she would be exposed as a murderer. This revelation would surely destroy her career, undermine patient-physician trust, tarnish the reputation of the scientific community, dry up funding for legitimate research, and prompt countless lawsuits. Scientists might even refuse to use the data from Dr. X’s research because she obtained them through a heinous act. As one philosopher put it, “Given a clearheaded view of the world as it is and a realistic understanding of man’s nature, it becomes more and more evident that injustice will never have, in the long run, greater utility than justice. . . . Thus injustice becomes, in actual practice, a source of great social disutility.”8

The usual response to this defense is that the act-utilitarian is probably correct that most violations of commonsense morality do not maximize happiness—but at least some violations do. At least sometimes actions that have the best consequences do conflict with our credible moral principles or considered moral judgments. The charge is that the act-utilitarian cannot plausibly dismiss all counterexamples, and only one counterexample is required to show that maximizing utility is not a necessary and sufficient condition for right action.9
Unlike ethical egoism, act-utilitarianism (as well as rule-utilitarianism) does not fail Criterion 2 (consistency with our moral experiences), so we can move on to Criterion 3 (usefulness in moral problem solving). On this score, some scholars argue that act-utilitarianism deserves bad marks. Probably their most common complaint is what has been called the no-rest problem. Utilitarianism (in all its forms) requires that in our actions we always try to maximize utility, everyone considered. Say you are watching television. Utilitarianism would have you ask yourself, “Is this the best way to maximize happiness for everyone?” Probably not. You could be giving to charity or working as a volunteer for the local hospital or giving your coat to a homeless person or selling everything you own to buy food for hungry children. Whatever you are doing, there usually is something else you could do that would better maximize net happiness for everyone. If act-utilitarianism does demand too much of us, then its usefulness as a guide to the moral life is suspect.

One possible reply to this criticism is that the utilitarian burden can be lightened by devising rules that place limits on supererogatory actions. Another reply is that our moral common sense is simply wrong on this issue—we should be willing to perform, as our duty, many actions that are usually considered supererogatory. If necessary, we should be willing to give up our personal ambitions for the good of everyone. We should be willing, for example, to sacrifice a very large portion of our resources to help the poor. To some, this reply seems questionable precisely because it challenges our commonsense moral intuitions—the very intuitions that we use to measure the plausibility of our moral judgments and principles. Moral common sense, they say, can be mistaken, and our intuitions can be tenuous or distorted—but we should cast them aside only for good reasons. But a few utilitarians directly reject this appeal to common sense, declaring that relying so heavily on such intuitions is a mistake:

Admittedly utilitarianism does have consequences which are incompatible with the common moral consciousness, but I tended to take the view “so much the worse for the common moral consciousness.” That is, I was inclined to reject the common methodology of testing general ethical principles by seeing how they square with our feelings in particular instances.10

These utilitarians would ask, Isn’t it possible that in dire circumstances, saving a hundred innocent lives by allowing one to die would be the best thing to do even though allowing that one death would be a tragedy? Aren’t there times when the norms of justice and duty should be ignored for the greater good of society?

To avoid the problems that act-utilitarianism is alleged to have, some utilitarians have turned to rule-utilitarianism. By positing rules that should be consistently followed, rule-utilitarianism seems to align its moral judgments closer to those of common sense. And the theory itself is based on ideas about morality that seem perfectly sensible:

In general, rule utilitarianism seems to involve two rather plausible intuitions. In the first place, rule utilitarians want to emphasize that moral rules are important. Individual acts are justified by being shown to be in accordance with correct moral rules. In the second place, utility is important. Moral rules are shown to be correct by being shown to lead, somehow, to the maximization of utility. . . .
Rule utilitarianism, in its various forms, tries to combine these intuitions into a single, coherent criterion of morality.11
But some philosophers have accused the theory of being internally inconsistent. They say, in other words, that it fails the minimum requirement of coherence. (If so, we can forgo discussion of our three criteria of adequacy.) They argue as follows: Rule-utilitarianism says that actions are right if they conform to rules devised to maximize utility. Rules with exceptions or qualifications, however, maximize utility better than rules without them. For example, a rule like “Do not steal except in these circumstances” maximizes utility better than the rule “Do not steal.” It seems, then, that the best rules are those with amendments that make them as specific as possible to particular cases. But if the rules are changed in this way to maximize utility, they would end up mandating the same actions that act-utilitarianism does. They all would say, in effect, “Do not do this except to maximize utility.” Rule-utilitarianism would lapse into act-utilitarianism.
Some rule-utilitarians respond to this criticism by denying that rules with a lot of exceptions would maximize utility. They say that people might fear that their own well-being would be threatened when others make multiple exceptions to rules. You might be reassured by a rule such as “Do not harm others” but feel uneasy about the rule “Do not harm others except in this situation.” What if you end up in that particular situation?
Those who criticize the theory admit that it is indeed possible for an exception-laden rule to produce more unhappiness than happiness because of the anxiety it causes. But, they say, it is also possible for such a rule to generate a very large measure of happiness—large enough to more than offset any ill effects spawned by rule exceptions. If so, then rule-utilitarianism could easily slip into act-utilitarianism, thus exhibiting all the conflicts with commonsense morality that act-utilitarianism is supposed to have.
LEARNING FROM UTILITARIANISM
Regardless of how much credence we give to the arguments for and against utilitarianism, we must admit that the theory seems to embody a large part of the truth about morality. First, utilitarianism urges us to recognize that the consequences of our actions do indeed make a difference in our moral deliberations. Whatever factors work to make an action right (or wrong), surely the consequences of what we do must somehow be among them. Even if lying is morally wrong primarily because of the kind of act it is, we cannot plausibly think that a lie that saves a thousand lives is morally equivalent to one that changes nothing. Sometimes our considered moral judgments may tell us that an action is right regardless of the good (or evil) it does. And sometimes they may say that the good it does matters a great deal.
CRITICAL THOUGHT: Cross-Species Transplants: What Would a Utilitarian Do?
Like any adequate moral theory, utilitarianism should be able to help us resolve moral problems, including new moral issues arising from advances in science and medicine. A striking example of one such issue is cross-species transplantation, the transplanting of organs from one species to another, usually from nonhuman animals to humans. Scientists are already bioengineering pigs so their organs will not provoke tissue rejection in human recipients. Pigs are thought to be promising organ donors because of the similarities between pig and human organs. Many people are in favor of such research because it could open up new sources of transplantable organs, which are now in short supply and desperately needed by thousands of people whose organs are failing.
Would an act-utilitarian be likely to condone cross-species transplants of organs? If so, on what grounds? Would the unprecedented, “unnatural” character of these operations bother a utilitarian? Why or why not? Would you expect an act-utilitarian to approve of cross-species organ transplants if they involved the killing of one hundred pigs for every successful transplant? If only a very limited number of transplants could be done successfully each year, how do you think an act-utilitarian would decide who gets the operations? Would she choose randomly? Would she ever be justified (by utilitarian considerations) in, say, deciding to save a rich philanthropist while letting a poor person die for lack of a transplant?
Second, utilitarianism—perhaps more than any other moral theory—incorporates the principle of impartiality, a fundamental pillar of morality itself. Everyone concerned counts equally in every moral decision. As Mill says, when we judge the rightness of our actions, utilitarianism requires us to be “as strictly impartial as a disinterested and benevolent spectator.” Discrimination is forbidden, and equality reigns. We would expect no less from a plausible moral theory.
Third, utilitarianism is through and through a moral theory for promoting human welfare. At its core is the moral principle of beneficence—the obligation to act for the well-being of others. Beneficence is not the whole of morality, but to most people it is at least close to its heart.
CHAPTER 6
Nonconsequentialist Theories: Do Your Duty
For the consequentialist, the rightness of an action depends entirely on the effects of that action (or of following the rule that governs it). Good effects make the deed right; bad effects make the deed wrong. But for the nonconsequentialist (otherwise known as a deontologist), the rightness of an action can never be measured by such a variable, contingent standard as the quantity of goodness brought into the world. Rightness derives not from the consequences of an action but from its nature, its right-making characteristics. An action is right (or wrong) not because of what it produces but because of what it is. Yet for all their differences, both consequentialist and deontological theories contain elements that seem to go to the heart of morality and our moral experience. So in this chapter, we look at ethics through a deontological lens and explore the two deontological theories that historically have offered the strongest challenges to consequentialist views: Kant’s moral system and natural law theory.
KANT’S ETHICS
The German philosopher Immanuel Kant (1724–1804) is considered one of the greatest moral philosophers of the modern era. Many scholars would go further and say that he is the greatest moral philosopher of the modern era. As a distinguished thinker of the Enlightenment, he sought to make reason the foundation of morality. For him, reason alone leads us to the right and the good. Therefore, to discover the true path we need not appeal to utility, religion, tradition, authority, happiness, desires, or intuition. We need only heed the dictates of reason, for reason informs us of the moral law just as surely as it reveals the truths of mathematics. Because of each person’s capacity for reason, he or she is a sovereign in the moral realm, a supreme judge of what morality demands. What morality demands (in other words, our duty) is enshrined in the moral law—the changeless, necessary, universal body of moral rules.
In Kant’s ethics, right actions have moral value only if they are done with a “good will”—that is, a will to do your duty for duty’s sake. To act with a good will is to act with a desire to do your duty simply because it is your duty, to act out of pure reverence for the moral law. Without a good will, your actions have no moral worth—even if they accord with the moral law, even if they are done out of sympathy or love, even if they produce good results. Only a good will is unconditionally good, and only an accompanying good will can give your talents, virtues, and actions moral worth. As Kant explains,
Nothing can possibly be conceived in the world, or even out of it, which can be called good without qualification, except a good will. Intelligence, wit, judgement, and the other talents of the mind, however they may be named, or courage, resolution, perseverance, as qualities of temperament, are undoubtedly good and desirable in many respects; but these gifts of nature may also become extremely bad and mischievous if the will which is to make use of them, and which, therefore, constitutes what is called character, is not good. It is the same with the gifts of fortune. Power, riches, honour, even health, and the general well-being and contentment with one’s condition which is called happiness, inspire pride, and often presumption, if there is not a good will to correct the influence of these on the mind. . . . A good will is good not because of what it performs or effects, not by its aptness for the attainment of some proposed end, but simply by virtue of the volition—that is, it is good in itself, and considered by itself is to be esteemed much higher than all that can be brought about by it in favour of any inclination, nay, even of the sum-total of all inclinations.1
So to do right, we must do it for duty’s sake, motivated solely by respect for the moral law. But how do we know what the moral law is? Kant sees the moral law as a set of principles, or rules, stated in the form of imperatives, or commands. Imperatives can be hypothetical or categorical. A hypothetical imperative tells us what we should do if we have certain desires: for example, “If you need money, work for it” or “If you want orange juice, ask for it.” We should obey such imperatives only if we desire the outcomes specified. A categorical imperative, however, is not so iffy. It tells us that we should do something in all situations regardless of our wants and needs.
A moral categorical imperative expresses a command like “Do not steal” or “Do not commit suicide.” Such imperatives are universal and unconditional, containing no stipulations contingent on human desires or preferences. Kant says that the moral law consists entirely of categorical imperatives. They are the authoritative expression of our moral duties. Because they are the products of rational insight and we are rational agents, we can straightforwardly access, understand, and know them as the great truths that they are.
Kant says that all our duties, all the moral categorical imperatives, can be logically derived from a principle that he calls the categorical imperative. It tells us to “Act only on that maxim through which you can at the same time will that it should become a universal law.”2 (Kant actually devised three statements, or versions, of the principle, the one given here and two others; in the next few pages we will examine only the two most important ones.) Kant believes that every action implies a general rule, or maxim. If you steal a car, then your action implies a maxim such as “In this situation, steal a car if you want one.” So the first version of the categorical imperative says that an action is right if you can will that the maxim of an action becomes a moral law applying to all persons. That is, an action is permissible if (1) its maxim can be universalized (if everyone can consistently act on the maxim in similar situations); and (2) you would be willing to let that happen. If you can so will the maxim, then the action is right (permissible). If you cannot, the action is wrong (prohibited). Right actions pass the test of the categorical imperative; wrong actions do not.
Some of the duties derived from the categorical imperative are, in Kant’s words, perfect duties and some, imperfect duties. Perfect duties are those that absolutely must be followed without fail; they have no exceptions. Some perfect duties cited by Kant include duties not to break a promise, not to lie, and not to commit suicide. Imperfect duties are not always to be followed; they do have exceptions. As examples of imperfect duties, Kant mentions duties to develop your talents and to help others in need.
Kant demonstrates how to apply the first version of the categorical imperative to several cases, the most famous of which involves a lying promise. Imagine that you want to borrow money from someone, and you know you will not be able to repay the debt. You also know that you will get the loan if you falsely promise to pay the money back. Is such deceptive borrowing morally permissible? To find out, you have to devise a maxim for the action and ask whether you could consistently will it to become a universal law. Could you consistently will everyone to act on the maxim “If you need money, make a lying promise to borrow some”? Kant’s emphatic answer is no. If all persons adopted this rule, then they would make lying promises to obtain loans. But then everyone would know that such promises are false, and the practice of giving loans based on a promise would no longer exist, because no promises could be trusted. The maxim says that everyone should make a false promise in order to borrow money, but then no one would loan money on the basis of a promise. If acted on by everyone, the maxim would defeat itself. As Kant says, the “maxim would necessarily destroy itself as soon as it was made a universal law.”3 Therefore, you cannot consistently will the maxim to become a universal law.
The action, then, is not morally permissible.
CRITICAL THOUGHT: Sizing Up the Golden Rule
The Golden Rule—“Do unto others as you would have them do unto you”—has some resemblance to Kant’s ethics and has been, in one form or another, implicit in many religious traditions and moral systems. Moral philosophers generally think that it touches on a significant truth about morality. But some have argued that taken by itself, without the aid of any other moral principles or theory, the Golden Rule can lead to implausible conclusions and absurd results. Here is part of a famous critique by Richard Whately (1787–1863):
Supposing any one should regard this golden rule as designed to answer the purpose of a complete system of morality, and to teach us the difference of right and wrong; then, if he had let his land to a farmer, he might consider that the farmer would be glad to be excused paying any rent for it, since he would himself, if he were the farmer, prefer having the land rent-free; and that, therefore, the rule of doing as he would be done by requires him to give up all his property. So also the shopkeeper might, on the same principle, think that the rule required him to part with his goods under prime cost, or to give them away, and thus to ruin himself. Now such a procedure would be absurd. . . . You have seen, then, that the golden rule was far from being designed to impart to men the first notions of justice. On the contrary, it presupposes that knowledge; and if we had no such notions, we could not properly apply the rule. But the real design of it is to put us on our guard against the danger of being blinded by self-interest.*
How does the Golden Rule resemble Kant’s theory? How does it differ? Do you agree with Whately’s criticism? Why or why not? How could the Golden Rule be qualified or supplemented to blunt Whately’s critique? John Stuart Mill said that the Golden Rule was the essence of utilitarianism. What do you think he meant by this?
*Richard Whately, quoted in Louis P. Pojman and Lewis Vaughn, The Moral Life (New York: Oxford University Press, 2007), 353–54.
Kant believes that besides the rule forbidding the breaking of promises, the categorical imperative generates several other duties. Among these he includes prohibitions against committing suicide, lying, and killing innocent people. Some universalized maxims may fail the test of the categorical imperative (first version) not by being self-defeating (as in the case of a lying promise) but by constituting rules that you would not want everyone else to act on. (Remember that an action is permissible if everyone can consistently act on it in similar situations and you would be willing to let that happen.) Kant asks us to consider a maxim that mandates not contributing anything to the welfare of others or aiding them when they are in distress. If you willed this maxim to become a universal moral law (if everyone followed it), no self-defeating state of affairs would obtain. Everyone could conceivably follow this rule. But you probably would not want people to act on this maxim because one day you may need their help and sympathy. Right now you may will the maxim to become universal law, but later, when the tables are turned, you may regret that policy. The inconsistency lies in wanting the rule to be universalized and not wanting it to be universalized. Kant says that this alternative kind of inconsistency shows that the action embodied in the maxim is not permissible.
Kant’s second version of the categorical imperative is probably more famous and influential than the first. (Kant thought the two versions virtually synonymous, but they seem to be distinct principles.) He declares, “So act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only.”4 This rule—the means-end principle—says that we must always treat people (including ourselves) as ends in themselves, as creatures of great intrinsic worth, never merely as things of instrumental value, never merely as tools to be used for someone else’s purpose. This statement of the categorical imperative reflects Kant’s view of the status of rational beings, or persons. Persons have intrinsic value and dignity because they, unlike the rest of creation, are rational agents who are free to choose their own ends, legislate their own moral laws, and assign value to things in the world. Persons are the givers of value, so they must have ultimate value. They therefore must always be treated as ultimate ends and never merely as means. Kant’s idea is that people not only have intrinsic worth—they also have equal intrinsic worth. Each rational being has the same inherent value as every other rational being. This equality of value cannot be altered by, and has no connection to, social and economic status, racial and ethnic considerations, or the possession of prestige or power. Any two persons are entitled to the same moral rights, even if one is rich, wise, powerful, and famous—and the other is not. To treat people merely as a means rather than as an end is to fail to recognize the true nature and status of persons. Since people are by nature free, rational, autonomous, and equal, we treat them merely as a means if we do not respect these attributes—if we, for example, interfere with people’s right to make informed choices by lying to them, inhibit their free and autonomous actions by enslaving or coercing them, or violate their equality by discriminating against them. For Kant, lying or breaking a promise is wrong because to do so is to use people merely as a means to an end, rather than as an end in themselves. Sometimes we use people to achieve some end, yet our actions are not wrong. To see why, we must understand that there is a moral difference between treating persons as a means and treating them merely, or only, as a means. We may treat a mechanic as a means to repair our cars, but we do not treat him merely as a means if we also respect his status as a person. We do not treat him only as means if we neither restrict his freedom nor ignore his rights. As noted earlier, Kant insists that the two versions of the categorical imperative are two ways of stating the same idea. But the two principles seem to be distinct, occasionally leading to different conclusions about the rightness of an action. The maxim of an action, for example, may pass the first version (be permissible) by being universalizable but fail the second by not treating persons as ends. A more plausible approach is to view the two versions not as alternative tests but as a single two-part test that an action must pass to be judged morally permissible. So before we can declare a maxim a bona fide categorical imperative, we must be able to consistently will it to become a universal law and know that it would have us treat persons not only as a means but as ends. 
Applying the Theory
How might a Kantian decide the case of the anti-terrorist chief of police, discussed in Chapter 5, who considers killing a terrorist’s wife and children? Recall that the terrorist is murdering hundreds of innocent people each year and that the chief has good reasons to believe that killing the wife and children (who are also innocent) will end the terrorist’s attacks. Recall also the verdict on this case rendered from both the act- and rule-utilitarian perspectives. By act-utilitarian lights, the chief should kill some of the terrorist’s innocent relatives (and threaten to kill others). The rule-utilitarian view, however, is that the chief should not kill them.
Suppose the maxim in question is “When the usual antiterrorist tactics fail to stop terrorists from killing many innocent people, the authorities should kill (and threaten to kill) the terrorists’ relatives.” Can we consistently will this maxim to become a universal law? Does this maxim involve treating persons merely as a means to an end rather than as ends in themselves?
To answer the first question, we should try to imagine what would happen if everyone in the position of the relevant authorities followed this maxim. Would any inconsistencies or self-defeating states of affairs arise? We can see that the consequences of universalizing the maxim would not be pleasant. The authorities would kill the innocent—actions that could be as gruesome and frightening as terrorist attacks. But our willing that everyone act on the maxim would not be self-defeating or otherwise contradictory. Would we nevertheless be willing to live in a world where the maxim was universally followed? Again, there seems to be no good reason why we could not. The maxim therefore passes the first test of the categorical imperative.
To answer the second (ends-means) question, we must inquire whether following the maxim would involve treating someone merely as a means. The obvious answer is yes. This antiterrorism policy would use the innocent relatives of terrorists as a means to stop terrorist acts. Their freedom and their rights as persons would be violated. The maxim therefore fails the second test, and the acts sanctioned by the maxim would not be permissible. From the Kantian perspective, using the innocent relatives would be wrong no matter what—regardless of how many lives the policy would save or how much safer the world would be. So in this case, the Kantian verdict would coincide with that of rule-utilitarianism but not that of act-utilitarianism.
Evaluating the Theory
Kant’s moral theory meets the minimum requirement of coherence and is generally consistent with our moral experience (Criterion 2). In some troubling ways, however, it seems to conflict with our commonsense moral judgments (Criterion 1) and appears to have some flaws that restrict its usefulness in moral problem solving (Criterion 3). As we saw earlier, some duties generated by the categorical imperative are absolute—they are, as Kant says, perfect duties, allowing no exceptions whatsoever. We have, for example, a perfect (exceptionless) duty not to lie—ever. But what should we do if lying is the only way to prevent a terrible tragedy? Suppose a friend of yours comes to your house in a panic and begs you to hide her from an insane man intent on murdering her. No sooner do you hide her in the cellar than the insane man appears at your door with a bloody knife in his hand and asks where your friend is.
You have no doubt that the man is serious and that your friend will in fact be brutally murdered if the man finds her. Imagine that you have only two choices (and saying “I don’t know” is not one of them): either you lie to the man and thereby save your friend’s life, or you tell the man where she is hiding and guarantee her murder. Kant actually considers such a case and renders this verdict on it: you should tell the truth though the heavens fall. He says, as he must, that the consequences of your action here are irrelevant, and not lying is a perfect duty. There can be no exceptions. Yet Kant’s answer seems contrary to our considered moral judgments. Moral common sense seems to suggest that in a case like this, saving a life would be much more important than telling the truth.
The Kantian View of Punishment
Kant’s philosophical position on punishment is radically different from that of the utilitarians. Generally, they think that criminals should not be punished for purposes of justice or retribution. Criminals should be corrected or schooled so they do not commit more crimes, or they should be imprisoned only to protect the public. To them, the point of “punishment” is to promote the good of society. Kant thinks that criminals should be punished only because they perpetrated crimes; the public good is irrelevant. In addition, Kant thinks that the central principle of punishment is that the punishment should fit the crime. For Kant, this principle constitutes a solid justification for capital punishment: killers should be killed. As Kant explains,
Even if a civil society resolved to dissolve itself with the consent of all its members—as might be supposed in the case of a people inhabiting an island resolving to separate and scatter throughout the whole world—that last murderer lying in prison ought to be executed before the resolution was carried out. This ought to be done in order that every one may realize the desert of his deeds, and that blood-guiltiness may not remain on the people; for otherwise they will all be regarded as participants in the murder as a public violation of justice.*
*Immanuel Kant, The Philosophy of Law, trans. W. Hastie (Edinburgh: Clark, 1887), 198.
Another classic example involves promise keeping, which is also a perfect duty. Suppose you promise to meet a friend for lunch, and on your way to the restaurant you are called upon to help someone injured in a car crash. No one else can help her, and she will die unless you render aid. But if you help her, you will break your promise to meet your friend. What should you do? Kant would say that come what may, your duty is to keep your promise to meet your friend. Under these circumstances, however, keeping the promise just seems wrong.
These scenarios are significant because, contrary to Kant’s view, we seem to have no absolute, or exceptionless, moral duties. We can easily imagine many cases like those just mentioned. Moreover, we can also envision situations in which we must choose between two allegedly perfect duties, each one prohibiting some action. We cannot fulfill both duties at once, and we must make a choice. Such conflicts provide plausible evidence against the notion that there are exceptionless moral rules.5
Conflicts of duties, of course, are not just deficiencies regarding Criterion 1. They also indicate difficulties with Criterion 3. Like many moral theories, Kant’s system fails to provide an effective means of resolving major conflicts of duties.
Some additional inconsistencies with our common moral judgments seem to arise from applications of the first version of the categorical imperative. Remember that the first version says that an action is permissible if everyone can consistently act on it and if you would be willing to have that happen. At first glance, it seems to guarantee that moral rules are universally fair. But it makes the acceptability of a moral rule depend largely on whether you personally are willing to live in a world that conforms to the rule. If you are not willing to live in such a world, then the rule fails the first version of the categorical imperative, and your conforming to the rule is wrong. But if you are the sort of person who would prefer such a world, then conforming to the rule would be morally permissible. This subjectivity in Kant’s theory could lead to the sanctioning of heinous acts of all kinds. Suppose the rule is “Kill everyone with dark skin” or “Murder all Jews.” Neither rule would be contradictory if universalized; everyone could consistently act on it. Moreover, if you were willing to have everyone act on it—even willing to be killed if you have dark skin or are a Jew—then acts endorsed by the rule would be permissible. Thus the first version seems to bless acts that are clearly immoral. Critics say that another difficulty with Kant’s theory concerns the phrasing of the maxims to be universalized. Oddly enough, Kant does not provide any guidance for how we should state a rule describing an action, an oversight that allows us to word a rule in many different ways. Consider, for example, our duty not to lie. We might state the relevant rule like this: “Lie only to avoid injury or death to others.” But we could also say “Lie only to avoid injury, death, or embarrassment to anyone who has green eyes and red hair” (a group that includes you and your relatives). Neither rule would lead to an inconsistency if everyone acted on it, so they both describe permissible actions. The second rule, though, is obviously not morally acceptable. More to the point, it shows that we could use the first version of the categorical imperative to sanction all sorts of immoral acts if we state the rule in enough detail. This result suggests not only a problem with Criterion 1 but also a limitation on the usefulness of the theory, a fault measured by Criterion 3. Judging the rightness of an action is close to impossible if the language of the relevant rule can change with the wind. It may be feasible to remedy some of the shortcomings of the first version of the categorical imperative by combining it with the second. Rules such as “Kill everyone with dark skin” or “Lie only to avoid injury, death, or embarrassment to anyone who has green eyes and red hair” would be unacceptable because they would allow people to be treated merely as a means. But the means-ends principle itself appears to be in need of modification. The main difficulty is that our duties not to use people merely as a means can conflict, and Kant provides no counsel on how to resolve such dilemmas. Say, for example, that hundreds of innocent people are enslaved inside a brutal Nazi concentration camp, and the only way we can free them is to kill the Nazis guarding the camp. We must therefore choose between allowing the prisoners to be used merely as a means by the Nazis or using the Nazis merely as a means by killing them to free the prisoners. Here is another example, a classic case from the philosopher C. D. 
Broad:
Again, there seem to be cases in which you must either treat A or treat B, not as an end, but as a means. If we isolate a man who is a carrier of typhoid, we are treating him merely as a cause of infection to others. But, if we refuse to isolate him, we are treating other people merely as means to his comfort and culture.6
Kant’s means-ends principle captures an important truth about the intrinsic value of persons. But we apparently cannot fully implement it, because sometimes we are forced to treat people merely as a means and not as an end in themselves.