Why I Am Not a Utilitarian
In a recent online discussion, I said that I rejected utilitarianism as an ethical theory, and was asked why. This led to my writing a moderately long discussion of the subject, since I have fairly extensive disagreements on a number of different points. By request, I'm making them available to a wider set of readers.
My reasons for disagreeing with utilitarianism fall under three main heads: its conception of ethics as concerned only with how our actions affect other people; its inadequacy as a foundation for policy, law, and institutions; and methodological problems with the very concept of "utility".
On the first point, the basic principle of utilitarianism is that we should act to provide the most pleasure and the least pain for all sentient beings. This is often narrowed down to "humanity" or even "the citizens of our own country", but the founder of utilitarianism (or at least the philosopher who named it), Jeremy Bentham, extended it to any animal that's capable of feeling pain.
This embodies a very specific concept of ethics: that ethics is about the actions we perform that affect other people, and that its function is to tell us how to act toward other people. Morality, in this view, is the result of living in society, and being moral is the price we pay for living among other people; if we were solitary beings we would have no need for it.
An extreme, hard core utilitarian would say that all our actions ought to be based on what will be good for our society, humanity, or the world. I disagree with that for reasons I'll discuss below. But what about a moderate utilitarian, who thinks that we should judge our actions that affect other people — our work, our voting, our volunteering, our donations — by their effect on the world as a whole, but allows us a private sphere where we can do as we please?
The best you can say about this is that it gives us no guidance about our private actions. Classically, utilitarianism says that the yardstick is pleasure and pain, but doesn't attempt to analyze what causes us pleasure or pain; it just says, "You can tell them apart when you feel them." That is, in our private lives, utilitarianism tells us that we not only can be, but have to be, complete subjectivists, doing anything we feel like.
To a lot of people, it seems like a matter of course that ethics is socially created and enforced. But that's far from the only approach. Buddha's eightfold path, Lao Tzu's following the Tao, Socrates's examined life, Aristotle's eudaimonia (the happiness of a person without conflicts) and megalopsychia (greatness of soul), even Jesus's kingdom of heaven, are all goods that one person can achieve as an individual. They may have implications for how we act toward other people. But for all of them, the good life is the life that leads to the good of the person choosing it. And all of them have guidance to offer us if we individually want to lead good lives ... radically different guidance, often incompatible, but they all tell us that we can make better and worse private choices.
Utilitarianism is unsatisfactory because it's over-socialized. It shares the basic metaphor of Kant's deontological ethics (ethics of duty): That ethics is, in effect, a judge telling us what to do to obey a system of law or rules. Its basic idea is that of moral law. And I don't think that following rules is an adequate basis for achieving anything of worth in life. You need more than rules to play a game, or compose a song or a literary work, or run a business, or cure an illness, or prove a theorem; you need more than rules to create a human life that's worth living.
What I called extreme utilitarianism avoids this problem: It tells us that we should spend 100% of our time working for the good of the world as a whole. Anything we do that affects us privately is only justified by making us better able to serve the world; even play is only justified because it restores our readiness to work, and art because it inspires us to serve.
This is very close to altruism in the technical sense. Most people use "altruism" to mean compassion, kindness, or generosity — what Adam Smith called "the moral sentiments". But Auguste Comte, the philosopher who defined altruism, used the word to mean total and exclusive concern for other people's good, and total disregard of your own; he even said that Jesus's "love your neighbor as yourself" was morally evil because it conceded that people did love themselves and treated that as morally acceptable. Utilitarianism seems more moderate because it says that your own good counts too ... but if you're working for the good of six billion human beings, impartially, your one-six-billionth share isn't meaningfully different from zero. And both, again, are a lot like Kant's deontology: Kant said that if you gave your life for your friend, that was a pathological, nonmoral act because you cared if your friend lived or died; for it to be moral you had to act solely because you thought it was your duty.
These versions of morality basically treat your natural impulses as a living being, your emotions, your desires, as the enemy. You can never be a whole person living by them. And since I view ethics as the art of living well, I think that attaining wholeness is necessary to an ethical life, just as it is to any successful work of art.
On the second point, if utilitarianism isn't a satisfactory guide to living a good life as a human being, is it possibly a suitable guide to creating a good society — that is, a basis for translating ethics into policy, law, or institutions? A lot of economists think of it this way; utilitarianism is probably not merely the most commonly held basis for economists' policy recommendations, but the one that a large majority of them rely on.
Nonetheless, I find it unsatisfactory, for two reasons.
The older form of utilitarianism is one now called act utilitarianism. It proposes that to decide whether a specific act is right or wrong in a specific situation, we should judge the sum of its effects on the utility of everyone affected by it. The act that produces the greatest excess of pleasure over pain is the right act.
There's a basic problem with this: It's not a calculation we can ever actually perform.
First, the raw information we need to base it on is not all assembled in one place. There are six billion people on earth, each with many different priorities. Each of those people is also a productive input as a potential worker; and there are huge numbers of nonhuman inputs of many different kinds, existing in varied stocks all over the globe.
Second, turning the inputs into the outputs in the best way is analytically intractable. Oh, there are input-output matrices and methods for solving them. But they ruthlessly oversimplify the problem, working in terms of broad categories of inputs and outputs instead of individual goods and services. They also tend to assume linear equations, and most of the actual relationships are nonlinear. Even if we had all the data we couldn't solve the problem computationally.
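To make the oversimplification concrete, here is a toy sketch (my own illustration, with invented numbers, not anything from the essay) of the kind of linear input-output model the text refers to: a two-sector Leontief system, where gross output x must cover both intermediate use Ax and final demand d, so x solves (I - A)x = d. Real economies have millions of distinct goods and nonlinear production relationships; the whole point here is how much the linear model assumes away.

```python
# Toy two-sector Leontief input-output model (illustrative numbers only).
# a[i][j] = units of good i consumed to produce one unit of good j.
# The linear model solves (I - A) x = d for gross output x given final
# demand d -- exactly the linearity assumption the text criticizes.

a = [[0.2, 0.3],
     [0.1, 0.4]]
d = [100.0, 50.0]

# Form I - A and solve the 2x2 system by Cramer's rule.
m = [[1 - a[0][0], -a[0][1]],
     [-a[1][0], 1 - a[1][1]]]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
x = [(d[0] * m[1][1] - m[0][1] * d[1]) / det,
     (m[0][0] * d[1] - m[1][0] * d[0]) / det]

# Sanity check: gross output covers intermediate use plus final demand.
for i in range(2):
    intermediate = sum(a[i][j] * x[j] for j in range(2))
    assert abs(x[i] - (intermediate + d[i])) < 1e-9

print(x)  # gross outputs needed to meet final demand
```

Even this tidy calculation presupposes that "good 1" and "good 2" are homogeneous categories with fixed coefficients; scaling it to individual goods and services, with nonlinear relationships, is where the tractability argument bites.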
And third, because any decisions we made would affect the outcomes people experienced, they'd have an incentive to game the system: to answer questions not as accurately as possible, but in the way that would skew the results in their favor.
(In fact, if you assume both act utilitarianism and the extreme application of utilitarian ethics to every human action — if you don't allow any private sphere — then you end up with something very similar to what used to be called socialism, or what comparative economists now call a "command economy". And as Mises showed in the 1930s, you can't make rational decisions in that situation.)
The other option is what's called rule utilitarianism, where you try to come up, not with ethical judgments that person A should get x and person B should get y, but with the most efficient possible set of rules for A and B to live by, so that if they follow the rules, on the average things will work out for the best. That seems like a worthwhile project, and economists and public choice theorists often try to evaluate rules that way.
The trouble is that the criterion for those rules is that they should try to make the sum of positive outcomes exceed the sum of negative outcomes as much as possible; that is, that they should benefit as many people as possible as much as possible, at the cost of harming as few people as possible as little as possible. But when you make that calculation, you're unavoidably saying that one person's injury can be justified by another person's benefit! That is, you're saying it's okay to hurt person A, not for A's own good in the long run, and not to punish A for doing something bad, but simply to make person B better off. Ursula K. Le Guin has a story about this, "The Ones Who Walk Away from Omelas", about an ideal, utopian, perfectly happy community, whose happiness can only continue if one small child is kept penned up in a basement, in misery and want, never to share other people's happiness. And logically, the utilitarian has to say that if hurting one person would bring greater benefit to another, or would bring small benefits to a lot of others, we not only can inflict that hurt but are morally obligated to do it. I find that conclusion pathological.
And the practical implementation has undesirable consequences. How do you get A to endure being made worse off for B's benefit? The only reliable method is to say, "Do this, or we'll inflict an even greater loss on you as a penalty." That is, you have to rely on threats and compulsion — or on having the government compel and threaten people for the benefit of other people who can muster enough votes to gain its favor.
Directly, threats and compulsion are undesirable in two ways. On one hand, being able to mount a credible threat uses up resources, which then aren't available to produce something that would make people better off. On the other, the very fact of being the target of a threat is a negative experience: it makes the victim of the threat worse off.
But the long-term incentive effects are worse. The people who have assets that could benefit other people become targets for people trying to take away those assets through compulsion; this makes work and saving and innovation less desirable. And on the other hand, when people learn that they can profit by getting government to favor them — when, for example, businesses learn that instead of producing and investing, they can earn returns by having government subsidize them (with tax money), or seize property for them (by eminent domain), or suppress their competitors (through biased regulation or direct monopoly grants) — they'll find themselves rewarded for turning away from useful work to pursuing political influence. And then there will be more people seeking gain from the political process. The whole thing looks alarmingly like an addiction ... and utilitarianism tends to say, "The first hit is free, little girl."
We can't do without compulsion entirely. If nothing else, we need it to protect ourselves against other people's resort to compulsion and threats. But we're better off using it as little as possible. And utilitarianism doesn't restrict its use strongly enough.
And, yes, I know that there are people who think that you can't possibly have an organized society without harming some people for others' benefit: that someone has to be the sacrificial victim. I don't believe that. There's a rival ethical theory, the harmony of rightly understood interests, that says that if you really understand what's in your own interest, you'll realize that you can't gain by inflicting losses on other people — by being a criminal, a slaveowner, or a dictator, for example. That's the ethical theory I personally live by; and I reject utilitarianism because it assumes that conflict of interests is inescapable.
Last, aside from the substantive ethical and political concerns that I've discussed, I have some methodological issues with utilitarianism. I'm not persuaded that the idea of "utility" even makes sense in the first place.
To begin with, utility is supposed to measure the intensity of feelings of some sort: the intensity of need or desire, the balance of pleasure over pain, the capacity of a thing to relieve felt dissatisfaction. And yet utilitarianism is all about maximizing utility for the world, or humanity, or a particular society as a whole. To do that, you have to be able to add up different people's utility levels. But where's the common yardstick to apply to the intensity of everyone's feelings or desires? How can you look inside my mind, and say that the enjoyment I get from having an extra ten dollars is greater than the loss of enjoyment someone else suffers from having ten dollars taken away?
Oh, I understand that some econometricians have come up with statistical tricks that claim to measure this. But any such "measurement" is vulnerable to problems: most basically, if you have a procedure that claims to measure how much someone cares about things, and caring more will give them a bigger share of those things, they have an incentive to game the system. Beyond that, what you think you value when you're just considering the question abstractly is often not a good representation of what you actually value when you have to spend your own money, or your own time, or your own effort to get it.
Some economists do an end run around this question by shifting from utility maximization to wealth maximization. But that amounts to abandoning the basic drive behind utilitarianism: it gives the rich more votes than the poor, in effect saying that if you can spend ten times as much money on something then it has ten times the utility. I don't have a problem with this myself, but I think most utilitarians would.
Beyond that, is utility well defined even for one person?
Supposedly, utility is a scale that measures how much value something has for you. So if you have to choose between A and B, the choice is simple: You pick the one that has more utility. Every choice is just a table lookup, on the pre-computed internal table of your values. Except that it isn't. Practically everyone has experienced real inner conflict: Valuing A, valuing B, not knowing which you value more, not even being sure how to figure it out. Do you stay with your spouse or pursue a new potential lover? Do you take a job in another country or keep working in your home town where you know people? Do you cover up for a friend who's done something wrong, or tell the truth? Do you go out and have fun, or study for an examination? If we had an internally defined utility function, we could resolve such conflicts effortlessly, and literature and drama would be very different.
Yes, we do make choices, and what we choose is what we value more. But we don't know that in advance. When we have to make a choice, we have to find a common scale of value to measure the options on. When we have to make a hard, agonizing choice, it's because we don't know how to find that common scale of value. And it's possible to think through our values, and to put them on a common scale — this is part of the "examined life" that Socrates talked about — but it's hard work, and often painful, and most people don't do it. And even a person with both deep insight into their own motives and a brilliant philosophical mind may face a choice that they haven't thought through ahead of time.
But if it's rare for people to have unified scales of values, then it's rare for any one person's utilities to be knowable in the first place, even before we deal with the problem of how to add them up.
And finally, utility is supposed to be maximized by maximizing pleasure and minimizing pain. There's a joker in that deck: the assumption that the same course of action can do both. Why should we even suppose that that's true?
On one hand, Buddha looked at human life, and saw that it was full of suffering. And wanting to attain freedom from suffering — a kind of utilitarianism — he came up with the idea that we suffered because we had desires. But everything that gives us pleasure makes us feel desire and attachment. So Buddhism encourages us to train ourselves not to need or want pleasure, because the price of pleasure is pain: to minimize pleasure and pain alike.
The Greek philosopher Epicurus had similar, but more modest ideas: He thought that pleasure came from relieving pain, want, or dissatisfaction, and that being in pain, want, or dissatisfaction was bad. So he advised his students to lead lives of small pleasures that they would not suffer much from lacking. He didn't want to extinguish desire outright, but he did want to turn the intensity down as far as he could.
Then on the other hand, there was Nietzsche, who thought that great desires and great passions were admirable and that we should try to live by them — and that this would mean suffering and tragedy, and we should accept that willingly. His ultimate theory was that tragedy is an affirmation of reality and life: that the tragic poet tells us that if you really love something you'll think it's worth whatever price you have to pay for it — even, if necessary, your own life.
Mathematically, you can maximize a single variable, or a function that generates a single variable. You can't maximize many variables simultaneously. Before you can maximize, you have to reduce them to a common scale, by figuring out a rate at which they trade off against each other ... and there's always more than one possible rate, even for just two variables! Utilitarianism seems to me like the old joke about the physicist, the chemist, and the economist on a desert island trying to figure out how to open their cans of food: the physicist and the chemist come up with elaborate technological schemes, but the economist says, "First, let's assume a can opener ..." Working out our own personal scale of values is the problem in ethics that each of us has to solve to live well, and in talking about "utility", utilitarians are simply assuming a solution to that problem.
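The point about trade-off rates can be shown in a few lines. In this toy illustration (my own numbers, not from the essay), two courses of action are each scored on two incommensurable variables, pleasure and pain; "maximizing" only becomes possible once you pick a rate at which pain trades off against pleasure, and different rates pick different winners.

```python
# Two options, each scored on two variables that lack a common scale.
options = {
    "A": {"pleasure": 10, "pain": 2},
    "B": {"pleasure": 6, "pain": 0},
}

def best_option(pain_weight):
    """Collapse the two variables to one scale -- pleasure minus
    pain_weight units of pleasure per unit of pain -- then maximize."""
    return max(options,
               key=lambda o: options[o]["pleasure"]
                             - pain_weight * options[o]["pain"])

print(best_option(1))  # at a 1:1 rate, A scores 8 and B scores 6
print(best_option(5))  # at a 5:1 rate, A scores 0 and B scores 6
```

The maximization itself is trivial; everything interesting happened in choosing `pain_weight`, which is exactly the "assumed can opener".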
Which, in a way, is like the "economic calculation" argument I mentioned earlier, but applied not to a society, but to the internal economies of our own minds. If we had a predefined scale of values for society we could make all economic decisions simply; if we had a predefined scale of utility for our own minds we could make all personal choices simply. But neither sort of knowledge exists ready-made; we have to discover them both.
© 2010 William H. Stoddard
Utopias at Troynovant