Don’t Be So Arrow-Minded

Arrow’s Theorem states that no voting procedure can satisfy a certain set of “fair” axioms. The rhetoric goes as follows:

The procedure should deterministically translate an input consisting of everyone’s preference orderings into an output, which we take as a collective preference ordering. Pretty reasonable, right?

The procedure should not just copy a single voter’s preference ordering. Pretty reasonable, right?

If everyone prefers A to B, the procedure should prefer A to B. Pretty reasonable, right?

If we add a new candidate, specifying for each voter where that candidate stands in the preference ordering, then this should not affect the relative order of the other candidates in the collective ordering. Pretty reasonable, right?

Not right! Not right!

This last axiom is called “independence of irrelevant alternatives” (IIA). And indeed, the alternatives themselves are irrelevant. If you change me from a water-lover to someone indifferent about water, but keep my attitudes toward tea and coffee the same, that shouldn’t change whether the collective prefers tea to coffee.

But if, knowing that I prefer tea to coffee, you learn that I prefer tea to water and water to coffee, this still informs you about the strength of my preference for tea. And it’s only reasonable for this preference strength to make the collective prefer tea in close situations.
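
To make that concrete, here’s a minimal sketch in Python with a made-up five-voter electorate. It uses the Borda count, a rule not mentioned above but a standard example of a procedure that reacts to exactly this kind of strength information, and therefore violates IIA: adding water to the ballots flips the collective tea-vs-coffee ranking even though no voter’s tea-vs-coffee preference changes.

```python
from collections import defaultdict

def borda_scores(ballots):
    """Borda count: a candidate gets one point for every candidate ranked below it on a ballot."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, candidate in enumerate(ballot):
            scores[candidate] += n - 1 - rank
    return dict(scores)

# Made-up electorate: 3 voters prefer tea > coffee, 2 prefer coffee > tea.
print(borda_scores([["tea", "coffee"]] * 3 + [["coffee", "tea"]] * 2))
# {'tea': 3, 'coffee': 2} -- tea wins

# Add water, without changing anyone's tea-vs-coffee ranking.
print(borda_scores([["tea", "coffee", "water"]] * 3 + [["coffee", "water", "tea"]] * 2))
# {'tea': 6, 'coffee': 7, 'water': 2} -- now coffee wins
```

Whether you read this flip as a flaw in Borda or as a flaw in IIA is exactly the dispute: Borda treats the position of water as evidence of how strongly each voter feels about tea versus coffee.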

Whether Fred’s DNA matches that of the serial killer the police are looking for is not relevant in the sense that changing Fred’s DNA would change whether he is guilty. But it is relevant in the sense that learning about the DNA match should change your opinion of his guilt. The second “I” in “IIA” denies only the first kind of relevance.

If IIA doesn’t matter, then it’s not clear Arrow’s theorem says anything practically interesting. It’s true that a perfect voting system is impossible. But that’s because there are always incentives to lie about preferences, leading to strategic voting. The Gibbard-Satterthwaite theorem says this applies to every non-dictatorial deterministic system with three or more candidates.

But now it turns out the two theorems are actually the same thing! Huh. I don’t understand this on an intuitive level. It undermines the whole reason for me writing this, but I’ll post anyway.

(PS: Some of you may claim all talk of the “strength” of preferences is nonsense. This is nuts. Arrow allows you to communicate only the ordering, but in the real world preferences clearly do have different strengths; if necessary you can give them an operational definition by looking at preferences over lotteries, or by comparing to other questions that you’re not voting on. This viewpoint doesn’t even allow you to say that if you prefer A > B > C, your preference A > C is stronger than A > B. And if preferences don’t have strengths, then it’s not the case that adding irrelevant alternatives leaves preference strengths unchanged.)
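
For what it’s worth, here’s one way the lottery calibration could go (a made-up illustration, not part of Arrow’s framework): fix tea at utility 1 and coffee at utility 0, then ask at what probability p you’d be indifferent between water for sure and a gamble giving tea with probability p and coffee otherwise.

```python
# Made-up lottery calibration of preference strength (von Neumann-Morgenstern style);
# nothing here is part of Arrow's framework, which only sees the ordering.
U_TEA, U_COFFEE = 1.0, 0.0

def lottery_value(p_tea):
    """Expected utility of 'tea with probability p_tea, otherwise coffee'."""
    return p_tea * U_TEA + (1 - p_tea) * U_COFFEE

# Indifference between water-for-sure and the lottery at p = 0.9 says water is
# nearly as good as tea for you (you mostly want to avoid coffee); indifference
# at p = 0.1 says the opposite (you mostly want tea in particular).
print(lottery_value(0.9), lottery_value(0.1))  # 0.9 0.1
```

Both voters report the same ordering tea > water > coffee, but the calibrated strengths differ, which is exactly the information a ballot of pure orderings throws away.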

(PPS: To the extent that people’s true interests aren’t what they think they are, the problem becomes one of finding the right answer and not so much one of fairness.)

When The Revolution Comes, You Will Be First Up Against The Margin

This Marginal Revolution post discusses carbon taxes, people who are for or against “a carbon tax”, whether “a carbon tax” is a good thing, and so on. But even though both are “a carbon tax”, the difference between a $100 carbon tax and a $10 carbon tax is much bigger than the difference between a $10 carbon tax and no carbon tax. It’s as if people think dollars come in two amounts, “none” and “some”. I doubt that this is true of Tyler Cowen, so maybe there’s something going on that I’m missing.

Utility

A lot of people get confused over different uses of the word “utility”. Here’s my understanding.

Decision-theoretical agents have preferences over states of the world. If these preferences are consistent (roughly: complete and transitive), you can assign a number to each state such that the agent always prefers states with higher numbers. This number is called “utility”, and the agent a “utility maximizer”. If an agent is uncertain about the consequences of different actions, you can show that under reasonable assumptions the agent assigns utilities to individual outcomes and prefers the action after which the expected value of these utilities is the highest. This is called the “Expected Utility Theorem” (the von Neumann–Morgenstern theorem), and an agent to which it applies is an “expected utility maximizer”.
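
As a toy illustration of what “expected utility maximizer” means here (the actions, probabilities, and numbers below are invented for illustration, not taken from any particular model):

```python
# Toy expected utility maximizer; actions, outcomes, probabilities and utilities are invented.
P_RAIN = 0.3

# Utility of each (action, weather) outcome.
UTILITY = {
    ("carry umbrella", "rain"):    0.8,  # dry, but lugging an umbrella around
    ("carry umbrella", "no rain"): 0.8,
    ("leave umbrella", "rain"):    0.0,  # soaked
    ("leave umbrella", "no rain"): 1.0,  # dry and unencumbered
}

def expected_utility(action):
    """Probability-weighted average of the utilities of the possible outcomes."""
    return (P_RAIN * UTILITY[(action, "rain")]
            + (1 - P_RAIN) * UTILITY[(action, "no rain")])

actions = ["carry umbrella", "leave umbrella"]
for a in actions:
    print(a, round(expected_utility(a), 3))   # carry umbrella 0.8, leave umbrella 0.7
print("choice:", max(actions, key=expected_utility))  # choice: carry umbrella
```

Rescaling the utilities by any positive affine transformation (say, doubling them and adding 5) leaves every choice unchanged, which is one reason the numbers themselves shouldn’t be read as amounts of anything you experience.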

This formalism is used both in economics and in ethics.

Economics (of the neoclassical kind) models consumers and other economic actors as such utility maximizers. These models are imperfect, and there’s a lot of controversy over their usefulness, which this post will not comment on. What’s clear is that utility, in the economic context, is not happiness. Happiness is probably related to utility in some way, but they’re not the same thing. Utility is not something you can experience. It’s just a mathematical construct used to describe the optimization structure in your behavior. The model has each agent maximizing only his own utility, and the numbers are only pinned down up to rescalings that leave his choices unchanged, so there’s no meaningful way to add utilities across different agents, either.

Consequentialist ethics says an act is right if its consequences are good. Moral behavior here amounts to being a utility maximizer. What’s “utility”? It’s whatever a moral agent is supposed to strive toward. Bentham’s original utilitarianism said utility was pleasure minus pain; nowadays any consequentialist theory tends to be called “utilitarian” if it says you should maximize some measure of welfare, summed over all individuals. Summing utilities is a necessary part of these theories, and the way to do it will either be obvious (like if utility is the number of bananas you eat) or will have to be specified by the theory (like if utility is the degree to which you achieve your goals). Take note: not all utility maximizers are utilitarians.

There’s no necessary connection between these two kinds of utility other than that they use the same math. It’s possible to make up a utilitarian theory where ethical utility is the sum of everyone’s economic utility (calibrated somehow), but this is just one of many possibilities. Anyone trying to reason about one kind of utility through the other is on shaky ground.

Often it’s tempting to interpret economic theory as assuming that people strive after happiness, that people who achieve their goals are happier than people who don’t, that people achieving their goals is a good thing, that it’s the only good thing, or that rational people are egoists. Some economists say these things. Economics doesn’t.

It’s not quite true that economics says nothing about utility in the ethicist’s sense. There’s a branch of economics called “welfare economics” that does. For example, the economist John Harsanyi proved that if you’re behind a veil of ignorance, you must accept utilitarianism, which says the best policy is the one that maximizes the sum of everyone’s individual utility. But such a proof makes some explicit assumptions. One of these is the Pareto principle: if doing something makes nobody worse off in terms of preferences and at least one person better off, it should be done. That sounds reasonable, but it does assume that nothing matters other than preference fulfillment. This is not an obvious ethical truth.

With this much opportunity for confusion, when people mention “utility”, it’s a good idea to make sure you (and they) know exactly how they’re using the word.

Jeremy Bentham's HEAD!