Aumann And Stability

Aumann’s agreement theorem, which I’ve mentioned here a few times before and which has informed many discussions on Overcoming Bias, states:

If two (or more) Bayesian agents have the same priors, and their posterior probabilities for some proposition are common knowledge, then those posteriors must be equal, even if the agents have different information.

Concretely, as agents trade probability estimates back and forth, conditioning each time on the other’s estimates, probabilities will tend to the same number.
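To make the back-and-forth concrete, here is a minimal sketch in the spirit of Geanakoplos and Polemarchakis's "We Can't Disagree Forever": a uniform common prior over a small finite state space, private information given by partitions, and repeated public announcements of posteriors. The state space, partitions, and event below are invented for illustration.

```python
from fractions import Fraction

def post(event, info):
    """P(event | info) under a uniform prior; None if info is empty."""
    return Fraction(len(event & info), len(info)) if info else None

def dialogue(event, part_a, part_b, true_state, max_rounds=20):
    """Agents A and B repeatedly announce P(event | what they currently know).
    Each announcement reveals which states could have produced it, so the
    public set of possibilities shrinks until the posteriors coincide."""
    cell = lambda part: next(c for c in part if true_state in c)
    public = set().union(*part_a)          # a priori, every state is possible
    for rnd in range(max_rounds):
        q_a = post(event, cell(part_a) & public)
        q_b = post(event, cell(part_b) & public)
        print(f"round {rnd}: A says {q_a}, B says {q_b}")
        if q_a == q_b:
            return
        # Everyone learns which states are consistent with both announcements.
        public &= set.intersection(*(
            set().union(*(c for c in part if post(event, c & public) == q))
            for part, q in ((part_a, q_a), (part_b, q_b))))

# Invented example: states 1..9, uniform prior, event {3, 4}, true state 1.
part_a = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
part_b = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]
dialogue(event={3, 4}, part_a=part_a, part_b=part_b, true_state=1)
```

In this toy run the announcements start at 1/3 and 1/2 and settle on 1/3 after a couple of exchanges; the work is done by B eventually learning something from the mere fact that A keeps announcing 1/3.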

This is a surprising result that has often been summarized as saying “rational agents cannot agree to disagree”. I think there are some problems with applying the theorem this way in practice that haven’t been highlighted enough.


Ethics And Belief

In The Wrong Tail, I discussed a moral reason to promote true beliefs. There’s also an ethical reason to promote true beliefs. Manipulate people (including yourself) in pursuit of your goals, and others will manipulate people (including themselves) in pursuit of theirs.

By the way: there is a game-theoretical reason to speak the literal truth even in those cases where it does not promote maximally accurate beliefs. Doing otherwise creates room for those who truly are dishonest, or who overestimate their ability to “lie the truth”, to operate in. In general, there’s a game-theoretical reason not to do things that are usually wrong, even when they happen to be right, so long as it’s hard for others to check when they’re right.

Of course, “there’s a reason for X” by no means implies “we should do X”.

Ethics

On one conception,

  • “morality” is about doing what is right according to some moral system that you agree with
  • “ethics” is a set of constraints on actions, applying even if those actions seem to be moral

I’d say ethics tells us that where possible, we should make our actions compatible with a wide variety of moralities that one could reasonably hold, as well as a wide variety of moralities that people actually hold.

There are at least two reasons for this:

  1. Those moralities that people could reasonably hold might actually be correct.
  2. Not going against those moralities that people do hold, and cultivating the disposition not to do so, amounts to cooperating in a repeated prisoner’s dilemma.

“Ethics” is also the name of the subfield of philosophy that studies this sort of thing, but there’s no reason to let that confuse us.

Truthiness

We can see the world as a great web of facts. Suppose we cut out one node, and let something grow back by doing probabilistic inference from surrounding nodes.

Truth is the thing that’s there. Truthiness is the thing that grows back.

Often, we pay too much attention to truth and too little attention to truthiness.

A random person finds an alien device beside the road. With equal probabilities, pressing the button kills ten thousand people, saves ten thousand lives, or does nothing. Our random person presses the button. Depending on what happens, history will remember him as a great villain, a great hero, or just a random person. Only that last opinion is correct.

An Anti-Pascalian Intuition

Let X be something important, like going to heaven. Let Y be something less important, like one day’s worth of peace of mind.

Intuition: for all sufficiently high levels of utility U, the probability that for some reason Y has at least U importance is at least 1/10^30 times the probability that X has at least U importance.
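Spelled out (writing imp(·) for whatever absolute importance scale the intuition presupposes, which is itself suspect, as noted below):

$$\exists U_0 \;\; \forall U \ge U_0: \qquad \Pr[\mathrm{imp}(Y) \ge U] \;\ge\; 10^{-30} \cdot \Pr[\mathrm{imp}(X) \ge U]$$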

This is iffy in several different ways, but the iffinesses in the solution might be the same as the iffinesses in the problem.

(What do these absolute utility levels even mean? Probably nothing, but on the other hand it seems you need something like them if you want to do anything like maximize expected utility over different ethical systems.)

Relevance Isn’t Transitive

… for the same reason that probabilistic correlation isn’t transitive.
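A minimal numerical sketch of that non-transitivity (the variable names are arbitrary): if y = x + z with x and z independent, then x is informative about y, and y is informative about z, but x tells you nothing about z.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
z = rng.standard_normal(100_000)
y = x + z                                 # y depends on both x and z

print(round(np.corrcoef(x, y)[0, 1], 2))  # ~0.71: x correlates with y
print(round(np.corrcoef(y, z)[0, 1], 2))  # ~0.71: y correlates with z
print(round(np.corrcoef(x, z)[0, 1], 2))  # ~0.00: x does not correlate with z
```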

Whether Japan would have surrendered without nuclear weapons in WW2 is relevant to how we should think about nuclear weapons and morality. How likely the Emperor’s dog was to suddenly die on Aug 10, 1945 is relevant to whether Japan would have surrendered without nuclear weapons in WW2. But how likely the Emperor’s dog was to suddenly die on Aug 10, 1945 is not relevant to how we should think about nuclear weapons and morality. Not even a little bit.

If we looked closely, what percentage of the internet would we find is devoted to debating the health of the Emperor’s dog?

Don’t Be So Arrow-Minded

Arrow’s Theorem states that, once there are three or more candidates, no voting procedure can satisfy a certain set of “fair” axioms. The rhetoric goes as follows:

The procedure should deterministically translate an input consisting of everyone’s preference orderings into an output, which we take as a collective preference ordering. Pretty reasonable, right?

The procedure should not just copy a single voter’s preference ordering. Pretty reasonable, right?

If everyone prefers A to B, the procedure should prefer A to B. Pretty reasonable, right?

If we add a new candidate, specifying for each voter where that candidate stands in the preference ordering, then this should not affect the relative order of the other candidates. Pretty reasonable, right?

Not right! Not right!

This last axiom is called “independence of irrelevant alternatives” (IIA). And indeed, the alternatives themselves are irrelevant. If you change me from a water-lover to someone indifferent about water, but keep my attitudes toward tea and coffee the same, that shouldn’t change whether the collective prefers tea to coffee.

But if, knowing that I prefer tea to coffee, you learn that I prefer tea to water and water to coffee, this still informs you about the strength of my preference for tea. And it’s only reasonable for this preference strength to make the collective prefer tea in close situations.
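The water/tea/coffee point can be made concrete with the Borda count (a standard positional rule; the ballots below are invented). Moving water around on two ballots, while leaving every voter’s tea-versus-coffee ranking untouched, flips the collective tea/coffee verdict, precisely because where water sits encodes how strongly those voters prefer coffee to tea.

```python
def borda(ballots):
    """Borda count: on each ballot a candidate gets one point for every
    candidate ranked below it; the highest total wins."""
    scores = {}
    for ballot in ballots:
        for points, candidate in enumerate(reversed(ballot)):
            scores[candidate] = scores.get(candidate, 0) + points
    return scores

# Every voter's tea-vs-coffee ranking is the same in both profiles;
# only where "water" sits on two of the ballots changes.
before = [
    ["tea", "water", "coffee"],
    ["coffee", "tea", "water"],
    ["coffee", "tea", "water"],
    ["tea", "coffee", "water"],
]
after = [
    ["tea", "water", "coffee"],
    ["coffee", "water", "tea"],   # water moved up; coffee still above tea
    ["coffee", "water", "tea"],   # water moved up; coffee still above tea
    ["tea", "coffee", "water"],
]

print(borda(before))  # tea 6, coffee 5, water 1 -> collective prefers tea
print(borda(after))   # tea 4, coffee 5, water 3 -> collective prefers coffee
```

IIA forbids this flip; on the strength-of-preference view it is exactly what should happen.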

Whether Fred has the same DNA as the serial killer the police are looking for is not relevant in the sense that if you changed Fred’s DNA, you’d change Fred’s guilt. But it is relevant in the sense that learning about Fred’s DNA should change your opinion of his guilt. The second “I” in “IIA” denies only the first kind of relevance.

If IIA doesn’t matter, then it’s not clear Arrow’s theorem says anything practically interesting. It’s true that a perfect voting system is impossible. But that’s because there are always incentives to lie about preferences, leading to strategic voting. The Gibbard-Satterthwaite theorem says this applies to every non-dictatorial deterministic system that can produce three or more different outcomes.
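The manipulation phenomenon shows up even in the plainest systems. A made-up plurality example (ties broken alphabetically): a voter whose true favorite can’t win does better by reporting a ranking that isn’t their true one.

```python
def plurality(ballots):
    """Plurality rule: most first-place votes wins; ties go to the
    alphabetically earliest candidate."""
    tally = {}
    for ballot in ballots:
        tally[ballot[0]] = tally.get(ballot[0], 0) + 1
    return max(sorted(tally), key=tally.get)

others = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"], ["B", "C", "A"]]

# The fifth voter's true ranking is C > B > A.
print(plurality(others + [["C", "B", "A"]]))  # honest vote: "A" wins (2-2 tie, broken for A)
print(plurality(others + [["B", "C", "A"]]))  # strategic vote: "B" wins, which this voter prefers to A
```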

But now it turns out the two theorems are actually the same thing! Huh. I don’t understand this on an intuitive level. It undermines the whole reason for me writing this, but I’ll post anyway.

(PS: Some of you may claim all talk of the “strength” of preferences is nonsense. This is nuts. Arrow allows you to communicate only the ordering, but in the real world preferences clearly do have different strengths; if necessary you can give them an operational definition by looking at preferences over lotteries, or by comparing to other questions that you’re not voting on. This viewpoint doesn’t even allow you to say that if you prefer A > B > C, your preference for A over C is stronger than your preference for A over B. And if preferences don’t have strengths, then you can’t even say that adding irrelevant alternatives leaves preference strengths unchanged.)

(PPS: To the extent that people’s true interests aren’t what they think, the problem becomes one of finding the right answer and not so much one of fairness.)