Causal Bottlenecks

Earth-2009 is at a causal bottleneck in the sense that decisions taken now can influence those features of the far future that we consider important, most obviously but not exclusively by causing or preventing the extinction of civilization.

The point I want to make in this post is that, purely as a matter of math, this has to be an extremely unusual state of affairs.

If decisions taken in the 21st century account for 20% of the variance in the value (measured in whatever way) of the universe in 1 million AD, then it cannot be the case that decisions taken in the 22nd through 30th century also each account for 20% of this variance.

A general argument along those lines shows that, for nearly all time periods, if you plot the correlations of the decisions taken in that time period with the values of the universe at times in that period’s future, you’ll get a graph that converges to zero. The past has been an exception. Future people will still be able to influence their future, but the consequences of their actions on the global state of things will slowly be papered over by decisions taken in the future’s future. This will make more emotional sense to humans than our present situation does; common-sense morality assumes that people nearby in time are most of what matters.
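
To see the arithmetic behind this, here is a minimal sketch (a toy model of my own, not anything from the post): treat the value of the universe at some far-future date as a sum of independent contributions, one per time period, and check how much of the variance any single period can claim.

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_shares(n_periods, n_samples=100_000):
    """Toy model: final value = sum of independent per-period contributions.

    Returns each period's squared correlation with the final value,
    i.e. its share of the variance in that value.
    """
    decisions = rng.normal(size=(n_samples, n_periods))
    final_value = decisions.sum(axis=1)
    shares = np.array([np.corrcoef(decisions[:, i], final_value)[0, 1] ** 2
                       for i in range(n_periods)])
    return shares

for n in (10, 100):
    shares = variance_shares(n)
    print(n, round(shares.mean(), 4), round(shares.sum(), 3))
# The shares always sum to about 1, so ten consecutive centuries cannot each
# claim 20%, and in this symmetric model each period's share is about 1/n,
# which shrinks toward zero as the number of periods grows.
```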

I’ve made some assumptions here. One of them is that there’s a chance we’ll one day be able to get existential risk down to a negligible level. I believe this to be true, but if not, forget what I said about influencing the long-term future; there will not be one. Another assumption is that there’s no boundless growth. With boundless growth, some of the conclusions here are worth re-assessing (does total value increase faster than correlations die out?), but one conclusion that stands is that decisions entangled with a fixed fraction of the future’s value — such as those relating to existential risk — are both rare and uniquely important.

As a final worry, if causal bottlenecks are as rare as this argument shows, do we in fact have enough information to overcome the low prior probability that we are in one?

Quantum Versus Doom

Assumptions:

  1. The many-worlds interpretation of quantum mechanics is correct
  2. If so, in a nontrivial fraction of worlds we go on to ancestorize huge numbers of people
  3. The reasoning behind the doomsday argument is correct: if the future (all worlds combined, weighted by quantum measure) contains orders of magnitude more people than the past and present, we should be surprised to live so early

Together, these assumptions produce surprise. (Ancestor simulations might affect the reasoning here, but only, it seems to me, if they’re a significant fraction of the universe’s computing resources.)
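
To make the “surprise” in assumption 3 concrete, here is a minimal Bayesian sketch. The population figures are placeholders I made up for illustration, not estimates from the post:

```python
# Toy doomsday-style update.
# Hypothesis SMALL: civilization ends fairly soon; ~2e11 people ever exist.
# Hypothesis HUGE:  a nontrivial fraction of worlds colonize; ~1e18 people ever
#                   exist, summed over worlds and weighted by quantum measure.
# Datum: your birth rank is about 1e11, i.e. you live "early".

prior_small, prior_huge = 0.5, 0.5
total_small, total_huge = 2e11, 1e18
early_ranks = 1e11  # how many people count as living "this early"

# Chance of an early birth rank if you're a random sample from all people
# who ever exist under each hypothesis (the doomsday argument's assumption):
like_small = min(1.0, early_ranks / total_small)
like_huge = min(1.0, early_ranks / total_huge)

post_huge = (prior_huge * like_huge) / (prior_small * like_small + prior_huge * like_huge)
print(f"P(huge future | early birth rank) ~ {post_huge:.1e}")
# With these made-up numbers the huge-future hypothesis drops to roughly 2e-7,
# which is the surprise that assumptions 1-3 jointly generate.
```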

If I said, “we have strong independent evidence for 1 and 2, so 3 has to be wrong”, would I be blending science and philosophy in a legitimate way? Or is the truth of 3 something we can only find out by a priori reasoning? I believe that the blending here is legitimate, and I already believed 3 to be false anyway, but making this argument still leaves me with an uncomfortable feeling.

Quantum Immortality: First Salvo

Jacques Mallah has put a paper on the arXiv arguing against the theory of “quantum immortality” (which says that as long as there remains at least one quantum world with a copy of you in it, you should expect to stay alive) and the related idea of “quantum suicide” (which says that, since you’re “quantum immortal” anyway, you might as well correlate your death with things like not winning the lottery). I’m not yet sure I’d endorse the whole paper, but like Mallah, I believe in the many-worlds interpretation of quantum mechanics, and like Mallah, I do not believe that it implies quantum immortality. I promised long ago to write a post against the idea but never got around to it, so I’ll take this opportunity to expand on something Mallah says and sort of get things started.


Where In the World Am I?

An important concept in anthropic reasoning is “indexical uncertainty”. Where normal uncertainty is uncertainty about what the universe looks like, indexical uncertainty is uncertainty about where in the universe you are located.

I claim that all indexical uncertainty can be reduced to normal uncertainty plus multiple instantiation of observers. I don’t know if this is controversial, but it has interesting implications and it’s worth explaining.

Suppose a mad scientist makes two people, and puts each in a room. One room is blue on the outside, the other red. If I’m one of these people, I’m uncertain what the color of my room is. Because this concerns my place in the world, on the face of it, it’s a case of indexical uncertainty.

Now suppose first that the mad scientist didn’t particularly care about making our experiences exactly the same. Maybe I’m looking at the ceiling ten seconds after being created, and the person in the other room is looking at the floor. Or maybe I’m male and the person in the other room is female. Then my indexical uncertainty about the color of the room I’m in is really the same as my uncertainty about whether the mad scientist made the male in the blue room and the female in the red room, or vice versa. But this is normal uncertainty. It’s uncertainty about what the universe is like.
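
One minimal way to spell out that reduction (my framing, not anything in the original post): list the candidate worlds, put an ordinary prior over them, and read the supposedly indexical probability straight off that prior.

```python
# Two candidate worlds, differing only in which person the scientist put where.
# I know I'm the male observer, so "which room am I in?" reduces to
# "which of these worlds is actual?", an ordinary non-indexical question.
worlds = {
    "male_in_blue_room": 0.5,  # prior: the scientist put the male in the blue room
    "male_in_red_room": 0.5,   # prior: the scientist put the male in the red room
}

p_my_room_is_blue = worlds["male_in_blue_room"]
print(p_my_room_is_blue)  # 0.5, with no irreducibly indexical ingredient needed
```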

Then suppose, instead, that the mad scientist did make our mind states exactly the same. In that case, one possible way to see things — perhaps the only way — is that I have nothing to be uncertain about: it’s a certain fact that I’m in both rooms. If the mad scientist opens the doors, I should expect, as a certain fact, to diverge into two minds, one that sees blue and one that sees red.

So maybe we don’t need indexical uncertainty after all.

At this point I should say that I got confused… but maybe someone else can pick up the train of thought, so I’ll post this anyway. Eliminating indexical uncertainty should make it possible to think through the paradoxes of anthropic reasoning starting from principles we already understand.

Do Simulations Matter?

The Simulation Argument, formulated by Nick Bostrom, aims to show that, given certain assumptions, you’re probably inside a simulated world. It assumes that enough civilizations like ours go on to spawn posthuman descendants that create many such worlds, and that enough of those worlds are like Earth. In all of spacetime, simulated versions of our civilization then outnumber originals. (Note that accepting the Simulation Argument is not the same thing as concluding that you’re in a simulation.)

I think the argument and its assumptions could hold up. If so, what does that mean for what we should do? Robin Hanson has made some suggestions. It seems to me, though, that (to a first approximation) the possibility of being in a simulation should make no difference to the behavior of a non-egoist agent. Here’s a quick informal argument.

Imagine you’re in a huge tree. It’s foggy, so you can’t tell whether you’re at the trunk or at one of a subset of the tree’s (sub-)branches. There are many branches and only one trunk, so you can assume you’re probably at a branch. You feel a strange urge to apply chemicals to the wood, and have two choices. One chemical, BranchKiller, is deadly to branches but not the main trunk; the other chemical, TrunkKiller, is deadly to the main trunk but not the branches. Assume you like the tree and want to save as much of it as possible.

In this situation, you should clearly apply BranchKiller, not TrunkKiller. Since you’re probably at a branch, it’s true that BranchKiller is more likely to harm the tree. But if you are at the trunk and apply TrunkKiller, the damage will spread to all the branches too. If you had some clones, one at each trunk or branch where you think you might find yourself, then (since they would all decide the same way you do) using TrunkKiller would always kill the entire tree, while using BranchKiller would always kill only part of it.
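
Here is a minimal numerical sketch of that comparison, with toy numbers of my own (one trunk, a thousand candidate branch locations, and the simplifying assumption that killing the trunk kills everything growing out of it):

```python
N_BRANCH_LOCATIONS = 1_000   # toy: branch locations where you might find yourself
TREE_SIZE = 1 + 100_000      # toy: the trunk plus every branch, one unit each

# Single-instance view: condition on where you, one lone observer, probably are.
p_at_branch = N_BRANCH_LOCATIONS / (N_BRANCH_LOCATIONS + 1)
p_harms_tree = {
    "BranchKiller": p_at_branch,      # harms the tree only if you're at a branch
    "TrunkKiller": 1 - p_at_branch,   # harms the tree only if you're at the trunk
}

# Policy view: a copy of you sits at every candidate location and they all
# apply the same chemical, so we sum the damage over locations instead.
units_lost = {
    "BranchKiller": N_BRANCH_LOCATIONS,  # each branch copy kills only its own branch
    "TrunkKiller": TREE_SIZE,            # the trunk copy kills the trunk, and everything above it dies
}

for chem in ("BranchKiller", "TrunkKiller"):
    print(f"{chem}: P(does any harm) = {p_harms_tree[chem]:.3f}, "
          f"units lost if all copies use it = {units_lost[chem]}")
# BranchKiller is far more likely to do *some* harm, but TrunkKiller is the
# only choice that can cost you the whole tree.
```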

Now imagine the tree is the universe, the trunk is base-level reality, and the branches are simulated worlds. It’s possible that people in simulated worlds could do things to affect their fate that wouldn’t work in base-level reality. But any decision made in a simulated world can be trumped by decisions made in base-level reality: we in base-level reality get to decide (if only very indirectly) what universes, if any, are created, and whether they can be influenced post-creation. Making sacrifices in base-level reality for gain in simulated worlds is like killing the tree’s trunk to save its branches. Unless we can very reliably help simulated worlds at very little cost in base-level reality, it seems to me we can just ignore the simulation issue entirely.

Update: see the comments for some corrections and clarifications.

Anthropic Reasoning

Observational selection effects are biases created when different hypotheses make different predictions about the existence of observers. For example, you could argue that, just from the fact that our solar system has an Earthlike planet in it, we can’t conclude that solar systems with Earthlike planets are typical; if observers evolve only on such planets, then that is what they’ll observe no matter what a typical system looks like.
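
A minimal simulation of that example (toy setup and numbers of my own): vary the true fraction of Earthlike systems and ask what fraction of observers find themselves in one.

```python
import random

random.seed(0)

def fraction_observers_seeing_earthlike(true_fraction, n_systems=100_000):
    """Fraction of observers whose home system has an Earthlike planet,
    assuming observers only ever evolve in systems that have one."""
    systems = [random.random() < true_fraction for _ in range(n_systems)]
    homes = [s for s in systems if s]  # the selection effect: no observers elsewhere
    return sum(homes) / len(homes)

for f in (0.5, 0.01, 0.0001):
    print(f"true fraction {f}: observers seeing an Earthlike system = "
          f"{fraction_observers_seeing_earthlike(f):.0%}")
# The answer is 100% no matter how rare Earthlike systems really are, so the
# bare observation "my system has one" tells an observer nothing about the
# true fraction.
```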

Some people have taken to calling the study of how to deal with these effects “anthropics”. There’s a lot of subtle philosophy that goes into this, and I have yet to find any one account that I think gets it all right. I may do a post with actual content later, but here are some (mutually inconsistent) works I found particularly enlightening. If you combined some of the ideas here you might end up with something good: