Where In the World Am I?

An important concept in anthropic reasoning is “indexical uncertainty”. Where normal uncertainty is uncertainty about what the universe looks like, indexical uncertainty is uncertainty about where in the universe you are located.

I claim that all indexical uncertainty can be reduced to normal uncertainty plus multiple instantiation of observers. I don’t know if this is controversial, but it has interesting implications and it’s worth explaining.

Suppose a mad scientist makes two people and puts each in a room. One room is blue on the outside, the other red. If I'm one of these people, I'm uncertain about the color of my room. Because this uncertainty concerns my place in the world, it is, on the face of it, a case of indexical uncertainty.

Now suppose first that the mad scientist didn’t particularly care about making our experiences exactly the same. Maybe I’m looking at the ceiling ten seconds after being created, and the person in the other room is looking at the floor. Or maybe I’m male and the person in the other room is female. Then my indexical uncertainty about the color of the room I’m in is really the same as my uncertainty about whether the mad scientist made the male in the blue room and the female in the red room, or vice versa. But this is normal uncertainty. It’s uncertainty about what the universe is like.
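
To make the reduction concrete, here is a toy sketch in Python (my own illustration; the post itself gives no model, and all the names are hypothetical). It treats the two arrangements as ordinary possible worlds, puts a uniform prior over them, and recovers my credence about my room's color as plain uncertainty over worlds:

```python
from fractions import Fraction

# The two ways the mad scientist could have arranged things.
# Each "world" fully specifies who is in which room.
worlds = [
    {"blue": "male", "red": "female"},
    {"blue": "female", "red": "male"},
]

# A uniform prior over worlds: ordinary, non-indexical
# uncertainty about what the universe is like.
prior = Fraction(1, 2)

def my_room(world):
    """The room the male observer occupies in a given world."""
    return next(color for color, occupant in world.items()
                if occupant == "male")

# Given that I observe I am male, my credence that my room is
# blue is just the total prior weight of worlds in which the
# male is in the blue room.
p_blue = sum(prior for w in worlds if my_room(w) == "blue")
print(p_blue)  # 1/2
```

Nothing indexical appears anywhere: the question "which room am I in?" has been rephrased as "which world did the mad scientist create?".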

Then suppose, instead, that the mad scientist did make our mind states exactly the same. In that case, one possible way to see things (perhaps the only way) is that I have nothing to be uncertain about: it is a certain fact that I'm in both rooms. If the mad scientist opens the doors, I should expect, with certainty, to diverge into two minds, one that sees blue and one that sees red.
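
A similarly hypothetical sketch of the multiple-instantiation picture (again my own toy model, not anything from the post): instead of a probability over rooms, there is a single mind state occupying both rooms, and opening the doors is a deterministic map from that one state to two diverged successors:

```python
# One mind state, instantiated in two rooms (hypothetical names).
shared_mind_state = "identical experiences, doors closed"

instantiations = {"blue room": shared_mind_state,
                  "red room": shared_mind_state}

# Before the doors open there is nothing to be uncertain about:
# a single mind state occupies both locations.
assert len(set(instantiations.values())) == 1

# Opening the doors maps the one shared state deterministically
# onto two diverged successor states; no chance is involved.
successors = {room: f"sees {room.split()[0]}" for room in instantiations}
print(successors)
# {'blue room': 'sees blue', 'red room': 'sees red'}
```

On this picture, the 50/50 "probability" of seeing blue is replaced by the certainty of diverging into both successors.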

So maybe we don’t need indexical uncertainty after all.

At this point I should say that I got confused… but maybe someone else can pick up the train of thought, so I’ll post this anyway. Eliminating indexical uncertainty should make it possible to think through the paradoxes of anthropic reasoning starting from principles we already understand.

Nanotech and the Burden of Proof

In a comment on an old post at Soft Machines, Hal wrote:

One big problem I have arguing with pro-Drexlerites is the issue of “burden of proof”. They often argue, as Brian Wang does two posts above, that there are a lot of ideas that might work, and it is up to opponents to come up with exhaustive proofs that the technology will fail. I have tried to point out that given the radical and extraordinary claims for this technology (human immortality, and machines that build anything instantly that you wish for, among others), the burden of proof needs to be on the proponents of the technology to show that it will actually work. Until that is done the appropriate response is, we’ll see.

This is exactly wrong. When a technology is claimed to fulfill radical goals, it is worth checking whether that is indeed what the technology, if invented, would do. But if we are trying to find out whether the technology can be invented at all, by debating physical chemistry and the like, then how radical the consequences would be is irrelevant. Nature does not have a built-in anti-science-fiction censor.

Although there may well be reasons to be skeptical of Drexlerian nanotech, this is not one of them, and I suspect the error is common.