An important concept in anthropic reasoning is “indexical uncertainty”. Where normal uncertainty is uncertainty about what the universe looks like, indexical uncertainty is uncertainty about where in the universe you are located.
I claim that all indexical uncertainty can be reduced to normal uncertainty plus multiple instantiation of observers. I don’t know if this is controversial, but it has interesting implications and it’s worth explaining.
Suppose a mad scientist makes two people and puts each in a room. One room is blue on the outside, the other red. If I’m one of these people, I’m uncertain about the color of my room. Because this uncertainty concerns my place in the world, it looks, on the face of it, like a case of indexical uncertainty.
Now suppose, first, that the mad scientist didn’t particularly care about making our experiences exactly the same. Maybe I’m looking at the ceiling ten seconds after being created, while the person in the other room is looking at the floor. Or maybe I’m male and the person in the other room is female. Then my indexical uncertainty about the color of my room is really the same as my uncertainty about whether the mad scientist created the male in the blue room and the female in the red room, or vice versa. But this is normal uncertainty: it’s uncertainty about what the universe is like.
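The reduction above can be made concrete with a toy model (this is my own illustrative sketch, not something from the original argument; the world descriptions and the uniform prior are assumptions). We enumerate the two candidate worlds the mad scientist might have created, put ordinary probabilities on them, and compute “what color is my room?” purely as uncertainty over which world obtains:

```python
from fractions import Fraction

# The two candidate worlds: who the mad scientist put in which room.
# (Hypothetical labels chosen for the male/female example above.)
worlds = [
    {"blue": "male", "red": "female"},  # world A
    {"blue": "female", "red": "male"},  # world B
]

# A uniform prior over worlds -- plain, non-indexical uncertainty
# about what the universe is like.
prior = [Fraction(1, 2), Fraction(1, 2)]

def p_my_room_is(color, me):
    """Probability that observer `me` is in the room of the given color,
    computed only from uncertainty over which world was created."""
    return sum(p for w, p in zip(worlds, prior) if w[color] == me)

print(p_my_room_is("blue", "male"))  # Fraction(1, 2)
```

No indexical machinery appears anywhere: once the observers are distinguishable, “where am I?” is answered by an ordinary distribution over world-states.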
Then suppose, instead, that the mad scientist did make our mind states exactly the same. In that case, one possible way to see things (perhaps the only way) is that I have nothing to be uncertain about: it’s a certain fact that I’m in both rooms. If the mad scientist opens the doors, I should expect, as a certain fact, to diverge into two minds, one that sees blue and one that sees red.
So maybe we don’t need indexical uncertainty after all.
At this point I should admit that I got confused… but maybe someone else can pick up the train of thought, so I’ll post this anyway. Eliminating indexical uncertainty should make it possible to think through the paradoxes of anthropic reasoning starting from principles we already understand.