Continue/Restart?

In an earlier post I distinguished between strong and weak immortalism, where the former says one long life is better than many short lives and the latter merely says one long life is better than one short life.

A population ethics paper called “Life Extension versus Replacement” by Gustaf Arrhenius considers and rejects two bad arguments for the strong-immortalist position (three bad arguments if you count “average utilitarianism”). All of these bad arguments modify “standard” total-utilitarian population ethics.

I think there are some better arguments for strong immortalism within standard utilitarianism. Maybe:

  1. …having a greater amount of existing structure to build on (memories and the like) allows for more valuable future experiences
  2. …old minds are living monuments that make the lives of others more interesting, like the seven wonders of the world
  3. …leading a more or less unique life is a part of welfare, and 100-year lives will repeat to an extent because there are so many more of them
  4. …any mind gets “tangled up” with others and with its own preferences, so that its destruction creates anti-valuable grief and thwarted aspirations
  5. …older minds attain more virtue and competence so they are more extrinsically useful

Arguments could go the other way, too. Maybe there’s a limited number of especially valuable life events you can experience — this question is related to what Eliezer Yudkowsky calls fun theory.

But even if the arguments against long lives turn out stronger than the ones I listed for them, it doesn't follow that we shouldn't research life extension in practice. Here are a few reasons:

  1. Option value. If living is a mistake, you can still die later; if dying is a mistake, you can't come back.
  2. We aren’t actually at Earth’s population limits, so it’s not the case that by living for 1000 years you deny ten other potential people a 100-year life. Arguing for life extension currently requires only what I called “weak immortalism”, not “strong immortalism”.
  3. None of the arguments for death (other than bad ones, like “nature knows best”) would seem to single out 100 years as the right lifespan.
  4. None of the arguments for death would seem to argue for the complete destruction of minds, rather than for starting over only partially.

In sum, I would be shocked if a complete understanding of population ethics caused us to favor unextended human lives into the indefinite future.

Unlimitednesses to Virtual Reality

Rudy Rucker, in a post titled Fundamental Limits to Virtual Reality, argues that a virtual version of our planet could never be as rich in phenomena unless it used about as much computing machinery as the planet itself. His main argument:

This is because there are no shortcuts for nature’s computations. Due to a property of the natural world that I call the “principle of natural unpredictability,” fully simulating a bunch of particles for a certain period of time requires a system using about the same number of particles for about the same length of time. Naturally occurring systems don’t allow for drastic shortcuts.

For details see The Lifebox, the Seashell and the Soul, or Stephen Wolfram’s revolutionary tome, A New Kind of Science—note that Wolfram prefers to use the phrase “computational irreducibility” instead of “natural unpredictability”.
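As a concrete handle on "computational irreducibility": Wolfram's standard example is an elementary cellular automaton such as Rule 30, for which nobody knows a shortcut to the pattern at step N short of actually running all N steps. A quick sketch, purely for flavor:

```python
# Rule 30, Wolfram's stock example of computational irreducibility: as far
# as anyone knows, the only way to learn the pattern at step N is to run
# all N steps.

def rule30_step(cells):
    """One update of the elementary cellular automaton Rule 30
    (new cell = left XOR (center OR right)), with wraparound edges."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 63
cells[31] = 1                      # single "on" cell in the middle
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```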

Granted, a full simulation at the level of atoms or elementary particles would not be doable. But there’s no reason you need one. Vidar Hokstad nails it in the comments:

We can’t predict the arrangement of individual atoms in a large object. Why would a simulation even try? If someone does point an electron microscope at an object in the simulated world, the simulator can pick any random arrangement and we wouldn’t know any better.

Rucker’s response:

The notion of leaving the details up to randomness is an interesting move. But maybe they aren’t random. Wolfram sometimes claims the whole kaboodle comes out of some, like, ten-bit rule that’s run for a really large number of cycles. Here, the number of cycles is the thing that won’t fit on your desk.

When people talk about a substitute being “just as good,” I think of the Who song. [lyrics omitted]

But if all the stuff that Rucker shows in his photos — snow, fields, clouds, rocks — can be recreated qualitatively from humongously lossy statistical mechanics models, and it’s only details like what you see through an electron microscope that have to be made up on the spot, doesn’t that already contradict his original point, which is that VR surroundings would look noticeably impoverished? It’s true that in a chaotic world, if you switch to a lossy VR version, it will diverge pretty quickly from what it would have been. But then, in a chaotic world, the world diverges from what it would have been every time you blink.

Also, it seems like there should be some sort of principle that says it doesn’t take much more computing power to run a convincing virtual world for a mind to live in than it takes to run the mind itself. There’s only so much you can process in a second. I suppose that if you want a world that naturally factors huge numbers, computational complexity theory says that producing the factors takes much longer than it takes a mind in that world to recognize that the numbers have indeed been factored. Most features of the world don’t seem to me to be like that, but my thinking here isn’t clear at the moment.
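A toy illustration of that asymmetry (my example, not Rucker's): recovering the factors of a number by brute force takes roughly a million division steps below, while checking a claimed factorization is a single multiplication.

```python
# Finding the factors takes about a million divisions here; checking a
# claimed factorization is one multiplication.

def factor(n):
    """Brute-force trial division."""
    d, factors = 2, []
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def verify(n, factors):
    """Recognizing that the answer is right is cheap."""
    product = 1
    for f in factors:
        product *= f
    return product == n

n = 999_983 * 1_000_003            # product of two primes near a million
fs = factor(n)                     # the slow part: the world doing the work
print(fs, verify(n, fs))           # the fast part: a mind checking the result
```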

If I’m right and Rucker is wrong, a world takes up much less room in VR than future real-world computers will have available. That means virtual worlds could be much bigger than Earth; it’s interesting to think about the implications if people lived there. In fact, if there’s a not-too-expensive algorithm determining what a new piece of the world looks like, as well as how other pieces would have affected it up to that time, and the algorithm gets run only as needed, then in a sense the world is infinite. (This doesn’t have any real function, and I’m not claiming people will choose to create this; just that they could if they wanted to.)
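Here's a minimal sketch of the "run only as needed" idea, assuming a hypothetical chunk-based world where each region is a pure function of a global seed and its coordinates (all the names are made up for illustration, and the sketch ignores the harder part of propagating effects from other pieces). It also shows why detail made up on the spot can stay consistent across observations:

```python
# "Run the algorithm only as needed": every chunk of a conceptually
# unbounded world is a pure function of a global seed and its coordinates,
# so it gets generated the first time anyone looks at it and regenerates
# identically on a second look.

import hashlib
import random

WORLD_SEED = 42

def generate_chunk(x, y):
    """Deterministically make up the contents of the chunk at (x, y)."""
    key = f"{WORLD_SEED}:{x}:{y}".encode()
    rng = random.Random(int.from_bytes(hashlib.sha256(key).digest()[:8], "big"))
    terrain = rng.choice(["snow", "field", "cloud", "rock"])
    detail = rng.random()          # stand-in for finer detail made up on demand
    return {"terrain": terrain, "detail": detail}

print(generate_chunk(10**9, -7))   # only chunks an observer visits get computed
print(generate_chunk(10**9, -7))   # a revisit yields the same made-up detail
```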

DH7: Why Good Argumentative Discourse Is Like a Bad Horror Movie

Earlier I recommended Paul Graham’s disagreement hierarchy. But it’s missing one level at the top.

When an argument is made, you learn about that argument. But often you also learn about arguments that could have been made, but weren’t. Sometimes those arguments work where the original argument doesn’t.

If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them.

To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse.