Creature Or Technology

When people think about artificial intelligence — not things like chess computers, but the full-blown human-equivalent kind that doesn’t currently exist — they often think of each AI design as an individual creature.

A more enlightening way to see things is to consider each AI design as a technology, a way to arrange matter to produce some desired result. Just like the technology called “the wheel” is a way to turn a thing that just sits there into a thing that rolls, an AI design is a way to turn any sufficiently powerful computer into a thing competent at figuring out truths and motivated toward achieving certain goals.

Most science fiction is misleading on this point. It’s as if stone age people had stories about the future where one of them invented a Wheel and another of them invented another Wheel and another really smart one invented two different Wheels, so they could put their life’s work together and build a cart and have it go on wondrous adventures through an otherwise wheelless world.

Once we’ve built our HAL-9000 or Data or KITT or GSV Ravished By The Sheer Implausibility Of That Last Statement (leaving aside fictional AI realism problems), it doesn’t just mean we now have a novel individual mind, or a small group of such minds representing different versions. It means we have once and for all discovered the secret of transmuting stupid into smart.

Second Nature

Anne C. at Existence is Wonderful writes:

A big part of what I find wonderful about existence has to do with being able to look at things in my own way, without anyone telling me what I am supposed to be seeing, or what is “important” about the environment I inhabit.

That is one huge thing I see missing in exhortations of the delights of VR — the acknowledgment that no matter how well-rendered, it is still going to be a case of someone else designing everything you experience.

I deny that, for sufficiently advanced VR, the thing to be acknowledged is true. This ties in to a “fun theory” speculation I’ve been meaning to write for some time now.

I can see far-future people, whether “real” or “virtual”, inhabiting three different kinds of places:

  1. Places designed by people.
  2. Places designed by something else.
  3. Places not designed by anything, created by mechanically unfolding a limited amount of seed complexity.

Post-Apocalyptic Puzzles

Consider those future events that destroy most, but not all, of human civilization; maybe we can call them “Oomsday”. For example, a global pandemic in which everyone outside Madagascar dies.

How can we make sure that if we fail to prevent such an event from happening, survivors manage to bounce back to a good outcome in the long term?

The obvious approach is to send information to the future in apocalypse-proof time capsules. But these are rather crude tools.

As designers of computer-based time capsules, we can exercise a far more fine-tuned control over the future than by just telling them what we know. Through passwords, timer mechanisms, and “security through obscurity”, we can choose the order in which pieces of knowledge become available. We can make sure a piece of knowledge is only available to people who already understand a different piece of knowledge, and perhaps even turn this into some sort of incentive structure. Or we can try to make some pieces of knowledge available only to people with certain interests. (Caricatured example: in a huge database, file information on nuclear physics under “how to be a good guy and help people”.)
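
The “available only to people who already understand X” idea can be enforced cryptographically rather than by trust. Here is a minimal sketch of one way to do it (my own construction, using only Python’s standard library; the example texts are made up): one piece of knowledge is encrypted under a key derived from another piece, so the capsule itself imposes the ordering.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against SHA-256-derived keystream blocks.
    (A real capsule would want an authenticated cipher.)"""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

# Hypothetical example texts:
piece_a = b"Germ theory: diseases are caused by microorganisms."
piece_b = b"Details of vaccine production follow..."

key = hashlib.sha256(piece_a).digest()   # possessing piece A *is* the key
capsule = keystream_xor(key, piece_b)    # only this ciphertext is stored

# A survivor who has recovered piece A can unlock piece B; no one else can:
assert keystream_xor(hashlib.sha256(piece_a).digest(), capsule) == piece_b
```

In practice the key would have to be derived from some canonical form of the prerequisite knowledge (say, answers to fixed test questions) rather than from an exact text, but the principle is the same.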

In reality, information will probably not be lost so completely and cleanly. But perhaps with these ideas in mind, we can better help post-apocalyptic people make the best of a horrible situation.

Causal Bottlenecks

Earth-2009 is at a causal bottleneck in the sense that decisions taken now can influence those features of the far future that we consider important, most obviously but not exclusively by causing or preventing the extinction of civilization.

The point I want to make in this post is that, purely as a matter of math, this has to be an extremely unusual state of affairs.

If decisions taken in the 21st century account for 20% of the variance in the value (measured in whatever way) of the universe in 1 million AD, then it cannot be the case that decisions taken in the 22nd through 30th century also each account for 20% of this variance.
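
To spell out the math, here is a minimal formalization, under my own simplifying assumption that different centuries’ decisions are uncorrelated:

```latex
% Let R_i^2 be the fraction of the variance of the universe's value V
% (in 1 million AD) explained by the decisions D_i of century i.
% If the D_i are uncorrelated, the explained shares cannot exceed the whole:
\[
  \sum_i R_i^2 \le 1 ,
\]
% so at most five centuries can each account for 20% of the variance,
% and the shares must eventually dwindle toward zero.
```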

A general argument along those lines shows that, for nearly all time periods, if you plot the correlations of the decisions taken in that time period with the values of the universe at times in that period’s future, you’ll get a graph that converges to zero. The past has been an exception. Future people will still be able to influence their future; but the consequences of their actions on the global state of things will be slowly papered over with decisions taken in the future’s future. This will make more emotional sense to humans; common-sense morality assumes people nearby in time are most of what matters.
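
A toy simulation makes the papering-over effect visible. The dynamics are my own arbitrary choice, not anything derived from first principles: each period’s decision nudges the world state, and later decisions keep overwriting it.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.2                    # how strongly each period's decision moves the state
T, trials = 60, 100_000
d = rng.standard_normal((trials, T))   # i.i.d. "decisions", one per period

s = np.zeros(trials)
states = []
for t in range(T):
    s = (1 - a) * s + a * d[:, t]      # later decisions overwrite earlier ones
    states.append(s.copy())

# Correlation of the decision taken in period 5 with the state at later times:
for horizon in (0, 10, 20, 40):
    corr = np.corrcoef(d[:, 5], states[5 + horizon])[0, 1]
    print(f"corr(d_5, state_{5 + horizon}) = {corr:.3f}")
# The correlation decays roughly like (1 - a) ** horizon: early decisions get
# papered over, unless (unlike here) they fix a permanent feature of the world.
```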

I’ve made some assumptions here. One of them is that there’s a chance we’ll one day be able to get existential risk down to a negligible level. I believe this to be true, but if not, forget what I said about influencing the long-term future; there will not be one. Another assumption is no boundless growth. With boundless growth, some of the conclusions here are worth re-assessing (does total value increase faster than correlations die out?), but one conclusion that stands is that decisions entangled with a fixed fraction of the future’s value — such as those relating to existential risk — are both unique and uniquely important.

As a final worry, if causal bottlenecks are as rare as this argument shows, do we in fact have enough information to overcome the low prior probability that we are in one?

Civilizational Defense

Brian Wang has some interesting posts up about civil defense, i.e. making people and buildings more disaster-proof. For really big but not existential risks, refuges might help. But what I’m most interested in is how to preserve our “civilization” or “wisdom”. By that I don’t necessarily mean art or even science, but those ideas and institutions that would prevent a post-apocalyptic society from sliding into the sort of dystopia you see in fiction, and that would allow us to rebound gracefully to a good long-term outcome. A “Handbook for Apocalypse Survivors” would be nice, but what would you put in it?

Unlimitednesses to Virtual Reality

Rudy Rucker, in a post titled Fundamental Limits to Virtual Reality, argues that a virtual version of our planet could never be as rich in phenomena unless it used about as much computational machinery as the planet itself embodies. His main argument:

This is because there are no shortcuts for nature’s computations. Due to a property of the natural world that I call the “principle of natural unpredictability,” fully simulating a bunch of particles for a certain period of time requires a system using about the same number of particles for about the same length of time. Naturally occurring systems don’t allow for drastic shortcuts.

For details see The Lifebox, the Seashell and the Soul, or Stephen Wolfram’s revolutionary tome, A New Kind of Science—note that Wolfram prefers to use the phrase “computational irreducibility” instead of “natural unpredictability”.

Granted, a full simulation at the level of atoms or elementary particles would not be doable. But there’s no reason you need one. Vidar Hokstad nails it in the comments:

We can’t predict the arrangement of individual atoms in a large object. Why would a simulation even try? If someone does point an electron microscope at an object in the simulated world, the simulator can pick any random arrangement and we wouldn’t know any better.

Rucker’s response:

The notion of leaving the details up to randomness is an interesting move. But maybe they aren’t random. Wolfram sometimes claims the whole kaboodle comes out of some, like, ten-bit rule that’s run for a really large number of cycles. Here it’s the number of cycles that’s the thing that won’t fit on your desk.

When people talk about a substitute being “just as good,” I think of the Who song. [lyrics omitted]

But if all the stuff that Rucker shows in his photos — snow, fields, clouds, rocks — can be recreated qualitatively from humongously lossy statistical mechanics models, and it’s only details like what you see through an electron microscope that have to be made up on the spot, doesn’t that already contradict his original point, which is that VR surroundings would look noticeably impoverished? It’s true that in a chaotic world, if you go to a lossy VR version, it will diverge pretty quickly from what it would have been. But then, in a chaotic world, the world diverges from what it would have been every time you blink.

Also, it seems like there should be some sort of principle that says it doesn’t take much more computing power to run a convincing virtual world for a mind to live in than it takes to run the mind itself. There’s only so much you can process in a second. I suppose that if you want a world to naturally factor huge numbers, computational complexity theory says that doing so takes much longer than it takes a mind in it to recognize that the numbers have indeed been factored. Most features of the world don’t seem to me to be like that, but my thinking here isn’t clear at the moment.
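
The factoring case, at least, is easy to make concrete. A small sketch with toy numbers of my own choosing: verifying a claimed factorization is a single multiplication, while finding the factors by brute force takes far longer, and the gap widens rapidly as the numbers grow.

```python
import time

p, q = 1_000_003, 1_000_033   # two primes; their product is the "puzzle"
n = p * q

t0 = time.perf_counter()
assert p * q == n             # recognizing the answer: one multiplication
verify_time = time.perf_counter() - t0

t0 = time.perf_counter()
factor = next(d for d in range(2, n) if n % d == 0)  # finding it: trial division
search_time = time.perf_counter() - t0

print(f"verify: {verify_time:.2e} s, search: {search_time:.2e} s, factor: {factor}")
```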

If I’m right and Rucker is wrong, the world takes up much less room in VR than will be available on future real-world computers. That means virtual worlds could be much bigger than Earth; it’s interesting to think about the implications if people lived there. In fact, if there’s a not-too-expensive algorithm determining what a new piece of the world looks like, as well as how other pieces would have affected it until that time, and the algorithm gets run only as needed, then in a sense the world is infinite. (This doesn’t have any real function, and I’m not claiming people will choose to create this; just that they could if they wanted to.)
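
A minimal sketch of the run-only-as-needed idea (every name and value here is hypothetical, and it covers only lazy generation; propagating influences between distant pieces would take a more expensive algorithm): make each region of the world a pure function of a seed and its coordinates, so any region can be generated on demand and regenerated identically whenever someone returns.

```python
import hashlib

WORLD_SEED = b"hypothetical-world-seed"

def region(x: int, y: int) -> int:
    """Deterministic 64-bit 'contents' of the world region at (x, y)."""
    digest = hashlib.sha256(WORLD_SEED + f"{x},{y}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

# Only regions someone actually visits ever get computed, yet a region
# arbitrarily far away costs the same, and revisits reproduce it exactly:
print(region(0, 0))
print(region(10**100, -(10**100)))
assert region(0, 0) == region(0, 0)
```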

Always 40 Years Away

An earlier post pointing to a Nick Bostrom paper led to a discussion of the observation (true or false) that the predicted date of a technological singularity has been receding by one year per year, so that it’s always the same number of years away. Rolf Nelson wrote:

[T]his is a common objection that I’ve never understood. What hypothesis is this alleged pattern Bayesian evidence for, and what hypothesis is this alleged pattern Bayesian evidence against? If there’s a hard takeoff in 2040, I guarantee you that someone in 2039 will publish a prediction that AGI is still 40 years away.

Also, some amount of “prediction creep” is rational to the extent a phenomenon resembles a one-shot Poisson process. Suppose my personal life expectancy today is that I will die, on average, at age 78.58236. If, tomorrow, I find that I haven’t died yet, I should slightly increase it to age 78.58237. I *expect* this expectation value to increase slightly on every day of my life, except for the day I die, during which the expectation value decreases drastically.

Further discussion brought up the exponential distribution and its property of “memorylessness”, meaning that the distribution conditional on failure from time 0 up to time t looks like an exact copy of the original distribution shifted t to the right. (If you have a Poisson process, i.e. one where events occur independently and are equally probable at each time, then the probability distribution for when the first event happens is an exponential distribution, so this is the same thing Rolf said.)
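
Memorylessness is easy to check numerically. A quick sketch of my own, using the 40-year mean from the discussion below: condition an exponential sample on 40 years of failure, shift it back to zero, and its mean is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(40.0, size=1_000_000)   # waiting times with a 40-year mean

t = 40.0
remaining = x[x > t] - t   # condition on t years of failure, then shift

print(f"unconditional mean:    {x.mean():.2f}")          # ~40
print(f"conditional mean wait: {remaining.mean():.2f}")  # also ~40
```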

The question now is, if according to experts (chosen in a certain defined way) AI has always been 40 years away, what does this prove? I don’t have a full answer, but I do have some comments.

  • In a distribution that’s like the exponential (and with a 40-year mean) but thicker-middled and thinner-tailed (in some sense that I’d have to think about how to quantify), upon observing 40 years of failure, the conditional distribution will be concentrated in its head and will have a mean less than 40 more years away. This is the sort of distribution you get if, e.g., the amount of work that remains to be done is a known quantity, and a random amount of work is being done i.i.d. in each time period. (A simulation sketch after this list illustrates this case and the thick-tailed one below.)
  • If this is true it makes the experts look OK; on reflection they made no mistake in predicting a mean of 40 years.
  • In a distribution that’s like the exponential (and with a 40-year mean) but thinner-middled and thicker-tailed, upon observing 40 years of failure, the conditional distribution will be concentrated in its tail and will have a mean more than 40 more years away. This is the sort of distribution you get if, e.g., you add a couple of different exponential distributions with means spread around 40 years to represent the experts’ ignorance of what mean they really should have predicted. The increase in the predicted time until the event would represent their learning that, apparently, the exponential distribution with higher mean than 40 years is a better model.
  • If this is true it makes the experts look bad; they underestimated the difficulty of the problem.
  • In reality, again if the observation is true, some combination of the above two effects is probably in play, with the effects on the mean canceling each other out. (Intuitively I’d say that if they cancel out, you need “more” of the second effect than the first effect.)
  • But if neither effect is going on and we’re seeing a simple exponential distribution in action, what is that Bayesian evidence for? Well, according to this distribution, if it started 40 years ago then with probability 1-1/e AI should have happened already. According to the alternative hypothesis that 1) experts are like tape recorders programmed to say 40 years whatever the facts are and 2) there’s actually a negligible chance of it happening soon, AI should have happened with probability 0. The evidence favors the latter hypothesis over the former hypothesis with a Bayes factor of e. (Of course, you have to consider priors.) So I’m not sure that pointing at memorylessness gets you far; instead of “why haven’t predicted times decreased?”, you get the question “why hasn’t it happened yet?”.
  • Of course, everything depends on how you pick what expert predictions to listen to. Just because by one method you can get suggestive evidence that the experts have been over-”optimistic”, it doesn’t follow that a more sensible method would yield more “pessimistic” predictions for the future.
  • Just extrapolating to “AI will always be ’40 years away’” is all kinds of naive — I think everyone here can agree on this.
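
Here is the simulation sketch promised above. The particular distributions are my own arbitrary stand-ins: a gamma for the thin-tailed case and a two-component exponential mixture for the thick-tailed case, each with a 40-year unconditional mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Thinner tail: a known pile of work chewed through at an i.i.d. rate,
# modeled as gamma(shape=4, scale=10); mean 40.
thin = rng.gamma(4.0, 10.0, size=n)
# Memoryless baseline: exponential with mean 40.
expo = rng.exponential(40.0, size=n)
# Thicker tail: 50/50 mixture of exponentials with means 10 and 70 (mean 40),
# standing in for ignorance about which mean should have been predicted.
thick = np.where(rng.random(n) < 0.5,
                 rng.exponential(10.0, size=n),
                 rng.exponential(70.0, size=n))

for name, x in (("thin-tailed", thin), ("exponential", expo), ("thick-tailed", thick)):
    remaining = x[x > 40.0] - 40.0
    print(f"{name:12s} mean wait remaining after 40 years: {remaining.mean():5.1f}")
# Prints roughly 18, 40, and 68: less than 40, exactly 40, and more than 40.
```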

Ice, On Ice

On an old thread at RealClimate, Onar Åm proposed pumping ocean water onto Antarctica to counteract sea level rise, and roughly calculated that offsetting a 30 cm rise over 100 years would take a continuous 35 GW of pumping power. (World energy usage is estimated at 1500 GW.) I have not seen this idea anywhere else. It sounds too good to be true. Is it reasonable? How far could we push it if we had cheap futuristic energy? As just one example, apparently it would take only about a 30 m sea level drop to liberate Doggerland from its fascist oppressor of several millennia. By that time, I guess one worry is where you put all the extra ice.
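
A back-of-envelope check of the 35 GW figure (the ocean area is standard; the roughly 100 m lift height is my guess at what the figure assumes, e.g. pumping just past the edge of the ice sheet rather than up to its roughly 2,500 m mean elevation):

```python
OCEAN_AREA_M2 = 3.6e14    # surface area of the world's oceans
RISE_M = 0.30             # sea level rise to offset
LIFT_M = 100.0            # assumed pumping height (my guess, see above)
RHO = 1000.0              # kg/m^3, ignoring seawater's ~2.5% extra density
G = 9.81                  # m/s^2
SECONDS = 100 * 365.25 * 24 * 3600   # one century

mass = OCEAN_AREA_M2 * RISE_M * RHO   # ~1.1e17 kg of water
energy = mass * G * LIFT_M            # joules of lifting work
print(f"average power: {energy / SECONDS / 1e9:.0f} GW")   # ~34 GW
# Pumping to the ice sheet's ~2,500 m mean elevation instead would scale this
# by 25x, to roughly 840 GW, before any pump or pipeline inefficiencies.
```

Under that assumption the 35 GW figure checks out, though the lift height is doing a lot of the work.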