Creature Or Technology

When people think about artificial intelligence — not things like chess computers, but the full-blown human-equivalent kind that doesn’t currently exist — they often think of each AI design as an individual creature.

A more enlightening way to see things is to consider each AI design as a technology, a way to arrange matter to produce some desired result. Just like the technology called “the wheel” is a way to turn a thing that just sits there into a thing that rolls, an AI design is a way to turn any sufficiently powerful computer into a thing competent at figuring out truths and motivated toward achieving certain goals.

Most science fiction is misleading on this point. It’s as if stone age people had stories about the future where one of them invented a Wheel and another of them invented another Wheel and another really smart one invented two different Wheels, so they could put their life’s work together and build a cart and have it go on wondrous adventures through an otherwise wheelless world.

Once we’ve built our HAL-9000 or Data or KITT or GSV Ravished By The Sheer Implausibility Of That Last Statement (leaving aside fictional AI realism problems), it doesn’t just mean we now have a novel individual mind, or a small group of such minds representing different versions. It means we have once and for all discovered the secret of transmuting stupid into smart.

Cognitive Turk

If you knew everyone’s opinion on everything, you could extract lots of useful information from the correlations. So maybe there should be a Web 2.0 thing that lets users answer a lot of controversial questions and displays the answers in a profile.

Then you could let people query it based on user-defined criteria. (“Among people over 50 who believe in string theory, who’s considered the favorite to be Cthulhu’s running mate in 2012?”)

You could also try out many algorithms to figure out which one best turned opinion data into the right answers to objectively scorable questions. (“What will the temperature be in a year?”) Then you could apply that algorithm to answer all other questions.
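
To make that pipeline concrete, here’s a minimal sketch in Python; the data layout, the two aggregation rules, and the mean-absolute-error scoring are all stand-ins I made up, not a claim about how the real system would have to work.

    # Toy sketch of the "Cognitive Turk" idea: score candidate opinion-aggregation
    # rules on questions with known answers, then reuse the best rule everywhere.
    import statistics

    # answers[question][user] = that user's numeric answer (estimate or probability)
    answers = {
        "temp_in_one_year_C": {"alice": 14.2, "bob": 15.0, "carol": 13.8},
        "chance_cthulhu_ticket_wins_2012": {"alice": 0.1, "bob": 0.7, "carol": 0.4},
    }

    # Ground truth exists only for the objectively scorable subset.
    scorable_truth = {"temp_in_one_year_C": 14.1}

    # Candidate aggregation rules: each maps a dict of user answers to one number.
    rules = {
        "mean": lambda a: statistics.mean(a.values()),
        "median": lambda a: statistics.median(a.values()),
    }

    def rule_error(rule):
        """Mean absolute error of a rule on the scorable questions."""
        return statistics.mean(
            abs(rule(answers[q]) - truth) for q, truth in scorable_truth.items()
        )

    # Pick the rule that does best where we can check it...
    best_name, best_rule = min(rules.items(), key=lambda kv: rule_error(kv[1]))

    # ...and apply it to every question we can't check.
    for q in answers:
        if q not in scorable_truth:
            print(q, "->", best_rule(answers[q]), f"(aggregated by '{best_name}')")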

Potential problems abound. For example, objectively scorable questions are a biased subset of all questions. Methods used to extract the most reliable answers to them may not generalize. Also, there would be “strategic voting”-type issues.

If these problems could somehow be solved or contained, the result would arguably be the most authoritative source on Earth, and a new argument for majoritarianism. (I picture it coming with the sound of a booming voice saying, “You dare disagree with Authoritron?” That way it would reduce irrational overconfidence in one’s personal opinions. Social psychology, etc.)

(This is a similar idea but with the intent of fixing inconsistencies in an individual set of opinions rather than using other people as authorities.)

Second Nature

Anne C. at Existence is Wonderful writes:

A big part of what I find wonderful about existence has to do with being able to look at things in my own way, without anyone telling me what I am supposed to be seeing, or what is “important” about the environment I inhabit.

That is one huge thing I see missing in exhortations of the delights of VR — the acknowledgment that no matter how well-rendered, it is still going to be a case of someone else designing everything you experience.

I deny that, for sufficiently advanced VR, the thing Anne wants acknowledged is true. This ties in to a “fun theory” speculation I’ve been meaning to write up for some time now.

I can see far-future people, whether “real” or “virtual”, inhabiting three different kinds of places:

  1. Places designed by people.
  2. Places designed by something else.
  3. Places not designed by anything, created by mechanically unfolding a limited amount of seed complexity (a toy sketch of what I mean follows this list).
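
To gesture at what the third kind of place could look like, here’s a toy sketch in Python: a handful of seed bits, mechanically unfolded by a fixed rule into structure nobody designed. The particular rule (an elementary cellular automaton) and the seed are arbitrary choices, not part of the claim.

    # A tiny amount of seed complexity, unfolded mechanically into a "place":
    # an elementary cellular automaton (Rule 110) grown from an 8-bit seed.
    SEED = 0b01011001          # the entire "design" is these eight bits
    WIDTH, STEPS, RULE = 64, 20, 110

    def step(cells):
        """Apply the elementary CA rule once, with wrap-around edges."""
        out = []
        for i in range(len(cells)):
            left, mid, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
            pattern = (left << 2) | (mid << 1) | right
            out.append((RULE >> pattern) & 1)
        return out

    # Tile the seed bits into an initial row, then let the rule run.
    row = [(SEED >> (i % 8)) & 1 for i in range(WIDTH)]
    for _ in range(STEPS):
        print("".join("#" if c else "." for c in row))
        row = step(row)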

Continue reading

Shiny Pitchforks

Sometimes people set up strawman arguments because they’re easier to knock down than the real thing. But that’s a beginner mistake.

What I’ve seen far more often in reasonable people (myself included) is that they come to a discussion with some point in mind that they want to make, because they think it’s underappreciated or because they had an “aha” experience thinking of it, and then interpret others as saying the thing countered by that point, when in fact those others are saying something subtly different.

If all you have is a +5 holy pitchfork “Scarecrowbane” that you obtained through perilous questing, every stranger looks strawy.

(With posts like this I am probably just being a captain unto the obvious, but I think it’s good to keep naming and shaming these tendencies. Overcoming Bias and logical fallacy lists do well already, but how about a TV Tropes for argument patterns?)

Post-Apocalyptic Puzzles

Consider those future events that destroy most, but not all, of human civilization; maybe we can call them “Oomsday”. For example, a global pandemic in which everyone outside Madagascar dies.

How can we make sure that if we fail to prevent such an event from happening, survivors manage to bounce back to a good outcome in the long term?

The obvious approach is to send information to the future in apocalypse-proof time capsules. But these are rather crude tools.

As designers of computer-based time capsules, we can exercise far more fine-tuned control over the future than by just telling survivors what we know. Through passwords, timer mechanisms, and “security through obscurity”, we can choose the order in which pieces of knowledge become available. We can make sure a piece of knowledge is only available to people who already understand a different piece of knowledge, and perhaps even turn this into some sort of incentive structure. Or we can try to make some pieces of knowledge available only to people with certain interests. (Caricatured example: in a huge database, file information on nuclear physics under “how to be a good guy and help people”.)
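
Here’s a minimal sketch of the knowledge-gating mechanism in Python. The prerequisite question, the payload, and the XOR “cipher” are toy stand-ins (real capsule designers would want real cryptography and much more robust key material); the point is only the shape of the mechanism.

    # Knowledge-gated capsule content: the decryption key is derived from a
    # prerequisite fact, so the payload is readable only by someone who already
    # knows that fact. Toy example; the XOR keystream is not real cryptography.
    import hashlib

    def derive_key(answer: str, salt: bytes, length: int) -> bytes:
        """Stretch a prerequisite answer into a payload-sized key."""
        return hashlib.pbkdf2_hmac("sha256", answer.encode(), salt, 200_000, dklen=length)

    def xor(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(data, key))

    SALT = b"capsule-unit-042"
    PREREQ_QUESTION = "What is the approximate speed of light in km/s?"

    # The capsule builders encrypt the next unit under the prerequisite answer...
    plaintext = b"Next unit: basic nuclear physics, filed under 'how to help people'."
    ciphertext = xor(plaintext, derive_key("300000", SALT, len(plaintext)))

    # ...so future readers recover it only if they can answer the question.
    def open_capsule(answer: str) -> bytes:
        return xor(ciphertext, derive_key(answer, SALT, len(ciphertext)))

    print(PREREQ_QUESTION)
    print(open_capsule("300000"))   # correct prerequisite -> readable text
    print(open_capsule("no idea"))  # wrong answer -> gibberish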

In reality, information will probably not be lost so completely and cleanly. But perhaps with these ideas in mind, we can better help post-apocalyptic people make the best of a horrible situation.

Causal Bottlenecks

Earth-2009 is at a causal bottleneck in the sense that decisions taken now can influence those features of the far future that we consider important, most obviously but not exclusively by causing or preventing the extinction of civilization.

The point I want to make in this post is that, purely as a matter of math, this has to be an extremely unusual state of affairs.

If decisions taken in the 21st century account for 20% of the variance in the value (measured in whatever way) of the universe in 1 million AD, then it cannot be the case that decisions taken in the 22nd through 30th century also each account for 20% of this variance.
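
To spell out the arithmetic, take a toy additive model (my simplification; nothing about the argument depends on the contributions being exactly independent):

    % V = value of the universe in 1 million AD; D_k = the contribution of the
    % decisions taken in century k, assumed independent of each other and of
    % everything else lumped into \varepsilon.
    \[
      V = \sum_k D_k + \varepsilon
      \;\Longrightarrow\;
      \operatorname{Var}(V) = \sum_k \operatorname{Var}(D_k) + \operatorname{Var}(\varepsilon)
      \;\Longrightarrow\;
      \sum_k \frac{\operatorname{Var}(D_k)}{\operatorname{Var}(V)} \le 1 .
    \]
    % So at most five disjoint periods can each account for 20% of the variance,
    % and the average share across all future centuries has to head toward zero.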

A general argument along those lines shows that, for nearly all time periods, if you plot the correlations of the decisions taken in that period with the values of the universe at times in that period’s future, you’ll get a graph that converges to zero. The past has been an exception. Future people will still be able to influence their future, but the consequences of their actions on the global state of things will be slowly papered over by decisions taken in the future’s future. This will make more emotional sense to humans than the present bottleneck does; common-sense morality assumes that the people nearby in time are most of what matters.
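
A toy simulation of the “papered over” effect, with an arbitrary overwrite rate: each century, a fraction r of the world’s state is replaced by that century’s random decisions, so the correlation of an early century’s decisions with the final state decays geometrically toward zero.

    # Toy model: state is repeatedly overwritten, so early decisions' correlation
    # with the distant-future state converges to zero. Numbers are arbitrary.
    # (statistics.correlation requires Python 3.10+.)
    import random, statistics

    r, CENTURIES, RUNS = 0.2, 30, 5000

    def run():
        decisions = [random.gauss(0, 1) for _ in range(CENTURIES)]
        state = 0.0
        for d in decisions:
            state = (1 - r) * state + r * d   # this century papers over 20% of the past
        return decisions, state

    samples = [run() for _ in range(RUNS)]
    finals = [final for _, final in samples]

    for century in (0, 10, 20, 29):
        ds = [decisions[century] for decisions, _ in samples]
        print(f"century {century:2d}: corr with final state = "
              f"{statistics.correlation(ds, finals):+.3f}")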

I’ve made some assumptions here. One of them is that there’s a chance we’ll one day be able to get existential risk down to a negligible level. I believe this to be true, but if not, forget what I said about influencing the long-term future; there will not be one. Another assumption is no boundless growth. With boundless growth, some of the conclusions here are worth re-assessing (does total value increase faster than correlations die out?), but one conclusion that stands is that decisions entangled with a fixed fraction of the future’s value, such as those relating to existential risk, are both unique and uniquely important.

As a final worry, if causal bottlenecks are as rare as this argument shows, do we in fact have enough information to overcome the low prior probability that we are in one?

Quantum Versus Doom

Assumptions:

  1. The many-worlds interpretation of quantum mechanics is correct.
  2. If so, in a nontrivial fraction of worlds we go on to ancestorize huge numbers of people.
  3. The reasoning behind the doomsday argument is correct: if the future (all worlds combined, weighted by quantum measure) contains orders of magnitude more people than the past and present, we should be surprised to live so early.

Together, these assumptions produce surprise. (Ancestor simulations might affect the reasoning here, but only, it seems to me, if they’re a significant fraction of the universe’s computing resources.)
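
To make the surprise concrete, here is the standard self-sampling bookkeeping behind assumption 3 (nothing original, just the usual doomsday arithmetic):

    % Self-Sampling Assumption: treat your birth rank n among all observers who
    % ever exist (all worlds combined, weighted by quantum measure) as a uniform
    % draw from the total number N.
    \[
      P(\text{rank} \le n \mid N) = \frac{n}{N},
    \]
    % so if assumptions 1 and 2 make N exceed n by orders of magnitude, living
    % this early is an event of probability n/N \ll 1; that improbability is the
    % surprise the three assumptions jointly produce.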

If I said, “we have strong independent evidence for 1 and 2, so 3 has to be wrong”, would I be blending science and philosophy in a legitimate way? Or is the truth of 3 something we can only find out by a priori reasoning? I believe that the blending here is legitimate, and I already believed 3 to be false anyway, but making this argument still leaves me with an uncomfortable feeling.

Quantum Immortality: First Salvo

Jacques Mallah has put a paper on the arXiv arguing against the theory of “quantum immortality” (which says that as long as there remains at least one quantum world with a copy of you in it, you should expect to stay alive) and the related idea of “quantum suicide” (which says that, since you’re “quantum immortal” anyway, you might as well correlate your death with things like not winning the lottery). I’m not yet sure I’d endorse the whole paper, but like Mallah, I believe in the many-worlds interpretation of quantum mechanics, and like Mallah, I do not believe that it implies quantum immortality. I promised long ago to write a post against the idea, but I didn’t get around to it, so I’ll take this opportunity to expand on something Mallah says and sort of get things started:

Continue reading

Distrust Indirect Arguments

An argument from authority is an indirect argument; it tells you there are probably direct arguments that convinced the authority.

An argument by analogy is an indirect argument; it tells you there are probably direct arguments for the thing you’re considering that are like the direct arguments supporting the analogous thing.

An argument with ambiguous words and sentence constructions in it is an indirect argument; it tells you that if you picked the right exact meanings and stuck to them all the way through the argument, you would probably find yourself with a direct argument.

In all these cases, “logical information” is being left off the table. Some of the same considerations apply here that apply when ordinary facts are withheld.

If you know the arguer is arguing in good faith, indirect arguments are merely a noisy (and “causally distant”) signal.

But to the extent that the arguer could be arguing in bad faith — as an advocate rather than a truth-seeker — the fact that the direct arguments weren’t specified is itself evidence they don’t work, just like when facts aren’t given, that’s evidence they don’t go the arguer’s way.

So I would say that, because indirect arguments create space for advocates to operate in, discussions between truth-seekers can be “loose” and still be informative, whereas discussions between advocates should be “tighter” if they are to have much value.

Be Wise, Disanalogize

Any argument by analogy can be transformed into an argument not by analogy. If you’re not sure whether you believe A, but you already believe B which someone says is analogous to A, then the procedure is as follows:

  1. Figure out all the reasons why you believe B, including the ones you haven’t yet verbalized.
  2. Throw out all the reasons that don’t also apply to A.
  3. (While you’re at it, throw out all the reasons that are wrong and adjust your belief in B.)
  4. List all the reasons that do apply to A, deleting all references to B.

An argument by analogy is not so much an argument as a (sometimes credible) promise of one; it is a mark of homework left undone.