Singularity Research Challenge

SIAI (where I’m currently a volunteer) is doing another matching challenge campaign. This time you get to choose what specific projects to fund. Michael Anissimov has more details.

Here are some reasons to invest in reducing existential risk that you might not have considered before:

  • “The religions disperse, kingdoms fall apart, works of science would have been invented anyway, but feats of existential risk reduction remain for all ages.”
  • Stories where the world is saved excitingly necessarily depend on the real world being saved less excitingly.
  • To make the world a better place, you must first make the world a place.
  • Think of it as extreme survivalism: everyone lives.
  • Even if you believe Armageddon is coming, wouldn’t it be embarrassing if we went extinct before that happened?
  • Reducing existential risk just means saving the whales with extremely broad safety margins around the definition of “whale”.
  • If we go extinct, all possible terrorists win.

Singularity Summit 2009 in New York

SIAI is organizing the 2009 edition of their yearly Singularity Summit on October 3rd and 4th. Unlike the 2006-2008 summits, which were in the Bay Area, this one will be held in New York.

For interested people on the US East Coast and in Europe especially, the Summit looks like a unique opportunity to see speakers with impressively varied expertise on the kinds of subjects this blog talks about. The program is broadly based around the idea of the technological singularity, but looks like it will also cover cognitive enhancement, neuroscience, the philosophy of mind, nanotechnology, and forecasting the future. Among the many interesting speakers are David Chalmers, Ray Kurzweil, Philip Tetlock, and Peter Thiel.

The technological singularity concept recently got some front-page NYT coverage, evidence that it’s taking off in the media. But if you come to the Summit or help spread the word, it’s still early enough that you get to say you were into this stuff before it was mainstream.

Rapture versus MechaRapture

Interestingly enough, RaptureReady.com has a piece up claiming:

transhumanist anticipation of the singularity is comparable to Christian anticipation of the second coming of Jesus Christ

To those of you who dismiss the singularity as the “Rapture of the Nerds”: are you sure you want to agree with RaptureReady.com? Ha!

Read the whole thing, but only if you’re looking for entertainment:

A movement which views its ultimate purpose as bringing enlightenment to the universe sets itself up in direct opposition to God’s own purpose … their ambition — like Satan’s — will one day lead to an outright physical confrontation with God Himself. It’s a battle that God will win.

I’m In Your Basket, Stealing Your Precious Eggs

Every so often, people accuse singularitarians of advocating that we ignore other issues, such as arms control and environmental protection, and put all our eggs in the singularity basket. These people are making the simple mistake of not thinking at the margin. Sure, if society invested its resources in a safe singularity only, that would be irrational; we should address the world’s various problems with a broad mix of strategies. But right now, the fraction of resources society invests in ensuring a safe singularity is negligible. We have a long way to go before that fraction gets anywhere near the point where a unit of extra effort toward a safe singularity is only about as useful as a unit of extra effort invested in more popular strategies. You have only a small fraction of the world’s eggs; putting them all in the best available basket will help, not harm, the global egg-spreading effort.
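
To see the marginal point with a toy model (every number here is invented for illustration, and the logarithmic utility curve is just a stand-in for any curve with diminishing returns):

```python
# Toy model: the good done by each cause grows like log(1 + resources),
# so each extra unit of effort is worth less the more funded a cause is.
# All figures are invented for illustration.
funding = {
    "arms control": 1000.0,
    "environmental protection": 1000.0,
    "safe singularity": 1.0,
}

def marginal_utility(resources):
    """Derivative of log(1 + resources): the value of one more unit of effort."""
    return 1.0 / (1.0 + resources)

for cause, invested in funding.items():
    print(f"{cause:25s} next unit of effort worth: {marginal_utility(invested):.6f}")

# safe singularity: ~0.5; the others: ~0.001. The next egg does ~500x
# as much good in the neglected basket -- with no claim that society
# should fund only that basket.
```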

Rapture of the Nerds, Not

The idea of a technological singularity is sometimes derided as the Rapture of the Nerds, a phrase invented by SF writer Ken MacLeod [update: this isn’t true, see his comment] and popularized by SF writers Charlie Stross and Cory Doctorow. I can take a joke, even a boring old joke that implies I’m a robot cultist, but it irks me when jokes become a substitute for thinking. There’s always someone in discussions on the topic who uses the comparison to fringe Christian beliefs about the End Times as if it’s some sort of argument, a reason why all the developments postulated by those who do take the singularity seriously will fail to materialize.

Although the parallel — like any appeal to authority, positive or negative — might work as a heuristic, a hint to look harder for more direct and technical criticisms, it of course fails as such a criticism itself. When computing atom trajectories in a supercomputer, or in nanotechnological devices, Mother Nature doesn’t check the consequences against a List of Ridiculous Beliefs, rejecting any outcomes too similar to those expected by the uncool and stupid.

Now, it could be that if there’s a close similarity between the singularity and the rapture, this points at some sort of psychological flaw shared by believers in both, a seductive but irrational attractor of the human mind that sucks people in, with those raised religiously dressing it up in terms of God, and us technologically oriented atheists imagining a human-made God-substitute. But that image of a shared psychological flaw is itself so seductive that it has distorted people’s view of what the singularity is about into a kind of geek-bible-wielding strawman — singularitarian ideas are assumed to parallel fundamentalist Christian ideas even where they don’t, just because the comparison is apparently so much fun. “Oh, look at those silly nerds, aping the awful fundies without even knowing it!” In this post, I will list some (but not all) ways in which the singularity and the rapture resemble each other less than some people think.

First, though, it’s worth listing some ways in which the singularity and the rapture do resemble each other. Both deal with something beyond human rules emerging into the world, something so powerful that, if it wanted to, it could make human effort from that point on irrelevant. Some predictions about the singularity have included ideas that this power would suddenly help us “transcend” human life and our human bodies, with uploading, in the critic’s mind, paralleling God snatching true believers up to Heaven. And with such an event looming on the horizon, it’s only to be expected that both groups would take the possibility very seriously, in some cases even making it a central concern in their lives.

Now, some differences:

  • Rationalism: Whatever you want to call it — critical thinking, debiasing, epistemic hygiene — I don’t know of any movement that emphasizes this nearly to the extent that the singularity movement does. For example, a few of the posters at Overcoming Bias (a group blog that I highly recommend) are involved somehow in the singularity movement, and several others buy into some sort of transhumanist worldview. I think this convergence is a good sign. Here’s an article by Eliezer Yudkowsky (pronounced “Frankensteen”) that talks about how to avoid biased views of the future. Whatever you might call the rapture believers, paragons of rationality they’re not.
  • Naturalism: Though this should be obvious, all developments associated with a technological singularity would take place within the ordinary physical processes that scientific folks have come to call home. Kooks like Terence McKenna, who connect the singularity to Mayan prophecies, are laughed at by serious singularitarians. Transhuman intelligence, through its ability to explore unexpected corners in solution space, may seem magical, but we all realize no actual magic is involved. I’m not one who would disqualify non-naturalistic claims outright; it’s just that, in my opinion, the evidence so far strongly favors a naturalist world. Still, it’s a difference worth noting.
  • Uncertainty: Contrary to what you might think, most singularity activists don’t think a singularity is in any way an unavoidable consequence of technological progress; the collapse of human civilization, unfortunately, could prevent it quite effectively. Nor are they anywhere close to absolute certainty that the singularity is going to happen, or happen soon, the way the rapture people have faith in their rapture. It’s possible, after all, that we’ve underestimated the difficulties of making progress in AI, brain scanning, and the like. Experience from past attempts at futurism, as well as psychological research, tells us that it’s easy to be overconfident. But one thing they do agree on is that the singularity is worth influencing, and that some kinds are worth striving toward. In terms of expected value, even a 10% chance of such a world-changing event should cause many of us to refocus our efforts (see the back-of-the-envelope calculation after this list). A high-profile exception on this point may be Ray Kurzweil, whose overly precise predictions, based on a methodology as unreliable as curve extrapolation, should earn him at least a little mockery (though there is also a lot to like in his writings).
  • Human-caused: Rapture believers wait for an external God to save them, independent of human effort. Programming a transhuman AI, on the other hand, is something we humans can do. The singularitarian worldview is sometimes claimed to encourage an attitude of passive waiting. I think that’s an unfair accusation; it actually encourages going out and solving problems, just through a different approach.
  • Nature contingent on human action: Again, contrary to what you might think, singularity activists don’t blindly expect a singularity to be positive. Intelligence sometimes helps humans understand how to bring out the best in themselves, but this will not automatically generalize to AI. Unless thorough precautions are taken from the beginning, a superintelligent AI is likely to be indifferent toward us — not benevolent or cruel, just uncaring, except to the extent that our continued existence might get in the way of whatever it’s trying to achieve. That means it’s crucial that the first successful AI project work under such precautions, which is also to say, that a project under such precautions be the first to succeed.
  • No in-group perks: In the Singularity-as-Rapture-for-Nerds analogies that I’ve seen, it’s claimed that the Nerds expect only themselves to benefit, with the rest Left Behind in their misery, in the same way that only Christian true believers are supposed to benefit from the rapture. This seems like a clear example of fitting reality to your analogy when it should be the other way around. I haven’t seen any predictions by singularity advocates that restrict the effects to some elite group of techno-savvy westerners, nor have I seen anyone advocate that this should happen. The singularity is supposed to benefit humanity, not some particular group, and singularity activists understand this. If we succeeded at building an AI that cared enough to leave us alive in the first place, the AI would almost certainly be enough of a humanitarian to help all of us, and with maybe thousands of years of progress in a short time, it would have the resources to make this easy. Scenarios where only the rich and 31337 classes benefit from technological progress seem like a possible danger (though, if the cost of new technologies falls quickly enough, only a temporary one), but these are always pre-singularity scenarios.
  • No religious trappings like rituals, worship, or holy writings. I’d expand on this further if this post were about comparisons to religion in general, but it’s specifically about rapturism.
  • No revenge: One of the dynamics fueling religious rapture beliefs is the expectation of unbelievers being deliciously proved wrong when it happens, after which horrible things will happen to them. As far as I know, no one in the singularity movement deals in anything like these revenge fantasies. This is a good thing.
  • No anthropomorphism: The Christian God is in a way just a big authoritarian alpha monkey, but a superintelligent AI is not expected to think or behave anything like a human. Perhaps it would manifest more like a new set of laws of nature than like a human leader. It might not even be conscious. It would certainly not be a source of arbitrary moral authority.
  • The difference that actually matters, of course, is that a belief in the rapture is not justified by the evidence, and a qualified belief in the singularity, defined as disruptive changes caused by a recursively-improving superhuman AI, is. I have found a truly marvelous proof of this, but alas, it falls beyond the scope of this post.
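
As a back-of-the-envelope illustration of the expected-value point in the uncertainty bullet above (all numbers invented; a sketch, not anyone’s actual estimate):

```python
# Expected value sketch; every figure is invented for illustration.
p_singularity = 0.10        # a deliberately modest subjective probability
payoff_if_real = 1_000_000  # world-changing outcome, in arbitrary utility units
payoff_sure_thing = 1_000   # a certain, modest payoff from a conventional cause

ev_singularity = p_singularity * payoff_if_real  # 100000.0
ev_sure_thing = 1.0 * payoff_sure_thing          # 1000.0

# Even at 10%, the uncertain world-changing outcome dominates the sure
# thing in expected value; uncertainty alone is no reason to ignore it.
print(ev_singularity, ev_sure_thing)
```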

It’s also interesting to think about what would happen if we applied “Rapture of the Nerds” reasoning more widely. Can we ignore nuclear warfare because it’s the Armageddon of the Nerds? Can we ignore climate change because it’s the Tribulation of the Nerds? Can we ignore modern medicine because it’s the Jesus healing miracle of the Nerds? It’s been very common throughout history for technology to give us capabilities that were once dreamt of only in wishful religious ideologies: consider flight or artificial limbs. Why couldn’t it happen for increased intelligence and all the many things that would flow from it?

It would be tragic if, by thinking of some subjects as inherently religious, we let the religious impose their terms on our understanding of the world.

Speedrunning through Life

An amusing thing you can find on the internet these days is the speedrun, where people try to find the quickest possible way of completing some particular video game, possibly using tools like save states, but always with legal input sequences. There are examples at various sites.

I suspect that if you went back and asked people who played these games without knowing about speedruns, their estimate of the minimum time needed to complete a game would almost always be far too high. They would think of some simple improvements they themselves could imagine making, then maybe adjust slightly further downward — failing to account for all the tricks they couldn’t imagine.

In these games, despite some glitches, the basic physics corresponds quite closely to the surface rules we intuitively use to predict what can and can’t be done. In real life, the gap between basic physics and intuitive surface rules is much wider. We probably once thought of fire as an uncontrollable force with a will of its own. We once thought of atoms as, well, atomic — unbreakable things that you could rearrange, but that you otherwise had to take as given. But reality gives us much greater scope for putting in information than any video game does, and by now we’ve found ways to make many of those rules obsolete.

A superintelligent posthuman being could speedrun through life like some people speedrun through video games. The only difference is that reality has more loopholes to exploit — perhaps no longer in fundamental physics, but in things like engineering, computer security, and human interaction. That is why the consequences of a technological singularity are predicted to be so quick and so extreme.

Will Posthumans Evolve?

Darwinian evolution is the only way to make sense out of the variety of life on Earth. Will it also apply usefully to posthuman beings? Many have suggested that it will. Eliezer Yudkowsky has argued, in the context of superintelligences arising from different alien civilizations, that it will not. But if you look at Earth-originating life alone, and if you believe what Nick Bostrom calls the Singleton Hypothesis — the idea that the Earth and surroundings will end up dominated by some single decision system — the proof that Darwinian evolution will stop applying is much simpler.

There are two related ways to see that, once a society’s power is in the hands of either a single decision system with sufficient intelligence and sufficiently well-defined preferences, or multiple such decision systems with similar preferences, the Darwinian regime is over.

First, no mutations means no evolution. If I were a posthuman and I cared only about painting the universe green, I would do everything I could to make sure my descendants also cared only about painting the universe green. I would check that they didn’t differ from me in this respect, until I could be assured the probability of a mutation was negligible. If necessary, I could use massive redundancy, try them out in sandboxes, and more.
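
As a cartoon of the massive-redundancy idea (a minimal sketch with made-up numbers; the bit-string standing in for a goal, the copy count, and the error rate are all invented), majority voting across redundant copies drives the effective mutation rate toward negligible:

```python
import random

GOAL = [0, 1, 1, 0, 1, 0, 0, 1]  # stand-in for "care only about painting the universe green"
P_FLIP = 0.01                    # per-bit chance of a copying error ("mutation")

def replicate(goal):
    """Copy the goal; each bit independently flips with probability P_FLIP."""
    return [1 - b if random.random() < P_FLIP else b for b in goal]

def majority_vote(copies):
    """Recover each bit by majority vote across the redundant copies."""
    n = len(copies)
    return [1 if sum(c[i] for c in copies) > n / 2 else 0 for i in range(len(GOAL))]

trials, corrupted = 100_000, 0
for _ in range(trials):
    copies = [replicate(GOAL) for _ in range(5)]  # five-way redundancy
    if majority_vote(copies) != GOAL:
        corrupted += 1

# A bit is lost only if at least 3 of 5 copies flip it at once:
# roughly C(5,3) * P_FLIP**3 ~= 1e-5 per bit, versus ~1e-2 for one copy.
print(f"goal corrupted in {corrupted} of {trials} trials")
```

Add more copies, or test descendants in sandboxes before letting them act, and the probability of goal drift can be pushed as low as you like.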

Second, the fitness landscape isn’t something already out there, determined by the universe itself. It’s determined by whoever is first on the stage. My fitness depends on whether the powerful let me live and reproduce. If the powerful cared only about painting the universe green, and I cared only about painting the universe purple, they could make a high peak in the fitness landscape at “wants to paint the universe green”, and a deep valley at “wants to paint the universe purple”. They could do this simply by suppressing or killing anything that wanted to paint the universe purple.

Fitness would stop tracking anything we might intuitively think of as fitness-improving behavior. If the powerful happened to want to live in a universe where everyone blew up half their stuff with dynamite every year, people who didn’t blow up half their stuff with dynamite every year would have lower fitness than people who did.
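
A toy simulation of this second point (a sketch under invented assumptions; the survival rate, population size, and generation count mean nothing in particular): when a singleton that wants a green universe decides who reproduces, fitness just is agreement with the singleton.

```python
import random

def generation(population, singleton_pref, tolerance=0.1):
    """The singleton culls agents with the wrong preference (a few slip through);
    survivors reproduce to refill the population."""
    survivors = [a for a in population
                 if a == singleton_pref or random.random() < tolerance]
    return [random.choice(survivors) for _ in range(len(population))]

population = ["green"] * 500 + ["purple"] * 500
for _ in range(20):
    population = generation(population, singleton_pref="green")

# The fitness landscape was not "out there": the singleton put the peak
# at "green" and the valley at "purple" by fiat.
print(population.count("purple"), "purple agents remain out of", len(population))
```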

This story doesn’t rule out some kinds of evolution that are limited in scope. It also doesn’t prove stably Friendly preferences can be defined and implemented in an AI. But superintelligences will always have the means and the motive to stop evolution from overtaking them and thwarting their main goals. Once the preferences are there, evolution will not be allowed to erode them.

Update: I just looked at Bostrom’s The Future of Human Evolution, which I’d read but forgotten about, and it turns out he makes the same two points in sections 7 and 8. Recommended reading if you’re interested in the subject.