Bill Gates Mentions the Risk of Superintelligence in the Wall Street Journal

Bill Gates is smart in a way that other corporate titans of the 90s and 00s just aren't. Smart as in an intellectual with a broad range of knowledge and a varied information diet, not "smart" as in someone who wears a trendy turtleneck and has a good sense for design and business.

In a recent article in the Wall Street Journal, Gates takes on Matt Ridley's latest book, The Rational Optimist: How Prosperity Evolves. Gates writes:

Exchange has improved the human condition through the movement not only of goods but also of ideas. Unsurprisingly, given his background in genetics, Mr. Ridley compares this intermingling of ideas with the intermingling of genes in reproduction. In both cases, he sees the process as leading, ultimately, to the selection and development of the best offspring.

The second key idea in the book is, of course, "rational optimism." As Mr. Ridley shows, there have been constant predictions of a bleak future throughout human history, but they haven't come true. Our lives have improved dramatically—in terms of lifespan, nutrition, literacy, wealth and other measures—and he believes that the trend will continue. Too often this overwhelming success has been ignored in favor of dire predictions about threats like overpopulation or cancer, and Mr. Ridley deserves credit for confronting this pessimistic outlook.

Yes, this is common — who wants to be the doomsayer? It's just not popular. But although dire predictions often fail, terrible things still happen completely unpredicted: Hurricane Katrina, the global financial crisis, the 2004 Indian Ocean tsunami, the Holocaust. Concluding that because history has mostly gone well we should adopt a blanket optimistic outlook is just Whig history nonsense. Whig history is the line we were all fed in school, and its main purpose seems to be to tell us that the status quo is great and there is nothing to worry about.

Gates goes on to discuss Ridley's two other arguments: 1) that Africa is hurt by foreign aid and would do better without it, and 2) that climate change is not as big a deal as people think. I won't comment on either, because most people's opinions on these questions are based on cultural theology rather than critical thinking. What did get me excited, though, was this part:

There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic. (Mr. Ridley briefly dismisses the pandemic threat, citing last year’s false alarm over the H1N1 virus.) But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people. (Natural catastrophes might seem like good candidates for concern, but I’ve been persuaded by Vaclav Smil, in “Global Catastrophes and Trends,” that the odds are very low of a large meteor strike or a massive volcanic eruption at Yellowstone.)

Ridley shouldn't dismiss the pandemic threat, obviously. You'd think that a natural plague that killed 3% of the world's population and infected 27% of it a century ago would, on a simple Bayesian base-rate estimate, be reason enough to take the threat seriously for centuries to come, but apparently not. I wonder whether the widespread availability of genetic engineering tools for creating new microbes would lead Ridley to revise his disaster estimate upward from that purely historical base rate by more than a couple of percent.
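
To make the base-rate point concrete, here is a back-of-the-envelope sketch. The rates are my own illustrative assumptions, not figures from Gates, Ridley, or the historical record; it simply models severe pandemics as a Poisson process and asks how an added "engineered pathogen" term moves the estimate:

```python
# Back-of-the-envelope pandemic base-rate sketch. All rates are
# illustrative assumptions, not empirical estimates.
import math

def prob_at_least_one(rate: float, centuries: float = 1.0) -> float:
    """P(at least one event in the window) under a Poisson process."""
    return 1.0 - math.exp(-rate * centuries)

lam_natural = 1.0     # assumption: ~1 severe natural pandemic per century
lam_engineered = 0.5  # assumption: extra rate from engineered microbes

print(f"Natural base rate, next century: {prob_at_least_one(lam_natural):.0%}")
print(f"Adding engineered pathogens:     {prob_at_least_one(lam_natural + lam_engineered):.0%}")
```

Even under these made-up numbers, the historical base rate alone implies better-than-even odds of a severe pandemic in the next century; cheap bioengineering only pushes the figure higher.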

The quoted paragraph is also interesting because it's the first time, as far as I'm aware, that Gates has come out this strongly about the machine threat; he even uses the term "super-intelligent". I wouldn't be shocked if Gates has read Nick Bostrom's papers on the superintelligence threat, or has perhaps even visited this blog. Who knows? A little unwarranted optimism is cute and harmless when it comes to celebrities visiting one's blog, but it becomes dangerous and destructive when applied to the course of civilization as a whole.

No optimism. No pessimism. Realism. Optimism and pessimism are inherently irrational because they impose a uniform bias across all possible hypotheses, putting the emphasis on the affect (the feeling) rather than on the descriptive content of the hypothesis itself. If anything, pessimism is the more rational of the two: see the planning fallacy and rational pessimism. One study of the planning fallacy found that depressed people tended to be the most accurate at estimating project completion times.
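
To illustrate the point about uniform bias (a toy simulation of my own construction, not from any study the post cites): if optimism or pessimism means shifting every probability estimate by a fixed amount, calibration measurably suffers. A quick Monte Carlo sketch, scoring estimates with the Brier score, where lower is better:

```python
# Toy simulation: a uniform optimistic or pessimistic shift in probability
# estimates degrades calibration, as measured by the Brier score.
import random

random.seed(0)
N = 100_000

def brier(shift: float) -> float:
    """Mean squared error of shifted estimates against simulated outcomes."""
    total = 0.0
    for _ in range(N):
        p = random.random()                       # true probability of a good outcome
        outcome = 1.0 if random.random() < p else 0.0
        estimate = min(1.0, max(0.0, p + shift))  # biased estimate, clipped to [0, 1]
        total += (estimate - outcome) ** 2
    return total / N

for shift in (-0.2, 0.0, 0.2):  # pessimist, realist, optimist
    print(f"shift {shift:+.1f}: Brier score {brier(shift):.4f}")
```

The unshifted "realist" estimates score best; a uniform affective shift in either direction costs accuracy.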

I find it funny how many people in the transhumanist community, miffed at the attention the Singularity has been getting, seem to wish that transhumanists would just ignore the risk of superintelligent machines, while people like Bill Gates are just starting to write about it in public. This is the time to step forward, not back. The finance giants of Wall Street should know that they can have a personal impact on the risk of superintelligence by donating to non-profits like the Singularity Institute and the Future of Humanity Institute. Peter Thiel certainly realizes this, but most moguls don’t. The people and infrastructure exist to make use of much larger funding levels, and it’s incumbent on philanthropists to step forward.

Comments

  1. billswift

    You can never know just how “realist” your beliefs are. So I prefer to perform the old wilderness navigation trick of staying slightly off course, but in a known direction, so you know which way to adjust your course when you get closer to your destination. Like the “Lazarus Long” quote, “Pessimist by policy, optimist by temperament… By never taking an unnecessary chance and by minimizing risks you can’t avoid. This permits you to play out the game happily, untroubled by the certainty of the outcome.” Insurance of various sorts also plays into the “pessimist by policy” scenarios.

    • Alexander Kruel

      Even if you follow that course, that doesn't mean others will follow it as well. Someone will work on superhuman AI. Even if AGI is far off, the extremity of the risk outweighs its low probability. Given all we know, it is a reasonable conclusion that AGI is possible and that any AGI without safeguards will pose an existential risk.

      Also, we shouldn't give up just because it is a hard problem.

  2. Jon

    It’s not that we should “just ignore the risk of superintelligent machines,” it’s that we CAN’T possibly do much of anything to mitigate the risk. Superintelligent machines, by definition, will be smarter than we are and able to foil any safeguards we’ve put in place if they so desire.

    Ultimately I think the obsession with making sure our AIs have good ethical values is nothing more than the illusion of control. You can’t just install Asimov’s Three Laws into a robot and assume all is well, any more than you can guarantee a human being will turn out well due to a good upbringing.

  3. Bernard

    A(H1N1) was a false alarm? Fuck you, Ridley!
    You should've come to Mexico and seen your "false alarm".

  4. Anon

    Those completely unpredictable events were not really all that unpredictable. Engineers warned that New Orleans was massively unprepared and going to flood; the financial markets had every indicator short of red flashing lights and air-raid sirens going off on Wall Street quite a while before they caught fire and crashed; the tsunami itself was unpredicted, but there were early warnings that went ignored before it hit the shore. The Holocaust and other genocides I would put in an entirely different category, and yes, they are predictable, and where not predictable they are detectable at early stages.

    As for flesh-reaping AI, that's predicted to the point of being the default perceived modus operandi of AI. The problem is that we have neither a superintelligent nor even a dim-witted general AI to work with. The narrow AI applications we have today are trained for a narrow specialization, with no possibility of diverging from it.

    I see no point in being excessively paranoid about a system that exists in neither practice nor theory, but only as a vague conceptualization of a superhuman (or, as some would call it, weakly or strongly godlike) system that is simultaneously as irrational as a spoiled child and as heavily armed as the United States Army, with potential supergoals more vile than the Antichrist's. Oh certainly, our first human-comparable AI may be irrational and could hurt people, but that would be because it is a dumb and witless infant going for your toes with that wooden toy hammer. Or it could turn out to be an apathetic savant, solving every problem with flying colors but having no drive or desire to move, think, or do anything once the problem is solved.
    If we create the AI as a brain simulation, it is of course more likely to behave irrationally, especially as we're unlikely to have a perfect understanding of the human brain when we build it. At the same time, any such simulation would allow us to perform reversible, virtual brain surgery to induce docility, but only if we follow the simulation approach.

    If I know one thing about strong AI, however, it is that everyone has their own personally tweaked vision of AI design. One person suggests a conventional supercomputer simulating the brain down to the synapse-protein level; another, a conventional supercomputer running more conventionally programmed software. A third argues for memristors as a synaptic model in a specialized hardware design; a fourth agrees on specialized hardware but suggests that a two-dimensional array of specialized molecules can do brain-like parallel processing much better than conventional electronics. There's probably also a fifth guy out there who argues we should grow neurons on electronics and interface them, creating hybrid brains with the best of both worlds. Finally, we have the sixth through Nth guys, each with their own flavor of ordinary-computer software that, given some funding and time, could put strong AI in your mobile phone.
    So we're supposed to invent airbags, traffic rules, ABS brakes, crumple zones, and all the other safety features based on the prediction that "horses will be really fucking fast in the future." Of course, it turned out that the really fast horse was the internal combustion engine, which made planning ahead a bit difficult. It also left some guys with a lot of useless carbon-fibre horseshoes.

  5. Michael Vassar

    I hear good things about Jeff Bezos too. And then there’s the other Microsoft billionaire that people mostly forget about.

  6. Oh yeah, Paul Allen… for me, his interest in sports, music, and “toys for big boys” “philanthropy” (a museum for military aircraft) makes me skeptical of his devotion to futurist causes, but maybe I should reevaluate.

