Charles Stross’ Singularity-Clueless, “New Scientist”, Yawn-tastic 21st Century Future

I usually don’t read Charles Stross’ blog, but I saw a post linked from Brian Wang’s blog, which I read all the time, so I checked it out. It’s Stross’ “21st century FAQ”, where he says the 21st century will be “pretty much what you read about in New Scientist every week”, something I would just laugh at and ignore if it weren’t the case that so many transhumanists read Stross’ books. It’s important to criticize this statement because the way we handle the 21st century will be based on how we anticipate it, and if we expect it to be more of the same, we’ll be blindsided by the civilization-transforming changes to come.

After saying everything will pretty much be the same for the next 91 years, Stross mentions “unknown unknowns”, which are “possible sources of existential surprise”, and points to “biotechnology, nanotechnology, AI, climate change, supply chain/logistics breakthroughs to rival the shipping container, fork lift pallet, bar code, and RFID chip”. The interesting thing is that many of these so-called “unknown unknowns” are not very unknown at all. Yes, they could unfold in unknown ways, but we see press releases and headlines every single day that point to near-term major advances in all these areas (except climate change — that’s not really a technology). Brian Wang does a particularly good job of covering these at his blog, as do the people behind the KurzweilAI.net news feed, and of course the science super-sites of PhysOrg and Eurekalert.

There is no huge surprise. Biotechnology, nanotechnology, AI, and major advances in supply chain/logistics will predictably deliver massive disruptive changes in the next 25 years or less. Synthetic biology is taking off as we speak, and nanotechnology will soon begin having a major impact that will become abruptly obvious in our daily life. Artificial Intelligence is less predictable, but the AI winter has long thawed and the way that affordable computing is approaching the computational capacity of the human brain demands attention. The combination of cheap ubiquitous sensors and facial-recognition AI in the late 2010s or early 2020s will lead to a huge wake-up call about the inevitability of transparency. Today, unsolved murders in major cities are routine. In 10 to 15 years, they will be confined to private buildings and other places where ubiquitous public cameras don’t exist.

Synthetic biology will lead to major energy breakthroughs by 2025. Any day now there will be an announcement that Mycoplasma laboratorium exists and is self-replicating in a petri dish, and the era of artificial life will begin. After that, the organism will be custom-tweaked for producing biofuels, or whatever else its creators can implement using the synthetic biology toolbox. The “wonder bacterium” and “Microbesoft” may not emerge overnight, but will emerge in 10 to 20 years, tops. To say that there will not be a major, world-transforming technological revolution from synthetic biology in the next 91 years is to massively underestimate the disruptive potential of rewriting the book of life. This is not an “unknown unknown”. This is predictable massive disruption. Fossil fuels will still be used because of their excellent energy density, but bio-manufactured fuels will grow to 50% of the energy pie or greater.

Progress in nanotechnology will either speed up to a pace reminiscent of the microchip revolution in the 1980s and 1990s, if molecular nanotechnology is difficult or impossible, or to a pace greater than 10 times that of the microchip revolution, if molecular nanotechnology can be achieved in the next couple decades. If MNT based on engineering principles and rigid nanostructures proves impossible, then another type of MNT based on bio-inspired designs will emerge instead, somewhat later than hoped for, but certainly before the end of the century. This will permit high-throughput, decentralized, personalized manufacturing for dirt cheap all over the world, and is likely to happen before 2040 or 2050, not 2100.

The only events which will stop these massively disruptive technological milestones are either comprehensive planetary backlashes or a Singularity from recursively self-improving Artificial Intelligence or Brain-Computer Interface-derived superintelligence. That’s the other thing in Stross’ post that I strongly disagree with: he calls the Singularity the “rapture of the nerds” and says it “is likely to be a non-participatory event for 99.999% of humanity — unless we’re very unlucky. If it happens and it’s interested in us, all our plans go out the window”. It seems unlikely that Stross could annoy me more with these two sentences if he custom-designed them for my annoyance. Issues:

1) The Singularity is not “the Rapture of the Nerds”. It is a very likely event defined as the technological creation of greater-than-human intelligence. Its likelihood comes from two facts: that intelligence is inherently something that can be engineered and enhanced, and that the technologies capable of doing so already exist in nascent forms today. Even if qualitatively higher intelligence turns out to be impossible, the ability to copy intelligence as a computer program or share, store, and generate ideas using brain-to-brain computer-mediated interfaces alone would be enough to magnify any capacity based on human thought (technology, science, logistics, art, philosophy, spirituality) by two to three orders of magnitude if not far more.

2) If superintelligence were created, how could it possibly leave 99.999% of humanity untouched? Either it will not be created and impact 0% of humanity, or it will be created and impact 100%. Inventions created by even average human engineers and scientists have found their way to every corner of the Earth. How could inventions created by qualitatively superior intelligence not find usage around the world? If the superintelligence were created using process X, what is to stop the creators of the process from applying it to anyone who is sufficiently interested? Even if they kept it to their little group, they would likely impact 100% of humanity in a negative way, by monopolizing the world economy.

3) Stross hints that if the Singularity impacts more than 0.001% of humanity, we’re necessarily unlucky. Maybe this is because he believes in the strong version of the Event Horizon thesis, where the Singularity is necessarily unknowable and therefore bad? Why can’t unknowable be good? In any case, I disagree with the strong Event Horizon thesis — in this formulation of the Singularity (“everything is unknowable, and let’s not try to know it”), the outcome is equally good/bad whether the superintelligence (SI) that sparks the Singularity is derived from Hitler or Gandhi. This is false, so Stross’ conception of the Singularity is false. Even a supreme SI will be a product of its starting motivations, even if those motivations have folded over themselves a million times. Once you realize that facts and values are fundamentally different things, you see that values are arbitrary and facts are not, so a variety of different SIs with varying moralities are possible, though we have an interest in creating only human-friendly SIs and in having those SIs prevent the creation of human-unfriendly SIs. The starting motivations of an intelligence will necessarily be somewhat preserved by the intelligence as it bootstraps itself to superintelligence. We can maximize the probability of a beneficial outcome by creating Friendly AI or coming up with some human intelligence enhancement scheme that reinforces benevolence. Not knowing the outcome for sure is not an excuse not to try, and there are reasons to believe that an attempt at benevolence will pay off. Any self-transparent intelligence with the ability to edit its own source code is more likely to be able to retain its core morality (which, by definition, it wants) than its external environment will be likely to force a change upon it.

The Singularity is not necessarily some inscrutable superintelligent monolith. It would be a being or collection of beings which could either make our lives radically better or snuff them out very quickly. Ensuring the former is humanity’s #1 priority right now. Rising sea levels and warming ecosystems, while environmentally very troubling, are not nearly as serious as the whole of mankind being annihilated by an entity smarter than it.

The impact of superintelligence could be entirely antithetical to its origin: pre-Singularity technology. Like does not necessarily proceed from like. A flower does not look like a seed. The products of superintelligence need not look like the supercomputer it was originally run on. A superintelligence working to reshape the world in a way that humans actually like might green the deserts, clean the environment, and promote decentralization so that humans could enjoy the planet more fully and utilize its space in the most parsimonious possible way. Take Stross’ idealized future, the 2100 he would prefer more than any other option, and a superintelligence could conceive that and help implement it, also considering the idealized futures of every other human being on the planet. If humans are useful for thinking of and implementing positive futures, then a superintelligence will be even more useful. To malign human-friendly superintelligence would be to malign humans in the very same sentence, because anything that human-friendly humans could do, human-friendly superintelligence could do better. (Of course, you could malign the very idea of superintelligence if you think it’s impossible, but if you do accept the premise, then you cannot reject human-friendly superintelligence without rejecting humanity itself.)

All our plans will predictably go out the window because the Singularity (superintelligence) will predictably be created in the 21st century. The only “unknown” part is whether we create superintelligence with initial motivations that cause it to self-improve and wipe us out or with motivations that cause it to self-improve and be on our side. If there’s no way of setting up initial motivations such that it helps us, then we are doomed unless we install a worldwide totalitarian government to outlaw all computers and prevent the creation of superintelligence forever. Good luck.

Stross says, “If [the Singularity] doesn’t happen, sitting around waiting for the AIs to save us from the rising sea level/oil shortage/intelligent bioengineered termites looks like being a Real Bad Idea”. Is it really “sitting around” if you’re actively working towards AI by spreading knowledge and raising support and funds, or learning math or cognitive science in an attempt to formulate a theory that actually has a chance of working with present-day computing power? Hardly. It’s not as if AGI research is taking up a major proportion of the NSF budget or venture capital dollars — even on the principle of “we should try as many routes to helping the world as possible” alone, it seems worthwhile to support AI research for potentially leading to solutions to major problems. Even if AGI research magically fails because the aliens running our simulation keep messing up the software at the last second, it could lead to applications and software programs that provide tremendous assistance in moving us toward more advanced energy technologies and resource utilization schemes, in monitoring synthetic biology for rogue biohackers, and in monitoring the environment for invasive organisms.

Stross implies that there are people sitting around waiting for AI to happen. Who? Even Kurzweil says that AI will emerge out of the hard work of millions of people and thousands of companies. The notion that there are people around Waiting for the Rapture is an invention of Stross and Doctorow, designed to throw more flair into their science fiction stories. Such people are fictional characters, and that’s how they’ve been used. They don’t exist in the real world. The “nerds” who are “waiting for the Rapture” seem suspiciously often to be entrepreneurs and leaders in science and technology.

Stross says the big picture of the 21st century will be that most of us live in cities, an observation so tepid that it makes one wonder how he is a science fiction writer. What about immersive virtual reality? Or the human species taking control of its own evolution? Or software and robotic systems so advanced that many of today’s jobs are rendered obsolete? Or the way that we’ll be able to observe the neural correlates of the details of human experience and look closely into our own dreams and thoughts? In Stross’ 21st century, will none of these things actually happen? Good thing science and technology aren’t bounded by the imaginations of humans in 2009.

At the end of the FAQ, Stross asks, “Are we going to survive?”, then answers, “No — in the long run, we are all dead. That goes for us as individuals and as a species.” This is the same boring deathist drumbeat that I’ve been railing against since I could conceive of the human body as a machine that can be repaired in principle, and especially since I co-founded the Immortality Institute in 2002. This is the defeatist mantra that evaporates the second you hear Aubrey de Grey give a talk, or read a news release about aging being stopped in an entire organ, or view the list of prestigious scientists who support the mission of the Methuselah Foundation. Charles Stross may be a pessimist about achieving longevity escape velocity, but the rest of us are going to fight a War on Aging, and receive the constant attention of the world media en route to success.

(I decided to remove an entire section here insulting Stross’ sci-fi writing, because I realized it was inappropriate to mix criticism of futurism and criticism of fiction in the same post.)

Maybe Stross is a great guy in person. I don’t know him. But I can say that I wildly disagree with both his futurism and his approach to sci-fi. (Insofar as I care about sci-fi at all, which, honestly, is not a whole lot.)

Comments

  1. “What kind of blind fear is this?”

    I think you’re overcomplicating this – CS’s take on this seems pretty obvious to me. Specifically, he says “If [the singularity] happens and it’s interested in us, all our plans go out the window.” That fits exactly with the strong claim of the Event Horizon school of singularity, as originally postulated by Vernor Vinge: you can’t ever predict what an entity much more intelligent than you is going to do, period.

    This explains a lot about CS’s post. If you accept the strong Event Horizon claim, it follows that designing FAI is impossible. Under those assumptions, the conclusion that the only way to plan past the creation of superintelligence is by assuming that it won’t interact with us at all, kinda makes sense. For the record, that possibility isn’t completely bogus – if our physics allow for sufficiently powerful tricks, a large category of possible UFAIs might jump out of the universe as we know it early in development and never even bother eating our planet.

    In case it wasn’t clear, I’m fairly confident the strong singularity::EventHorizon claim is actually wrong; but the reasons for this aren’t obvious to most people, or even most very smart people, so it’s somewhat understandable that CS accepts it.

    However, even under those assumptions, I don’t quite get why CS is so obsessed with the canned primate way of exploring space. This could just be based on a guess that without SI help we won’t be able to develop reliable and cheap uploading technology in the next 90 years, though.

    That said, I agree with most of your post; though I did actually enjoy reading Accelerando as entertainment. Some of the story’s fundamental assumptions are pretty far out, but the same applies to many Vinge and Egan stories.

  2. Stross: “the 21st century will be “pretty much what you read about in New Scientist every week””

    – When can I expect conservation of momentum to break down? ;-0

  3. Sebastian, I changed the part about the fear of superintelligence. Hopefully it responds more directly to Stross now. Thanks for the correction.

    Roko, New Scientist has always been mmmeeehhh. I was going to comment on that in this post, but didn’t even bother.

  4. You know, a bloggingheads interview between you and charlie stross would be very interesting… Perhaps you should challenge him?

  5. Well that’s all fair enough AFAICS.

    Your point 1 is weakened somewhat by your assertion that the singularity happens to all intelligent species. Do you have any evidence for this? But otherwise the reasoning is sound (and nice to see the qualification concerning qualitative improvements to general intelligence).

    Point 2 seems to be something that Stross has glossed over. Even if a tiny fraction of humanity became superintelligent, there would be side effects for everyone, assuming intelligence is the primary driver of technological development (which I don’t agree with).

    But I think Stross’ point about the reverse Pascal’s Wager is entirely correct. AI is obviously one of many things people can pursue, but not to the exclusion of other areas of research.

  6. TJ, I’ll change point 1 a bit. That was just my way of saying that we should expect qualitative intelligence enhancement to be a near-universal feature of improving technological civilizations, like the utilization of electromagnetic waves for signal-sending or the development of the wheel.

    Where do you get the idea that humans are #1 on the scale of qualitative intelligence? Do you just have a huge problem with the word “intelligence” in general? If I replace it with “problem-solving ability”, do you still assert that humans are the #1 possible physical arrangement in the universe for attaining qualitative problem-solving ability? I’m wildly guessing that you have a tangential philosophical hangup here, like you are offended when people’s worth is measured by IQ tests. Even if you completely ignore IQ as intelligence, inter-species intelligence differentials are dead obvious. Why would humans be the end of the road?

    Take whatever you think is the driver of technological development and call it X. X can probably be qualitatively enhanced. When I say “intelligence”, just think of it as a referent to X.

    How could AGI research possibly threaten to consume all sci/tech funding? It doesn’t even receive 0.1% today. I guess it’s a good sign that so many people think that AI advocates such as myself even vaguely approach the influence necessary to divert attention/funds such that any other area of research would notice.

  7. Stross: “Possible sources of existential surprise include (but are not limited to) biotechnology, nanotechnology, AI, climate change, supply chain/logistics breakthroughs to rival the shipping container, fork lift pallet, bar code, and RFID chip — and politics.”

    (The two things you left out of your quote tell most of the story, here.)

    Anissimov: “The interesting thing is that many of these so-called “unknown unknowns” are not very unknown at all.”

    Well, then I guess he’s not talking about the known bits, then, is he? That would make them known knowns, wouldn’t it?

    Stross: “The rapture of the nerds, like space colonization, is likely to be a non-participatory event for 99.999% of humanity — unless we’re very unlucky.”

    Anissimov: “2) If superintelligence were created, how could it possibly leave 99.999% of humanity untouched? Either it will not be created and impact 0% of humanity, or it will be created and impact 100%.”

    Participation and impact are different things. Below, you observe how little funding AI research really needs. That’s not exactly mass-participation.

    Stross: “If it doesn’t happen, sitting around waiting for the AIs to save us from the rising sea level/oil shortage/intelligent bioengineered termites looks like being a Real Bad Idea.”

    Anissimov: “Is it really “sitting around” if you’re actively working towards AI by spreading knowledge and raising support and funds, or … [busybusybusy]?”

    No, it isn’t. He’s not talking about you, then, is he? Seriously – how many people need to do this, aside from the obvious “everyone we can get”? You yourself observe how little funding and participation AI research needs. The appeal to popularity is only necessitated by the need to scrape up the modest funding that’s still hard to come by.

    So, yes, for the rest of us that aren’t needed to do or fund AI, sitting around is a bad idea. Isn’t it?

    I don’t see anywhere Stross maligns human-friendly superintelligence. I also don’t recall when human-friendliness crept into the definition of the (advent of superhuman intelligence) singularity. Apparently, it isn’t in his. That “the singularity is/can be good, and we should do it” bit is in the definition of singularitarianism, but not singularity.

    That’s quite a parade of straw men. A couple might be understandable, but really? One after another?

    I have a question: in your view, is a “deathist” a person who thinks people should die eventually, or someone who just isn’t convinced that they will never die? Personally, I have no doubt that medical techniques could sustain life in perpetuity, but developing them in time and making them available politically and economically is the trick. THAT’s why I’m sure I’m going to die. So spare the feasibility arguments.

    Only now, do I get to your review of Accelerando. Oh. I see.

  8. Well, then I guess he’s not talking about the known bits, then, is he? That would make them known knowns, wouldn’t it?

    No, I think he’s saying that more of the same is the most likely way the future will unfold, but if any of those technologies have a big impact, it will be “existential surprise”. What I’m saying here is that these “existential surprises” should be taken for granted, and that a future filled with stuff from New Scientist and little else would be the surprise.

    Participation and impact are different things. Below, you observe how little funding AI research really needs. That’s not exactly mass-participation.

    A friendly Singularity would be a massively participatory event. Any human-friendly superintelligence would need the input of many humans to assist us in creating a world we see as better than the current one. I also suggest that everyone who reads my arguments consider putting more effort towards supporting Friendly AI research, so there’s participation there.

    No, it isn’t. He’s not talking about you, then, is he?

    Who is he talking about, then? Who advocates sitting around and waiting for AI? Even Kurzweil says that AI will emerge out of the hard work of millions of people and thousands of companies. The notion that there are people around Waiting for the Rapture is an invention of Stross and Doctorow, designed to throw more flair into their science fiction stories. Such people are fictional characters, and that’s how they’ve been used. They don’t exist in the real world. (I copied this paragraph into my response to Stross.)

    Seriously – how many people need to do this, aside from the obvious “everyone we can get”? You yourself observe how little funding and participation AI research needs. The appeal to popularity is only necessitated by the need to scrape up the modest funding that’s still hard to come by.

    Correct, but I’m trying to solicit the participation of all who read these words. I will feel free to stop entirely when AI research has the resources it needs. Won’t that be the day. Until then, I’ll be creating trouble for those who see the hard takeoff hypothesis as politically or socially menacing.

    So, yes, for the rest of us that aren’t needed to do or fund AI, sitting around is a bad idea. Isn’t it?

    Everyone who reads this is needed to do and fund AI, at least until we reach an optimal level of participation. It’s my M.O. to reach that optimal level. 99% of humanity has never heard of Singularity AI and is not interested in sitting around anyway. The other 1% has heard of Singularity AI but still isn’t interested in sitting around. No one sits around except people who are unemployed, watch television all the time, or are just disengaged by nature.

    I don’t see anywhere Stross maligns human-friendly superintelligence. I also don’t recall when human-friendliness crept into the definition of the (advent of superhuman intelligence) singularity. Apparently, it isn’t in his. That “the singularity is/can be good, and we should do it” bit is in the definition of singularitarianism, but not singularity.

    Stross appears to malign superintelligence in general by saying we’re necessarily unlucky if it has an impact on us. I’m saying that such a general dismissal is unwarranted because human-friendly superintelligence should be something we actively welcome.

    I don’t see anywhere Stross maligns human-friendly superintelligence. I also don’t recall when human-friendliness crept into the definition of the (advent of superhuman intelligence) singularity. Apparently, it isn’t in his. That “the singularity is/can be good, and we should do it” bit is in the definition of singularitarianism, but not singularity.

    It didn’t “creep in”; the possibility of a human-friendly and a human-unfriendly Singularity is implicit in the concept. If someone were talking about whether allying with another country is a good idea, and I introduced the distinction between the country being nice to us or not nice, I wouldn’t be “creeping” the friendly/unfriendly part into the definition of an “alliance”, because the concept is implicit therein.

    That the Singularity is/can be good is something that enters the minds of many people who read the original Vinge paper, including myself before I was even exposed to singularitarianism.

    That’s quite a parade of straw men. A couple might be understandable, but really? One after another?

    I can look through the wording to see if I’m attacking a straw man, but what I’m responding to here is Stross’ statement that mass involvement/participation in the Singularity makes us “very unlucky”. I’d say we’re “very lucky” if we can turn the Singularity in our favor, and that we should emphatically not “Forget it”.

    I have a question: in your view, is a “deathist” a person who thinks people should die eventually, or someone who just isn’t convinced that they will never die?

    A deathist is someone who has resigned themselves to death and doesn’t believe that research to stop death is worth trying. It’s perfectly possible to think you’re going to die and not be a deathist, you just have to be pissed off about it.

    Only now, do I get to your review of Accelerando. Oh. I see.

    The fact that I didn’t like Accelerando is independent of the fact that I strongly disagree with Stross’ futurism. If anything, Accelerando vaguely implies that Stross would like a Singularity and want to work towards one. I object to the craftsmanship of Accelerando more than its tone, and I object to the tone of the 21st century FAQ more than its craftsmanship. On the other hand, I have to say that I thought Stross’ comments on politics in his FAQ were incisive and that his familiarity with the Peace of Westphalia is laudable.

  9. I couldn’t resist a rant. Since it would be too long here, I blogged. Have fun.

  10. “Today, unsolved murders in major cities are routine. In 10 to 15 years, they will be confined to private buildings and other places where ubiquitous public cameras don’t exist”

    I believe that unsolved murders in prisons are not uncommon. As long as the clear-up rate in a small confined space with heavy surveillance is not 100% it seems unlikely that we will get a 100% success rate in public places.

    There is evidence to suggest that planned violent crimes merely occur in locations where there are no cameras so the cameras don’t do as much good as you would hope. For unplanned violent crimes (crimes of passion) cameras should do a lot of good (for the crimes committed in public places) but such crimes are extremely difficult to prevent (even a 100% conviction rate wouldn’t work).

  11. Been silent through this.

    Any human-friendly superintelligence would need the input of many humans to assist us in creating a world we see as better than the current one.

    Can I point out the fallacy of assuming that a Friendly sAGI would need to communicate with a wide number of humans in order to derive the core principles through which we all function psychologically?

    One of the key functions of intelligence is the capacity to form valid patterns from limited data. Just using 1 human — and that human’s experiences of other humans — in combination w/ the literature available on the web would be hypothetically sufficient to derive an extremely wide variety of core principles about human beings. There’s very little participation in this.

  12. Um, it’s participation via proxy. It sounds bad if you say, “the AGI will figure out everything about you by using sophisticated theories”, because humans are used to other humans using theories as shortcuts that actually lead to unfair assumptions being made about them. The idea that an AGI could formulate general theories about human desires that are actually correct is pretty foreign and disconcerting to most.

    Does the silence mean you agree with everything else? If so, then I can’t believe it, but maybe our beliefs are starting to converge.

  13. @Michael Anissimov:

    Where do you get the idea that humans are #1 on the scale of qualitative intelligence? Do you just have a huge problem with the word “intelligence” in general? If I replace it with “problem-solving ability”, do you still assert that humans are the #1 possible physical arrangement in the universe for attaining qualitative problem-solving ability? I’m wildly guessing that you have a tangential philosophical hangup here, like you are offended when people’s worth is measured by IQ tests. Even if you completely ignore IQ as intelligence, inter-species intelligence differentials are dead obvious. Why would humans be the end of the road?

    A few points, then a question:

    1) Yes I do have a tangential philosophical hang up. But it can be ignored for the time being.

    2) I agree with you that it is reasonable to suppose that human minds aren’t the most effective means of solving certain kinds of problems.

    3) I definitely subscribe to the branch of transhumanism that advocates various improvements to enhance human intelligence.

    But answer me this Mr Anissimov:

    In your opinion, what is the most effective, innovative, and powerful generator of good ideas and excellent design in the history of the universe so far?

  14. @Michael Anissimov:

    As to AGI research: if you had personal control over the world’s sci/tech budget how much of it would you dedicate to AGI research?

  15. The human brain. What are you going to say, God?

    Regarding AGI, not much because too many cooks spoil the dish. Maybe 0.1%?

  16. lol

    I was going to suggest biological evolution.

    Biological evolution, following a completely mindless and unintelligent process of natural selection, has created a vast panoply of invention and gadgetry that human technology is only just beginning to be able to mimic.

    My conception of the singularity (which is much closer to the accelerating change school than the intelligence explosion school – as you describe here) is of the currently existing progress of free markets, technological development, the scientific method, and all the trial and error that these institutions thrive on, continuing and speeding up and creating an ecosystem of rapidly evolving technologies.

    There is evolution in the memetic environment of science, evolution in the free market of business, and evolution in the natural world.

    Natural selection has produced brilliant designs and successes, without any guiding intelligence.

    And that’s why I think you’re overplaying the importance of general intelligence.

  17. Great. Well, you can wait for technological change to speed up due to the evolutionary environment of free markets and scientific research. Meanwhile, I will continue asking everyone to consider contributing to the support and engineering of a non-evolutionary, purposefully designed self-improving Artificial General Intelligence. Everyone wins.

  18. You are entirely correct. Kudos.

  19. Isaac J.

    “Any self-transparent intelligence with the ability to edit its own source code is more likely to be able to retain its core morality (which, by definition, it wants) than its external environment will be likely to force a change upon it.”

    The Intelligence will be able to edit its own core morality or even discard it completely. Depending on the experiences and thoughts of the Intelligence after its creation it may also not want a core morality. Why then “by definition” does the Intelligence necessarily want it? Want is irrelevant once it has the ability to edit itself completely.

  20. “A friendly Singularity would be a massively participatory event.”

    Yes. It would. You’re right. I’m sure Stross agrees. That fact (participatory-ness? participatarity?) would be part of what would make such a singularity ‘friendly’ in the first place. Why is it you continue to shoehorn that word into something he’s not talking about?

    “Who is he talking about, then? Who advocates sitting around and waiting for AI?”

    I tend to see the problem as one of people who do nothing instead of something because they can sweep it under the rug of someone else’s AI. It is important to live by our own strength.

    “Until then, I’ll be creating trouble for those who see the hard takeoff hypothesis as politically or socially menacing. “

    That’s odd. I’d have thought that the idea of the political or social menace of an un-friendly hard takeoff hypothesis would make an ideal fundraising tool. I mean, that’s the whole argument of the Lifeboat Foundation, isn’t it?

    “Stross appears to malign superintelligence in general by saying we’re necessarily unlucky if it has an impact on us.”

    I tend to think he’s just very bearish on the odds of a friendly one.

    ” I’m saying that such a general dismissal is unwarranted because human-friendly superintelligence should be something we actively welcome.”

    You mean would?

    “It didn’t “creep in”, the possibility of a human-friendly and human-unfriendly Singularity is implicit in the concept.”

    Uh, I believe you were claiming that he was maligning a definitely friendly singularity, not a possibly-friendly one.

    “I’d say we’re “very lucky” if we can turn the Singularity in our favor”

    Exactly. And, “Very unlucky” being the opposite of “very lucky”, I would deduce that Stross doesn’t like the odds on accomplishing that. Given that you just said we’d have to be very lucky to accomplish it, it appears you agree.

    I’m having a hard time trying to find the argument, here.

    (PS. what’s the code to do blockquotes?)

  21. Why is it you continue to shoehorn that word into something he’s not talking about?

    He’s talking about the Singularity being an event humanity would be very unlucky to deal with. I’m saying that’s wrong, because it could be conditionally good. Where is the ambiguity?

    I tend to see the problem as one of people who do nothing instead of something because they can sweep it under the rug of someone else’s AI. It is important to live by our own strength.

    I very much doubt that there are more than a few dozen people like this, maybe a few hundred tops. Have you ever seen anyone say anything that suggested they were part of this group?

    That’s odd. I’d have thought that the idea of the political or social menace of an un-friendly hard takeoff hypothesis would make an ideal fundraising tool. I mean, that’s the whole argument of the Lifeboat Foundation, isn’t it?

    I’m talking about those who see the hypothesis as fantastical nonsense getting in the way of hard-nosed reality, not those who see the hypothesis as worthy of consideration and are concerned about the impact of an actual AGI. That’s why I say, “the hard takeoff hypothesis” rather than “the hard takeoff”.

    I tend to think he’s just very bearish on the odds of a friendly one.

    Maybe, but that isn’t made clear, and any reader would get the impression he’s basically saying that we’d necessarily be screwed if superintelligence were real.

    You mean would?

    No, I mean we should welcome the idea of it now. I welcome it. I think we would welcome it, as well. We both should and would welcome it.

    Uh, I believe you were claiming that he was maligning a definitely friendly singularity, not a possibly-friendly one.

    He was maligning the Singularity in general because he appears to think that human-unfriendliness is the only possible outcome. I introduced the friendly/unfriendly distinction as a way of saying that there are classes of Singularity that would be awesome.

    Exactly. And, “Very unlucky” being the opposite of “very lucky”, I would deduce that a) Stross doesn’t like the odds on accomplishing that. Given that you just said we’d have to be very lucky to accomplish it, it appears you agree.

    Eh, I was sort of just being snarky by turning around his quote. If we put a lot more effort into making it through the Singularity safely, I think we could actually get to a point where we could be fairly confident that things would go well, although never completely confident.

    I’m having a hard time trying to find the argument, here.

    My argument is that we should ignore Stross when he says we should “forget” the Singularity. We should ignore Stross when he says we’d be necessarily “unlucky” if the Singularity involved or impacted us. Instead, the part about the Singularity should read:

    “Singularity? Yeah, that’s a huge deal, the most important challenge of the 21st century. We should all stop what we’re doing and devote some time to considering how we’re going to make it through alive, because it seems damn likely to happen within the first half of the coming century. To forget it would be stupid.”

    In a nutshell, my argument is this:

    Working towards friendly Singularity = good.
    Ignoring Singularity = bad.

    Maybe my argument got all convoluted and tripped over itself, but that’s what I was going for. :)

    < blockquote > is the code.
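
    For example (purely illustrative), a quoted reply would be typed as <blockquote>the sentence you are quoting</blockquote>, with your own response on the next line outside the tags; the tags themselves are not displayed in the posted comment.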

  22. I tend to think he’s just very bearish on the odds of a friendly one.

    Maybe, but that isn’t made clear, and any reader would get the impression he’s basically saying that we’d necessarily be screwed if superintelligence were real.

    So because a reader wouldn’t be clear on whether Stross thinks a friendly singularity is likely, they’d assume, based on what he wrote, that it isn’t? Given that that is what he thinks, I’d call that a successful communication, not confusion.

    Ok, NOW you’re starting to sound internally consistent, at least. Now if you two can resolve your disparate assessments of the odds of producing benevolent superhuman intelligence, we’ll be getting somewhere.

  23. Now that I read his comment, it becomes clearer. I think his high confidence that AIs or IAees will not help us out is xenophobic, paranoid, and speciesist. “Because they’re different, they’ll hate us, because different species are meant to hate each other, that’s the way the world works.” What BS! If I had access to sophisticated intelligence enhancement tech, I’d make every effort to ensure that my enhanced intelligence were used to the benefit of others. Wouldn’t any decent person do the same? Doesn’t he realize that the phenomenon of “elitism” must rest in a particular brain pattern, one that could be edited and minimized if the person in question had a will and a way to do so?

    Increased elitism with increased intelligence or capability is by no means inevitable. Is Obama going to turn into an a-hole right before our very eyes just because he has gained the ability to nuke millions of people?

    (Hah, pretty extreme comments, I know. I’ll try to lay out a more sober analysis soon.)

  24. The E

    Okay, but what makes you so sure that superhuman AIs give a damn about their creators?

  25. Ilya

    I am surprised nobody has yet brought up Stross’ clarification on “friendly” vs “unfriendly” superintelligence. It is his comment #30 on that blog post:

    “the key point that a lot of singularitarian wannabes don’t get is that the singularity isn’t about us — it’s about the AIs, or IAs, or whatever comes into existence. If we’re lucky, they’ll ignore us. If we’re really, insanely, winning-the-lottery lucky, they’ll solve our problems (just like we solve the problems our pet dogs and cats face). If we’re un-lucky …”

    I simply do not share your faith in our ability to create human-friendly SI’s, let alone in said SI’s staying human-friendly as they proceed to advance themselves. I am not worried about SI’s being actively malicious, but rather viewing baseline humanity as “has-beens” and/or lower life forms.

  26. Ilya

    I might add that the biggest problem cats and dogs (as species) currently have is overpopulation. The intelligent, humane, and civilized way humans solve that problem is neutering. I doubt cats and dogs would agree if they understood the implications. Even a friendly SI, which genuinely wanted to “solve humanity’s problems”, might do it in a way humans would find unacceptable — if they had a choice in the matter.

  27. Trey

    Anissimov, please grow a sense of humor. Or request one from the time-travelling, post-singularity superhuman intelligences.

    Also, there is growing evidence we aren’t as intelligent and self motivated as we like to think we are. In short, the human executive function doesn’t have as much control as we think.

  28. I don’t understand why having a forceful stance on this issue = no sense of humor. It’s socially acceptable to have forceful stances on other issues, like creation vs. evolution and politics, so why not the likelihood of major changes in the 21st century based on the present progress of science and technology?

    E and Ilya, I’m absolutely not sure that superintelligent AIs would give a damn about their creators, but I think there are reasons to believe it’s entirely possible if we engineer them right. See here:

    http://www.nickbostrom.com/ethics/ai.html
    http://www.sl4.org/wiki/KnowabilityOfFAI

    I used to think that human-friendly superintelligence was impossible as well.

  29. Does the silence mean you agree with everything else? If so, then I can’t believe it, but maybe our beliefs are starting to converge.

    HAH!!!!

    I needed that. No, I managed to corrupt the kernel of my computer and had to reformat. I’m just lucky as sin that I hadn’t revised the BIOS to boot exclusively from the HDD. :/

  30. Michael,

    You seem to be a nice man. But you’re actually just a naive fool. There’s no such thing as “friendly intelligence”, artificial or otherwise. It doesn’t exist in this universe of scarce resources. You’ve just been fooled into believing it exists by brain circuits wired by evolution.

    Your brain believes in “friendly” because it’s been wired by evolution to do so. Your genes make you friendly, but for selfish reasons. Being friendly maximizes your DNA’s Darwinian fitness … under certain assumptions (assumptions that will not survive an AI take-off). Have you ever noticed how humans are more likely to be friendly towards other humans that they’re related to or have long relationships with? And that we’re generally not friendly towards most non-human species other than dogs? There’s a reason for that.

    This thing you think of as “friendly” is just DNA’s cold, hard, rationally enlightened self-interest based on a mutual exchange of value. Without the mutual exchange of value: no “friendly.”

    So what value do humans offer a civilization of AIs that’s 1000x smarter than human civilization once they’ve got fully automated manufacturing? Interesting conversation? Hardly. We probably wouldn’t even make particularly interesting pets.

    Meanwhile we’re using all this nice sunlight and surface area on wasteful stuff like grazing cows and growing wheat, when it could be used for solar energy collection. And all that nice silicon the Earth’s crust is made out of? I bet the AI would prefer it were turned into CPUs and RAM.

    If you don’t already know, you ought to look up what happens (every. single. time.) when two species occupy the same geography and compete for the same resources. One of them ends up extinct (Always.); usually the one that evolves less quickly. This is just a fact of how the Darwinian universe allocates resources.

    Hmm, let’s think … which will evolve more quickly, humans (who haven’t evolved much in the last 350,000 years) or AI (Moore’s law)?

  31. MCP2012

    THANK YOU, Michael, for this initial post. And MUCH THANKS to the “regulars” (and others) for their stimulating and informative comments. This is why your blog, Michael, is one of the best of its kind on the WWW. I may throw in my own johnny-come-lately 2 cents later, but I gotta run for now…

    Ciao…

  32. I tend to see the problem as one of people who do nothing instead of something because they can sweep it under the rug of someone else’s AI. It is important to live by our own strength.

    I very much doubt that there are more than a few dozen people like this, maybe a few hundred tops. Have you ever seen anyone say anything that suggested they were part of this group?

    Neat. It only took a week for you to notice Kevin Kelly.

  33. I’ve read Kevin Kelly’s blog for years, but he was sort of vague about whether he thought AI was semi-inevitable and coming to solve all our problems until I read his Edge 2009 answer. If Kevin Kelly was an obvious member of that category, then why didn’t you name him when this came up? Do you have anyone else to name?

    The usual strawmen to be targeted by the accusation of passive singularitarianism are SIAI and Kurzweil, and as I’ve discussed, these accusations are false. I’d say these two groups are what 90% of people are thinking of when they attack passive singularitarianism.

    Also, Nato, I don’t appreciate you calling me a troll on your blog when I spent so much time carefully and sincerely responding to all your points. Did you seriously think I was trying to troll you? Should I just not bother next time?

