I usually don't read Charles Stross' blog, but I saw a post linked from Brian Wang's blog, which I read all the time, so I checked it out. It's Stross' "21st century FAQ", where he says the 21st century will be "pretty much what you read about in New Scientist every week", something I would just laugh at and ignore if it weren't the case that so many transhumanists read Stross' books. It's important to criticize this statement because the way we handle the 21st century will be based on how we anticipate it, and if we expect it to be more of the same, we'll be blindsided by the civilization-transforming changes to come.
After saying everything will pretty much be the same for the next 91 years, Stross mentions "unknown unknowns", which are "possible sources of existential surprise", and points to "biotechnology, nanotechnology, AI, climate change, supply chain/logistics breakthroughs to rival the shipping container, fork lift pallet, bar code, and RFID chip". The interesting thing is that many of these so-called "unknown unknowns" are not very unknown at all. Yes, they could unfold in unknown ways, but we see press releases and headlines every single day that point to near-term major advances in all these areas (except climate change -- that's not really a technology). Brian Wang does a particularly good job of covering these at his blog, as do the people behind the KurzweilAI.net news feed, and of course the science super-sites of PhysOrg and Eurekalert.
There is no huge surprise. Biotechnology, nanotechnology, AI, and major advances in supply chain/logistics will predictably deliver massive disruptive changes in the next 25 years or less. Synthetic biology is taking off as we speak, and nanotechnology will soon begin having a major impact that will become abruptly obvious in our daily life. Artificial Intelligence is less predictable, but the AI winter has long thawed and the way that affordable computing is approaching the computational capacity of the human brain demands attention. The combination of cheap ubiquitous sensors and facial-recognition AI in the late 2010s or early 2020s will lead to a huge wake-up call about the inevitability of transparency. Today, unsolved murders in major cities are routine. In 10 to 15 years, they will be confined to private buildings and other places where ubiquitous public cameras don't exist.
Synthetic biology will lead to major energy breakthroughs by 2025. Any day now there will be an announcement that Mycoplasma laboratorium exists and is self-replicating in a petri dish, and the era of artificial life will begin. After that, the organism will be custom-tweaked for producing biofuels, or whatever else its creators can implement using the synthetic biology toolbox. The "wonder bacterium" and "Microbesoft" may not emerge overnight, but will emerge in 10 to 20 years, tops. To say that there will not be a major, world-transforming technological revolution from synthetic biology in the next 91 years is to massively underestimate the disruptive potential of rewriting the book of life. This is not an "unknown unknown". This is predictable massive disruption. Fossil fuels will still be used because of their excellent energy density, but bio-manufactured fuels will grow to 50% of the energy pie or greater.
Progress in nanotechnology will speed up either to a pace reminiscent of the microchip revolution of the 1980s and 1990s, if molecular nanotechnology (MNT) proves difficult or impossible, or to a pace more than 10 times that of the microchip revolution, if MNT can be achieved in the next couple of decades. If MNT based on engineering principles and rigid nanostructures proves impossible, then another type of MNT based on bio-inspired designs will emerge instead, somewhat later than hoped for, but certainly before the end of the century. This will permit high-throughput, decentralized, personalized manufacturing for dirt cheap all over the world, and is likely to happen before 2040 or 2050, not 2100.
The only events which will stop these massively disruptive technological milestones are either comprehensive planetary backlashes or a Singularity from recursively self-improving Artificial Intelligence or Brain-Computer Interface-derived superintelligence. That's the other thing in Stross' post that I strongly disagree with: he calls the Singularity the "rapture of the nerds" and says it "is likely to be a non-participatory event for 99.999% of humanity -- unless we're very unlucky. If it happens and it's interested in us, all our plans go out the window". It seems unlikely that Stross could annoy me more with these two sentences if he custom-designed them for my annoyance. Issues:
1) The Singularity is not "the Rapture of the Nerds". It is a very likely event defined as the technological creation of greater-than-human intelligence. Its likelihood comes from two facts: that intelligence is inherently something that can be engineered and enhanced, and that the technologies capable of doing so already exist in nascent forms today. Even if qualitatively higher intelligence turns out to be impossible, the ability to copy intelligence as a computer program or share, store, and generate ideas using brain-to-brain computer-mediated interfaces alone would be enough to magnify any capacity based on human thought (technology, science, logistics, art, philosophy, spirituality) by two to three orders of magnitude if not far more.
2) If superintelligence were created, how could it possibly keep itself to 99.999% of humanity? Either it will not be created and impact 0% of humanity or it will be created and impact 100%. Inventions created by even average human engineers and scientists have found their way to every corner of the Earth. How could inventions created by qualitatively superior intelligence not find usage around the world? If the superintelligence were created using process X, what is to stop the creators of the process from applying it to anyone who is sufficiently interested? Even if they kept it to their little group, they would likely impact 100% of humanity in a negative way, by monopolizing the world economy.
3) Stross hints that if the Singularity impacts more than 0.001% of humanity, we're necessarily unlucky. Maybe this is because he believes in the strong version of the Event Horizon thesis, where the Singularity is necessarily unknowable and therefore bad? Why can't unknowable be good? In any case, I disagree with the strong Event Horizon thesis -- in this formulation of the Singularity ("everything is unknowable, and let's not try to know it"), the outcome is equally good/bad whether the superintelligence (SI) that sparks the Singularity is derived from Hitler or Gandhi. This is false, so Stross' conception of the Singularity is false. Even a supreme SI will be a product of its starting motivations, even if those motivations have folded over themselves a million times. Once you realize that facts and values are fundamentally different things, you see that values are arbitrary and facts are not, so a variety of different SIs with varying moralities are possible, though we have an interest in creating human-friendly SIs only, and in expecting those SIs to prevent the creation of human-unfriendly SIs. The starting motivations of an intelligence will necessarily be somewhat preserved by the intelligence as it bootstraps itself to superintelligence. We can maximize the probability of a beneficial outcome by creating Friendly AI or coming up with some human intelligence enhancement scheme that reinforces benevolence. Not knowing the outcome for sure is no excuse not to try, and there are reasons to believe that an attempt at benevolence will pay off. Any self-transparent intelligence with the ability to edit its own source code is more likely to retain its core morality (which, by definition, it wants to retain) than its external environment is to force a change upon it.
The Singularity is not necessarily some inscrutable superintelligent monolith. It would be a being or collection of beings which could either make our lives radically better or snuff them out very quickly. Ensuring the former is humanity's #1 priority right now. Rising sea levels and warming ecosystems, while environmentally very troubling, are not nearly as serious as the whole of mankind being annihilated by an entity smarter than it.
The impact of superintelligence could be entirely antithetical to its origin: pre-Singularity technology. Like does not necessarily proceed from like. A flower does not look like a seed. The products of superintelligence need not look like the supercomputer it was originally run on. A superintelligence working to reshape the world in a way that humans actually like might green the deserts, clean the environment, and promote decentralization so that humans could enjoy the planet more fully and utilize its space in the most parsimonious possible way. Take Stross' idealized future, the 2100 he would prefer more than any other option, and a superintelligence could conceive that and help implement it, also considering the idealized futures of every other human being on the planet. If humans are useful for thinking of and implementing positive futures, then a superintelligence will be even more useful. To malign human-friendly superintelligence would be to malign humans in the very same sentence, because anything that human-friendly humans could do, human-friendly superintelligence could do better. (Of course, you could malign the very idea of superintelligence if you think it's impossible, but if you do accept the premise, then you cannot reject human-friendly superintelligence without rejecting humanity itself.)
All our plans will predictably go out the window because the Singularity (superintelligence) will predictably be created in the 21st century. The only "unknown" part is whether we create superintelligence with initial motivations that cause it to self-improve and wipe us out or with motivations that cause it to self-improve and be on our side. If there's no way of setting up initial motivations such that it helps us, then we are doomed unless we install a worldwide totalitarian government to outlaw all computers and prevent the creation of superintelligence forever. Good luck.
Stross says, "If [the Singularity] doesn't happen, sitting around waiting for the AIs to save us from the rising sea level/oil shortage/intelligent bioengineered termites looks like being a Real Bad Idea". Is it really "sitting around" if you're actively working towards AI by spreading knowledge and raising support and funds, or learning math or cognitive science in an attempt to formulate a theory that actually has a chance of working with present-day computing power? Hardly. It's not as if AGI research is taking up a major proportion of the NSF budget or venture capital dollars -- even on the principle of "we should try as many routes to helping the world as possible" alone, it seems worthwhile to support AI research for potentially leading to solutions to major problems. Even if AGI research magically fails because the aliens running our simulation keep messing up the software at the last second, it could lead to applications and software programs that provide tremendous assistance in moving us toward more advanced energy technologies and resource utilization schemes, and in monitoring synthetic biology for rogue biohackers and the environment for invasive organisms.
Stross implies that there are people sitting around waiting for AI to happen. Who? Even Kurzweil says that AI will emerge out of the hard work of millions of people and thousands of companies. The notion that there are people around Waiting for the Rapture is an invention of Stross and Doctorow, designed to throw more flair into their science fiction stories. Such people are fictional characters, and that's how they've been used. They don't exist in the real world. The "nerds" who are "waiting for the Rapture" seem suspiciously often to be entrepreneurs and leaders in science and technology.
Stross says the big picture of the 21st century will be that most of us live in cities, an observation so tepid that it makes one wonder how he is a science fiction writer. What about immersive virtual reality? Or the human species taking control of its own evolution? Or software and robotic systems so advanced that many of today's jobs are rendered obsolete? Or the way that we'll be able to observe the neural correlates of the details of human experience and look closely into our own dreams and thoughts? In Stross' 21st century, will none of these things actually happen? Good thing science and technology aren't bounded by the imaginations of humans in 2009.
At the end of the FAQ, Stross asks, "Are we going to survive?", then answers, "No -- in the long run, we are all dead. That goes for us as individuals and as a species." This is the same boring deathist drumbeat that I've been railing against since I could conceive of the human body as a machine that can be repaired in principle, and especially since I co-founded the Immortality Institute in 2002. This is the defeatist mantra that evaporates the second you hear Aubrey de Grey give a talk, or read a news release about aging being stopped in an entire organ, or view the list of prestigious scientists who support the mission of the Methuselah Foundation. Charles Stross may be a pessimist about achieving longevity escape velocity, but the rest of us are going to fight a War on Aging, and receive the constant attention of the world media en route to success.
(I decided to remove an entire section here insulting Stross' sci-fi writing, because I realized it was inappropriate to mix criticism of futurism and criticism of fiction in the same post.)
Maybe Stross is a great guy in person. I don't know him. But I can say that I wildly disagree with both his futurism and his approach to sci-fi. (Insofar as I care about sci-fi at all, which, honestly, is not a whole lot.)