Cory Doctorow is an editor of what was, for a long time, the most popular blog on the Internet, Boing Boing. (It's now #2, after Engadget.) He is also a science fiction author known for copyright activism on behalf of the Electronic Frontier Foundation. In the Spring 2003 issue of Whole Earth Magazine, he published an article, "The Rapture of the Geeks", that ripped into advocates of the Singularity and intelligence enhancement, myself included. I will respond to its central accusations.
First, a couple of definitions. The Singularity is the technological creation of smarter-than-human intelligence. We can further distinguish good Singularities, where this smarter intelligence is on humanity's side, from bad Singularities, where it isn't. Singularitarians are individuals who advocate intelligence enhancement for global benefit. Rather than tackling the really hard problems - poverty, war, hatred, poor infrastructure, mental and physical illness - at our present level of intelligence, Singularitarians advise pursuing intelligence enhancement first and then applying qualitatively smarter intelligence to these age-old problems. We also foresee a recursive self-improvement process arising from smarter-than-human intelligence: the first superintelligence, being much better than humans at devising new intelligence enhancement techniques, applies those techniques iteratively, further magnifying the initial gains, as the toy sketch below illustrates. To a Singularitarian, intelligence enhancement that improves benevolence as well as brainpower is the best possible investment in humanity's future.
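To make the compounding dynamic concrete, here is a minimal toy model in Python - entirely my own illustration, with made-up numbers, not anything drawn from Doctorow's article or from any Singularitarian text. It assumes only that each generation's improvement is proportional to its current ability, which is enough to make early gains snowball:

    # Toy model of recursive self-improvement. The 0.5 growth rate is an
    # arbitrary placeholder; the point is the shape of the curve, not the
    # particular numbers.
    def recursive_self_improvement(initial_ability=1.0, generations=10):
        ability = initial_ability
        history = [ability]
        for _ in range(generations):
            # Assumption: a smarter designer finds proportionally larger
            # improvements to make to its successor.
            improvement = 0.5 * ability
            ability += improvement
            history.append(ability)
        return history

    print(recursive_self_improvement())
    # [1.0, 1.5, 2.25, 3.375, ...] - geometric growth from iterated gains.

Swap the proportional improvement for a fixed increment and the growth becomes merely linear; whether real minds improving real minds behave more like the former than the latter is, of course, exactly what is at issue.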
Cory Doctorow: "The Vingean Singularity is at the center of a classic mystical belief system: to believe in The Singularity is to believe in the transcendence of human flesh and the ascension to a higher stateâ€”a belief that, in turn, depends on several highly dubious articles of faith."
Here, Doctorow associates the Vingean Singularity with transcending the flesh. While it is true that many advocates of the Singularity believe in the possibility of mind uploading, cyborgization, and the like, none of these things are necessary to make intelligence enhancement a highly desirable prospect. Even if life and intelligence were somehow permanently affixed to proteinaceous water envelopes, it would still be prudent to pursue intelligence enhancement for its own sake.
Humans share about 98% of their genetic material with chimpanzees, yet the step from chimps to humans produced creatures that could walk on the Moon, exploit the power of the atom, and build skyscrapers. If a similar jump in intelligence could produce similarly discontinuous results, then wouldn't it be fascinating to take that step? And if that step is theoretically possible and will happen one day anyway, wouldn't it be responsible of us to help guide it, so that the first superintelligences are at least given human-friendly initial conditions rather than human-unfriendly ones? If the Singularity were sparked by human intelligence enhancement, would you rather the first augmentee be more like Fred Rogers or Ted Bundy?
There are multiple reasons why the Singularity is not a mystical belief system, but the most obvious is that it is experimentally testable. If we cannot build smarter-than-human AIs despite our best efforts and human-equivalent computing capacity, and if no brain-computer interface or genetic enhancement project gives rise to improved intelligence, then we will have strong evidence that smarter-than-human intelligence is forbidden by the laws of the universe.
But if a reliable intelligence enhancement procedure is developed and can be applied to anyone at low cost, would that not be an "ascension to a higher state" of just the type Doctorow is belittling? It would be an ascension to a "higher state" in the real world, based on deliberate neural modifications that let people think faster, more creatively, and more empathically, with expanded working memory and capacity for complexity. This can be done by taking control of our own brains at the physical level, rather than through the more superficial route of traditional learning (so far the best we've had).
Doctorow: "First off, Singularians ask you to believe that a model of a brain in a computer, properly executed, will become consciousâ€”will, in fact, have a consciousness continuous with that of the person whose brain was scanned. While itâ€™s true that consciousness depends on the brainâ€” judicious experimentation with a bone saw and scalpel can readily demonstrate thisâ€”itâ€™s an enormous leap to conclude that consciousnessâ€™s seat is in the brain."
Where else could it be? Doctorow is contradicting decades of work in brain science by suggesting that consciousness may reside somewhere external to the brain. Does consciousness reside in the stomach? The heart perhaps? Or is it floating by us at all times, on the supernatural plane? I'm not sure what he is suggesting, but it sounds pseudoscientific.
Regardless, Singularitarians are not asking anyone to believe that a model of a brain in a computer is continuous with its real-life counterpart, or that uploading is possible. It just so happens that many do believe these things, but they are common beliefs among Singularitarians, not the central component of Singularity advocacy, which is intelligence enhancement.
But since the subject has been brought up, why not respond: if consciousness disappears when a brain is implemented on a computer, then it should be possible to observe consciousness disappearing in partly computerized brains. For instance, people with hippocampal implants would be less conscious than ordinary human beings. Somehow I doubt this. Even if computers as we know them turn out to be unable to simulate conscious beings, who says we are limited to traditional computers based on the serial von Neumann architecture implemented in silicon? We could try parallel computers, biological computers, ultrafast neuron-equivalents, carbon computers... whatever works. The point is not to have a philosophical shouting match, but to dismiss carbon chauvinism - the idea that all life or intelligence must depend on traditional biological building blocks. For a humorous angle on this, see the short story "They're Made Out of Meat" by Terry Bisson.
Doctorow: "Then thereâ€™s the further presumption that consciousness exists at an atomic or even molecular level: that an atom-by-atom copy of the brain, properly modeled in a Turing Machine, will have all the data necessary to awaken into consciousness. As more and more subatomic particles are catalogued, particles whose properties range from counterintuitive to goddamned spooky, it seems equally probable that nano-disassemblersâ€™ pincers will be far too clumsy to ever extract the important information contained in a brain. Itâ€™s cargo-cultism: the airstrips bring the airplanes, so if we lay down airstrips, the planes will come back. Put all a brainâ€™s atoms into a brain-shaped pile, a mind will come back."
How come embryogenesis keeps putting the brain's atoms into a brain-shaped pile roughly a third of a million times every day (well over a hundred million births per year), and the mind keeps coming back? If it is necessary to duplicate pregnancy rituals in order to create a conscious being in a non-carbon substrate, then that will be an inconvenience we'll have to bear. Again, though, the issue of whether or not a human brain can be uploaded is irrelevant to the primary issue of whether or not intelligence enhancement (the Singularity) is worth pursuing.
Doctorow: "The Singularity depends on hypothetical technological events â€” nanotech, brain scanning, consciousness in the brain, sufficient granularity in the scanâ€”but none is more wishful than the belief that the correct model will be lucked into."
It's interesting that Doctorow set out to write an article on the Singularity and ended up writing an article on mind uploading. This, along with his use of the term "Singularian" when he means "Singularitarian", suggests that he didn't research this article very well, but probably wrote it as a reaction to something he read that conflated the Singularity with mind uploading. A Google search for "Singularian" yields only a few results - all instances where people coined the word erroneously. "Singularian" is sort of like "irregardless": a made-up, etymologically incorrect word that spreads on a limited scale through repetition.
Doctorow: "After The Singularity, weâ€™ll be immortal. All goods will be nonscarce. Entropy will be tamed. We will have complete mastery over our selves and our environment â€” weâ€™ll be ascended masters. The best part is, weâ€™ll get there using Mooreâ€™s Law. Write code, get smart, advance the cause and soon, you, too, will be immortal."
This seems like an attack on Ray Kurzweil's particular views, but it is a straw man, because by definition we cannot know precisely how things will go after the Singularity. The standard view is simple: if we aggressively enhance our own intelligence, the benefits could be large. Because most intelligence enhancement advocates are also transhumanists, the ideas of molecular manufacturing and radical life extension tend to co-occur with discussions of the Singularity, but they are not the same thing. It's hard to tell whether Doctorow picked up on this distinction while onstage at the Singularity Summit at Stanford, but I certainly hope so.
Doctorow: "Your mystical belief: that everything will just transform on its own, for the infinitely better, because, well, because thatâ€™d rock."
From the beginning of the organized movement in support of the Singularity (around 2000), the ideas of personal responsibility and direct activism have been central. As far as I know, there are zero thinkers on the Singularity who consider it totally inevitable. Supporting routes to smarter intelligence is a proactive endeavor, one that primarily manifests itself in Artificial Intelligence projects.