Yesterday, GOOD posted the seventh and second-to-last installment of Roko's and my series on the Singularity, “Criticisms of the Singularity”. (My last contribution to the series, “The Benefits of a Successful Singularity”, was promoted to the front page of Digg.) For your benefit, the complete article is reproduced here.
Part seven in a GOOD miniseries on the singularity by Michael Anissimov and Roko Mijic. New posts every Monday from November 16 to January 23.
As previously discussed in our series, the “singularity” means the creation of smarter-than-human intelligence, or “superintelligence”: intelligence significantly more capable than our own. Possible methods for its creation include brain-computer interfaces and pure artificial intelligence, among others. Various scientists, futurists, and mathematicians who write about the singularity, such as Ray Kurzweil, Nick Bostrom, and Vernor Vinge, consider such an event plausible sometime between about 2025 and 2050. Among those who consider the singularity plausible, it is widely agreed that the event could profoundly alter the world, our civilization, and even our bodies and minds, through the technologies that superintelligence could create and deploy.
Because the singularity is such a new and speculative idea, and the subject of little academic study, there are people who take practically every imaginable position on it. Some, unfamiliar with the idea and shocked by it, dismiss it outright or simply react with confusion. Others, such as philosopher Max More, dismiss some of its central propositions after more careful study. A substantial number embrace it openly and without many qualifications, such as futurist Ray Kurzweil, who seems to expect a positive outcome with very high probability. My organization, the Singularity Institute, and related thinkers such as philosopher Nick Bostrom, see a positive outcome as possible, but only with very careful work towards ensuring that superintelligences retain human-friendly motivations as they grow in intelligence and power.
Criticisms of the singularity generally fall into two camps: feasibility critiques and desirability critiques. The most common feasibility critiques are what I call the Imago Dei objection and the Microsoft Windows objection. Imago Dei refers to the Image of God, the doctrine that humans are created in God’s image. If humans are truly created in the image of God, then we must be sacred beings, and the idea of artificially creating a superior being sounds dubious. If such a superior being were possible, wouldn’t God have created us that way to begin with? Unfortunately for this view, science, experimental psychology, and common sense have revealed that humans possess many intellectual shortcomings, and that some people have more of these shortcomings than others. Human intelligence isn’t perfect as it is; long-term improvements may become possible with new technologies.
The Microsoft Windows objection often surfaces when the topic of superintelligent artificial intelligence is brought up, and goes something like this: “How can you expect superintelligent robots in this century when programmers can’t even create a decent operating system?” The simple answer is that too many cooks spoil the broth: operating systems are built by huge numbers of programmers without any coherent theory they can all agree on. In other fields, such as optics, aerospace, and physics, scientists and engineers cooperate effectively on multi-million dollar projects because there are empirically supported theories that constrain many of the final product parameters. Artificial intelligence could reach the human level and beyond once it has such an organizing theory. At present, none exists, though there are pieces that may fit into the puzzle.
Lastly, there are desirability critiques. I am very sympathetic to many of these. If we humans build a more intelligent species, might it replace us? It certainly could, and evolutionary and human history strongly support this possibility. Eventually creating superintelligence seems hard to avoid, though. People want to be smarter, and to have smarter machines that do more work for them. Instead of trying to stave off the singularity forever, I think we ought to study it carefully and make purposeful moves in the right direction. If the first superintelligent beings can be constructed such that they retain their empathy for humanity, and wish to preserve that empathy in any future iterations of themselves, we could benefit massively. Poverty and even disease and aging could become things of the past. There is no cosmic force that compels more powerful beings to look down upon weaker beings; rather, this impulse comes from being animals built by natural selection. In much of the context of natural selection it is evolutionarily advantageous to selectively oppress weaker beings, though some humans, such as vegans, have demonstrated that genuine altruism and compassion are possible.
In contrast to Darwinian beings, superintelligence could be engineered for empathy from the ground up. A singularity originating with enhanced human intelligences could select the most compassionate and selfless subjects for radical enhancement first. An advanced artificial intelligence could be built with a deep, stable sense of empathy, and could even lack an observer-centered goal system. It would have no special desire to discard its empathy, because it would lack the evolutionary programming that causes that desire to surface in the first place. The better you understand evolution and natural selection, the less likely it seems that Darwinian dynamics would apply to superintelligence.
We should certainly hope that benevolent, human-friendly superintelligence is possible; otherwise, human extinction could be the result. Just look at what we’re already doing to the animal kingdom. Yet, by thinking about the issues in advance, we may figure out how to tip the odds in our favor. Human-posthuman synergy and cooperation could become possible.
Michael Anissimov is a futurist and evangelist for friendly artificial intelligence. He writes a Technorati Top 100 Science blog, Accelerating Future. Michael currently serves as Media Director for the Singularity Institute for Artificial Intelligence (SIAI) and is a co-organizer of the annual Singularity Summit.