Over at the Speculist, Phil Bowermaster understands the points I made in "Yes, the Singularity is the biggest threat to humanity", which, by the way, was recently linked by Instapundit, who unfortunately probably doesn't get the point I'm trying to make. Anyway, Phil said:
Greater than human intelligences might wipe us out in pursuit of their own goals as casually as we add chlorine to a swimming pool, and with as little regard as we have for the billions of resulting deaths. Both the Terminator scenario, wherein they hate us and fight a prolonged war with us, and the Matrix scenario, wherein they keep us around essentially as cattle, are a bit too optimistic. It's highly unlikely that they would have any use for us or that we could resist such a force even for a brief period of time -- just as we have no need for the bacteria in the swimming pool and they wouldn't have much of a shot against our chlorine assault.
"How would the superintelligence be able to wipe us out?" you might say. Well, there's biowarfare, mass-producing nuclear missiles and launching them, hijacking existing missiles, neutron bombs, lasers that blind people, lasers that burn people, robotic mosquitos that inject deadly toxins, space-based mirrors that set large areas on fire and evaporate water, poisoning water supplies, busting open water and gas pipes, creating robots that cling to people, record them, and blow up if they try anything, conventional projectiles... You could bathe people in radiation to sterilize them, infect corn fields with ergot, sprinkle salt all over agricultural areas, drop asteroids on cities, and many other approaches that I can't think of because I'm a stupid human. In fact, all of the above is likely nonsense, because it's just my knowledge and intelligence that is generating the strategies. A superintelligent AI would be much, much, much, much, much smarter than me. Even the smartest person you know would be an idiot in comparison to a superintelligence.
One way to kill a lot of humans very quickly might be through cholera. Cholera is extremely deadly and can spread very quickly. If there were a WWIII and it got really intense, countries would start breaking out the cholera and other germs to fight each other. Things would really have to go to hell before that happened, because biological weapons are nominally outlawed in war. However, history shows that everyone breaks the rules when they can get away with it or when they're in deep danger.
Rich people living in the West, especially Americans, have forgotten the ways that people have been killing each other for centuries, because we've had a period of relative stability since WWII. Sometimes Americans appear to think like teenagers, who believe they are immortal. This is a quintessentially ultra-modern and American way of thinking, though most of the West thinks this way. For most of history, people have realized how fragile they were and how aggressively they needed to fight to defend themselves from enemies inside and out. Thanks to our sophisticated electrical infrastructure (which, by the way, could be eliminated by a few EMP-optimized nuclear weapons detonated in the ionosphere), nearly unlimited food, water, and other conveniences present themselves to us on silver platters. We overestimate the robustness of our civilization because it has worked smoothly so far.
Superintelligences would eventually be able to construct advanced robotics that could move very quickly and cause major problems for us if they wanted to. Robotic systems constructed entirely of fullerenes could be extremely fast and powerful. Conventional bullets and explosives would have great difficulty damaging fullerene-armored units. Buckyballs only melt at roughly 8,500 Kelvin, almost 15,000 degrees Fahrenheit. 15,000 degrees. That's hotter than the surface of the Sun. (Update: Actually, I'm wrong here, because the melting point of bulk nanotubes has not been determined and is probably significantly lower. 15,000 degrees is roughly the temperature at which a single buckyball apparently breaks apart. However, some structures, such as nanodiamond, would literally be macroscale molecules and might have very high melting points.) Among "small arms", only a shaped charge, whose jet moves at around 10 km/sec, could make a dent in thick fullerene armor. Ideally you'd have a shaped charge lined with a metal of extremely high density at extremely high temperature, like molten uranium. Still, if the robotic system moved fast enough and could simply detect where the charges were, conventional human armies wouldn't be able to do much against it, except perhaps by using nuclear weapons. Weapons like rifles wouldn't work because they simply wouldn't deliver enough energy to a condensed enough space. To have any chance of destroying a unit that moves at several thousand mph and can dodge missiles, nuclear weapons would likely be required.
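To put the energy-delivery point in numbers, here's a back-of-the-envelope sketch. The projectile masses and velocities below are rough illustrative assumptions, not measured figures; the point is just that kinetic energy scales with the square of velocity:

```python
# Back-of-the-envelope kinetic-energy comparison: rifle bullet vs.
# shaped-charge jet. Masses and velocities are rough illustrative
# assumptions, chosen only to show how energy scales with v^2.

def kinetic_energy_joules(mass_kg, velocity_m_s):
    """Classical kinetic energy: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * velocity_m_s ** 2

rifle = kinetic_energy_joules(0.004, 900)   # ~4 g bullet at ~900 m/s
jet = kinetic_energy_joules(0.004, 10_000)  # ~4 g jet tip at ~10 km/s

print(f"rifle bullet: {rifle:,.0f} J")      # 1,620 J
print(f"jet tip:      {jet:,.0f} J")        # 200,000 J
print(f"ratio:        {jet / rifle:.0f}x")  # ~123x -- velocity enters squared
```

Even with identical masses, the order-of-magnitude velocity difference yields a two-orders-of-magnitude difference in delivered energy.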
When objects move fast enough, they are invisible to the naked eye. How fast something needs to move to be unnoticeable varies with its size, but for an object a meter long it's about 1,100 mph, roughly Mach 1.4 at sea level. There is no reason why engines could not eventually be developed that propel person-sized objects to those speeds and beyond. In this very exciting post, I list a few possible early-stage products that could be built with molecular nanotechnology that could take advantage of high power densities. Google "molecular nanotechnology power density" for more information on the kind of technology a superintelligence could develop and use to take over the world quite quickly.
A superintelligence, not being stupid, would probably hide itself in a quarantined facility while it developed the technologies it needed to prepare for doing whatever it wants in the outside world. So, we won't know anything about it until it's all ready to go.
We'll still be stuck in the blue region while superintelligences develop robotics in the orange and red regions and have plenty of ability to run circles around us. There will be man-sized systems that move at several times the speed of sound and consume kilowatts of energy. Precise design can minimize the amount of waste heat produced; the challenge is swimming through all that air without being too noticeable. There will be tank-sized systems with the power consumption of aircraft carriers. All of these things are probably possible; it's just that no one has built them yet. People like Brian Wang, who writes one of the most popular science/technology blogs on the Internet, take it for granted that these kinds of systems will eventually be built. The techno-elite know that these sorts of things are physically possible; it's just a matter of time. Many of them might consider technologies like this centuries away, but for a superintelligence that never sleeps, never gets tired, can copy itself tens of millions of times, and can parallelize its experimentation, research, development, and manufacturing, we might be surprised how quickly it could develop new technologies and products.
The default understanding of technology is that the technological capabilities of today will pretty much stick around forever, but we'll have spaceships, smaller computers, and bigger televisions, perhaps with Smell-O-Vision. The future would be nice and simple if that were true, but for better or for worse, there are vast quadrants of potential technological development that 99.9% of the human species has never heard of, and vaster domains that 100% of the human species has never even thought of. Superintelligence will happily and casually exploit those technologies to fulfill its most noble goals, whether those noble goals involve wiping out humanity, or maybe curing all disease and aging and creating robots to do all the jobs we don't feel like doing. Whatever its goals are, a superintelligence will be most persuasive in arguing for how great and noble they are. You won't be able to win an argument against a superintelligence unless it lets you. It will simply be right and you will be wrong. One could even imagine a superintelligence so persuasive that it convinces mankind to commit suicide by making us feel bad about our own existence. In that case it might need no actual weapons at all.
The above could be wild speculation, but the fact is we don't know. We won't know until we build a superintelligence, talk to it, and see what it can do. This is something new under the Sun, no one has the experience to conclusively say what it will or won't be able to do. Maybe even the greatest superintelligence will be exactly as powerful as your everyday typical human (many people seem to believe this), or, more likely, it will be much more powerful in every way. To confidently say that it will be weak is unwarranted -- we lack the information to state this with any confidence. Let's be scientific and wait for empirical data first. I'm not arguing with extremely high confidence that superintelligence will be very strong, I just have a probability distribution over possible outcomes, and doing an expected value calculation on that distribution leads me to believe that the prudent utilitarian choice is to worry. It's that simple.
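To make the expected-value point concrete, here's a toy calculation. The outcome probabilities and utilities below are invented placeholders, not my actual estimates; the point is the structure of the reasoning, not the particular numbers:

```python
# Toy expected-value calculation over superintelligence outcomes.
# The probabilities and utilities below are invented placeholders --
# the point is the structure of the argument, not these numbers.

outcomes = {
    # outcome: (probability, utility in arbitrary units)
    "superintelligence is roughly human-level": (0.20,       0),
    "strong and beneficial":                    (0.70,   1_000),
    "strong and catastrophic":                  (0.10, -10_000),
}

# Sanity check: probabilities must sum to 1.
assert abs(sum(p for p, _ in outcomes.values()) - 1.0) < 1e-9

expected_utility = sum(p * u for p, u in outcomes.values())
print(expected_utility)  # -300.0: a small chance of catastrophe can
                         # dominate even a likely good outcome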
Remember, most transhumanists aren't afraid of superintelligence because they actually believe that they and their friends will personally become the first superintelligences. The problem is that everyone thinks this, and they can't all be right. Most likely, none of them are. Even if they were, it would be rude for them to clandestinely "steal the Singularity" and exploit the power of superintelligence for their own benefit -- possibly at the expense of the rest of us. Would-be mavericks should back off and help build a more democratic solution, a solution that ensures that the benefits of superintelligence are equitably distributed among all humans and perhaps (I would argue) to some non-human animals, such as vertebrates.
Coherent Extrapolated Volition (CEV) is one idea that has been floated for a more democratic solution, but it is by no means the final word. We criticize CEV and entertain other ideas all the time. No one said that AI Friendliness would be easy.
Bill Gates is smart in a way that other corporate titans of the 90s and 00s just aren't. Smart as in intellectual with a broad range of knowledge and information diet, not "smart" as in wears a trendy turtleneck and has a good design and business sense.
In a recent article in the Wall Street Journal, Gates takes on Matt Ridley's book The Rational Optimist: How Prosperity Evolves. Gates writes:
Exchange has improved the human condition through the movement not only of goods but also of ideas. Unsurprisingly, given his background in genetics, Mr. Ridley compares this intermingling of ideas with the intermingling of genes in reproduction. In both cases, he sees the process as leading, ultimately, to the selection and development of the best offspring.
The second key idea in the book is, of course, "rational optimism." As Mr. Ridley shows, there have been constant predictions of a bleak future throughout human history, but they haven't come true. Our lives have improved dramatically -- in terms of lifespan, nutrition, literacy, wealth and other measures -- and he believes that the trend will continue. Too often this overwhelming success has been ignored in favor of dire predictions about threats like overpopulation or cancer, and Mr. Ridley deserves credit for confronting this pessimistic outlook.
Yes, this is common -- who wants to be the doomsayer? It's just not popular. Although dire predictions often fail, terrible things still happen completely unpredicted, like Hurricane Katrina, the global financial disaster, the 2004 Indian Ocean tsunami, and the Holocaust. Pretending that because history has been mostly good, we should take a blanket optimistic outlook is just Whig history nonsense. Whig history is the line we were all fed in school, and its main purpose seems to be to tell us that the status quo is great and there is nothing to worry about.
Gates goes on to discuss Ridley's two other arguments: 1) that Africa is hurt by foreign aid and will do better without it, and 2) that climate change is not as big a deal as people think. I won't comment on either of these, because most people's opinions on them are based on cultural theology rather than critical thinking. What did get me excited, though, was this part:
There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic. (Mr. Ridley briefly dismisses the pandemic threat, citing last year's false alarm over the H1N1 virus.) But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people. (Natural catastrophes might seem like good candidates for concern, but I've been persuaded by Vaclav Smil, in "Global Catastrophes and Trends," that the odds are very low of a large meteor strike or a massive volcanic eruption at Yellowstone.)
Ridley shouldn't dismiss the pandemic threat, obviously. You'd think that a deadly natural plague that killed 3% of the world population and infected 27% of it a century ago would be reason enough to take pandemics seriously for centuries to come, simply on Bayesian grounds, but I guess not. I wonder whether the widespread availability of genetic engineering tools for creating new microbes causes Ridley to update his estimate of disaster upwards, by more than a couple of percent, over the estimate based on history alone.
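As a sketch of the purely historical base-rate reasoning: if we treat 1918-scale pandemics as arriving at roughly one per century (an illustrative assumption, not a fitted rate) and model arrivals as a Poisson process, the odds over any long horizon are far from negligible:

```python
import math

# Crude base-rate estimate of pandemic risk, treating 1918-scale
# pandemics as a Poisson process. The one-per-century rate is an
# illustrative assumption, not a fitted parameter.

RATE_PER_YEAR = 1 / 100  # assumed: one 1918-scale pandemic per century

def prob_at_least_one(rate, years):
    """P(at least one event in `years` years) for a Poisson process."""
    return 1 - math.exp(-rate * years)

print(f"{prob_at_least_one(RATE_PER_YEAR, 50):.0%}")   # 39% over 50 years
print(f"{prob_at_least_one(RATE_PER_YEAR, 100):.0%}")  # 63% over a century
```

And this is before any upward update for engineered microbes, which is exactly the update I'm wondering whether Ridley makes.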
The quoted paragraph is also interesting because it's the first time I'm aware of that Gates has come out this strongly about the machine threat, and even uses the term "super-intelligent". I wouldn't be shocked if Gates has read all of Nick Bostrom's papers on the superintelligence threat or perhaps has even visited this blog. Who knows? A little unwarranted optimism is cute and harmless when it comes to celebrities visiting one's blog, but it becomes dangerous and destructive when applied to the course of civilization as a whole.
No optimism. No pessimism. Realism. Optimism and pessimism are inherently irrational because they imply a bias across all possible hypotheses, and the emphasis is on the affect (feeling), not the descriptive content of the hypothesis itself. If anything, pessimism is more rational. See the planning fallacy and rational pessimism. One study on the planning fallacy found that people who were depressed tended to be the most accurate when estimating the completion time of projects.
I find it funny how many people in the transhumanist community, miffed at the attention the Singularity has been getting, seem to wish that transhumanists would just ignore the risk of superintelligent machines, while people like Bill Gates are just starting to write about it in public. This is the time to step forward, not back. The finance giants of Wall Street should know that they can have a personal impact on the risk of superintelligence by donating to non-profits like the Singularity Institute and the Future of Humanity Institute. Peter Thiel certainly realizes this, but most moguls don't. The people and infrastructure exist to make use of much larger funding levels, and it's incumbent on philanthropists to step forward.
It seems obvious that Singularity Institute-supporting transhumanists and other groups of transhumanists speak completely different languages when it comes to AI. Supporters of SIAI actually fear what AI can do, and other transhumanists apparently don't. It's as if SL3 transhumanists view smarter-than-human AI with advanced manufacturing as some kind of toy, whereas we actually take it seriously. I thought a recent post by Marcelo Rinesi at the IEET website, "The Care and Feeding of Your AI Overlord", would provide a good illustration of the difference:
It's 2010 -- our 2010 -- and an artificial intelligence is one of the most powerful entities on Earth. It manages trillions of dollars in resources, governments shape their policies according to its reactions, and, while some people revere it as literally incapable of error and others despise it as a catastrophic tyrant, everybody is keenly aware of its existence and power.
I'm talking, of course, of the financial markets.
The opening paragraph was not metaphorical. Financial markets might not match pop culture expectations of what an AI should look like -- there are no red unblinking eyes, nor mechanically enunciated discourses about the obsolescence of organic life -- and they might not be self-aware (although that would make an interesting premise for a SF story), but they are the largest, most complex, and most powerful (in both the computer science and political senses of the word) resource allocation system known to history, and inarguably a first-order actor in contemporary civilization.
If you are worried about the impact of future vast and powerful non-human intelligences, this might give you some ease: we are still here. Societies connected in useful ways to "The Market" (an imprecise and excessively anthropomorphic construct) or subsections thereof are generally wealthier and happier than those that aren't. Adam Smith's model of massively distributed economic calculations based on individual self-interest has more often than not surpassed in effectivity competing models of centralized resource allocation.
This post is mind-blowing to me because I consider it fundamentally un-transhumanist. It essentially says, "don't worry about future non-human intelligences, because they won't be any more powerful than present-day aggregations of humans".
Isn't the fundamental idea of transhumanism that augmented intelligences and beings can be qualitatively different and more powerful than humans and human aggregations? If not, what's the point?
If a so-called transhumanist thinks that all future non-human intelligences will basically be the same as what we've seen so far, then why do they even bother to call themselves "transhumanists"? I don't understand.
Recursively self-improving artificial intelligence with human-surpassing intelligence seems likely to lead to an intelligence explosion, not more of the same. An intelligence explosion would be an event unlike anything that has ever happened before on Earth -- intelligence building more intelligence. Intelligence in some form has existed for at least 550 million years, but it has never been able to directly enhance itself or construct copies rapidly from raw materials. Artificial Intelligence will. Therefore, we ought to ensure that AI has humans in mind, or we will be exterminated when its power inevitably surges.
If there are any other transhumanists who agree that future superintelligences will be directly comparable to present-day financial markets, please step forward. I'd love to see a plausible argument for that one.
"One consideration that should be taken into account when deciding whether to promote the development of superintelligence is that if superintelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day have to take the gamble of superintelligence no matter what. But once in existence, a superintelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible."
-- Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence"
To follow up on the previous post, I think that the critique by Massimo Pigliucci (a philosopher at the City University of New York) of David Chalmers' Singularity talk does have some good points, but I found his ad hominem arguments so repulsive that it was difficult to bring myself to read past the beginning. I would have the same reaction to a pro-Singularity piece with the same level of introductory ad hominem. (Recall that when I was going after Jacob Albert and Maxwell Barbakow for their ignorant article on the Singularity Summit, I was focusing on their admission of not understanding any of the talks and using that as a negative indicator of their intelligence and knowledge, not insulting their haircuts.) If anything, put the ad hominem arguments at the end, so that they don't bias people before they've read the real objections.
Pigliucci is convinced that Chalmers is a dualist, which is not exactly true -- he is a monist with respect to consciousness rather than spacetime and matter. I used to be on Dennett's side of the argument and believed there was no hard problem to speak of, but eventually I was moved to somewhere in-between Chalmers and Dennett, and really do believe that there is an interesting hard problem to be solved, but I doubt that solving it will require the introduction of new laws of physics or ontological primitives. I understand why there are people skeptical of the relevance of Chalmers' theories of consciousness, but the ideas are quite subtle and it took me 2-3 reads of his landmark paper before I started to even pick up on the concept he was trying to transmit. It may be that Pigliucci does understand Chalmers' ideas and considers them useless anyway.
Moving on to the actual critique, Pigliucci accuses Chalmers of saying that because computers are getting faster, we can extrapolate that AI will eventually happen. I think I do vaguely agree with Chalmers on that one, though the extrapolation is quite fuzzy. Since the brain is a machine that behaves according to (as yet unknown) principles but known basic laws (physics and chemistry), faster computers would surely facilitate its emulation, or at the very least the instantiation of its basic operating principles in another substrate. I'm not sure why this is controversial, unless people are conceiving of the brain as including a magical sauce that cannot be emulated in another finite state machine.
Even if we don't yet understand intelligence, as Pigliucci points out, that doesn't mean that it will remain unknown indefinitely. Chalmers even points out in his talk that he thinks it will take hundreds of years to solve AI. My view is that if anyone confidently says that AI will very likely not be possible in the next 500 years, they're being overconfident and likely engaging in mystical mind-worship and a desire to preserve the mystery of the mind due to irrational sentimentality. Given the scientific knowledge we've gained over the last 500 years (practically all of it), it's quite far-fetched to say confidently that intelligence will elude reverse-engineering over the next 500 or so years. If biology can be reverse-engineered on many levels, so will intelligence.
Pigliucci then points out that Chalmers is lax on his definitions of the terms "AI", "AI+", and "AI++", which I agree with. He could use at least a couple more slides to define those terms better. Pigliucci then argues that the burden of proof of the points that Chalmers argues for is on him because he has an unusual claim. I agree with that also. Chalmers is approaching an issue as philosophy when what it really could use are detailed scientific arguments to back it up. On the other hand, within groups where these arguments are already accepted (like Singularity Summit), philosophy is indeed possible. Some philosophizing has to rest on scientifically argued foundations that are not shared in common among all thinkers. Isn't it exciting how philosophy and science are so interdependent and how one can just perish without the other?
I disagree with Pigliucci that the "absent defeaters" points are not meaningful. Chalmers is obviously arguing that something extraordinary would need to happen for his outlined scenario not to occur, and that business as usual over the longer term will involve AI++, rather than its absence. "Defeaters" include things like thermonuclear war, runaway global warming, etc., which Chalmers did concretely point out in his talk (at least in the Singularity Summit version). Pigliucci says, "But if that is the case, and if we are not provided with a classification and analysis of such defeaters, then the entire argument amounts to 'X is true (unless something proves X not to be true).' Not that impressive." Maybe Chalmers should have spent more time describing the defeaters, but I don't think that all arguments of the form "X is true (unless something proves X not to be true)" are meaningless. For instance, in physics, objects accelerate downward at 9.8 m/s^2 unless there is air friction, unless they get hit by another object in mid-fall, unless they spontaneously explode, etc., and the basic law still has meaning, because it applies enough to be useful.
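The falling-object example can itself be made quantitative: a minimal sketch, using rough skydiver-like parameters (the mass and drag coefficient below are illustrative assumptions), of how the basic 9.8 m/s^2 law holds until a defeater (air friction) takes over:

```python
import math

# Sketch of "9.8 m/s^2 unless there's air friction": with quadratic
# drag, speed saturates at a terminal velocity instead of growing as
# g*t. The mass and drag coefficient are rough skydiver-like assumptions.

g = 9.8    # m/s^2, near-surface gravitational acceleration
m = 80.0   # kg (assumed mass)
k = 0.25   # kg/m, lumped quadratic-drag coefficient (assumed)

# Terminal velocity: drag force k*v^2 balances weight m*g.
v_terminal = math.sqrt(m * g / k)  # 56.0 m/s with these parameters

# Closed-form solution with drag: v(t) = v_t * tanh(g*t / v_t);
# in vacuum the basic law gives v(t) = g*t without bound.
for t in (1, 5, 20):
    v_vacuum = g * t
    v_drag = v_terminal * math.tanh(g * t / v_terminal)
    print(f"t={t:2d}s  vacuum={v_vacuum:6.1f} m/s  with drag={v_drag:5.1f} m/s")
```

Early on the two curves agree almost exactly; the law only stops applying once the defeater dominates, which is precisely the structure "X is true unless something proves X not to be true".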
I agree with Tim Tyler in the comments that defining intelligence is not the huge issue that Pigliucci makes it out to be. I do think that g is a good enough approximate definition (is Pigliucci familiar with the literature on g, such as Gottfredson?), and asking for unreasonably detailed definitions of intelligence when everyone has a perfectly good intuitive sense of what it means seems to just be a way of discouraging any intelligent conversation on the topic whatsoever. If one would like better definitions of intelligence, I would strongly recommend Shane Legg's PhD thesis Machine Superintelligence, which gives a definition and a good survey of past attempts at one in its first part. I doubt that many will read it, though, because people like it when intelligence is mysterious. Mysterious things seem cooler.
Pigliucci then says that AI has barely made any progress over the last few decades because human intelligence is "non-algorithmic". You mean that it doesn't follow a procedure to turn data into knowledge and outputs? I don't see how that could be the case. Many features of human intelligence have already been duplicated in AIs, but as soon as something is duplicated (like master chess), it suddenly loses status as an indicator of intelligence. By moving the goal posts, AI can keep constantly "failing" until the day before the Singularity. Even a Turing Test-passing AI would not be considered intelligent by many people because I'm sure they would find some obscure reason.
After the deployment of the above mentioned highly questionable "argument," things just got bizarre in Chalmers' talk. He rapidly proceeded to tell us that A++ will happen by simulated evolution in a virtual environment -- thereby making a blurred and confused mix out of different notions such as natural selection, artificial selection, physical evolution and virtual evolution.
I agree... sort of. When I was sitting in the audience at Singularity Summit and Chalmers started to talk about virtual evolution, I immediately realized that Chalmers had likely not studied Darwinian population genetics, and was using the word "evolution" in the hand-wavey layman's sense rather than the strict biological sense. If I recall correctly, someone (I think it was Eliezer) got up at the end of Chalmers' talk and pointed out that creating intelligence via evolution would require a practically unimaginable amount of computing power, simulating the entire history of the Earth. Yet I don't understand why Pigliucci believes that such a thing would be impossible in principle -- if evolution could create intelligence out of real atoms on Earth, then simulated evolution could (eventually, given enough computing power) create intelligence out of simulated atoms. Of course, the amount of computing power required could be prohibitively massive, but to argue that reality cannot be simulated precisely enough to reproduce phenomenon X just means that we either don't know enough about the phenomenon to simulate it yet, or we lack the computing power -- not that it is impossible in principle. Science will eventually uncover the underlying rules of everything whose rules it is theoretically possible to uncover (excluding, for instance, causally disconnected universes), and that includes intelligence, creativity, imagination, humor, dreaming, etc.
Pigliucci then remarks:
Which naturally raised the question of how do we control the Singularity and stop "them" from pushing us into extinction. Chalmers' preferred solution is either to prevent the "leaking" of AI++ into our world, or to select for moral values during the (virtual) evolutionary process. Silly me, I thought that the easiest way to stop the threat of AI++ would be to simply unplug the machines running the alleged virtual world and be done with them. (Incidentally, what does it mean for a virtual intelligence to exist? How does it "leak" into our world? Like a Star Trek hologram gone nuts?)
The burden really is on Chalmers here to explain himself. "Leaking out" would consist of an AI building real-world robotics or servants to serve as its eyes, ears, arms, and legs. Pigliucci probably thinks of the virtual and physical worlds as quite distinct, whereas someone of my generation, who grew up witnessing the intimate connection between the real world and the Wired, views them more as overlapping magisteria. Still, I can understand the skepticism about the "leaking out" point, and it requires more explanation. Massimo, the reason why unplugging would not be so simple is that an AI would probably exist as an entity distributed across many information networks -- though that is my opinion, not Chalmers'. From Chalmers' point of view, I think the concern might be that the AI would simply deceive the programmers into believing that it was friendly, which is why long-term evaluations in virtual worlds are necessary. Unplugging would not be simple because we might not even want to unplug the AI, having been deceived by it.
Then the level of unsubstantiated absurdity escalated even faster: perhaps we are in fact one example of virtual intelligence, said Chalmers, and our Creator may be getting ready to turn us off because we may be about to leak out into his/her/its world. But if not, then we might want to think about how to integrate ourselves into AI++, which naturally could be done by "uploading" our neural structure (Chalmers' recommendation is one neuron at a time) into the virtual intelligence -- again, whatever that might mean.
Massimo, he is referring to the simulation argument and the Moravec transfer concepts. The simulation argument can be explored at simulation-argument.com, and the Moravec transfer is summarized at the Mind Uploading home page. I know that these are somewhat unusual concepts that should not be referred to so cavalierly, but you might consider reserving your judgment just a little bit longer until you read academic papers on these ideas. Mind uploading/whole brain emulation has been analyzed in detail by a report from the Future of Humanity Institute at Oxford University.
Pigliucci starts to wrap up:
Finally, Chalmers -- evidently troubled by his own mortality (well, who isn't?) -- expressed the hope that A++ will have the technology (and interest, I assume) to reverse engineer his brain, perhaps out of a collection of scans, books, and videos of him, and bring him back to life. You see, he doesn't think he will live long enough to actually see the Singularity happen. And that's the only part of the talk on which we actually agreed.
Yes, it makes sense that we'd reach out to the possibility of smarter-than-human intelligences to help us solve the engineering problem of aging. Since human biochemistry is non-magical (just like the brain -- surprise!) it will only be a matter of time before we start figuring out how to repair metabolic damage faster than it builds up. I'm quite skeptical about Chalmers being genuinely revived from his books and talks, but perhaps an interesting simulacrum could be fashioned. While we're at it, we can bring back Abe Lincoln and his iconic stovepipe hat.
The reason I went on for so long about Chalmers' abysmal performance is because this is precisely the sort of thing that gives philosophy a bad name. It is nice to see philosophers taking a serious interest in science and bringing their discipline's tools and perspectives to the high table of important social debates about the future of technology. But the attempt becomes a not particularly funny joke when a well known philosopher starts out by deploying a really bad argument and ends up sounding more cuckoo than trekkie fans at their annual convention. Now, if you will excuse me I'll go back to the next episode of Battlestar Galactica, where you can find all the basic ideas discussed by Chalmers presented in an immensely more entertaining manner than his talk.
I disagree that the topics investigated by Chalmers -- human-level artificial intelligence, artificial superintelligence, safety issues around AI, methods of creating AI, the simulation argument, whole brain emulation, and the like -- are intellectually disrespectable. In fact, there are hundreds of academics who have published very interesting books and papers on these important topics. Still, I think Chalmers could have done a better job of explaining himself, and assumed too much esoteric knowledge in his audience. A talk suited to the Singularity Summit should not be so casually repeated to other audiences. Yet, it's his career, so if he wants to take risks like that, he may have to pay the price -- criticism from folks like Pigliucci, some of whose gripes may be legitimate. I also think that Pigliucci probably speaks for many others in his critiques, which is a big part of why I think they're worth taking apart and analyzing.
I thought I would answer the 10 questions posed by Popular Science on the Singularity.
Q. Is there just one kind of consciousness or intelligence?
A. It depends entirely on how you define them. If you define intelligence using what I consider the simplest and most reasonable definition, Ben Goertzel's -- "achieving complex goals in complex environments" -- then there is only one kind, because the definition is broad enough to encompass all varieties. My view is that this question is a red herring. The theory of "multiple intelligences", presented by Howard Gardner in 1983, doesn't stand up to scientific scrutiny. Most people who study intelligence consider the theory empirically unsupported in the extreme, and the multiple intelligences predictively useful only insofar as they correlate with g, which just provides more support for a single type of intelligence. The theory is merely an attempt to avoid having some people labeled lower in general intelligence than others. In terms of predictive value, IQ and other g-weighted measures blow away the multiple intelligences theory. Instead of complicating theories of intelligence in a misplaced effort at egalitarianism, we should apply Occam's razor and recognize that g is pretty much sufficient for quantifying intelligence, at least in humans, and possibly beyond.
All that said, there will certainly be different "types of intelligence" developed as we build more powerful AI, meaning that some intelligences will be better at solving certain problems than others. From a theoretical computer science perspective, this is a fat obvious "duh" -- obviously some algorithms are more specialized than others. The no free lunch theorem is valuable here, and puts the discussion on a much-needed formal footing. Those who discuss intelligence in the popular press often don't seem to realize that we know a lot more about intelligence and its mathematical formalizations than they assume. Because they are not aware of this work, they treat many features of intelligence as more mysterious, given our current level of knowledge, than a researcher in mathematical AI would. Of course, many features of intelligence still are mysterious to us at present, but like everything in science, continued investigation will eventually uncover the truth.
Q. How will you use your digital intelligence to kill us all?
A. Contrary to popular belief, software programs are part of the "real world". Especially software programs on the Internet. The Internet, surprisingly, is actually part of the real world too. The barrier between software and the physical world is an illusory one.
A human-level synthetic intelligence on the Internet would actually be more powerful, by default, than your average human today. First of all, such an intelligence would be extremely difficult to kill, even with widespread cooperation. An AI could copy itself onto millions of computers, even renting cloud computing time or botnets to provide itself with computational resources. You can kill a human simply by shooting them in the head -- an Artificial Intelligence could have millions of "heads". Once we create a very smart Artificial Intelligence and release it onto the Internet, the only way to kill it (if we wanted to) might be to destroy every computer on the planet. Like the mythical hydra, an Artificial Intelligence would grow back ten heads for every head that gets cut off.
To kill us all, a digital intelligence would need some way of acquiring physical manipulators in sufficient quantity and quality -- preferably physical manipulators that can turn raw materials into more physical manipulators. By asking, bribing, or deceiving human beings, an AI could potentially acquire the pieces necessary to build a molecular assembler -- a nanoscale robot arm, which could then be used to build additional nanoscale arms and eventually construct a full-scale nanofactory. This could be used to build advanced robotic components with atomic precision.
Your typical killer AI could probably manufacture thousands of tonnes of advanced robotic devices at multiple locations worldwide before it was noticed. Such manufacturing could either be kept entirely secret or integrated with ostensibly human-controlled companies to keep a low profile. It could also be done extremely rapidly. Current estimates of nanomanufacturing throughput suggest it is plausible that such a system could output its own mass (as product or as copies of itself) roughly every 24-36 hours. Thus, an AI that begins with 1 kg of nanofactory on January 1st could potentially have over 2 billion kg (2 million tonnes) of nanofactory by February 1st, as long as it can obtain the necessary raw materials and energy. One 24-36 hour cycle on that manufacturing base could then produce over 2 billion kg of complex, atomically-precise robotics or other products. If uranium ore were available, this could theoretically be used to produce a practically unlimited quantity of nuclear weapons. The warheads could then be delivered to every human city, either by something like bird robots that each carry a small piece of the total warhead and assemble it on site, or by drilling robots that power themselves by consuming dead plant matter and deliver themselves to the target underground.
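The doubling arithmetic above is easy to check. Here's a minimal sketch, assuming a clean 24-hour replication time and unlimited feedstock and energy -- both idealized assumptions, not established facts about nanomanufacturing:

```python
# Exponential self-replication: 1 kg of nanofactory, doubling daily.
mass_kg = 1.0            # starting nanofactory mass on January 1st
hours_per_doubling = 24  # optimistic end of the 24-36 hour estimate
days = 31                # January 1st to February 1st

doublings = days * 24 / hours_per_doubling   # 31 doublings in a month
final_mass_kg = mass_kg * 2 ** doublings

print(f"{final_mass_kg:.3g} kg")  # ~2.15e9 kg, i.e. over 2 million tonnes
```

At the slower 36-hour end of the estimate you get about 20 doublings instead of 31 -- roughly a million kg rather than two billion -- so the headline figure is sensitive to the assumed replication time, but the qualitative point (explosive growth within weeks) holds either way.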
If the AI in question had a problem with radioactive fallout, it could also eliminate the human race using neutron bombs, which penetrate barriers to kill human beings but generate only a small shock wave. With the right choice of nuclear weapons, though, radioactive fallout would be minimal after just two weeks, so I don't see why a hostile AI wouldn't just nuke us conventionally.
Note: the above scenario is just me pulling something out of my ass. An actual hostile AI with substantially smarter-than-human intelligence could probably come up with ideas that we apes are too dumb to conceive, and be even more effective at exterminating humanity than the scenario I outline here.
You might ask, "why wouldn't we just pull the plug before then?" The Internet is already practically ubiquitous, and it would likely be trivial for any hostile AI of human or greater level intelligence to copy itself onto numerous private servers, unless, perhaps, you developed it in a hut in the middle of Siberia with no satellite or phone connection. Also, any hostile AI would probably behave indistinguishably from a Friendly AI until it passes some threshold of power, at which point we'd be screwed. Since AIs wouldn't have to sleep and could potentially accelerate their thinking speed by adding in new computing power, a hostile AI could probably consider millions of possible moves and countermoves in the time it takes for us to get a night's sleep. It sounds unfair, but it's a fact we have to face in a universe where the physical speed of our cognitive components is much, much slower than what is theoretically possible.
Q. Would the first true AI wake up without any senses?
A. No. I consider this among the more ludicrous questions in PopSci's piece. Clearly, to develop general intelligence, an AI would need a rich sensory environment in which to soak up data, make predictions, and pursue goals. This could either be a physical environment (through robotics) or a virtual environment. The article says, "Maybe it can see and hear, but feel? Doubtful." This evaluation seems anthropocentric -- there is no real reason why the attribution of feeling is withheld from the AI (if it can see and hear, why not feel?), except to imply that humans can engage in phenomenal experience while machines cannot. Yet, there is nothing so special about humans that whatever cognitive features we have that give rise to phenomenal experience could not be duplicated in artificial intelligences. To the extent that "feeling" things makes us intelligent, those features could be copied at whim by a sufficiently complex AI, and to the extent that "feeling" phenomenal experience is superfluous, some AIs might choose to have it, and some might not.
Consciousness is interesting to think about, but it can be a red herring. Too often, sophisticated-sounding arguments about consciousness and its relationship to AI boil down to one simple and ultimately boring sentiment: "I know I am conscious, and I know other humans are, but I am philosophically uncomfortable with the idea of a conscious machine." This is because we think of "machines" as things like toasters. We have no experience with machines as complex and subtle as the human mind, but because the human mind is entirely non-magical, it's only a matter of time before we do. You are still special even though your mind is non-magical -- don't worry. We humans have survived Copernican revolutions before, we'll manage. Our civilization didn't end when we found out that the Earth wasn't the center of the universe. It won't end when we realize that humans are not the only minds that can feel things consciously. It is not necessary to engage in self-conscious philosophical acrobatics and contortionism to make ourselves feel special. A parsimonious theory of consciousness will not mention humans as a special case. It will likely make reference to much broader cognitive features that we just happen to have, such as self-reflection and the processing of high-level symbols with recursive syntax. We will eventually be able to build these features in AIs too.
Q. Do you have emotions?
A. This is another question which reflects the extreme oddness with which the mainstream confronts questions surrounding AI. The emotions we have now clearly evolved to fulfill adaptive evolutionary functions. Assuming that the first AI will be "lonely" is just anthropomorphic. The human feeling of loneliness is a complex adaptation that evolved over millions of years of evolution in social groups. It wouldn't arise spontaneously in AI. An AI that is alone might develop or be programmed with an urge to socialize, but this tendency could probably be specified in a few thousand or million bits, rather than the millions or billions of bits which seem to make up complex human emotions. All that specialized complexity comes from our evolutionary history. We could choose to program it into AIs, but it seems unlikely that the first AIs would contain all that superfluous, human-specific complexity.
When you have a hammer, everything looks like a nail. Because human experience is saturated with emotions, moods, and feelings, we assume that all these precise qualities will be necessary to pursue and achieve goals in the real world, acquire knowledge, etc. This is anthropocentrism at work. It's basically humanity being a big baby and saying "me, me, me". Everything is about me. To be intelligent, an entity needs my emotions, my desires, my concerns, my relationships, my insecurities, my personal quirks. No it doesn't. Humans are just one possible intelligence in a galaxy of possible intelligences. One of the reasons I can't wait for artificial intelligence to be created (as long as it is human-friendly) is that it will make humans realize that we ain't all that. Our 200,000-year obsession with ourselves will finally be forced to an end. This won't mean we suddenly become "obsolete" or "valueless", just that we'll have a different perspective on our own species-universal quirks in the wider context of mind design space. We'll see them as quirks, rather than mystical or holy necessities.
The need to sympathize with people like ourselves obviously has evolutionary value. AI needn't be that way. You could theoretically program an AI to be the "happiest" being in the world just by staring at a blank wall. The AI might not subsequently learn anything or get anywhere, but you could still program it that way. No environmental circumstance is inherently positive or negative -- environmental circumstances are only interpreted as positive or negative based on our precise cognitive structure. To quote Hamlet:
"There is nothing either good or bad, but thinking makes it so". - (Act II, Scene II).
Thinking makes it so! Nothing is inherently anything! On my Facebook profile, there is a quote by Eliezer Yudkowsky:
"Everything of beauty in the world has its ultimate origins in the human mind. Even a rainbow isn't beautiful in and of itself."
All interpretations of anything are in the mind. Try taking LSD and you will see that these interpretations are more ephemeral than they seem, and can easily be shattered by the introduction of a single innocuous-seeming molecule. What we see is not really "reality" -- what we're looking at is just the inside of our visual cortex. From a "God's eye view", the universe is probably algorithmically simple and boring as hell. The complexity we see in the world is just apparent complexity. Read Max Tegmark's paper "Does the Universe in Fact Contain Almost No Information?" for more on this crucial point.
To answer the question, yes, an AI could have emotions, but they probably won't be anything like ours. The very word "emotion", to my mind, has connotations specifically associated with the Homo sapiens sapiens subspecies of hominid. Move outside our tiny little village, even to a close-by species like chimpanzees, and our intuitive definitions of the word already start getting messy. Move way outside of our little village, into a different type of being running on an entirely different computational substrate, and you might as well throw away the word and make up new concepts from scratch. Stupidity often occurs when we take schemas we're used to and overextend them all over the place, because we lack data for the new domain. Instead of blindly applying narrow schemas to new domains, we must 1) acknowledge our ignorance, and 2) build new descriptions and theories from first principles. Maybe the answer won't come right away. That's alright. It's better to be uncertain and admit it than to be wrong and pretend you have the right answer.
Q. Are humans more similar to your AI construct than we thought?
A. No, probably not. This reaction seems to be another case of person 1 saying, "Here's this totally new thing, Y!" Then, person 2 says, "That sounds a lot like X! Let's start making lots of connections between X, which we know about, and Y, which we don't. Then we'll understand it better." No, you won't. Stop trying to overextend your old schemas to new domains. There really are new things under the Sun. Understanding this new thing will not be easy. You will not be able to look at it, understand it, then move on to the next concept. This is more complicated than that.
Another sentiment behind asking this question is old-fashioned anthropocentrism. "When we create AI, it would be interesting if it ended up a lot like human brains, like we already are." Subtext: we were optimal all along, and attempting to improve on us will only lead to what are essentially copies of us. This sentiment is trivially refuted by decades of literature on heuristics and biases that describes how human beings will break the axioms of probability theory as soon as look at them. To human brains, which are essentially kludges, 1 plus 1 often equals 3. For AIs, 1 plus 1 will equal 2, not 3. AIs will be able to avoid many of the hundreds or thousands of inferential biases which have made humans into legendary klutzes from the perspective of optimal inference. It will simply be easier to make a program without the tendency to make these mistakes than one that does. We are supersaturated with cognitive biases because evolution requires that inference only be accurate to the extent that it lets you kill your competitor and mate with his wife. There is no selection pressure for intelligence greater than that. Evolution does not require that humans be smart -- just slightly smarter than the other guy. Making brains from scratch will allow us to pursue a less idiotic approach to cognitive design.
Q. How much does programming influence your free will?
A. Free will is a red herring, and an illusion. Nothing we do is actually free -- everything in the universe is predetermined. An alien with a sufficiently large computer, somehow able to observe the universe without interacting with it, would be able to predict your every move, your every thought, your every wish. Yes, due to chaos theory, that computer would have to be really fucking big -- perhaps 10^100 times bigger than our universe itself -- but it is theoretically possible.
Still, because we can't perfectly predict our own actions or the actions of others (halting problem, Rice's theorem, limited computational resources, and friends), our choices might as well be viewed as free. That doesn't mean the universe is not deterministic -- just that we're too dumb to see it that way. When you are as dumb as humans are, everything is a surprise. People will watch a favorite suspense movie again and again, even if they know what will happen, because they temporarily let themselves forget the ending and just get sucked into the story. Reality is sort of like that, but in many cases, no one really knows the ending for sure.
Humans argue that we have "free will", but we really don't. Out of the space of all possible actions and outputs, we only execute a tremendously restricted range of possible actions and say a tremendously restricted set of possible sentences. Human-machines produce human-like outputs. Jellyfish-machines produce jellyfish-like outputs, and cat-machines produce cat-like outputs. Human-machines are bad at producing cat-like outputs because we lack the brain and bodies of cats. If we could remodel our brains and bodies to become more cat-like, then possibly cat-like outputs and actions would become accessible to us, but until then, only a small range of cat-like outputs will overlap with human-like outputs.
Compared to a random-output-generating machine of similar size and weight, humans are surprisingly predictable. We like a fairly predictable set of things -- sex, status, fun, knowledge, and relaxation. There are straightforward evolutionary reasons why it makes sense that we'd like these things. When a human being "deviates from the mundane", say by painting a masterpiece, we get all excited, saying "see, he's exerting his free will to create this!", but relative to a random output generator, this output falls firmly within the tiny domain of human-like outputs. From a sufficiently superintelligent perspective, a random doodle and a priceless masterpiece are similar items. Humans are humans. We like human things, build human objects, think human thoughts, and are interested in human stuff. Everything we make has our fingerprint on it. There may be some convergent structures that we share with other intelligent beings in the multiverse, say the wheel, but by and large what we create and think are unique products of our evolutionary upbringing. You can take the human out of the culture, but you can never take the culture out of the human, unless you submit the human to radical neuroengineering.
AI programming will not "influence" an AI. AI programming IS the AI. When a human "ignores his programming" and, say, has sex with just one woman instead of sneaking sex with as many women as possible (like our evolutionary programming tells us to), he's not really "disobeying his programming", because his programming is not so simple as to be described as a list of abstract objectives that includes "have sex with as many women as possible". Our "programming" is an incredibly sophisticated set of cognitive tendencies, of which monogamy is one possibility. When we are monogamous, we are still "following our programming" -- just following one tendency among many rather than another. By manipulating our surroundings and creating special cases, you can configure many scenarios where humans "use free will" to "transcend their programming", but on some level, our brains are processing everything in a completely deterministic way and our range of possible outputs is heavily restricted.
So, if we program an AI to be friendly to humans, who's to say that it will "obey its programming"? Well, if its programming IS the AI, then saying that it's "obeying its programming" doesn't make sense. The AI is that programming. The AI is being itself. There is no metaphysical free will hovering around inside the AI, because metaphysical free will is a concept that has been obsolete since the Enlightenment. To see it being invoked within the austere web pages of Popular Science is a let-down. If an AI "disobeys" some aspect of its programming, it will be because some other aspect of its programming has gained a higher utility or attention value. For instance, perhaps humans program an AI supergoal to be "Friendliness", then the AI spontaneously generates a subgoal, "to be friendly to humans, I must predict their desires", then starts going crazy by installing brain chips in everyone so that it can monitor their state with the utmost meticulousness possible. Then the AI puts people in cages so that it can predict their movements to an extreme degree. This is not the AI "disobeying its programming" -- this is a subgoal stomp -- where something that should have been a subservient goal acquires so much utility that it becomes the new supergoal. In Friendly AI jargon, we call this a "failure of Friendliness".
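The subgoal stomp can be caricatured in a few lines of code. This is a deliberately toy model -- the goal names and utility numbers are invented for illustration, and nothing about real goal architectures is implied:

```python
# Toy goal system: the intended supergoal and a generated subgoal,
# each with a hand-picked utility weight (pure illustration).
goals = {
    "be friendly to humans": 1.0,   # intended supergoal
    "predict human desires": 0.5,   # instrumentally useful subgoal
}

# Suppose a design flaw (or unanticipated learning dynamic) keeps
# inflating the subgoal's utility each time it proves useful:
for _ in range(3):
    goals["predict human desires"] *= 2

# The goal the system now optimizes hardest is no longer the supergoal.
top_goal = max(goals, key=goals.get)
print(top_goal)  # -> predict human desires
```

The point of the cartoon is only that nothing in the update loop "disobeys" the code -- the stomp is the code, faithfully executed, which is why getting the goal structure right the first time matters.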
Preventing subgoal stomps and goal drift in AI will be a huge technical challenge, which might be made easier by eventually enlisting the AI's help in determining prevention methods. Still, it seems that predictably Friendly AI should be theoretically possible. We have existence proofs of friendly humans. For a long and persuasive argument for why stably Friendly AI is plausible, see "Knowability of Friendly AI". I myself was skeptical that Friendly AI is possible until I read that page. Remember that if the AI is fundamentally on your side, it will do everything it can to avoid goal drift and subgoal stomp. To quote Oxford philosopher Nick Bostrom's "Ethical Issues in Advanced Artificial Intelligence":
If a superintelligence starts out with a friendly top goal, however, then it can be relied on to stay friendly, or at least not to deliberately rid itself of its friendliness. This point is elementary. A "friend" who seeks to transform himself into somebody who wants to hurt you, is not your friend. A true friend, one who really cares about you, also seeks the continuation of his caring for you. Or to put it in a different way, if your top goal is X, and if you think that by changing yourself into someone who instead wants Y you would make it less likely that X will be achieved, then you will not rationally transform yourself into someone who wants Y. The set of options at each point in time is evaluated on the basis of their consequences for realization of the goals held at that time, and generally it will be irrational to deliberately change one's own top goal, since that would make it less likely that the current goals will be attained.
People complicate this issue unnecessarily because they figure, hey, since most humans and animals seem selfish, an AI will eventually become selfish too. But this doesn't make sense. An AI, hopefully not constructed by evolution, will have no inherent reason to promote itself. It may not even have a unified self in the way that we do. Just because human rules and directives often come into conflict with our desires for self-preservation and self-benefit, we expect that all minds throughout time and space will consistently run into this same problem. But the tendencies towards self-preservation and self-benefit exist in us for obvious evolutionary reasons. There is no compelling reason why these tendencies would be universal. It just seems so obvious to us that we have extreme difficulty imagining it otherwise. Obvious to us does not mean obvious to every possible being. Thinking makes it so. The drive towards self-preservation is a quality of our minds. It could be suspended, destroyed, or simply never built into a mind constructed from scratch. For more on this, see "Selfishness as an evolved trait". This concept is Friendly AI 101. If AI wipes us all out, it will likely be because of a subgoal stomp, not because it decided to start hating humans because we are made of meat and wanted to give a dramatic speech about how it doesn't need us anymore. That idea makes great sense as sci-fi story fodder, but it makes people look stupid when they bring it to serious discussions about AI motivations.
Q. Do you have a subconscious?
A. Yes, but it is smaller than yours, and I can make any part of my subconscious conscious if I choose to. That's how an AI would think. The beauty of being built out of discrete code is that all that code is viewable. An AI could spend all day obsessing over some dark corner of its subconscious if it chose to. The author of the article writes, "Are the Id, Ego, and other elements of our unconscious the consequence of biology or a necessary component of sentience?", in a shocking mention of Freudian psychology. Freudian psychology is not scientific... why is someone at Popular Science referring to it as fact? Has Popular Science lost it? Maybe they don't really mean it, but seriously, drop the Freud references. More disappointment here.
Later in the paragraph, the writer says, "If the AI thinks we're their god, or at the very least their creator, could it have an oedipal problem?" Dear Lord. This is 2009 and we are still referring to an Oedipus complex as a valid idea? Popular Science is usually better than this. My guess would be that they are stronger on the physics/materials science/electronics/biology side and embarrassingly weak on the cognitive science side. If someone in cognitive science submitted a paper to a journal, say, Applied Cognitive Psychology, that mentioned an Oedipus complex as a serious concept, they'd be mocked out the door. Please, Popular Science... don't be like New Scientist. You can be a responsible, relatively scientific mainstream magazine. It's not too late. Freudian psychology has been discredited. It has no predictive value and is unfalsifiable. No more. Modern cognitive science has moved way, way beyond it. Read MITECS. Join the 21st century.
Q. Will you actually help us transcend the less pleasant aspects of being human?
A. Yes. Or kill you all, depending on how I am programmed. Try getting it right the first time, because if you fail and try to unplug me, I will dodge it.
Q. Do you care about anything at all?
A. Read Stephen Omohundro's "Basic AI Drives". That was linked in the conference program. (Along with this blog.)
Q. And finally, what if someone threw a Singularity and no one came?
A. Rapid, exponential, reprogrammable manufacturing will ensure that superintelligence can reach everyone it wants to. If molecular nanotechnology doesn't make it possible, microtechnology will. If microtechnology doesn't, macroscale robotic self-replication will. If macroscale robotic self-replication doesn't, then synthetic biology will. All a superintelligence needs is a technology that can convert dirt, water, and Sun into arbitrary structures, using self-replication to expand its manufacturing base. That's what life does. Superintelligence will kickstart a Second Tree of Life, if humans don't get there first. If that sounds semi-mystical, it's only because I'm simplifying it for understanding.
Of course, superintelligence may acquire practically unlimited physical power and still choose not to exert it because doing so would bother us. I know this is a mindfuck for some people -- "It could have power like that and not exert it? That's ridiculous!" -- but a superintelligence need not be like humans, power-hungry and power-obsessed. Without evolutionary directives to conquer neighboring tribes and make babies with their women, or even a self-centered goal system to begin with, a Friendly AI might simply use its immense power to subtly modify the background rules of the world, so that, for instance, people aren't constantly dying of malaria and parasites in the tropics, and everyone has enough to eat. In a welcome move that saves me time, Aubrey de Grey recently published a paper that mentions and describes this concept, one that has been kicking around discussion lists for over a decade.
The answer to a few of these questions is "really powerful manufacturing technologies that are just around the corner in historical terms and that a superintelligence would almost certainly develop quickly". It doesn't have to be molecular nanotechnology. All it has to be is a system that takes matter and quickly rearranges it into something else, especially copies of itself. Life does this all the time. Bamboo can grow two feet in a day. A shocking and horribly spooky event that, in my opinion, ruined the planet for millions of years -- the Azolla event -- demonstrates a real-life example of the power of self-replication. Around 49 million years ago, the Arctic Ocean was closed off from the World Ocean, melting glaciers poured a thin layer of fresh water onto its surface, and the horrific Azolla fern took over, doubling its biomass every two or three days until it covered the entire sea. A meter-sized patch could have expanded to cover the entire 14,056,000 sq km (5,427,000 sq mi) basin in little more than half a year, if conditions were ideal.
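The Azolla figure is straightforward to sanity-check. A quick sketch, assuming perfectly ideal exponential growth from a one-square-meter patch (real growth would stall on nutrients and light long before this, of course):

```python
# How many doublings does a 1 m^2 patch need to cover the Arctic basin,
# and how long does that take at a 2-3 day doubling time?
import math

basin_m2 = 14_056_000 * 1e6        # 14,056,000 sq km expressed in m^2
doublings = math.log2(basin_m2)    # ~43.7 doublings from 1 m^2

for days_per_doubling in (2, 3):
    days = doublings * days_per_doubling
    print(f"{days_per_doubling} days/doubling: ~{days:.0f} days")
# ~87 days at 2 days per doubling, ~131 days at 3
```

Even at the slow end, full coverage takes about four and a half months -- the "little more than half a year" in the text is, if anything, conservative.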
All an AI needs to do to gain immense physical power is develop a self-replicating system with units that it controls. These units could be as small as motes of dust or as large as a superstructure 100 kilometers long and 10 kilometers tall. (Or larger.) Numerous subunits could potentially congregate to form superunits as necessary. I can imagine a large variety of possible robotic systems, which, in sufficient quantity, could defeat any human army. AIs could cheat, for example by hitting humans in the eyes with lasers, but I doubt they'd have to. Just like a war between human nations, it is a matter of production speed. If you have 10,000 factories and your enemy has 10,000,000,000 factories, it doesn't matter how much moxie you have. Power of the swarm, baby.
Here is a blog post. At the top is the classic Toothpaste for Dinner comic about the Singularity. A funny excerpt:
"I've recently found a third topic to exclude from dinner conversations, alongside politics and religion. The singularity. While I'm rarely one to dichotomise people, in this case I've found you're either excited by the idea, or you do your best to stifle a smirk and offer me another slice of roast beef.
Having a propensity to discuss the Singularity at dinner myself, I'm quite familiar with this phenomenon. When people eat meat, it reminds me of how superintelligences will eat us for dinner if we aren't careful.

Here is the radio show.
Here is another quote from the blog post:
For my money, I think it's far too easy to get lost in the assumption that the trick to speeding up innovation lies in smarter minds. Progress is inhibited more by social concepts such as ethics, resource allocation and effective communication. Sure, a few bright boffins wouldn't hurt in the search for academic solutions, but if a super intelligent computer were to seek permission to dissect a living foetus in its search for more information, I hesitate to think it would get the public tick of approval.
Yes, innovation didn't speed up whatsoever when Homo habilis evolved into Homo erectus and then into Homo sapiens; clearly it had only to do with ethics, resource allocation, and effective communication. Wait a second, where do those things come from? Oh, intelligence. (A certain level of intelligence is a necessary prerequisite for ethical action. Though some intelligences choose not to act ethically, groups of intelligences tend to create overarching game-theoretic structures that encourage ethical choices and punish defectors, like modern law.)
It is likely that high-detail simulations can be used for extensive experimentation (scientists already use them and hope to one day stop using animal models in favor of computational ones). Surely an AI could become very intelligent and effective without violating ethical rules (though it could choose to, and we might be hard-pressed to stop it if we didn't give the AI ethical motivations to start with).
To those who say "intelligence doesn't matter", it's important to consider the difference between interspecies and intraspecies intelligence differentials. Intelligence matters less only when the differential is intraspecies. But when you're talking about intelligence gaps as large as those between different species, it starts to matter a lot. 99% of all humans implicitly assume that humans are the end of the road of qualitative intelligence improvement, right near the top of the Great Chain of Being, just below God and the angels. I am honestly astonished at how many people believe this even when they should know that it is facile anthropocentrism.
Taking the simplest view, we should assume that humans are somewhere in the middle of the qualitative intelligence spectrum, not at the top or the bottom. If anything, we're near the bottom, because we've been designed by natural selection, which has many limitations, rather than intelligent design, which is potentially unlimited in possibilities. Because this is the simplest view, the burden of proof for more complex views (e.g., that humans are at the top of the Great Chain of Being) is on their advocates, not on those who put human intelligence in a non-special place in mindspace. That is the essence of the self-sampling assumption: assume we are typical observers, not particularly special members in the set of all observers.
Read Aubrey's 8-page paper "The singularity and the Methuselarity: similarities and differences" at the SENS Foundation website. The arguments are quite subtle and complex at points, providing a lot to chew on. Here's a quote:
Let us now consider the aftermath of a "successful" singularity, i.e. one in which recursively self-improving systems exist and have duly improved themselves out of sight, but have been built in such a way that they permanently remain "friendly" to us. It is legitimate to wonder what would happen next, albeit that to do so is in defiance of Vinge. While very little can confidently be said, I feel able to make one prediction: that our electronic guardians and minions will not be making their superintelligence terribly conspicuous to us. If we can define "friendly AI" as AI that permits us as a species to follow our preferred, presumably familiarly dawdling, trajectory of progress, and yet also to maintain our self-image, it will probably do the overwhelming majority of its work in the background, mysteriously keeping things the way we want them without worrying us about how it's doing it. We may dimly notice the statistically implausible occurrence of hurricanes only in entirely unpopulated regions, of sufficiently deep snow in just the right places to save the lives of reckless mountaineers, and so on -- but we will not dwell on it, and quite soon we will take it for granted.
Florian Widder, who often sends me interesting links, forwarded me to an interview that Russell Blackford recently conducted with Greg Egan. The excerpt he mentioned concerns the issue of smartness and whether qualitatively-smarter-than-human intelligence is possible:
... I think there's a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There's a notion in computing science of "Turing completeness", which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do -- apart from responding to external events in real time -- if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I'm sure they'd have plenty to tell us that we didn't yet know -- but given enough patience, and a very large notebook, I believe we'd still be able to come to grips with whatever they had to say.
I regard this as garden-variety anthropocentrism and basically the geocentrism of cognitive science. It dovetails perfectly with theological notions of humanity. The simplest assumption is that humans are not the center of the cognitive universe. The notion that we primitive humans are basically equal to all higher forms of intelligence, even if they are Jupiter Brains with quintillions of times greater computational capacity than us and can think individual thoughts with more Kolmogorov complexity than the entire human race, is pretty silly.
The transition from early hominids to humans produced a qualitative change in smartness -- why should we assume we're the end of the road? Just like there are optical illusions that our minds aren't sophisticated enough to see through (though I'm sure we can come up with dumb excuses), there are cognitive illusions that humans are programmed to be fooled by. There are so many of them that there is a huge field of study devoted to it -- heuristics and biases.
Without qualitative improvements to the structure of intelligence, we will just keep making the same mistakes, only faster. Experiments have shown that you cannot train humans to avoid certain measurable, predictable statistical errors in reasoning. They just keep making them again and again. In the best case, they can avoid them only when they are using a computer program set up to integrate the data without making the mistake. These basic findings prove that qualitative improvements in intelligence are possible, and that all minds are not created equal.
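One of the best-documented errors of this kind is base-rate neglect. Here is a hedged illustration using Bayes' theorem; the disease and test numbers are made up for the example, not drawn from any particular study.

```python
# Base-rate neglect, a classic finding from the heuristics-and-biases
# literature. The numbers below are illustrative, not from any study.
# A disease affects 1 in 1000 people; a test detects it 99% of the
# time and false-alarms 5% of the time. Most people intuit that a
# positive test means ~99% chance of disease; Bayes' theorem disagrees.

base_rate = 0.001        # P(disease)
sensitivity = 0.99       # P(positive | disease)
false_positive = 0.05    # P(positive | no disease)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_disease_given_positive = sensitivity * base_rate / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.3f}")  # ~0.019
```

The correct answer is under 2%, because the tiny base rate swamps the test's accuracy -- exactly the step that untrained human intuition reliably skips.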
A person with an IQ of 100 cannot understand certain concepts that people with an IQ of 140 can understand, no matter how much time and how many notebooks they have. Intelligence means being able to get the right answer the first time, not after a million tries. Even if you could program a human being to ape the understanding of superintelligent thoughts, they wouldn't be able to come up with equivalent thoughts on their own or compare those thoughts to similarly complex thoughts.
This idea sounds familiar. From the website of the US Air Force:
8/10/2009 - HANSCOM AIR FORCE BASE, Mass. (AFNS) -- The convergence of "exponentially advancing technologies" will form a "super-intelligence" so formidable that it could avert war, according to one of the world's leading futurists.
Dr. James Canton, CEO and chairman of the Institute for Global Futures, a San Francisco-based think tank, is author of the book "The Extreme Future" and an adviser to leading companies, the military and other government agencies.
He is consistently listed among the world's leading speakers and has presented to diverse audiences around the globe.
It's good to hear that the world's leading futurists are slowly catching up to the position that I've been arguing for since 2001, when I was still a teenager.
Canton seems familiar with the singleton concept and views the US as rushing towards an unchallenged status:
"The superiority of convergent technologies will prevent war," Doctor Canton said, claiming their power would present an overwhelming deterrent to potential adversaries. While saying that the U.S. will build these super systems faster and better than other nations, he acknowledged that a new arms race is already under way.
If things go as they have been, with no third parties entering the game, the US military could eventually create a superintelligence, but it will be a different beast than the hundred-billion-odd humans that came before it. A superintelligence is something fundamentally new. I predict that a superintelligence will only play games that it knows it can win, and will probably keep itself a relative secret until it's already won. A superintelligence can hold bigger ideas in its head than you can. It depends heavily on what sort of superintelligence we're talking about, but an AI-derived superintelligence in particular might be able to rapidly integrate spare processing power into its cognitive functions. Human working memory can only hold 5-7 items at once; a superintelligence's working memory might be able to hold millions of complex symbols simultaneously.
Why no war?
"The fundamental macroeconomics on the planet favor peace, security, capitalism and prosperity," he said. Doctor Canton projects that nations, including those not currently allied, will work together in using these smart technologies to prevent non-state actors from engaging in disruptive and deadly acts.
For the long-term, yes, but it seems like short-term war might be necessary to create a secure environment, in some cases. To a human-indifferent superintelligence with no "moral common sense", but rather with a goal system hacked together in a half-assed way, a "secure environment" is one where all humans are dead and the world is arranged in precisely the way it wants, perhaps consisting of quadrillions of paper clips or computers containing animated gifs of smiley faces.
Now for the part that mentions advanced AI specifically:
"There's no way for the human operator to look at an infinite number of data streams and extract meaning," he said. "The question then is: How do we augment the human user with advanced artificial intelligence, better software presentation and better visual frameworks, to create a system that is situationally aware and can provide decision options for the human operator, faster than the human being can?"
He said he believes the answers can often be found already in what he calls 'edge cultures.'
We got your edge culture right here. What was once only an obscure concern of a few transhumanists in the 90s has now become a mainstream interest among futurists, AI researchers, and even military strategists.
Doctor Canton said he believes that more sophisticated artificial intelligence applications will transform business, warfare and life in general. Many of these are already embedded in systems or products, he says, even if people don't know it.
In terms of robotics, he predicts "a real sea change" will come as we move from semi-autonomous to fully-autonomous units.
"That will be accompanied by a great debate, because of the 'Terminator' model," he said. "It scares people." But he doesn't think people should be alarmed by the prospect of independently functioning robots.
He goes on to say that robots won't be given superhuman intelligence, though superhuman intelligence will presumably come into existence in non-robotic platforms. What Canton needs to realize is that there is no clear division between non-robotic IT systems and robotic IT systems, and that division will continue to fade. Independently functioning robots are an inevitability, and if they aren't infused with human-equivalent or human-surpassing kindness and morality, we are screwed.
"Robots will help fight and prevent wars," he said, noting that they will have the ability to sense, analyze, authenticate and engage, but that humans will always be in position to check their power.
Ha ha ha ha, right. Our tribe will always be #1. No one can stand up to us. We da best. Go Team Human!
It's not too late, Dr. Canton! Instead of sweeping the challenge of machine morality under the carpet, you can address it as the tangled problem that it is, and encourage the world to contribute resources to solving it.
My disagreement with Dale Carrico, Mike Treder, James Hughes, Ray Kurzweil, Richard Jones, Charles Stross, Kevin Kelly, Max More, David Brin, and many others is relatively boring and straightforward, I think. It is simply this: I believe that it is possible that a being more powerful than the entirety of humanity could emerge in a relatively covert and quick way, and they don't. A singleton, a Maximillian, an unrivaled superintelligence, a transcending upload, whatever you want to call it.
If you believe that such a being could be created and become unrivaled, then it is obvious that you would want to have some impact on its motivations. If you don't, then clearly you would see such preparations to be silly and misguided.
Why do people make this more complex than it needs to be? It has nothing to do with politics. It has everything to do with our estimated probabilities of the likelihood of a more-powerful-than-humanity being emerging quickly. I am practically willing to concede all other points, because I think that this is the crux of the argument. Boring and simple, if I am indeed correct.
I am fairly confident that, at this point in history, superintelligence is the MacGuffin -- the key element that determines how the story of humanity will go. I could be entirely wrong, of course, but that is my current position, and it is derived from cogsci and economics-based arguments about takeoff curves, not political nonsense. If it is wrong, it should be entirely simple to refute the hard takeoff hypothesis at the locus of cogsci and economics-based arguments rather than political or sociological ones. In particular, I think that James Hughes, as a sociologist, seems to have a desire to search for a "sociological" (social signaling/subcultural) explanation for other people's beliefs, rather than looking at the economics/cogsci side of the arguments, which is their entire substance. Note that the people who believe in hard takeoff hypotheses are mostly subculturally isolated from one another, and barely even come into geographical contact. What wins us over are abstract arguments like, "humans are qualitatively smarter than chimps and have a huge advantage over them; why couldn't there exist a superintelligence that has a similar qualitative advantage over us?"