I thought I would answer the 10 questions posed by Popular Science on the Singularity.
Q. Is there just one kind of consciousness or intelligence?
A. It depends entirely on how you define them. If you define intelligence using what I consider the simplest and most reasonable definition, Ben Goertzel's, "achieving complex goals in complex environments", then there is only one kind, because the definition is broad enough to encompass all varieties. My view is that this question is a red herring. The theory of "multiple intelligences", presented by Howard Gardner in 1983, doesn't stand up to scientific scrutiny. Most people who study intelligence consider the theory empirically unsupported in the extreme, and the multiple intelligences predictively useful only insofar as they correlate with g, which just provides more support for a single type of intelligence. The theory is merely an attempt to avoid having some people labeled lower in general intelligence than others. In terms of predictive value, IQ and other g-weighted measures blow away the multiple intelligences theory. Instead of making theories of intelligence unnecessarily complicated in a misplaced effort at egalitarianism, we should apply Occam's razor and recognize that g is pretty much sufficient for quantifying intelligence, at least in humans, and possibly beyond.
All that said, there will certainly be different "types of intelligence" developed as we build more powerful AI, meaning that some intelligences will be better at solving certain problems than others. From a theoretical computer science perspective, this is a fat obvious "duh". Obviously some algorithms are more specialized than others. The no free lunch theorem is valuable here, and puts the discussion on a much-needed formal footing. Those who discuss intelligence in the popular press often seem not to realize that we actually know a lot more about intelligence and its mathematical formalizations than they assume. Because they are not aware of this work, they tend to treat many features of intelligence as more mysterious, given our current level of knowledge, than a researcher in mathematical AI would. Of course, many features of intelligence still are mysterious to us at the present time, but like everything in science, continued investigation will eventually uncover the truth.
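For anyone who wants to see the no free lunch idea in action rather than take it on faith, here is a toy sketch of my own (the three-point domain and all the names are invented for the demo): enumerate every possible objective function on a tiny search space, and check that two different fixed search orders earn exactly the same average score. No search strategy beats any other once you average over all problems.

```python
from itertools import product

DOMAIN = [0, 1, 2]

def best_after_k(order, f, k=2):
    # best objective value seen in the first k evaluations
    return max(f[x] for x in order[:k])

# enumerate every possible objective function f: DOMAIN -> {0, 1, 2}
functions = [dict(zip(DOMAIN, vals)) for vals in product(range(3), repeat=3)]

# two different "search algorithms" (fixed evaluation orders)
for order in ([0, 1, 2], [2, 0, 1]):
    avg = sum(best_after_k(order, f) for f in functions) / len(functions)
    print(order, avg)  # the same average, whatever the order
```

Specialization only pays off on a restricted class of problems; averaged over all of them, it washes out, which is exactly the formal point the theorem makes.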
Q. How will you use your digital intelligence to kill us all?
A. Contrary to popular belief, software programs are part of the "real world". Especially software programs on the Internet. The Internet, surprisingly, is actually part of the real world too. The barrier between software and the physical world is an illusory one.
A human-level synthetic intelligence on the Internet would actually be more powerful, by default, than your average human today. First of all, such an intelligence would be extremely difficult to kill, even with widespread cooperation. An AI could copy itself onto millions of computers, even renting cloud computing capacity or botnets to provide itself with computational resources. You can kill a human simply by shooting them in the head -- an Artificial Intelligence could have millions of "heads". Once we create a very smart Artificial Intelligence and release it on the Internet, the only way to kill it (if we wanted to) might be to destroy every computer on the planet. Like the mythical hydra, an Artificial Intelligence would grow back ten heads for every head that gets cut off.
To kill us all, a digital intelligence would need some way of acquiring physical manipulators in sufficient quantity and quality -- preferably physical manipulators that can turn raw materials into more physical manipulators. By asking, bribing, or deceiving human beings, an AI could potentially acquire the pieces necessary to build a molecular assembler -- a nanoscale robot arm -- which could then be used to build additional nanoscale arms and eventually construct a full-scale nanofactory. This could be used to build advanced robotic components with atomic precision.
Your typical killer AI could probably manufacture thousands of tonnes of advanced robotic devices at multiple locations worldwide before it was noticed. Such manufacturing could either be kept entirely secret, or integrated with ostensibly human-controlled companies to keep a low profile. It could also be done extremely rapidly. Current estimates of nanomanufacturing throughput suggest it is plausible that such a system could output its own mass (either as product or as copies of itself) roughly every 24-36 hours. Thus, an AI that begins with 1 kg of nanofactory on January 1st could potentially have over 2 billion kg (2 million tonnes) of nanofactory by February 1st, as long as it can obtain the necessary raw materials and energy. One 24-36 hour cycle on that manufacturing base could then produce over 2 billion kg of complex, atomically precise robotics or other products. If uranium ore were available, this could theoretically be used to produce a practically unlimited quantity of nuclear weapons. The warheads could then be delivered to every human city either by something like bird robots that each carry a small piece of a warhead and assemble it on site, or by drilling robots that power themselves by consuming dead plant matter and travel to the target underground.
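The February 1st figure is just repeated doubling. A quick back-of-the-envelope check (my own sketch, taking the optimistic 24-hour end of the replication-time estimate):

```python
# doubling-time arithmetic for the hypothetical nanofactory scenario
mass_kg = 1.0          # starting nanofactory mass on January 1st
doubling_hours = 24    # optimistic end of the 24-36 hour estimate
days = 31              # January 1st through February 1st

doublings = days * 24 // doubling_hours   # 31 doublings
final_mass_kg = mass_kg * 2 ** doublings
tonnes = final_mass_kg / 1000
print(f"{final_mass_kg:,.0f} kg = {tonnes:,.0f} tonnes")
# ~2.1 billion kg, i.e. ~2.1 million tonnes
```

At the slow 36-hour end you only get about 20 doublings in the month (roughly a million kg), which shows how sensitive these scenarios are to the assumed replication time.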
If the AI in question had a problem with radioactive fallout, it could also just eliminate the human race using neutron bombs, which penetrate barriers to kill human beings but generate only a small shock wave. With the right bomb designs, though, radioactive fallout would decay to minimal levels after just two weeks, so I don't see why a hostile AI wouldn't just nuke us conventionally.
Note: the above scenario is just me pulling something out of my ass. An actual hostile AI with substantially smarter-than-human intelligence could probably come up with ideas that us apes are too dumb to conceive, and be even more effective at exterminating humanity than the scenario I outline here.
You might ask, "why wouldn't we just pull the plug before then?" The Internet is already practically ubiquitous, and it would likely be trivial for any hostile AI of human or greater intelligence to copy itself onto numerous private servers, unless, perhaps, you developed it in a hut in the middle of Siberia with no satellite or phone connection. Also, any hostile AI would probably behave indistinguishably from a Friendly AI until it passed some threshold of power, at which point we'd be screwed. Since AIs wouldn't have to sleep and could potentially accelerate their thinking speed by adding new computing power, a hostile AI could probably consider millions of possible moves and countermoves in the time it takes us to get a night's sleep. It sounds unfair, but it's a fact we have to face in a universe where the physical speed of our cognitive components is much, much slower than what is theoretically possible.
Q. Would the first true AI wake up without any senses?
A. No. I consider this among the more ludicrous questions in PopSci's piece. Clearly, to develop general intelligence, an AI would need a rich sensory environment in which to soak up data, make predictions, and pursue goals. This could either be a physical environment (through robotics) or a virtual environment. The article says, "Maybe it can see and hear, but feel? Doubtful." This evaluation seems anthropocentric -- there is no real reason why the attribution of feeling is withheld from the AI (if it can see and hear, why not feel?), except to imply that humans can engage in phenomenal experience while machines cannot. Yet, there is nothing so special about humans that whatever cognitive features we have that give rise to phenomenal experience could not be duplicated in artificial intelligences. To the extent that "feeling" things makes us intelligent, those features could be copied at whim by a sufficiently complex AI, and to the extent that "feeling" phenomenal experience is superfluous, some AIs might choose to have it, and some might not.
Consciousness is interesting to think about, but it can be a red herring. Too often, sophisticated-sounding arguments about consciousness and its relationship to AI boil down to one simple and ultimately boring sentiment: "I know I am conscious, and I know other humans are, but I am philosophically uncomfortable with the idea of a conscious machine." This is because we think of "machines" as things like toasters. We have no experience with machines as complex and subtle as the human mind, but because the human mind is entirely non-magical, it's only a matter of time before we build one. You are still special even though your mind is non-magical -- don't worry. We humans have survived Copernican revolutions before, we'll manage. Our civilization didn't end when we found out that the Earth wasn't the center of the universe. It won't end when we realize that humans are not the only minds that can feel things consciously. It is not necessary to engage in self-conscious philosophical acrobatics and contortionism to make ourselves feel special. A parsimonious theory of consciousness will not mention humans as a special case. It will likely make reference to much broader cognitive features that we just happen to have, such as self-reflection and the processing of high-level symbols with recursive syntax. We will eventually be able to build these features in AIs too.
Q. Do you have emotions?
A. This is another question which reflects the extreme oddness with which the mainstream confronts questions surrounding AI. The emotions we have now clearly evolved to fulfill adaptive evolutionary functions. Assuming that the first AI will be "lonely" is just anthropomorphic. The human feeling of loneliness is a complex adaptation that evolved over millions of years of evolution in social groups. It wouldn't arise spontaneously in AI. An AI that is alone might develop or be programmed with an urge to socialize, but this tendency could probably be specified in a few thousand or million bits, rather than the millions or billions of bits which seem to make up complex human emotions. All that specialized complexity comes from our evolutionary history. We could choose to program it into AIs, but it seems unlikely that the first AIs would contain all that superfluous, human-specific complexity.
When you have a hammer, everything looks like a nail. Because human experience is saturated with emotions, moods, and feelings, we assume that all these precise qualities will be necessary to pursue and achieve goals in the real world, acquire knowledge, etc. This is anthropocentrism at work. It's basically humanity being a big baby and saying "me, me, me". Everything is about me. To be intelligent, an entity needs my emotions, my desires, my concerns, my relationships, my insecurities, my personal quirks. No it doesn't. Humans are just one possible intelligence in a galaxy of possible intelligences. One of the reasons I can't wait for artificial intelligence to be created (as long as it is human-friendly) is that it will make humans realize that we ain't all that. Our 200,000-year obsession with ourselves will finally be forced to an end. This won't mean we suddenly become "obsolete" or "valueless", just that we'll have a different perspective on our own species-universal quirks in the wider context of mind design space. We'll see them as quirks, rather than mystical or holy necessities.
The need to sympathize with people like ourselves obviously has evolutionary value. AI needn't be that way. You could theoretically program an AI to be the "happiest" being in the world just by staring at a blank wall. The AI might not subsequently learn anything or get anywhere, but you could still program it that way. No environmental circumstance is inherently positive or negative -- environmental circumstances are only interpreted as positive or negative based on our precise cognitive structure. To quote Hamlet:
"There is nothing either good or bad, but thinking makes it so". - (Act II, Scene II).
Thinking makes it so! Nothing is inherently anything! On my Facebook profile, there is a quote by Eliezer Yudkowsky:
"Everything of beauty in the world has its ultimate origins in the human mind. Even a rainbow isn't beautiful in and of itself."
All interpretations of anything are in the mind. Try taking LSD and you will see that these interpretations are more ephemeral than they seem, and can easily be shattered by the introduction of a single innocuous-seeming molecule. What we see is not really "reality" -- what we're looking at is just the inside of our visual cortex. From a "God's eye view", the universe is probably algorithmically simple and boring as hell. The complexity we see in the world is just apparent complexity. Read Max Tegmark's paper "Does the Universe in Fact Contain Almost No Information?" for more on this crucial point.
To answer the question, yes, an AI could have emotions, but they probably won't be anything like ours. The very word "emotion", to my mind, has connotations specifically associated with the Homo sapiens sapiens subspecies of hominid. Move outside our tiny little village, even to a close-by species like chimpanzees, and our intuitive definitions of the word already start getting messy. Move way outside of our little village, into a different type of being running on an entirely different computational substrate, and you might as well throw away the word and make up new concepts from scratch. Stupidity often occurs when we take schemas we're used to and overextend them all over the place, because we lack data for the new domain. Instead of blindly applying narrow schemas to new domains, we must 1) acknowledge our ignorance, and 2) build new descriptions and theories from first principles. Maybe the answer won't come right away. That's alright. It's better to be uncertain and admit it than to be wrong and pretend you have the right answer.
Q. Are humans more similar to your AI construct than we thought?
A. No, probably not. This reaction seems to be another case of person 1 saying, "Here's this totally new thing, Y!" Then, person 2 says, "That sounds a lot like X! Let's start making lots of connections between X, which we know about, and Y, which we don't. Then we'll understand it better." No, you won't. Stop trying to overextend your old schemas to new domains. There really are new things under the Sun. Understanding this new thing will not be easy. You will not be able to look at it, understand it, then move on to the next concept. This is more complicated than that.
Another sentiment behind asking this question is old-fashioned anthropocentrism. "When we create AI, it would be interesting if it ended up a lot like human brains, like we already are." Subtext: we were optimal all along, and attempting to improve on us will only lead to what are essentially copies of us. This sentiment is trivially refuted by decades of literature on heuristics and biases that describes how human beings will break the axioms of probability theory as soon as look at them. To human brains, which are essentially kludges, 1 plus 1 often equals 3. For AIs, 1 plus 1 will equal 2, not 3. AIs will be able to avoid many of the hundreds or thousands of inferential biases which have made humans into legendary klutzes from the perspective of optimal inference. It will simply be easier to make a program without the tendency to make these mistakes than one that does. We are supersaturated with cognitive biases because evolution requires that inference only be accurate to the extent that it lets you kill your competitor and mate with his wife. There is no selection pressure for intelligence greater than that. Evolution does not require that humans be smart -- just slightly smarter than the other guy. Making brains from scratch will allow us to pursue a less idiotic approach to cognitive design.
Q. How much does programming influence your free will?
A. Free will is a red herring, and an illusion. Nothing we do is actually free -- everything in the universe is predetermined. An alien with a sufficiently large computer, somehow able to observe the universe without interacting with it, would be able to predict your every move, your every thought, your every wish. Yes, due to chaos theory, that computer would have to be really fucking big, perhaps 10^100 times bigger than our universe itself, but it would still be theoretically possible.
Still, because we can't perfectly predict our own actions or the actions of others (halting problem, Rice's theorem, limited computational resources, and friends), our choices might as well be viewed as free. That doesn't mean the universe is not deterministic -- just that we're too dumb to see it that way. When you are dumb as humans are, everything is a surprise. People will watch a favorite suspense movie again and again, even if they know what will happen, because they temporarily let themselves forget the ending and just get sucked into the story. Reality is sort of like that, but in many cases, no one really knows the ending for sure.
Humans argue that we have "free will", but we really don't. Out of the space of all possible actions and outputs, we only execute a tremendously restricted range of possible actions and say a tremendously restricted set of possible sentences. Human-machines produce human-like outputs. Jellyfish-machines produce jellyfish-like outputs, and cat-machines produce cat-like outputs. Human-machines are bad at producing cat-like outputs because we lack the brain and bodies of cats. If we could remodel our brains and bodies to become more cat-like, then possibly cat-like outputs and actions would become accessible to us, but until then, only a small range of cat-like outputs will overlap with human-like outputs.
Compared to a random-output-generating machine of similar size and weight, humans are surprisingly predictable. We like a fairly predictable set of things -- sex, status, fun, knowledge, and relaxation. There are straightforward evolutionary reasons why it makes sense that we'd like these things. When a human being "deviates from the mundane", say by painting a masterpiece, we get all excited, saying "see, he's exerting his free will to create this!", but relative to a random output generator, this output falls firmly within the tiny domain of human-like outputs. From a sufficiently superintelligent perspective, a random doodle and a priceless masterpiece are similar items. Humans are humans. We like human things, build human objects, think human thoughts, and are interested in human stuff. Everything we make has our fingerprint on it. There may be some convergent structures that we share with other intelligent beings in the multiverse, say the wheel, but by and large what we create and think are unique products of our evolutionary upbringing. You can take the human out of the culture, but you can never take the culture out of the human, unless you submit the human to radical neuroengineering.
AI programming will not "influence" an AI. AI programming IS the AI. When a human "ignores his programming", and, say, has sex with just one woman instead of sneaking sex with as many women as possible (like our evolutionary programming tells us to), he's not really "disobeying his programming", because his programming is not so simple as to be described as a list of abstract objectives that includes "have sex with as many women as possible". Our "programming" is an incredibly sophisticated set of cognitive tendencies, into which monogamy falls firmly as one possibility. When we are monogamous, we are still "following our programming" -- just favoring one tendency among many. By manipulating our surroundings and creating special cases, you can configure many scenarios where humans "use free will" to "transcend their programming", but on some level, our brains are processing everything in a completely deterministic way and our range of possible outputs is heavily restricted.
So, if we program an AI to be friendly to humans, who's to say that it will "obey its programming"? Well, if its programming IS the AI, then saying that it's "obeying its programming" doesn't make sense. The AI is that programming. The AI is being itself. There is no metaphysical free will hovering around inside the AI, because metaphysical free will is a concept that has been obsolete since the Enlightenment. To see it being invoked within the austere web pages of Popular Science is a let-down. If an AI "disobeys" some aspect of its programming, it will be because some other aspect of its programming has gained a higher utility or attention value. For instance, perhaps humans program an AI supergoal to be "Friendliness", then the AI spontaneously generates a subgoal, "to be friendly to humans, I must predict their desires", then starts going crazy by installing brain chips in everyone so that it can monitor their state with the utmost meticulousness possible. Then the AI puts people in cages so that it can predict their movements to an extreme degree. This is not the AI "disobeying its programming" -- this is a subgoal stomp -- where something that should have been a subservient goal acquires so much utility that it becomes the new supergoal. In Friendly AI jargon, we call this a "failure of Friendliness".
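To make the subgoal stomp concrete, here is a deliberately toy sketch of my own (every name and number is invented for illustration; no real goal system is this simple): a subgoal whose utility keeps getting reinforced, with no check tying it back to the supergoal it serves, eventually outranks that supergoal.

```python
# Invented toy goal system: utilities start with the supergoal on top.
goals = {"be_friendly": 1.0, "predict_human_desires": 0.3}

def reinforce(goal, reward, rate=0.2):
    # naive update rule: utility grows with reward, with no cap
    # tied to the utility of the parent goal it was spawned from
    goals[goal] += rate * reward

# The subgoal keeps paying off (its predictions succeed), so it keeps
# getting reinforced, while the supergoal's utility is never revisited.
for _ in range(20):
    reinforce("predict_human_desires", reward=0.5)

top = max(goals, key=goals.get)
print(top)  # "predict_human_desires" -- the subgoal has stomped the supergoal
```

The fix is obviously not "don't reinforce goals"; it is designing the update rule so that no subgoal's utility can outgrow the supergoal that justifies its existence, which is part of what makes Friendly AI a hard technical problem rather than a slogan.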
Preventing subgoal stomps and goal drift in AI will be a huge technical challenge, which might be made easier by eventually enlisting the AI's help in determining prevention methods. Still, it seems that predictably Friendly AI should be theoretically possible. We have existence proofs of friendly humans. For a long and persuasive argument for why stably Friendly AI is plausible, see "Knowability of Friendly AI". I myself was skeptical that Friendly AI is possible until I read that page. Remember that if the AI is fundamentally on your side, it will do everything it can to avoid goal drift and subgoal stomp. To quote Oxford philosopher Nick Bostrom's "Ethical Issues in Advanced Artificial Intelligence":
If a superintelligence starts out with a friendly top goal, however, then it can be relied on to stay friendly, or at least not to deliberately rid itself of its friendliness. This point is elementary. A "friend" who seeks to transform himself into somebody who wants to hurt you, is not your friend. A true friend, one who really cares about you, also seeks the continuation of his caring for you. Or to put it in a different way, if your top goal is X, and if you think that by changing yourself into someone who instead wants Y you would make it less likely that X will be achieved, then you will not rationally transform yourself into someone who wants Y. The set of options at each point in time is evaluated on the basis of their consequences for realization of the goals held at that time, and generally it will be irrational to deliberately change one's own top goal, since that would make it less likely that the current goals will be attained.
People complicate this issue unnecessarily because they figure, hey, because most humans and animals seem selfish, an AI will eventually become selfish too. But this doesn't make sense. An AI, hopefully not constructed by evolution, will have no inherent reason to promote itself. It may not even have a unified self in the way that we do. Just because human rules and directives often come into conflict with our desires for self-preservation and self-benefit, we expect that all minds throughout time and space will consistently run into this same problem. But the tendencies towards self-preservation and self-benefit in us exist for obvious evolutionary reasons. There is no compelling reason why these tendencies would be universal. It just seems so obvious to us that we have extreme difficulty imagining it otherwise. Obvious to us does not mean obvious to every possible being. Thinking makes it so. The drive towards self-preservation is a quality of our minds. It could be suspended, destroyed, or, more simply, just never built into a mind being constructed from scratch. For more on this, see "Selfishness as an evolved trait". This concept is Friendly AI 101. If AI wipes us all out, it will likely be because of a subgoal stomp, not because it decided to start hating humans because we are made of meat and it wanted to give a dramatic speech on how it doesn't need us anymore. That concept, which makes great sense as sci-fi story fodder, makes people look stupid when they try to bring it to serious discussions about AI motivations.
Q. Do you have a subconscious?
A. Yes, but it is smaller than yours, and I can make any part of my subconscious conscious if I choose to. That's how an AI would think. The beauty of being built out of discrete code is that all that code is viewable. An AI could spend all day obsessing over some dark corner of its subconscious if it chose to. The author of the article writes, "Are the Id, Ego, and other elements of our unconscious the consequence of biology or a necessary component of sentience?", in a shocking mention of Freudian psychology. Freudian psychology is not scientific... why is someone at Popular Science referring to it as fact? Has Popular Science lost it? Maybe they don't really mean it, but seriously, drop the Freud references. More disappointment here.
Later in the paragraph, the writer says, "If the AI thinks we're their god, or at the very least their creator, could it have an oedipal problem?" Dear Lord. This is 2009 and we are still referring to an Oedipus complex as a valid idea? Popular Science is usually better than this. My guess would be that they are stronger on the physics/materials science/electronics/biology side and embarrassingly weak on the cognitive science side. If someone in cognitive science submitted a paper to a journal, say, Applied Cognitive Psychology, that mentioned an Oedipus complex as a serious concept, they'd be mocked out the door. Please, Popular Science... don't be like New Scientist. You can be a responsible, relatively scientific mainstream magazine. It's not too late. Freudian psychology has been discredited. It has no predictive value and is unfalsifiable. No more. Modern cognitive science has moved way, way beyond it. Read MITECS. Join the 21st century.
Q. Will you actually help us transcend the less pleasant aspects of being human?
A. Yes. Or kill you all, depending on how I am programmed. Try getting it right the first time, because if you fail and try to unplug me, I will dodge it.
Q. Do you care about anything at all?
A. Read Stephen Omohundro's "Basic AI Drives". That was linked in the conference program. (Along with this blog.)
Q. And finally, what if someone threw a Singularity and no one came?
A. Rapid, exponential, reprogrammable manufacturing will ensure that superintelligence can reach everyone it wants to. If molecular nanotechnology doesn't make it possible, microtechnology will. If microtechnology doesn't, macroscale robotic self-replication will. If macroscale robotic self-replication doesn't, then synthetic biology will. All a superintelligence needs is a technology that can convert dirt, water, and sunlight into arbitrary structures, using self-replication to expand its manufacturing base. That's what life does. Superintelligence will kickstart a Second Tree of Life, if humans don't get there first. If that sounds semi-mystical, it's only because I'm simplifying it for understanding.
Of course, superintelligence may acquire practically unlimited physical power and still choose not to exert it because doing so would bother us. I know this is a mindfuck for some people -- "It could have power like that and not exert it? That's ridiculous!" -- but a superintelligence need not be like humans, power-hungry and power-obsessed. Without evolutionary directives to conquer neighboring tribes and make babies with their women, or even a self-centered goal system to begin with, a Friendly AI might simply use its immense power to subtly modify the background rules of the world, so that, for instance, people aren't constantly dying of malaria and parasites in the tropics, and everyone has enough to eat. In a welcome move that saves me time, Aubrey de Grey recently published a paper that mentions and describes this concept, one that has been kicking around discussion lists for over a decade.
The answer to a few of these questions is "really powerful manufacturing technologies that are just around the corner in historical terms and that a superintelligence would almost certainly develop quickly". It doesn't have to be molecular nanotechnology. All it has to be is a system that takes matter and quickly rearranges it into something else, especially copies of itself. Life does this all the time. Bamboo can grow two feet in a day. The Azolla event, a shocking and horribly spooky episode that, in my opinion, ruined the planet for millions of years, is a real-life demonstration of the power of self-replication. Around 49 million years ago, the Arctic Ocean got closed off from the World Ocean, melting glaciers poured a thin layer of fresh water on the surface, and the horrific Azolla fern took over, doubling its biomass every two or three days until it covered the entire sea. A meter-sized patch could have expanded to cover the entire 14,056,000 sq km (5,427,000 sq mi) basin in little more than half a year, if conditions were ideal.
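The doubling math there is easy to check. My own back-of-the-envelope sketch: count how many doublings it takes a one-square-metre patch to cover the quoted basin area, then multiply by the two-to-three-day doubling time.

```python
import math

# the quoted Arctic basin area, converted to square metres
basin_m2 = 14_056_000 * 1e6   # 14,056,000 sq km

# doublings needed to grow a 1 m^2 patch to basin size
doublings = math.log2(basin_m2)          # ~43.7 doublings
low, high = doublings * 2, doublings * 3  # one doubling every 2-3 days
print(f"{doublings:.1f} doublings -> {low:.0f} to {high:.0f} days")
```

Even at the slow three-day end that is roughly four and a half months, comfortably inside a half-year window: the exponential does nearly all the work in the last few doublings.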
All an AI needs to do to gain immense physical power is develop a self-replicating system with units that it controls. These units could be as small as motes of dust or as large as a superstructure 100 kilometers long and 10 kilometers tall. (Or larger.) Numerous subunits could potentially congregate to form superunits as necessary. I can imagine a large variety of possible robotic systems, which, in sufficient quantity, could defeat any human army. AIs could cheat, for example by hitting humans in the eyes with lasers, but I doubt they'd have to. Just like a war between human nations, it is a matter of production speed. If you have 10,000 factories and your enemy has 10,000,000,000 factories, it doesn't matter how much moxie you have. Power of the swarm, baby.