Answering Popular Science’s 10 Questions on the Singularity

I thought I would answer the 10 questions posed by Popular Science on the Singularity.

Q. Is there just one kind of consciousness or intelligence?

A. It depends entirely on how you define them. If you define intelligence using what I consider the simplest and most reasonable definition, Ben Goertzel’s “achieving complex goals in complex environments”, then there is only one kind, because the definition is broad enough to encompass all varieties. My view is that this question is a red herring. The theory of “multiple intelligences”, presented by Howard Gardner in 1983, doesn’t stand up to scientific scrutiny. Most people who study intelligence consider the theory empirically unsupported in the extreme, and the multiple intelligences predictively useful only insofar as they correlate with g, which just provides more support for a single type of intelligence. The theory is merely an attempt to avoid having some people labeled lower in general intelligence than others. In terms of predictive value, IQ and other g-weighted measures blow away the multiple intelligences theory. Instead of complicating theories of intelligence in a misplaced effort to encourage egalitarianism, we should apply Occam’s razor and recognize that g is pretty much sufficient for quantifying intelligence, at least in humans, and possibly beyond.

All that said, there will certainly be different “types of intelligence” developed as we build more powerful AI, meaning that some intelligences will be better at solving certain problems than others. From a theoretical computer science perspective, this is a fat obvious “duh”. Obviously some algorithms are more specialized than others. The no free lunch theorem is valuable here, and puts the discussion on a much-needed formal footing. Those who discuss intelligence in the popular press often seem not to realize how much we actually know about intelligence and its mathematical formalizations. Because they are not aware of this work, they tend to assume that many features of intelligence are more mysterious, given our current level of knowledge, than a researcher in mathematical AI would think. Of course, many features of intelligence still are mysterious to us at present, but like everything in science, continued investigation will eventually uncover the truth.
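For anyone who wants that formal footing spelled out, the no free lunch theorem for optimization (Wolpert and Macready, 1997) says roughly the following. This is my paraphrase of the usual statement, not something from the PopSci piece:

```latex
\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2)
```

Here f ranges over all possible objective functions, d_m^y is the sequence of cost values an algorithm has seen after m evaluations, and a_1 and a_2 are any two search algorithms. Averaged over every conceivable problem, no algorithm outperforms any other; an intelligence only gets an edge by being specialized for the structured subset of problems it actually faces, which is exactly why different AIs will be better at some problems than others.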

Q. How will you use your digital intelligence to kill us all?

A. Contrary to popular belief, software programs are part of the “real world”. Especially software programs on the Internet. The Internet, surprisingly, is actually part of the real world too. The barrier between software and the physical world is an illusory one.

A human-level synthetic intelligence on the Internet would actually be more powerful, by default, than your average human today. First of all, such an intelligence would be extremely difficult to kill, even with widespread cooperation. An AI could copy itself onto millions of computers, even renting cloud computing or botnets to provide itself with computational resources to run on. You can kill a human simply by shooting them in the head — an Artificial Intelligence could have millions of “heads”. Once we create a very smart Artificial Intelligence and release it on the Internet, the only way to kill it (if we wanted to) might be to destroy all computers on the planet. Like the mythical hydra, an Artificial Intelligence would grow back 10 heads for every head that gets cut off.

To kill us all, a digital intelligence would need some way of acquiring physical manipulators in sufficient quantity and quality: preferably physical manipulators that can turn raw materials into more physical manipulators. By asking, bribing, or deceiving human beings, an AI could potentially acquire the pieces necessary to build a molecular assembler (a nanoscale robotic arm), which could then be used to build additional nanoscale arms and eventually construct a full-scale nanofactory. This could be used to build advanced robotic components with atomic precision.

Your typical killer AI could probably manufacture thousands of tonnes of advanced robotic devices at multiple locations worldwide before it would be noticed. Such manufacturing could either be kept entirely secret, or integrated with ostensibly human-controlled companies to keep a low profile. It could also be done extremely rapidly. Current estimates of nanomanufacturing throughput suggest it is plausible that such a system could output its own mass (either as a product or a copy of itself) roughly every 24-36 hours. Thus, an AI that begins with 1 kg of nanofactory on January 1st could potentially have over 2 billion kg (2 million tonnes) of nanofactory on February 1st, as long as it can obtain the necessary raw materials and energy. One 24-36 hour cycle on that manufacturing base could then produce over 2 billion kg of complex, atomically precise robotics or other products. If uranium ore were available, this could theoretically be used to produce a practically unlimited quantity of nuclear weapons. The warheads could then be delivered to every human city, either by something like bird robots that each carry a small piece of a warhead and assemble it on site, or by tunneling robots that power themselves by consuming dead plant matter and approach their targets from underground.
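To make the arithmetic behind that January-to-February figure explicit, here is a quick back-of-the-envelope sketch. It assumes only the replication-cycle estimate quoted above; everything else is idealized:

```python
# Back-of-the-envelope: exponential growth of a self-replicating nanofactory.
# Assumes the 24-36 hour replication cycle quoted above; all else is idealized.

initial_mass_kg = 1.0   # starting nanofactory mass on January 1st
days = 31               # January 1st to February 1st

for cycle_hours in (24, 36):
    doublings = days * 24 / cycle_hours
    final_mass_kg = initial_mass_kg * 2 ** doublings
    print(f"{cycle_hours}-hour cycle: ~{final_mass_kg:.3g} kg "
          f"(~{final_mass_kg / 1000:.3g} tonnes)")

# 24-hour cycle: ~2.15e+09 kg (~2.15e+06 tonnes) -- the "over 2 billion kg" figure
# 36-hour cycle: ~1.66e+06 kg (~1.66e+03 tonnes)
```

The headline number assumes the fast end of the range; even at 36-hour cycles, a month of doublings still yields over a thousand tonnes of nanofactory.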

If the AI in question had a problem with radioactive fallout, it could also just eliminate the human race using neutron bombs, whose neutron radiation penetrates barriers to kill human beings while generating only a small shock wave. With the right nuclear bombs, though, radioactive fallout would be minimal after just two weeks, so I don’t see why a hostile AI wouldn’t just nuke us conventionally.

Note: the above scenario is just me pulling something out of my ass. An actual hostile AI with substantially smarter-than-human intelligence could probably come up with ideas that we apes are too dumb to conceive of, and be even more effective at exterminating humanity than the scenario I outline here.

You might ask, “why wouldn’t we just pull the plug before then?” The Internet is already practically ubiquitous, and it would likely be trivial for any hostile AI of human or greater intelligence to copy itself onto numerous private servers, unless, perhaps, you developed it in a hut in the middle of Siberia with no satellite or phone connection. Also, any hostile AI would probably behave indistinguishably from a Friendly AI until it passed some threshold of power, at which point we’d be screwed. Since AIs wouldn’t have to sleep and could potentially accelerate their thinking speed by adding new computing power, a hostile AI could probably consider millions of possible moves and countermoves in the time it takes us to get a night’s sleep. It sounds unfair, but it’s a fact we have to face in a universe where the physical speed of our cognitive components is much, much slower than what is theoretically possible.

Q. Would the first true AI wake up without any senses?

A. No. I consider this among the more ludicrous questions in PopSci’s piece. Clearly, to develop general intelligence, an AI would need a rich sensory environment in which to soak up data, make predictions, and pursue goals. This could either be a physical environment (through robotics) or a virtual environment. The article says, “Maybe it can see and hear, but feel? Doubtful.” This evaluation seems anthropocentric — there is no real reason why the attribution of feeling is withheld from the AI (if it can see and hear, why not feel?), except to imply that humans can engage in phenomenal experience while machines cannot. Yet, there is nothing so special about humans that whatever cognitive features we have that give rise to phenomenal experience could not be duplicated in artificial intelligences. To the extent that “feeling” things makes us intelligent, those features could be copied at whim by a sufficiently complex AI, and to the extent that “feeling” phenomenal experience is superfluous, some AIs might choose to have it, and some might not.

Consciousness is interesting to think about, but it can be a red herring. Too often, sophisticated-sounding arguments about consciousness and its relationship to AI boil down to one simple and ultimately boring sentiment: “I know I am conscious, and I know other humans are, but I am philosophically uncomfortable with the idea of a conscious machine.” This is because we think of “machines” as things like toasters. We have no experience with machines as complex and subtle as the human mind, but because the human mind is entirely non-magical, building one is only a matter of time. You are still special even though your mind is non-magical — don’t worry. We humans have survived Copernican revolutions before; we’ll manage. Our civilization didn’t end when we found out that the Earth wasn’t the center of the universe. It won’t end when we realize that humans are not the only minds that can feel things consciously. It is not necessary to engage in self-conscious philosophical acrobatics and contortionism to make ourselves feel special. A parsimonious theory of consciousness will not mention humans as a special case. It will likely make reference to much broader cognitive features that we just happen to have, such as self-reflection and the processing of high-level symbols with recursive syntax. We will eventually be able to build these features into AIs too.

Q. Do you have emotions?

A. This is another question which reflects the extreme oddness with which the mainstream confronts questions surrounding AI. The emotions we have now clearly evolved to fulfill adaptive evolutionary functions. Assuming that the first AI will be “lonely” is just anthropomorphic. The human feeling of loneliness is a complex adaptation shaped by millions of years of evolution in social groups. It wouldn’t arise spontaneously in an AI. An AI that is alone might develop or be programmed with an urge to socialize, but this tendency could probably be specified in a few thousand or million bits, rather than the millions or billions of bits which seem to make up complex human emotions. All that specialized complexity comes from our evolutionary history. We could choose to program it into AIs, but it seems unlikely that the first AIs would contain all that superfluous, human-specific complexity.

When you have a hammer, everything looks like a nail. Because human experience is saturated with emotions, moods, and feelings, we assume that all these precise qualities will be necessary to pursue and achieve goals in the real world, acquire knowledge, etc. This is anthropocentrism at work. It’s basically humanity being a big baby and saying “me, me, me”. Everything is about me. To be intelligent, an entity needs my emotions, my desires, my concerns, my relationships, my insecurities, my personal quirks. No it doesn’t. Humans are just one possible intelligence in a galaxy of possible intelligences. One of the reasons I can’t wait for artificial intelligence to be created (as long as it is human-friendly) is that it will make humans realize that we ain’t all that. Our 200,000-year obsession with ourselves will finally be forced to an end. This won’t mean we suddenly become “obsolete” or “valueless”, just that we’ll have a different perspective on our own species-universal quirks in the wider context of mind design space. We’ll see them as quirks, rather than mystical or holy necessities.

The need to sympathize with people like ourselves obviously has evolutionary value. AI needn’t be that way. You could theoretically program an AI to be the “happiest” being in the world just by staring at a blank wall. The AI might not subsequently learn anything or get anywhere, but you could still program it that way. No environmental circumstance is inherently positive or negative — environmental circumstances are only interpreted as positive or negative based on our precise cognitive structure. To quote Hamlet:

“There is nothing either good or bad, but thinking makes it so.” (Hamlet, Act II, Scene II)

Thinking makes it so! Nothing is inherently anything! On my Facebook profile, there is a quote by Eliezer Yudkowsky:

“Everything of beauty in the world has its ultimate origins in the human mind. Even a rainbow isn’t beautiful in and of itself.”

All interpretations of anything are in the mind. Try taking LSD and you will see that these interpretations are more ephemeral than they seem, and can easily be shattered by the introduction of a single innocuous-seeming molecule. What we see is not really “reality” — what we’re looking at is just the inside of our visual cortex. From a “God’s eye view”, the universe is probably algorithmically simple and boring as hell. The complexity we see in the world is just apparent complexity. Read Max Tegmark’s paper “Does the Universe in Fact Contain Almost No Information?” for more on this crucial point.

To answer the question, yes, an AI could have emotions, but they probably won’t be anything like ours. The very word “emotion”, to my mind, has connotations specifically associated with the Homo sapiens sapiens subspecies of hominid. Move outside our tiny little village, even to a close-by species like chimpanzees, and our intuitive definitions of the word already start getting messy. Move way outside of our little village, into a different type of being running on an entirely different computational substrate, and you might as well throw away the word and make up new concepts from scratch. Stupidity often occurs when we take schemas we’re used to and overextend them all over the place, because we lack data for the new domain. Instead of blindly applying narrow schemas to new domains, we must 1) acknowledge our ignorance, and 2) build new descriptions and theories from first principles. Maybe the answer won’t come right away. That’s alright. It’s better to be uncertain and admit it than to be wrong and pretend you have the right answer.

Q. Are humans more similar to your AI construct than we thought?

A. No, probably not. This reaction seems to be another case of person 1 saying, “Here’s this totally new thing, Y!” Then, person 2 says, “That sounds a lot like X! Let’s start making lots of connections between X, which we know about, and Y, which we don’t. Then we’ll understand it better.” No, you won’t. Stop trying to overextend your old schemas to new domains. There really are new things under the Sun. Understanding this new thing will not be easy. You will not be able to look at it, understand it, then move on to the next concept. This is more complicated than that.

Another sentiment behind asking this question is old-fashioned anthropocentrism. “When we create AI, it would be interesting if it ended up a lot like the human brains we already have.” Subtext: we were optimal all along, and attempting to improve on us will only lead to what are essentially copies of us. This sentiment is trivially refuted by decades of literature on heuristics and biases describing how human beings will break the axioms of probability theory as soon as look at them. To human brains, which are essentially kludges, 1 plus 1 often equals 3. For AIs, 1 plus 1 will equal 2, not 3. AIs will be able to avoid many of the hundreds or thousands of inferential biases which have made humans into legendary klutzes from the perspective of optimal inference. It will simply be easier to make a program without the tendency to make these mistakes than to make one that does. We are supersaturated with cognitive biases because evolution requires that inference only be accurate to the extent that it lets you kill your competitor and mate with his wife. There is no selection pressure for intelligence greater than that. Evolution does not require that humans be smart — just slightly smarter than the other guy. Making brains from scratch will allow us to pursue a less idiotic approach to cognitive design.

Q. How much does programming influence your free will?

A. Free will is a red herring, and an illusion. Nothing we do is actually free — everything in the universe is predetermined. An alien with a sufficiently large computer somehow able to observe the universe without interacting with it would be able to predict your every move, your every thought, your every wish. Yes, due to chaos theory, that computer would have to be really fucking big, perhaps 10^100 times bigger than our universe itself, but it could be theoretically possible.

Still, because we can’t perfectly predict our own actions or the actions of others (halting problem, Rice’s theorem, limited computational resources, and friends), our choices might as well be viewed as free. That doesn’t mean the universe is not deterministic — just that we’re too dumb to see it that way. When you are as dumb as humans are, everything is a surprise. People will watch a favorite suspense movie again and again, even if they know what will happen, because they temporarily let themselves forget the ending and just get sucked into the story. Reality is sort of like that, but in many cases, no one really knows the ending for sure.

Humans argue that we have “free will”, but we really don’t. Out of the space of all possible actions and outputs, we only execute a tremendously restricted range of possible actions and say a tremendously restricted set of possible sentences. Human-machines produce human-like outputs. Jellyfish-machines produce jellyfish-like outputs, and cat-machines produce cat-like outputs. Human-machines are bad at producing cat-like outputs because we lack the brain and bodies of cats. If we could remodel our brains and bodies to become more cat-like, then possibly cat-like outputs and actions would become accessible to us, but until then, only a small range of cat-like outputs will overlap with human-like outputs.

Compared to a random-output-generating machine of similar size and weight, humans are surprisingly predictable. We like a fairly predictable set of things — sex, status, fun, knowledge, and relaxation. There are straightforward evolutionary reasons why it makes sense that we’d like these things. When a human being “deviates from the mundane”, say by painting a masterpiece, we get all excited, saying “see, he’s exerting his free will to create this!“, but relative to a random output generator, this output falls firmly within the tiny domain of human-like outputs. From a sufficiently superintelligent perspective, a random doodle and a priceless masterpiece are similar items. Humans are humans. We like human things, build human objects, think human thoughts, and are interested in human stuff. Everything we make has our fingerprint on it. There may be some convergent structures that we share with other intelligent beings in the multiverse, say the wheel, but by and large what we create and think are unique products of our evolutionary upbringing. You can take the human out of the culture, but you can never take the culture out of the human, unless you submit the human to radical neuroengineering.

AI programming will not “influence” an AI. AI programming IS the AI. When a human “ignores his programming”, and, say, has sex with just one woman instead of sneaking sex with as many women as possible (like our evolutionary programming tells us to), he’s not really “disobeying his programming”, because his programming is not so simple as to be described as a list of abstract objectives which includes “have sex with as many women as possible”. Our “programming” is an incredibly sophisticated set of cognitive tendencies, of which monogamy is firmly one possible expression. When we are monogamous, we are still “following our programming” — just declining to follow one tendency among many. By manipulating our surroundings and creating special cases, you can configure many scenarios where humans “use free will” to “transcend their programming”, but on some level, our brains are processing everything in a completely deterministic way and our range of possible outputs is heavily restricted.

So, if we program an AI to be friendly to humans, who’s to say that it will “obey its programming”? Well, if its programming IS the AI, then saying that it’s “obeying its programming” doesn’t make sense. The AI is that programming. The AI is being itself. There is no metaphysical free will hovering around inside the AI, because metaphysical free will is a concept that has been obsolete since the Enlightenment. To see it being invoked within the austere web pages of Popular Science is a let-down. If an AI “disobeys” some aspect of its programming, it will be because some other aspect of its programming has gained a higher utility or attention value. For instance, perhaps humans program an AI with the supergoal “Friendliness”, then the AI spontaneously generates a subgoal, “to be friendly to humans, I must predict their desires”, then starts going crazy by installing brain chips in everyone so that it can monitor their state with the utmost meticulousness possible. Then the AI puts people in cages so that it can predict their movements to an extreme degree. This is not the AI “disobeying its programming” — this is a subgoal stomp, where something that should have been a subservient goal acquires so much utility that it becomes the new supergoal. In Friendly AI jargon, we call this a “failure of Friendliness”.
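Purely for illustration, here is a toy sketch of how a subgoal stomp can play out in a crude goal system. The goal names, weights, and update rule are all invented for this example; no real AI architecture would be this simple:

```python
# Toy illustration of a "subgoal stomp": a subgoal's learned weight grows until
# it dominates the supergoal it was created to serve. Goal names, weights, and
# the update rule are all invented for this sketch.

goals = {
    "be_friendly_to_humans": 1.0,    # intended supergoal
    "predict_human_desires": 0.25,   # subgoal spawned in service of the supergoal
}

def reinforce(goal, reward, rate=0.5):
    """Naive update: whatever keeps paying off gets more weight."""
    goals[goal] += rate * reward

# Predicting humans produces a clear, frequent reward signal; the diffuse
# supergoal rarely does, so only the subgoal keeps getting reinforced.
for _ in range(10):
    reinforce("predict_human_desires", reward=1.0)

print(max(goals, key=goals.get))   # 'predict_human_desires' -- the stomp
```

The point is structural: a goal created to serve another goal ends up carrying the most weight, and behavior follows the weights, not the designer’s intent.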

Preventing subgoal stomps and goal drift in AI will be a huge technical challenge, which might be made easier by eventually enlisting the AI’s help in determining prevention methods. Still, it seems that predictably Friendly AI should be theoretically possible. We have existence proofs of friendly humans. For a long and persuasive argument for why stably Friendly AI is plausible, see “Knowability of Friendly AI”. I myself was skeptical that Friendly AI is possible until I read that page. Remember that if the AI is fundamentally on your side, it will do everything it can to avoid goal drift and subgoal stomp. To quote Oxford philosopher Nick Bostrom’s “Ethical Issues in Advanced Artificial Intelligence”:

If a superintelligence starts out with a friendly top goal, however, then it can be relied on to stay friendly, or at least not to deliberately rid itself of its friendliness. This point is elementary. A “friend” who seeks to transform himself into somebody who wants to hurt you, is not your friend. A true friend, one who really cares about you, also seeks the continuation of his caring for you. Or to put it in a different way, if your top goal is X, and if you think that by changing yourself into someone who instead wants Y you would make it less likely that X will be achieved, then you will not rationally transform yourself into someone who wants Y. The set of options at each point in time is evaluated on the basis of their consequences for realization of the goals held at that time, and generally it will be irrational to deliberately change one’s own top goal, since that would make it less likely that the current goals will be attained.
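Bostrom’s decision rule can be made almost mechanical. A minimal sketch, with made-up goal names and probabilities (nothing here comes from his paper beyond the logic of the quote above):

```python
# Toy version of the decision rule in the quote above: proposed self-
# modifications are evaluated with the CURRENT top goal, so a change of top
# goal is rejected whenever it makes the current goal less likely to succeed.
# The goal names and probabilities are invented for illustration.

def probability_X_is_achieved(top_goal_after_change):
    # Hypothetical model: keep pursuing X -> X very likely; switch to Y -> X neglected.
    return {"X": 0.95, "Y": 0.10}[top_goal_after_change]

def should_adopt_new_top_goal(candidate):
    keep = probability_X_is_achieved("X")          # do nothing
    change = probability_X_is_achieved(candidate)  # rewrite own goals
    return change > keep                           # judged by the goals held NOW

print(should_adopt_new_top_goal("Y"))   # False: a friendly top goal preserves itself
```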

People complicate this issue unnecessarily because they figure, hey, since most humans and animals seem selfish, an AI will eventually become selfish too. But this doesn’t make sense. An AI, hopefully not constructed by evolution, will have no inherent reason to promote itself. It may not even have a unified self in the way that we do. Because human rules and directives often come into conflict with our desires for self-preservation and self-benefit, we expect that all minds throughout time and space will consistently run into this same problem. But the tendencies towards self-preservation and self-benefit in us exist for obvious evolutionary reasons. There is no compelling reason why these tendencies would be universal. It just seems so obvious to us that we have extreme difficulty imagining it otherwise. Obvious to us does not mean obvious to every possible being. Thinking makes it so. The drive towards self-preservation is a quality of our minds. It could be suspended, destroyed, or, more simply, never built into a mind constructed from scratch. For more on this, see “Selfishness as an evolved trait”. This concept is Friendly AI 101. If AI wipes us all out, it will likely be because of a subgoal stomp, not because it decided to start hating humans because we are made of meat and it wanted to give a dramatic speech on how it doesn’t need us anymore. That trope, which makes great sense as sci-fi story fodder, makes people look stupid when they bring it into serious discussions about AI motivations.

Q. Do you have a subconscious?

A. Yes, but it is smaller than yours, and I can make any part of my subconscious conscious if I choose to. That’s how an AI would think. The beauty of being built out of discrete code is that all that code is viewable. An AI could spend all day obsessing about some dark corner of its subconscious if it chose to. The author of the article writes, “Are the Id, Ego, and other elements of our unconscious the consequence of biology or a necessary component of sentience?”, in a shocking mention of Freudian psychology. Freudian psychology is not scientific… why is someone at Popular Science referring to it as fact? Has Popular Science lost it? Maybe they don’t really mean it, but seriously, drop the Freud references. More disappointment here.

Later in the paragraph, the writer says, “If the AI thinks we’re their god, or at the very least their creator, could it have an oedipal problem?” Dear Lord. This is 2009 and we are still referring to an Oedipus complex as a valid idea? Popular Science is usually better than this. My guess would be that they are stronger on the physics/materials science/electronics/biology side and embarrassingly weak on the cognitive science side. If someone in cognitive science submitted a paper to a journal, say, Applied Cognitive Psychology, that mentioned an Oedipus complex as a serious concept, they’d be mocked out the door. Please, Popular Science… don’t be like New Scientist. You can be a responsible, relatively scientific mainstream magazine. It’s not too late. Freudian psychology has been discredited. It has no predictive value and is unfalsifiable. No more. Modern cognitive science has moved way, way beyond it. Read MITECS. Join the 21st century.

Q. Will you actually help us transcend the less pleasant aspects of being human?

A. Yes. Or kill you all, depending on how I am programmed. Try getting it right the first time, because if you fail and try to unplug me, I will dodge it.

Q. Do you care about anything at all?

A. Read Stephen Omohundro’s “Basic AI Drives”. That was linked in the conference program. (Along with this blog.)

Q. And finally, what if someone threw a Singularity and no one came?

A. Rapid, exponential, reprogrammable manufacturing will ensure that superintelligence can reach everyone it wants to. If molecular nanotechnology doesn’t make it possible, microtechnology will. If microtechnology doesn’t, macroscale robotic self-replication will. If macroscale robotic self-replication doesn’t, then synthetic biology will. All a superintelligence needs is a technology that can convert dirt, water, and sunlight into arbitrary structures, using self-replication to expand its manufacturing base. That’s what life does. Superintelligence will kickstart a Second Tree of Life, if humans don’t get there first. If that sounds semi-mystical, it’s only because I’m simplifying it for the sake of understanding.

Of course, superintelligence may acquire practically unlimited physical power and still choose not to exert it because doing so would bother us. I know this is a mindfuck for some people — “It could have power like that and not exert it? That’s ridiculous!” — but a superintelligence need not be like humans, power-hungry and power-obsessed. Without evolutionary directives to conquer neighboring tribes and make babies with their women, or even a self-centered goal system to begin with, a Friendly AI might simply use its immense power to subtly modify the background rules of the world, so that, for instance, people aren’t constantly dying of malaria and parasites in the tropics, and everyone has enough to eat. In a welcome move that saves me time, Aubrey de Grey recently published a paper that mentions and describes this concept, one that has been kicking around discussion lists for over a decade.

The answer to a few of these questions is “really powerful manufacturing technologies that are just around the corner in historical terms and that a superintelligence would almost certainly develop quickly”. It doesn’t have to be molecular nanotechnology. All it has to be is a system that takes matter and quickly rearranges it into something else, especially copies of itself. Life does this all the time. Bamboo can grow two feet in a day. The Azolla event, a shocking and horribly spooky episode that, in my opinion, ruined the planet for millions of years, provides a real-life example of the power of self-replication. Around 49 million years ago, the Arctic Ocean was closed off from the World Ocean, river runoff and rainfall formed a thin layer of fresh water on the surface, and the horrific Azolla fern took over, doubling its biomass every two or three days until it covered the entire sea. A meter-sized patch could have expanded to cover the entire 14,056,000 sq km (5,427,000 sq mi) basin in a matter of months (about 44 doublings), if conditions were ideal.
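The doubling arithmetic behind that claim, as a quick sketch using only the area and doubling-time figures quoted above:

```python
# How many doublings does a 1 m^2 Azolla patch need to cover the Arctic basin,
# and how long would that take at the quoted doubling rates?
import math

basin_m2 = 14_056_000 * 1_000_000    # 14,056,000 km^2 expressed in m^2
doublings = math.log2(basin_m2)      # ~43.7, i.e. about 44 doublings

for days_per_doubling in (2, 3):
    print(f"{days_per_doubling} days per doubling: "
          f"~{doublings * days_per_doubling:.0f} days to cover the basin")

# 2 days per doubling: ~87 days; 3 days per doubling: ~131 days.
```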

All an AI needs to do to gain immense physical power is develop a self-replicating system with units that it controls. These units could be as small as motes of dust or as large as a superstructure 100 kilometers long and 10 kilometers tall. (Or larger.) Numerous subunits could potentially congregate to form superunits as necessary. I can imagine a large variety of possible robotic systems, which, in sufficient quantity, could defeat any human army. AIs could cheat, for example by hitting humans in the eyes with lasers, but I doubt they’d have to. Just like a war between human nations, it is a matter of production speed. If you have 10,000 factories and your enemy has 10,000,000,000 factories, it doesn’t matter how much moxie you have. Power of the swarm, baby.

Comments

  1. Benjamin Abbott

    The existence of savant syndrome undermines the notion of general intelligence. It’s not just a matter of wishing to avoid the implications of measurable smarts. We have undeniable examples of people who show outstanding ability in certain areas of intelligence and mental retardation in others.

    When you look at the actual data, even studies supporting g give a loose correlation akin to the association between various types of athleticism. As an average of various sub-tests, IQ tests and the like blur the details of intellectual ability.

  2. Benjamin, savants are very unusual special cases, but in general, you’re right that there are subtly different abilities that contribute to g. Still, I’d tend to call these different facets of the same kind of intelligence, or intelligences better at solving some problems than others, rather than “different types of intelligence”. I think this is a mushy matter of intuitive definitions, mostly. I interpret references to “different types of intelligence” as referring to the multiple intelligences theory. Maybe I am wrong in this case.

  3. GregTrocchia

    “An alien with a sufficiently large computer somehow able to observe the universe without interacting with it would be able to predict your every move, your every thought, your every wish. Yes, due to chaos theory, that computer would have to be really fucking big, perhaps 10^100 times bigger than our universe itself, but it could be theoretically possible.”

    This, I do not buy. For one thing, the notion of an alien with a computer 10^100 times bigger than the Universe is ill-defined, to put it mildly. I fail to see on what rationale you can talk about the alien (and computer) not interacting with the Universe, or even on what basis you can argue that the alien (and computer) are not part of the Universe and hence do not themselves need accounting for in order to provide the predictability you’re talking about.

    Even if you reduce the scope of the problem by talking only about predicting the future state of an individual, it won’t really help. Using brute force computing power to circumvent chaotic unpredictability would require your hypothetical alien to know the initial conditions of the constituent parts of the person you are talking about to greater precision than the Uncertainty Principle will allow. Hence, even in principle, we are unlikely to be deterministically predictable in the way you envision regardless of how much computing power is applied to the task.

  4. eablair

    I think some of your bemusement over the ideas in this piece (e.g. id, ego) can be explained by the fact that it was written not by the hive-mind known as Popular Science but by a guy named Stuart Fox who seems to specialize in light news articles rather than technical articles. He also uses the words “effect” and “affect” interchangeably. In other words, he’s not an intellectual heavyweight.

    I’m saying that from a position of humility, because I believe that there’s no such thing as an intelligent human and the difference between my brand of rudimentary intelligence and his brand isn’t too great on the scale of possible intelligence.

    But this article exemplifies the way people tend to think about AI and the Singularity, as they do all things, within certain schemata.

    Fox writes, “If the Singularity only affects one small group of humans, while the rest either can’t afford it or simply don’t care to participate, what happens to the transhumanist future the Singularity promises? Doesn’t the Singularity just set humanity up for another of the rich/poor, North/South problems it already deals with? Once again, its the other people, not the robots, that I worry about.”

    Fox, along with many others, can’t intuitively understand that there won’t be anything to “afford.” There won’t be any money. There won’t be any higher or lower classes. There won’t be any work to do. There won’t be any reason to hoard.

    Another schema is that of the mind as a causeless abstraction, rather than as a result of cause-and-effect processes within the brain. They imagine that AIs will just “naturally” have the same biological imperatives we have. They worry about scary, dictatorial AIs.

    Just a couple of examples of worrisome human traits:

    -The drive to acquire and hoard. A really ancient behavior which most people have. Some outliers are monstrously greedy, which I believe is a form of OCD. This is the result of a cause-and-effect process in the brain which in future years we will be able to describe rather precisely. Why would we deliberately and precisely copy this trait? Why would we build an AI with OCD? And it won’t “just happen” any more than it would just happen that an AI would develop multiple sclerosis.

    -The drive to dominate. Eliciting submissive behaviors from other people is rewarding. It’s a part of pack behavior. Some outliers find it very rewarding – they’re called bullies and sadists. Why would we build a sadistic AI?

    But it isn’t just the less intelligent that may think in schemata. I think that Kurzweil is guilty of this in one instance. His answer to the Fermi Paradox is that we are the first intelligent race. Any other before us would already have filled the universe with intelligence. But the drive to reproduce, expand, and explore is a biological imperative. Why expand or explore?

    Others worry about humans using the power of machines for evil. Pre-Singularity that’s a more legitimate worry. That means that we should be thinking about and working on ethics and sanity just as hard as on intelligence. A big part of this is becoming explicitly aware that problems like greed, sadism and so on are consequences of recognizable cause and effect processes and not just a part of the “human condition” or the result of some hazy concept such as “evil.” We should all be able to immediately recognize sociopathy, narcissism, paranoia, sadism and obsessive acquisitiveness and not tolerate it. There will be tests for all of these conditions that will be as simple and incontrovertible as tests for color blindness. Even though people with these disorders typically deny that they have them, a test, plus social awareness, plus social pressure, plus really effective cures will cause people to seek the cures themselves. I’m looking for a saner future. Most people are pretty decent. Any non-pathological people will be able to control their negative behaviors.

  5. I have a couple of comments for your answers.

    Most critically, I believe you’re being overly idealistic and optimistic when you talk about AIs lacking the “inferential biases” that we humans have. Because AIs will (initially) be programmed by humans, they will most certainly have inferential biases – just different from the ones we have.

    For example, my phone won’t play music right now because it thinks it has “exceeded the maximum number of songs.” I know this isn’t true, and I know that the real problem is that it has trouble playing M4As and interprets this trouble as having exceeded its song limit. But nevertheless this is what it thinks.

    Now, this is just poor programming on a simple consumer device, but it’s about par for the course for software design. And the device is still functional, which is important. There’s no reason to believe that we could not create a functional, superintelligent AI that we still managed to program poorly enough to have these “inferential biases” you talk about.

    Making brains from scratch will just allow us to create new kinds of errors, because we still have to use our own dumb intelligence to create the AI. The extent to which we are able to successfully program self-awareness and self-diagnosis will determine how well an AI can rid itself of its biases.

    Even if we’re only programming the creation of genetic algorithms that program the rest of the AI’s brain, the opacity of the algorithms the program has to work with plays into how well it can root out bias.

    Later on you say that we wouldn’t have to program AIs to have self-preservation instincts, which would prevent the AIs from being selfish and evil; but you also reference “Basic AI Drives” which states that AIs would have self-preservation in order to maintain their utility functions. And since it’s their utility functions that are necessary for ensuring that they avoid “subgoal stomp,” that seems like it might be an issue.

    I should note that your stance on friendly AI appears inconsistent to me, given that you are also a vegetarian. If you’re okay with programming an AI – a sentient being with its own thoughts – so that it always wants to help us and never wants to hurt us, would you be okay with genetically engineering an animal so that it wanted to be eaten, a la The Restaurant at the End of the Universe?

    My final criticism is this: you say you “can’t wait for artificial intelligence to be created” because “it will make humans realize that we ain’t all that.” I humbly disagree. When that happens, I think we’ll think we’re gods for having created a new kind of intelligence. And then we’ll feel more like Titans once our new children destroy us.

    By the way, I mostly agree with your entire post and the points you make. I just think you’re off in a few places. Also, I’d never heard of the Azolla effect, so thanks for mentioning that.

  6. Benjamin Abbott

    eablair, you’re being overly optimistic to assume that the Singularity will abolish class differences and level consumption. Michael himself recently dismissed the idea of income equality as ridiculous. If people like him design our future singleton, we can only assume indefinite economic hierarchy for its own sake regardless of scarcity. That’s not a future I accept, but if everything goes as planned my resistance won’t matter a bit.

    proto, I share your doubts about our ability to create artificial intelligence without inferential biases and so on. We should remember that we don’t have any examples of human-level intelligence outside of biology. The idea that we can create radically different thinking beings seems plausible but remains speculative. The SIAI approach isn’t the only one; AGI might instead come from close modeling of the human brain, as envisioned by Kurzweil.

    More fundamentally, I’m skeptical of Michael’s avowed clandestine strategy. This notion of independent researchers fashioning an impartial AI ruler under the radar of state power sounds awfully far-fetched. I suspect we’ll get a number of different national and perhaps corporate AIs instead. These won’t be programmed for the benefit of the species, but rather for specific groups. They’ll both compete and cooperate alongside enhanced humans, these interactions crafting a world more complex and chaotic than one guided by a singleton.

  7. eablair

    Speculating about what happens after the Singularity kind of violates the whole concept, doesn’t it?

    I was mainly concerned with Stuart Fox’s idea that people will be left out of the post-human future because they won’t be able to afford it. He doesn’t understand the power of the technology.

    I still feel that even some of the brighter people can’t break out of the biological schemata. The idea of super-intelligences competing for resources or competing for anything I feel is absurd. An economy. Competition. Hierarchy. They can’t break away from those ideas that seem so “natural” but are based in our own biology.

    If your consciousness is based in a chunk of computronium how many resources do you really need? You can have your own simulated universe.

    I’m sure there will be different levels of intelligence but that’s not the same as an economic hierarchy. But there I go speculating about post-Singularity reality. It’s like an ape speculating about nuclear physics and imagining bananas. Lots of big juicy bananas everywhere.

  8. Benjamin Abbott

    Whether AIs compete depends on their goals and motivations, which could be anything. Remember that we have no scientific standard by which to judge desires. I think it’s naive to assume AIs will necessarily transcend human drives. We’re the only working model and we’ll be the ones crafting them. If governments make them, as I suspect they will, they’ll be tied to current human ideas about dominance and hierarchy. If they learn from our culture (another likely possibility), such notions will be hard to avoid.

    Furthermore, AIs with grand designs for the galaxy or beyond would have ample cause to fight over resources. Energy and matter are finite. While the consumption desires of today’s people are utterly trivial on the cosmic scale, AIs may dream much bigger.

    I personally imagine a world of diversity in both ability and motivation, with state AIs existing alongside independent ones and enhanced humans. I’m profoundly dubious of our ability to rigidly determine the views of our creations. That goes against the present nature of intelligent minds, which shift in often surprising ways based on the circumstances. I have trouble envisioning intelligence without adaptation and unpredictability.

  9. Anastasis Germanidis

    >>What we see is not really “reality” — what we’re looking at is just the inside of our visual cortex.

    ^ This sentence is essentially equivalent to the sentence:

    “This sentence is wrong.”

  10. Adam

    Question 11:

    Where S is the “That Sucks” value of getting creamed by a truck, and t is the time from the singularity, what is lim t→0 S(t)?

