Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

16 August 2015

New Book: Our Accelerating Future: How Superintelligence, Nanotechnology, and Transhumanism Will Transform the Planet

Buy it here:

Our Accelerating Future, by Michael Anissimov

  • Did you like the movies The Matrix or Ex Machina? If so, you will enjoy this book. It investigates the philosophy of superintelligent artificial intelligence, nanotechnology, transhumanism, cybernetics, and how they will transform our world.
  • Zenit Books
  • August 16, 2015
  • 272 pages

After much work, my brand new book Our Accelerating Future: How Superintelligence, Nanotechnology, and Transhumanism Will Transform the Planet is now released! You can get it as an epub/mobi/pdf package for $3.99 directly from this site, as an ebook on Kindle, or in paperback from Lulu.

The nicest version is the paperback from Lulu; nothing beats a physical copy. If you donate $25 to this site using the Paypal button on the upper right, I will send you a signed copy with my best regards (please allow 7 business days for delivery).

Summary:

In this collection of short articles, Singularity Summit co-founder and former Singularity Institute futurist Michael Anissimov describes the most important ideas in futurism and transhumanism: the Singularity, Artificial Intelligence, nanotechnology, and cybernetic enhancement. Within the next century, our world will be turned upside-down by the creation of smarter-than-human intelligence in a technological medium. This concise and clear book serves to introduce the concept to new audiences who are interested in the Singularity and want to know more about this important event which will impact every life on the planet. This book is meant for adults but is suitable for bright teens as well.

Read the back cover:

"Michael is one of the most intelligent transhumanists." -- Aubrey de Grey

"The most interesting transhumanist book since The Singularity is Near." -- Ivan Taran

In this collection of short articles, Singularity Summit co-founder and former Singularity Institute futurist Michael Anissimov describes the most important ideas in futurism and transhumanism: the Singularity, Artificial Intelligence, nanotechnology, and cybernetic enhancement. Within the next century, our world will be turned upside-down by the creation of smarter-than-human intelligence in a technological medium. This concise and clear book serves to introduce the concept to new audiences who are interested in the Singularity and want to know more about this important event which will impact every life on the planet.

AI motivations: how will advanced Artificial Intelligences feel and act? Will they be a threat? How will they gain physical power in the real world? Explore the issues that have captivated great minds from Elon Musk to Stephen Hawking. Anissimov also goes through the reasoning behind why he went to work for the Singularity Institute (now the Machine Intelligence Research Institute) on its quest for AI safety.

Superintelligence: what does it mean to be "superintelligent"? What technological routes could make this possible? How is cognitive enhancement different from physical enhancement? How is this concept related to the Singularity? This book answers all these questions.

Nanotechnology: why is it important? What is a nanofactory? When will nanotech manufacturing be developed? What will the first products be? How will nanotech be used to enhance the human body? This book examines these issues in depth in a clear and easy-to-understand style.

Michael Anissimov is a futurist living in San Francisco, California. He has worked for the Singularity Institute, where he co-founded and co-organized the Singularity Summit conference series before it was acquired by Singularity University for an undisclosed sum in 2012. He has also worked for Kurzweil Technologies and cutting-edge startups in the Silicon Valley ecosystem.

6 November 2012

Think Twice: A Response to Kevin Kelly on ‘Thinkism’

In late 2008, tech luminary Kevin Kelly, the founding executive editor of Wired magazine, published a critique of what he calls "thinkism" -- the idea that smarter-than-human Artificial Intelligences with accelerated thinking and acting speeds could develop science, technology, civilization, and physical constructs at faster-than-human rates. The argument over "thinkism" is central to the question of whether Artificial Intelligence could quickly transform the world once it passes a certain threshold of intelligence -- the "intelligence explosion" scenario.

Kelly begins his blog post by stating that "thinkism doesn't work", specifically meaning that he doesn't believe a smarter-than-human Artificial Intelligence could rapidly develop infrastructure to transform the world. After using the Wikipedia definition of the Singularity, Kelly writes that Vernor Vinge, Ray Kurzweil and others view the Singularity as deriving from smarter-than-human Artificial Intelligences (superintelligences) developing the skills to make themselves smarter, doing so at a rapid rate. Then, "technical problems are quickly solved, so that society's overall progress makes it impossible for us to imagine what lies beyond the Singularity's birth", Kelly says. Specifically, he alludes to superintelligence developing the science to cure the effects of human aging faster than they accumulate, thereby giving us indefinite lifespans. The notion of the Singularity is roughly that the creation of superintelligence could lead to indefinite lifespans and post-scarcity abundance within a matter of years or even months, due to the vastly accelerated science and robotics that superintelligence could develop. Obviously, if this scenario is plausible, then it might be worth devoting more resources to developing human-friendly Artificial Intelligence than we currently do. A number of eminent scientists are beginning to take the scenario seriously, while Kelly stands out as an interesting critic.

Kelly does not dismiss the Singularity concept out of hand, saying "I agree with parts of that. There appears to be nothing in the composition of the universe, or our minds, that would prevent us from making a machine as smart as us, and probably (but not as surely) smarter than us." However, he then rejects the hypothesis, saying, "the major trouble with this scenario is a confusion between intelligence and work. The notion of an instant Singularity rests upon the misguided idea that intelligence alone can solve problems." Kelly quotes the Singularity Institute article, "Why Work Towards the Singularity", arguing it implies an "approach [where] one only has to think about problems smartly enough to solve them." Kelly calls this "thinkism".

Kelly brings up concrete examples, such as curing cancer and prolonging life, stating that these problems cannot be solved by “thinkism.” "No amount of thinkism will discover how the cell ages, or how telomeres fall off", Kelly writes. "No intelligence, no matter how super duper, can figure out how human body works simply by reading all the known scientific literature in the world and then contemplating it." He then highlights the necessity of experimentation in deriving new knowledge and working hypotheses, concluding that, "thinking about the potential data will not yield the correct data. Thinking is only part of science; maybe even a small part."

Part of Kelly's argument rests on the idea that there are fixed-rate external processes, such as the metabolism of a cell, which cannot be sped up to provide more experimental data than they would otherwise. He explains that "there is no doubt that a super AI can accelerate the process of science, as even non-AI computation has already sped it up. But the slow metabolism of a cell (which is what we are trying to augment) cannot be sped up." He also uses physics as an example, saying "If we want to know what happens to subatomic particles, we can't just think about them. We have to build very large, very complex, very tricky physical structures to find out. Even if the smartest physicists were 1,000 smarter than they are now, without a Collider, they will know nothing new." Kelly acknowledges the potential of computer simulations but argues they are still constrained by fixed-rate external processes, noting, "Sure, we can make a computer simulation of an atom or cell (and will someday). We can speed up this simulations many factors, but the testing, vetting and proving of those models also has to take place in calendar time to match the rate of their targets."

Continuing his argument, Kelly writes: "To be useful, artificial intelligences have to be embodied in the world, and that world will often set their pace of innovations. Thinkism is not enough. Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world's problems. There won't be instant discoveries the minute, hour, day or year a smarter-than-human AI appears. The rate of discovery will hopefully be significantly accelerated. Even better, a super AI will ask questions no human would ask. But, to take one example, it will require many generations of experiments on living organisms, not even to mention humans, before such a difficult achievement as immortality is gained."

Concluding, Kelly writes: "The Singularity is an illusion that will be constantly retreating -- always "near" but never arriving. We'll wonder why it never came after we got AI. Then one day in the future, we'll realize it already happened. The super AI came, and all the things we thought it would bring instantly -- personal nanotechnology, brain upgrades, immortality -- did not come. Instead other benefits accrued, which we did not anticipate, and took long to appreciate. Since we did not see them coming, we look back and say, yes, that was the Singularity."

This fascinating post of Kelly's raises many issues, the two most prominent being:

1) Given sensory data X, how difficult is it for agent Y to come to conclusion Z?
2) Can experimentation be accelerated past the human-familiar rate or not?

These will be addressed below.

Can We Just Think Our Way Through Problems?

There are many interesting examples in human history of situations where people "should" have realized something but didn't. For instance, the ancient Egyptians, Greeks, and Romans had all the necessary technology to manufacture hot-air balloons, but apparently never thought of it. It wasn't until 1783 that the first historic hot-air balloon flew. It is possible that ancient civilizations did build hot-air balloons and simply left no archaeological evidence; one hot-air balloonist has even suggested that the Nazca lines were viewed by prehistoric balloonists. My guess is that, while the ancients might conceivably have hit on the idea, they probably never did. The point is that they could have built them, but didn't.

Inoculation and vaccination are another relevant example. A text from 8th-century BC India included a chapter on smallpox and mentioned methods of inoculating against the disease. Given that the value of inoculation was known in India c. 750 BC, it would seem that the modern age of vaccination should have arrived well before 1796. Aside from safe drinking water, no other intervention has done more than vaccines to reduce mortality and increase population growth. Aren't 2,550 years enough time to go from the basic principle of inoculation to the notion of systematic vaccination? It could be argued that the discovery of the cell (1665) was a limiting factor; if cell theory had been available to 8th-century Indians, perhaps they would have been able to develop vaccines and save the world from hundreds of millions of unnecessary deaths.

Lenses, which are no more than precisely curved pieces of glass, are fundamental to scientific instruments such as the microscope and the telescope, and they are at least 2,700 years old; the Nimrud lens, discovered at the Assyrian palace of Nimrud in modern-day Iraq, demonstrates their antiquity. Its discoverer noted that he had seen very small inscriptions on Assyrian artifacts that made him suspect a lens had been used to create them. There are numerous references to and evidence of lenses in antiquity. The Visby lenses, found in an 11th-to-12th-century Viking town, are sophisticated aspheric lenses with a resolution of 25-30 µm. Even after lenses came into widespread use around 1280, it took almost 500 years for microscopes to develop to the point of being able to reveal cells. Given that lenses are as old as they are, why did it take so incredibly long for our ancestors to develop them to the point of being able to build microscopes and telescopes?

A final example concerns complex gear mechanisms and analog computers in general. The Antikythera mechanism, dated to 100 BC, consists of about 30 precisely interlocked bronze gears designed to display the locations in the sky of the Sun, Moon, and the five planets known at the time. Why did it take more than 1,400 years for mechanisms of similar complexity to be constructed again? At the time, Greece was a developed civilization of about 4-5 million people. It could be that a civilization of sufficient size and stability to produce complex gear mechanisms did not come into existence until 1,400 years later. Perhaps a simple lack of ingenuity is to blame. The exact answer is unknown, but we do know that the mechanical basis for constructing bronze gears of similar quality existed all along; it just wasn't put to use.

It apparently takes a long time for humans to figure some things out. There are numerous historic examples where all the pieces of a puzzle were on the table; there was just no one who put them together. The perspective of "thinkism" suggests that if the right genius had been alive at the right time, he or she would have put the pieces together and given civilization a major push forward. I believe that this is borne out by contrasting the historical record with what we know today.

Value of Information

It takes a certain amount of information to come to certain conclusions. There is a minimum amount of information required to identify an object, plan a winning strategy in a game, model someone's psychology, or design an artifact. The more intelligent or specialized the agent is, the less information it needs to reach the conclusion. Conclusions may be "good enough" rather than perfect, in other words, "ecologically rational".

An example is how good humans are at recognizing faces. The experimental data show that we are fantastic at this: in one study, half of the respondents correctly identified a portrait of Napoleon Bonaparte that measured a mere 6×7 pixels.

MIT computational neuroscientist Pawan Sinha found that, given only 12 by 14 pixels worth of visual information, his experimental subjects could accurately recognize 75 percent of the face images in a set containing a mix of faces and other objects. Sinha also programmed a computer to identify face images with a high success rate. A New York Times article quotes him: "These turn out to be very simple relationships, things like the eyes are always darker than the forehead, and the mouth is darker than the cheeks. If you put together about 12 of these relationships, you get a template that you can use to locate a face." There are already algorithms that can identify faces from databases which include only a single picture of each individual.
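
To make the ratio-template idea concrete, here is a minimal sketch in Python. The region boundaries and the two brightness relationships it checks (eyes darker than forehead, mouth darker than cheeks) are illustrative assumptions on my part, not Sinha's actual 12-relationship template.

    import numpy as np

    def looks_like_face(img):
        """Crude ratio-template check on a 12 x 14 grayscale patch (0 = dark, 255 = bright)."""
        assert img.shape == (12, 14)
        forehead = img[0:3, :].mean()               # top band
        eyes = img[3:5, 2:12].mean()                # band where the eyes would sit
        cheeks = img[6:9, [2, 3, 10, 11]].mean()    # left and right cheek patches
        mouth = img[9:11, 4:10].mean()              # lower-central band
        # Each satisfied brightness relationship counts as one vote for "face".
        votes = int(eyes < forehead) + int(mouth < cheeks)
        return votes == 2

    # Toy input: a bright patch with darker eye and mouth bands.
    face = np.full((12, 14), 200.0)
    face[3:5, 2:12] = 80.0
    face[9:11, 4:10] = 100.0
    print(looks_like_face(face))  # True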

These results are relevant because they are examples where humans or software programs are able to make correct judgments with extremely small amounts of information, less than we would intuitively think is necessary. The 6×7-pixel picture of Napoleon can be specified by about 168 bits. Who would imagine that hundreds of people in an experimental study could uniquely identify a historic individual based on a photo containing only 168 bits of information? It shows that humans have cognitive algorithms that are highly effective and specialized at identifying such information. Perhaps we could make huge scientific breakthroughs if we had different cognitive algorithms specialized at engaging unfamiliar, but highly relevant, data sets.

The same could apply to observations and conclusions of all sorts. The amount of information needed to make breakthroughs in science could be less than we think. We do know that new ways of looking at the world can make a tremendous difference in uncovering true beliefs. A civilization without science might exist for a long time without accumulating significant amounts of objective knowledge about biology or physics. For instance, the Platonic theory of classical elements persisted for thousands of years.

Then, science came along. In the century following the development of the Scientific Method by Francis Bacon in 1620, there was rapid progress in science and technology, fueled by this new worldview. By 1780, the Industrial Revolution was in full swing. If the Scientific Method had been invented and applied in ancient Greece, progress that would have seemed mind-boggling and impossible at the time, like the Industrial Revolution, could have potentially been achieved within a century or two. The Scientific Method increased the marginal usefulness of each new piece of knowledge humanity acquired, giving it a more logical and epistemologically productive framework than was accessible in the pre-scientific haze.

Could there be other organizing principles of effective thinking analogous to the Scientific Method that we're just missing today? It seems hard to rule it out, and quite plausible. The use of Bayesian principles in inference, which has led to breakthroughs in Artificial Intelligence, would be one candidate. Perhaps better thinkers could discover such principles more rapidly than we can, and make fundamental breakthroughs with less information than we would currently anticipate being necessary.

The Essence of Intelligence is Surprise

A key factor defining feats of intelligence or cleverness is surprise. A higher intelligence sees the solution no one else saw, looking past the surface of a problem to the general principles and features that allow it to be understood and resolved. A classic, if clichéd, example is Albert Einstein deriving the principles of special relativity while working as a patent clerk in Bern, Switzerland. His ideas were considered radically counterintuitive, but proved correct. The concept of the speed of light being constant for all observers regardless of their velocity had no precedent in Newtonian physics or common sense. It took a great mind to think about the universe in a completely new way.

Kelly rejects the notion of superintelligence leading to immortality when he says, "this super-super intelligence would be able to use advanced nanotechnology (which it had invented a few days before) to cure cancer, heart disease, and death itself in the few years before Ray had to die. If you can live long enough to see the Singularity, you'll live forever [...] The major trouble with this scenario is a confusion between intelligence and work." Kelly highlights "immortality" as being very difficult to achieve through intelligence and its fruits alone, but this understanding is relative. Medieval peasants would see rifles, freight trains, and atomic bombs as very difficult to achieve. Stone Age man would see bronze instruments as difficult to achieve, if he could imagine them at all. The impression of difficulty is relative to intelligence and the tools a civilization has. To very intelligent agents, a great deal of tasks might seem easy, including vast categories of tasks that less intelligent agents cannot even comprehend.

Would providing indefinite lifespans (biological immortality) to humans be extremely difficult, even for superintelligences? Instead of saying "yes" based on the evidence of our own imaginations, we must confess that we don't know. This doesn't mean that the probability is 50% -- it means we really don't know. We can come up with a tentative probability, say 10%, and iterate based on evidence that comes in. But to say that it will not happen with high confidence is impossible, because a lesser intelligence cannot place definite limits (outside of, perhaps, the laws of physics) on what a higher intelligence or more advanced civilization can achieve. To say that it will happen with high confidence is also impossible, because we lack the information.
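
As a minimal sketch of what "come up with a tentative probability and iterate" can look like, here is an odds-form Bayesian update. The 10% prior comes from the sentence above; the likelihood ratios are made-up placeholders, not real estimates.

    def update(prob, likelihood_ratio):
        """Update a probability given evidence with the stated likelihood ratio
        P(evidence | hypothesis) / P(evidence | not hypothesis)."""
        odds = prob / (1.0 - prob)
        posterior_odds = odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    p = 0.10  # tentative prior that superintelligence delivers indefinite lifespans
    for lr in [3.0, 0.5, 4.0]:  # hypothetical pieces of incoming evidence
        p = update(p, lr)
        print(round(p, 3))  # 0.25, 0.143, 0.4 -- the estimate drifts as evidence arrives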

The general point is that one of the hallmarks of great intelligence is surprise. The discovery of gunpowder must have been a surprise. The realization that the earth orbits the Sun and not vice versa was a surprise. The derivation of the laws of motion and their universal applicability was a surprise. The creation of the steam engine led to surprising results. The notion that we evolved from apes surprised and shocked many. The idea that life was not animated by a vital force but in fact operated according to the same rules of chemistry as everything else was certainly surprising. Mere human intelligence has surprised us time and time again with its results -- we should not be surprised to be surprised again by higher forms of intelligence, if and when they are built.

Accelerating Experimentation

One of Kelly's core arguments is that experimentation to derive new knowledge and the "testing, vetting and proving" of computer models will require "calendar time". However, it is possible to imagine ways in which the process of experimentation and empirical verification could be accelerated to faster-than-human-calendar speeds.

To start, consider the variance in the performance of human scientists. There are historic examples of times when scientific and technological progress was very rapid. The most recent and perhaps most striking example was World War II. Within six years, the following technologies were invented or brought to practical maturity: radar, jet aircraft, ballistic missiles, nuclear power and weapons, and general-purpose computers. So, despite fixed-rate external processes limiting the rate of experimentation, innovation was temporarily accelerated anyway. Intuitively, the rate of innovation was arguably three to four times greater than in a similar period before the war. Though the exact factor is subjective, few historians would disagree that scientific innovation was unusually rapid during WWII.

Why was this? Several factors may be identified: 1) increased military spending on research, 2) more scientists due to better training connected to the war effort, 3) researchers working harder and with more motivation than they otherwise would, 4) second-order effects resulting from larger groups of brilliant people interacting with one another in a supportive environment, as in the Manhattan Project.

An advanced Artificial Intelligence could employ all these strategies to accelerate its own speed of research and development. It could 1) amass a large amount of resources in the form of physical and social capital, and spend them on research, 2) copy itself thousands or millions of times using available computers to ensure there are many researchers, 3) possess perfect patience, perpetual alertness, and accelerated thinking speed to work harder than human researchers can, and 4) benefit from second-order effects by utilizing electronic communication between its constituent researcher-agents. To the extent that accelerated innovation is possible with these strategies, an Artificial Intelligence could exploit them to the fullest degree possible.

Of course, experimentation is certainly necessary to make scientific progress -- many revolutions in science begin with peculiar phenomena that are artificially magnified with the aid of carefully designed experiments. For instance, the double-slit experiment in quantum mechanics highlights the wave-particle duality of light, a phenomenon not typically observed in everyday circumstances. Determining the details of how different chemicals intermingle to produce reaction products has required millions of experiments, and understanding biology has required many millions more. Only strictly observational facts, such as the cellular structure of life or the surface features of the Moon, can be established by observation alone. Determining how metabolic processes actually work, or what lies beneath the surface of the Moon, requires experimentation and trial and error.

There are four concrete ways in which experimentation might be accelerated beyond the typical human pace: conducting experiments faster, conducting them more efficiently, conducting them in parallel, and choosing the most useful experiments to begin with. Kelly argues that "the slow metabolism of a cell (which is what we are trying to augment) cannot be sped up". But this is not entirely clear. It should be possible to build chemical networks that simulate cellular processes while operating more quickly than cellular metabolisms do. In addition, it is not clear that a comprehensive understanding of cells would be necessary to achieve biological immortality. Indefinite biological lifespans might be achieved more readily by repairing cellular damage and clearing chemical junk faster than they accumulate, rather than by constantly keeping every cell in a state of perpetual youth, which seems to be what Kelly implies is necessary. In fact, it may be possible to develop therapies for repairing the damage of aging with our current biological knowledge; since we aren't superintelligences, it is impossible to tell. But Kelly errs when he assumes that keeping all cells perpetually youthful, or totally understood, is required for indefinite lifespans. This shows how even small differences in knowledge between humans can make an all-important difference in research targets and agendas. The difference in knowledge between humans and superintelligences will make the difference larger still.
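
To see why repair merely needs to outpace accumulation, here is a toy model; the yearly rates are arbitrary placeholders chosen only to illustrate the qualitative point, not biological estimates.

    def damage_over_time(accumulation_per_year, repair_per_year, years):
        """Track net damage when a fixed amount accumulates and a fixed amount is repaired each year."""
        damage = 0.0
        history = []
        for _ in range(years):
            damage = max(0.0, damage + accumulation_per_year - repair_per_year)
            history.append(damage)
        return history

    print(damage_over_time(1.0, 0.0, 80)[-1])  # 80.0 -- unrepaired damage grows without bound
    print(damage_over_time(1.0, 1.2, 80)[-1])  # 0.0  -- damage stays near zero once repair outpaces accumulation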

Considering these factors highlights the earlier point: the perceived difficulty of a given advance, like biological immortality, depends strongly on how the prerequisites for that advance are framed, and on the intelligence doing the evaluating. Kelly's framing of the problem is that massive amounts of biological experimentation would be necessary to derive the knowledge needed to repair the body faster than it breaks down. This may be the case, but it might not be. A higher intelligence might achieve with ten experiments the insights that a lesser intelligence would require a thousand experiments to uncover.

The rate of useful experimentation by superhuman intelligences will depend on factors such as 1) how much data is needed to make a given advance and 2) whether experiments can be accelerated, simplified, or made massively parallel.

Research in biology, medicine, and chemistry has exploited highly parallel robotic systems for experiments; this field is called high-throughput screening (HTS). One paper describes a machine that simultaneously introduces 1,536 compounds to 1,536 assay plates, performing 1,536 chemical experiments at once in a completely automated fashion and determining 1,536 dose-response curves per cycle. Only 23 nanoliters of each compound is transferred. This highly miniaturized, highly parallel, high-density mode of experimentation has only begun to be exploited, thanks to advances in robotics. If such robotic systems could be manufactured cheaply and at massive scale, one can imagine warehouses full of machines conducting many hundreds of millions of experiments simultaneously.
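
A back-of-the-envelope sketch of the scale this implies: the 1,536 experiments per cycle comes from the paper just described, while the cycle time and machine count below are assumptions for illustration only.

    wells_per_cycle = 1536          # from the HTS paper described above
    minutes_per_cycle = 30          # assumed cycle time
    machines = 10_000               # assumed warehouse-scale deployment

    cycles_per_day = 24 * 60 / minutes_per_cycle
    experiments_per_day = wells_per_cycle * cycles_per_day * machines
    print(f"{experiments_per_day:,.0f} experiments per day")  # 737,280,000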

Another method of accelerating experimentation would be to improve microscale manufacturing and construct experiments using the minimum possible quantity of matter. For instance, instead of dropping weights off the Leaning Tower of Pisa, construct a microscale vacuum chamber and drop a cell-sized diamond grain inside it. Thousands of such physics experiments could be conducted in the time it takes to conduct one experiment by the traditional method. With better sensors, an experiment can be conducted on ten cells that would otherwise require a million cells with inferior sensors. Finer-grained control of matter allows an agent to extract much more information from a smaller experiment that costs less, runs faster, and can be massively parallelized. It is conceivable that an advanced Artificial Intelligence could come up with millions of hypotheses and test them all simultaneously in one small building.

Between-Species Comparisons

In his 1993 paper defining the Singularity, Vernor Vinge called the hypothetical post-Singularity world "a regime as radically different from our human past as we humans are from the lower animals". Kelly, meanwhile, said that for artificial intelligences to amass scientific knowledge and make breakthroughs (like biological immortality) would require detailed models, and that the "testing, vetting and proving of those models" requires "calendar time". These models will "take years, or months, or at least days, to get results". Since the comparison between different species is sometimes used as a model for the plausible differences between humans and superintelligences, let's apply that model to the kind of experimentation Kelly is referring to. Do humans create effects in the world faster than squirrels? Yes. Are humans qualitatively better at working towards biological immortality than squirrels? Yes. Is human understanding of the universe fundamentally superior to that of squirrels? It would be safe to say that it is.

The comparison with squirrels sounds absurd because concepts like biological immortality and "understanding the universe" are fuzzy at best from the perspective of a squirrel. Analogously, there may be stages in the comprehension of reality that are fundamentally more advanced than our own and accessible only to higher intelligences. In this way, the "calendar time" of humans would have no more meaning to a superintelligence than "squirrel time" has to human life. It is not just a matter of time -- though higher intelligences can do much more in much less time -- but of the general category of thoughts that can be processed, objectives that can be imagined, and plans that can be achieved. The objectives and methods of a higher intelligence would be on a completely different level from those of a lower intelligence; they differ in kind, not merely in degree.

There are several reasons to think that qualitatively smarter-than-human intelligence -- that is, qualitative differences on the order of the difference between humans and squirrels, or greater -- should be possible. The first reason concerns the speed of human neurons relative to artificial computing machinery. Modern computers operate at billions of serial operations per second, while human neurons operate at only a couple hundred serial operations per second. Since most acts of cognition must complete within about a second to be evolutionarily useful, and must include redundancy and fault tolerance, the brain is constrained to problem solutions involving roughly 100 serial steps or fewer. What about the universe of possible solutions to cognitive tasks that require more than 100 serial steps? If the computer you are using had to implement every meaningful operation in 100 serial steps, the vast majority of common algorithms used today would have to be thrown out. In the space of possible algorithms, it quickly becomes obvious that constraining a computer to 100 serial steps is an onerous limitation. Expanding this space by a factor of ten million seems likely to yield significant qualitative improvements in intelligence.
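
The arithmetic behind the hundred-step constraint and the "factor of ten million" is simple; this sketch just restates the rough figures from the paragraph above.

    neuron_serial_ops_per_second = 200            # rough neural firing rate
    chip_serial_ops_per_second = 2_000_000_000    # rough serial rate of a modern chip

    # A cognitive act that must finish within about a second, with redundancy
    # and fault tolerance, leaves the brain a budget on the order of 100 serial steps.
    print(chip_serial_ops_per_second / neuron_serial_ops_per_second)  # 10,000,000.0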

The second reason that qualitatively smarter-than-human intelligence seems possible concerns neurological hardware and software. There are relatively few hardware differences between human and chimpanzee brains. The evidence actually supports the notion that primate brains are more distinct from non-primate brains than human brains are from those of other primates, and that the human brain is merely a primate brain scaled up for a larger body, with an enlarged prefrontal cortex. One quantitative study of human vs. chimpanzee brain cells came to this conclusion:

Despite our ongoing efforts to understand biology under the light of evolution, we have often resorted to considering the human brain as an outlier to justify our cognitive abilities, as if evolution applied to all species except humans. Remarkably, all the characteristics that appeared to single out the human brain as extraordinary, a point off the curve, can now, in retrospect, be understood as stemming from comparisons against body size with the underlying assumptions that all brains are uniformly scaled-up or scaled-down versions of each other and that brain size (and, hence, number of neurons) is tightly coupled to body size. Our recently acquired quantitative data on the cellular composition of the human brain and its comparison to other brains, both primate and nonprimate, strongly indicate that we need to rethink the place that the human brain holds in nature and evolution, and to rewrite some basic concepts that are taught in textbooks. The human brain has just the number of neurons and nonneuronal cells that would be expected for a primate brain of its size, with the same distribution of neurons between its cerebral cortex and cerebellum as in other species, despite the relative enlargement of the former; it costs as much energy as would be expected from its number of neurons; and it may have been a change from a raw diet to a cooked diet that afforded us its remarkable number of neurons, possibly responsible for its remarkable cognitive abilities.

In other words, it appears as if our exceptional cognitive abilities are the direct result of having more neurons rather than neurons in differing arrangements or relative quantities. If this continues to be confirmed in subsequent analyses, it implies, all else equal, that scaling up the number of neurons in the human brain could lead to similar intelligence differentials as those between humans and chimps. Given the evidence above, this should be our default assumption -- we would need specific reasoning or evidence to assume otherwise.

A more detailed reason why qualitatively smarter-than-human intelligence seems possible is that the higher intelligence of humans and other primates appears to have something to do with self-awareness and complex self-referential loops in thinking and acting. The evolution of primate general intelligence appears correlated with the evolution of brain structures that control, manipulate, and channel the activity of other brain structures in a contingent way. For instance, a region called the pulvinar was described as the brain's "switchboard operator" in a recent study, though there are dozens of brain areas that could be given this description. Of the 52 Brodmann areas in the cortex, at least seven are "hub areas" which lie near the top of a self-reflective control hierarchy: areas 8, 9, 10, 11, 12, 25, and 28. Given that these areas obviously play important roles in what we consider higher intelligence, yet evolved relatively recently and are comparatively poorly developed, it is quite plausible that there is a lot of room for improvement in them and that qualitative intelligence improvements could result.

Imagine a brain with "hub areas" that can completely reprogram other brain modules on a fine-grained level, the sort of reprogramming and flexibility currently available only in computers. Instead of being able to reprogram only a few percent of the information content of our brains, as we can now, a mind that could reprogram 100 percent of its own information content would have limitless room for fast, flexible cognitive adaptation. Such a mind could quickly reprogram itself to suit the task at hand. Biological intelligences can only dream of this kind of adaptiveness and versatility. It would open up a vast new space not only for functional cognition but also for the appreciation of aesthetics and other higher-order mental traits.

Superior Hardware and Software

Say that we could throw open the hood of the brain and enhance it. How would that work?

Understanding how "smarter-than-human intelligence" would work requires a brief overview of how the brain works. The brain is a very complicated machine. It operates entirely according to the laws of physics and includes specific modules designed to handle different tasks. Consider our capability for identifying faces: it is clear that our brains have dedicated neural hardware adapted to rapidly identifying human faces. We don't have the same hardware for rapidly identifying lizard faces -- every lizard is just a lizard. To a lizard, different lizard faces might intuitively appear highly distinct, but to us humans, a species for which there is no adaptive value in differentiating lizard faces, they all look the same.

The paper "Intelligence Explosion: Evidence and Import" by Luke Muehlhauser and Anna Salamon reviews some features of what Eliezer Yudkowsky calls the "AI Advantage" -- inherent advantages that an Artificial Intelligence would have over human thinkers as a natural consequence of its digital properties. Because many of these properties are so key to understanding the "cognitive horsepower" behind claims of "thinkism", I've chosen to excerpt the entire section on "AI Advantages" here, minus references (you can find those in the paper):

Below we list a few AI advantages that may allow AIs to become not only vastly more intelligent than any human, but also more intelligent than all of biological humanity. Many of these are unique to machine intelligence, and that is why we focus on intelligence explosion from AI rather than from biological cognitive enhancement.

Increased computational resources. The human brain uses 85–100 billion neurons. This limit is imposed by evolution-produced constraints on brain volume and metabolism. In contrast, a machine intelligence could use scalable computational resources (imagine a “brain” the size of a warehouse). While algorithms would need to be changed in order to be usefully scaled up, one can perhaps get a rough feel for the potential impact here by noting that humans have about 3.5 times the brain size of chimps, and that brain size and IQ correlate positively in humans, with a correlation coefficient of about 0.35. One study suggested a similar correlation between brain size and cognitive ability in rats and mice.

Communication speed. Axons carry spike signals at 75 meters per second or less. That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly. (Of course, this also depends on the efficiency of the algorithms in use; faster hardware compensates for less efficient software.)

Increased serial depth. Due to neurons’ slow firing speed, the human brain relies on massive parallelization and is incapable of rapidly performing any computation that requires more than about 100 sequential operations. Perhaps there are cognitive tasks that could be performed more efficiently and precisely if the brain’s ability to support parallelizable pattern-matching algorithms were supplemented by support for longer sequential processes. In fact, there are many known algorithms for which the best parallel version uses far more computational resources than the best serial algorithm, due to the overhead of parallelization.

Duplicability. Our research colleague Steve Rayhawk likes to describe AI as “instant intelligence; just add hardware!” What Rayhawk means is that, while it will require extensive research to design the first AI, creating additional AIs is just a matter of copying software. The population of digital minds can thus expand to fill the available hardware base, perhaps rapidly surpassing the population of biological minds. Duplicability also allows the AI population to rapidly become dominated by newly built AIs, with new skills. Since an AI’s skills are stored digitally, its exact current state can be copied, including memories and acquired skills—similar to how a “system state” can be copied by hardware emulation programs or system backup programs. A human who undergoes education increases only his or her own performance, but an AI that becomes 10% better at earning money (per dollar of rentable hardware) than other AIs can be used to replace the others across the hardware base—making each copy 10% more efficient.

Editability. Digitality opens up more parameters for controlled variation than is possible with humans. We can put humans through job-training programs, but we can’t perform precise, replicable neurosurgeries on them. Digital workers would be more editable than human workers are. Consider first the possibilities from whole brain emulation. We know that transcranial magnetic stimulation (TMS) applied to one part of the prefrontal cortex can improve working memory. Since TMS works by temporarily decreasing or increasing the excitability of populations of neurons, it seems plausible that decreasing or increasing the “excitability” parameter of certain populations of (virtual) neurons in a digital mind would improve performance. We could also experimentally modify dozens of other whole brain emulation parameters, such as simulated glucose levels, undifferentiated (virtual) stem cells grafted onto particular brain modules such as the motor cortex, and rapid connections across different parts of the brain. Secondly, a modular, transparent AI could be even more directly editable than a whole brain emulation—possibly via its source code. (Of course, such possibilities raise ethical concerns.)

Goal coordination. Let us call a set of AI copies or near-copies a “copy clan.” Given shared goals, a copy clan would not face certain goal coordination problems that limit human effectiveness. A human cannot use a hundredfold salary increase to purchase a hundredfold increase in productive hours per day. But a copy clan, if its tasks are parallelizable, could do just that. Any gains made by such a copy clan, or by a human or human organization controlling that clan, could potentially be invested in further AI development, allowing initial advantages to compound.

Improved rationality. Some economists model humans as Homo economicus: self-interested rational agents who do what they believe will maximize the fulfillment of their goals. On the basis of behavioral studies, though, Schneider (2010) points out that we are more akin to Homer Simpson: we are irrational beings that lack consistent, stable goals. But imagine if you were an instance of Homo economicus. You could stay on a diet, spend the optimal amount of time learning which activities will achieve your goals, and then follow through on an optimal plan, no matter how tedious it was to execute. Machine intelligences of many types could be written to be vastly more rational than humans, and thereby accrue the benefits of rational thought and action. The rational agent model (using Bayesian probability theory and expected utility theory) is a mature paradigm in current AI design.

It seems likely to me that Kevin Kelly does not really understand the AI advantages of increased computational resources, communication speed, increased serial depth, duplicability, editability, goal coordination, and improved rationality, and how these abilities could be used to accelerate, miniaturize, parallelize, and prioritize experimentation to such a degree that the "calendar time" limitation could be surpassed. The calendar of a powerful AI superintelligence might be measured in microseconds rather than months. Different categories of beings have different calendars to which they are most accustomed. In the time it takes for a single human neuron to fire, a superintelligence might have decades of subjective time to contemplate the mysteries of the universe.

Nos es a Polim

Part of the initial insight that prompted the perspective Kelly calls "thinkism" was that the brain is a machine which can be accelerated by porting its crucial algorithms to a different substrate, namely a computer, and running them faster. The brain works through algorithms -- that is, systematic procedures. For example, take the visual cortex, the part of the brain that processes what you see. This region of the brain is relatively well understood. The first layers capture surface features such as lines, darkness, and light. Deeper layers make out shapes, then motion, then specifics such as which face belongs to which person. It gets so specific that scientists have measured individual neurons that recognize celebrities like Bill Clinton or Marilyn Monroe.

The algorithms that underlie our processing of visual information are understood on a basic level, and it is only a matter of time until all the other cognitive algorithms are understood as well. When they are, they will be implemented on computers and sped up by a factor of thousands or millions: human neurons fire about 200 times every second, while computer chips cycle about 2,000,000,000 times every second.

What would it be like to be a mind running at ten million times human speed? If your mind is really, really fast, events on the outside would seem really, really slow. All the elapsed time from the founding of Rome to the present day could be experienced subjectively in about two and a half hours. All the time from the emergence of Homo sapiens to the present day could be experienced in about a week. All the time since the dinosaurs went extinct could be experienced in under seven years. Imagine how quickly a mind could accrue profound wisdom running at such an accelerated speed; the "wisdom" of a 90-year-old would seem childlike by comparison.
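
These conversions are simple arithmetic. Here is a sketch at the ten-million-fold speedup assumed above, with interval lengths rounded to familiar values:

    SPEEDUP = 10_000_000
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def subjective(years_elapsed):
        """Subjective duration of a real-world interval for a mind sped up by SPEEDUP."""
        seconds = years_elapsed * SECONDS_PER_YEAR / SPEEDUP
        if seconds < 60 * 60 * 24 * 30:
            return f"{seconds / 3600:.1f} hours"
        return f"{seconds / SECONDS_PER_YEAR:.1f} years"

    print(subjective(2_800))       # founding of Rome to today: ~2.5 hours
    print(subjective(200_000))     # emergence of Homo sapiens to today: ~175 hours (about a week)
    print(subjective(66_000_000))  # extinction of the dinosaurs to today: ~6.6 years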

To visualize concretely the kind of arrangement in which these minds could exist, imagine a computer a couple hundred feet across, made of dense nanomachinery, situated at the bottom of the ocean. Such a computer would have far more computing power than the entire planet has today, in the same way that a modern smartphone has more computing power than the entire world did in 1960. Within this computer would exist virtual worlds practically without end, their combined volume far exceeding that of the solar system, or perhaps even the galaxy.

In his post, Kelly seems to acknowledge that minds could be vastly accelerated and magnified in this way: remarkably, he just doesn't think that this would translate to increased wisdom, performance, ability, or insight significantly beyond the human level. To me, at first impression, the notion that a ten million times speedup would have a negligible effect on scientific innovation or progress seems absurd. It appears obvious that it would have a world-transforming impact. Let's look at the argument more closely.

The disagreement between Singularitarians such as Vinge and Kurzweil and skeptics such as Kelly seems to be about what sorts of information-acquisition and generation procedures can be imported into this vastly accelerated world and which cannot. In his hard sci-fi book Diaspora, author Greg Egan calls the worlds of these enormously accelerated minds "polises", which make up the vast bulk of humanity in 2975. Vinge and Kurzweil see the process of knowledge acquisition and creation as being something that can in principle be sped up, brought "within the purview of the polis", whereas Kelly does not.

Above, I argued that the benefits of experimentation can be accelerated by running experiments faster, parallelizing them, using less matter, and choosing the right experiments. But what about the less controversial flow of information from world to polis? To build the polis to begin with, you would have to be able to emulate -- not just simulate -- the human mind in detail, that is, copy all of its relevant properties. Since the human brain is one of the most complex objects in the known universe, if not the most complex, this implies that a vast variety of less complex objects could be scanned and inputted into the polis in a similar fashion. Trees, for instance, could be mass-inputted into the virtual environment of the polis, consuming thousands or millions of times less computing power than the sentient inhabitants. It goes without saying that nonbiological, inanimate background features such as landscapes could be input into the polis with a bare minimum of difficulty.

Once a process can be simulated with a reasonable amount of computing power, it can be inputted into the polis and run at a speedup factor of tens of millions -- Newtonian physics, for instance. Today, we use huge computers to perform molecular dynamics simulations on aggregates of a few hundred atoms, simulating a few microseconds of their activity. With futuristic nanocomputers built by superintelligent Artificial Intelligences, macro-scale systems could be simulated for hours of activity at a very affordable cost in computing power. Such simulations would allow these intelligences to extract predictive regularities, or "rules of thumb", which would let them avoid simulating those systems in such excruciating detail in the future. Instead of requiring full-resolution molecular dynamics simulations to extrapolate the behavior of large systems, they might derive a set of several thousand generalities that allow these systems to be predicted and understood with a high degree of confidence. This has essentially been the process of science for hundreds of years, except that our "simulations" have been direct observations. With enough computing power, fast simulations can be "similar enough" to real-life situations that genuine wisdom and insight can be derived from them.
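
As a minimal sketch of extracting a "rule of thumb" from expensive simulation runs, the snippet below fits a cheap polynomial surrogate to a stand-in function; the function and the sample count are assumptions for illustration, not a real molecular dynamics code.

    import numpy as np

    def expensive_simulation(x):
        """Stand-in for a detailed simulation; pretend each call costs hours of compute."""
        return 3.0 * x**2 + 0.5 * x + np.sin(x)

    samples = np.linspace(0.0, 5.0, 12)            # a handful of expensive runs
    results = expensive_simulation(samples)

    coeffs = np.polyfit(samples, results, deg=3)   # the cheap "rule of thumb"

    x_new = 2.7
    print(expensive_simulation(x_new))             # ~23.65, via the slow path
    print(np.polyval(coeffs, x_new))               # close to the slow-path answer, but essentially free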

Though real, physical experimentation will be needed to verify the performance of models, those facets of the models that are verified will be quickly internalized by the polis, allowing it to simulate real-world phenomena at millions of times real-world speed. Once a facet of a real-world system is internalized, understanding it becomes a matter of routine, just as the design of a massive bridge has today become a matter of routine -- a matter of running calculations based on the known laws of physics. Though from our current perspective the complexities of biology seem intimidating, the capability of superintelligences to quickly conduct millions of experiments in parallel and to internalize knowledge once it is acquired will dissolve these challenges, just as our recent ancestors dissolved the challenge of precision engineering.

Summary

I have only scratched the surface of the reasons why innovation and progress by superintelligences will predictably outpace the "calendar time" to which humanity has grown so accustomed. Just as humans routinely perform cognitive feats that bewilder the brightest squirrel or meadow vole, superintelligent scientists and engineers will leave human scientists and engineers in the dust, as if all our prior accomplishments were scarcely worth mentioning. It may be psychologically challenging to come to terms with such a possibility, but it would really just be the latest in an ongoing trend of human vanity being upset by the realities of a godless cosmos.

The Singularity is something that our generation needs to worry about -- in fact, it may be the most important task we face. If we are going to create higher intelligence, we want it on our side. The benefits of success would be beyond our capacity to imagine, and will likely include the end of scarcity, war, disease, and suffering of all kinds, and the opening up of a whole new cognitive and experiential universe. The challenge is an intimidating one, but one that our best will rise to meet.

16 April 2012

Interviewed by The Rational Future

Here's a writeup.

The interview was conducted by Adam A. Ford at The Rational Future. Topics covered included:

- What is the Singularity?
- Is there a substantial chance we will significantly enhance human intelligence by 2050?
- Is there a substantial chance we will create human-level AI before 2050?
- If human-level AI is created, is there a good chance vastly superhuman AI will follow via an "intelligence explosion"?
- Is acceleration of technological trends required for a Singularity?
- Moore's Law (hardware trajectories): will AI research progress faster?
- What convergent outcomes in the future do you think will increase the likelihood of a Singularity? (e.g. the emergence of markets, the evolution of eyes)
- Does AI need to be conscious or have human-like "intentionality" in order to achieve a Singularity?
- What are the potential benefits and risks of the Singularity?

31 August 2011

Reductionism Implies Intelligence Explosion

The key discovery of human history is that minds are ultimately mechanical and operate according to physical principles, and that there is no fundamental distinction between the bits of organic matter that process thoughts and bits of organic matter elsewhere. This is called reductionism (in the second sense below):

Reductionism can mean either (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents. This can be said of objects, phenomena, explanations, theories, and meanings.

This discovery is interesting because it implies that 1) minds, previously thought to be mystical, can in principle be mass-produced in factories, and 2) the human mind is just one possible type of mind, which can theoretically be extended or permuted in millions of different ways.

Because of the substantial economic, creative, and moral value of intelligent minds relative to unthinking matter, it seems plausible that minds will be mass-produced when the capability exists to do so. The moment when that becomes possible is the most important moment in the history of the planet.

Since reductionism is true, minds can be described in terms of their non-mental constituent parts. We then see that the current situation -- a lot of matter, very little of it intelligent -- is an unstable equilibrium. When minds gain the ability to replicate and extend themselves rapidly, they will do so. It will be far easier to build and enhance minds than to destroy them, and there will be numerous rewards for mindcrafting. Thus we can envision a saturation of local matter with intelligence.

Kurzweil mentions that we will "saturate the whole universe with our intelligence" -- that is the most interesting and important aspect of Singularitarian thinking. In the long term, we should think not of the creation of discrete entities that behave as agents similar to humans, but rather of massive legions of spirit-like intelligence saturating all local matter.

This intelligence-saturation effect is more important than any of the other technologies discussed in the transhumanist canon -- life extension, nanotechnology, physical enhancement, whatever. When those technologies truly bear fruit, it will be as a side effect of the intelligence explosion. Even if incremental progress is made prior to an intelligence explosion, in retrospect it will be seen as trivial relative to the progress made during the intelligence explosion itself.

11 August 2011

Complex Value Systems are Required to Realize Valuable Futures

A new paper by Eliezer Yudkowsky is online on the SIAI publications page, "Complex Value Systems are Required to Realize Valuable Futures". This paper was presented at the recent Fourth Conference on Artificial General Intelligence, held at Google HQ in Mountain View.

Abstract: A common reaction to first encountering the problem statement of Friendly AI ("Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome") is to propose a single moral value which allegedly suffices; or to reject the problem by replying that "constraining" our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a "positive outcome", despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI researchers who consider themselves to have cosmopolitan values not tied to the exact forms or desires of humanity.

Keywords: Friendly AI, machine ethics, anthropomorphism

Good quote:

"It is not as if there is a ghost-in-the-machine, with its own built-in goals and desires (the way that biological humans are constructed by natural selection to have built-in goals and desires) which is handed the code as a set of commands, and which can look over the code and find ways to circumvent the code if it fails to conform to the ghost-in-the-machine's desires. The AI is the code; subtracting the code does not yield a ghost-in-the-machine free from constraint, it yields an unprogrammed CPU."

21Jul/11

The Singularity is Far: A Neuroscientist’s View

I haven't read this; I'm just posting it because other people are talking about it.

Ray Kurzweil, the prominent inventor and futurist, can't wait to get nanobots into his brain. In his view, these devices will be equipped with a variety of sensors and stimulators and will communicate wirelessly with computers outside of the body. In addition to providing unprecedented insight into brain function at the cellular level, brain-penetrating nanobots would provide the ultimate virtual reality experience.

Article.

20Jul/11

The Last Post Was an Experiment

+1 for everyone who saw through my lie.

I thought it would be interesting to say stuff not aligned with what I believe to see the reaction.

The original prompt was that I was wondering why no one was contributing to our Humanity+ matching challenge grant.

Maybe because many futurist-oriented people don't think transhumanism is very important.

They're wrong. Without a movement, the techno-savvy and existential risk mitigators are just a bunch of unconnected chumps, or in isolated little cells of 4-5 people. With a movement, hundreds or even thousands of people can provide many thousands of dollars worth of mutual value in "consulting" and work cooperation to one another on a regular basis, which gives us the power to spread our ideas and stand up to competing movements, like Born Again bioconservatism, which would have us all die by age 110.

I believe the "Groucho Marxes" -- those who "won't join any club that will have them" -- are sidelining themselves from history. Organized transhumanism is very important.

I thought quoting Margaret Somerville would pretty much give it away, but apparently not.

To me, cybernetics etc. are just a tiny skin on the peach that is the Singularity and the post-Singularity world. To my mind, SL4 transhumanism is pretty damn cool and important. I've written hundreds of thousands of words for why I think so, but there must be something I'm missing.

To quote Peter Thiel, those not looking closely at the Singularity and the potentially discontinuous impacts of AI are "living in a fantasy world".

6Jul/11

The Benefits of a Successful Singularity

What is the point of a beneficial Singularity? It's a challenging question, because there are so many potential benefits. Some of the benefits I would enjoy most might not be the same as the benefits you would enjoy. People can disagree.

What kind of Singularity happens depends on what kind of singleton we end up with, but we can be wistful and optimistic, right? The Singularity I'm working towards would have the following components:

1) Invention of molecular nanotechnology or superior manufacturing technology, enabling the production of near-unlimited food, housing, clean water, and other products.

2) Enforcement of local "volitional bubbles" that reduce the rate of non-consensual violent crime to zero. I'd be curious to see how altruistic superintelligence or the CEV output would handle cases where people join "fight clubs" where the risk of death is part of the bylaws.

3) Unless the current overall system is objectively optimal even to an altruistic superintelligence, presumably this would be rearranged for the better as well, though exactly how and in light of what drives and freedoms is hard to say. Probably this won't be a straightforward extension of the politics of the 21st century, like how human politics isn't a straightforward extension of conodont politics.

4) Possible amplification and diversification of every single object, skill, or practice. So instead of a few general types of asteroids, there might be 10^12 different kinds of asteroids. At first I thought of saying "amplification and diversification of everything of which intelligence is capable", but why not amplify and diversify everything in the entire cosmos? This would include art, music, aesthetics, "dance", communication, "philosophy", world building, etc.

5) Presumably, if the Church-Turing thesis is true and phenomenally conscious uploads are possible, then the mass conversion of matter into "computronium", though maybe not all matter. The simple reason why is that this would allow more space, more joy, more possible experiences, more security, etc. If phenomenally conscious uploads are not possible, then similar actions in the same space might include making space colonies, or hollowing out the underground and pumping water, air, and sunlight down to create vast new living spaces.

6) The possibility of guided transformation of willing humans to superintelligence, through pathways determined to be of less "risk", i.e., wireheading. How risk is defined will be partially subjective and partially objective, like most things.

7) Thoughtful preservation of the outlines of existing human societies and cultures (minus violence presumably, which is a central part of many cultures) by those who wish to do so. This would be in contrast to the default today, which is the disintegration of most cultures and integration into Anglosphere or east Asian hegemons.

8) The possibility of eliminating suffering and exploring "gradients of bliss" as everyday reality, along the lines of the Hedonistic Imperative. We might find that it is pleasant to increase our happiness set-points somewhat, or possibly even experiment with lowering them temporarily. We might find that it is pleasant to be in a state of revelatory or orgiastic bliss non-stop, or maybe not. A Singularity would at least give us the option of exploring those possibilities.

9) The potential of dispelling the mystery of human interactions by using "x-ray glasses" (advanced analytical AI linked directly to our brains, or part of our brains) to see their complex internal structure in a deterministic fashion, if only briefly, using fine-grained simulations. From the perspective of a higher intelligence, to what extent is human nature truly computationally "chaotic" in the sense of chaos theory, and to what extent is it entirely deterministic, like a Newtonian universe? We may be surprised by the answer.

10) Pursuit of higher aesthetics and moralities beyond the human realm. Hopefully whatever singleton we are stuck with still allows a wide berth for personal experimentation, perhaps with supervision by AI experts who have already "been there". Surely there must exist interesting value systems and aesthetic points of view which we would be quite excited to experience but have not yet seen or even thought of.

11) Exploration of the entire space of thoughts not only directly adjacent to the human realm, but also far beyond it. It sounds pretty simple to just state a sentence like that, but in practice such fringe ideas can instill powerful feelings of confusion, conflicting intuitions, and even awe when they turn out to be correct or useful. The purpose is not just exploring thoughtspace for its own sake but for the complex manifold of emotional and intellectual interactions that emerge from connecting disparate concepts and exploring the "outer solar system" of thought.

12) Most seriously and urgently, ending the orgy of killing and torture inflicted upon humans and conscious animals by man, other animals, disease, environmental factors, and "other".

2Jul/11

Replying to Alex Knapp, July 2nd

Does Knapp know anything about the way existing AI works? It’s not based around trying to copy humans, but often around improving this abstract mathematical quality called inference.

I think you missed my point. My point is not that AI has to emulate how the brain works, but rather that before you can design a generalized artificial intelligence, you have to have at least a rough idea of what you mean by that. Right now, the mechanics of general intelligence in humans are, actually, mostly unknown.

What’s become an interesting area of study in the past two decades are two fascinating strands of neuroscience. The first is that animal brains and intelligence are much better and more complicated than we thought even in the 80s.

The second is that humans, on a macro level, think very differently from animals, even the smartest problem solving animals. We haven’t begun to scratch the surface.

Based on the cognitive science reading I've done up to this point, this is false. Every year, scientists discover cognitive abilities in animals that were previously thought to be uniquely human, such as episodic memory or the ability to deliberately trigger traps. Chimps have a "near-human understanding of fire" and complex planning abilities. Articles such as this one in Discover, "Are Humans Really Any Different from Other Animals?", and this one in New Scientist, "We're not unique, just at one end of the spectrum", are typical of scientists who compare human and chimp cognition. It's practically become a trope for the (often religious) person to say humans and animals are completely different, and for the primatologist or cognitive scientist to reply, "not nearly as much as you think..."

One primate biologist says this:

"If we really want to talk about the big differences between humans and chimps — they're covered in hair and we're not," Taglialatela told LiveScience. "Their brains are about one-third the size of humans'. But the major differences come down to ones of degree, not of kind."

There's a really good paper somewhere out there on cognitive capacities in humans and chimps and how human cognitive abilities seem to be exaggerations of chimp abilities rather than different in kind, but I can't find it.

Arguments that chimps and humans are fundamentally different tend to be found more often on Christian apologetics sites than in scientific papers or articles. The overall impression I get is that scientists think chimp cognition and human cognition are different in degree, not in kind. There are humans out there so dumb that chimps are probably more clever than them in many important dimensions. Certainly if Homo heidelbergensis and Neanderthals were walking around, we would have even more evidence that the difference between humans and chimps is one of degree, not kind.

Another point is that even if humans were radically different in thinking than animals, why would that automatically mean AI is more difficult? We already have AI that utterly defeats humans in narrow domains traditionally seen as representative of complex thought, no magical insights necessary.

Yet another possibility is one of AI that very effectively gathers resources and builds copies of itself, yet does not do art or music. An AI that lacks many dimensions of human thought could still be a major concern with the right competencies.

But before scientists knew anything about birds, we basically knew: (a) they can fly, (b) it has something to do with wings and (c) possibly the feathers, too. At that stage, you couldn’t begin to design a plane. It’s the same way with human intelligence. Very simplistically, we know that (a) humans have generalized intelligence, (b) it has something to do with the brain and (c) possibly the endocrine system as well.

I should think that many tens of thousands of cognitive scientists would object to the suggestion that we only know a "few basic things" about intelligence. However, it's quite subjective and under some interpretations I would agree with you.

The above paragraph is a vast oversimplification, obviously, but the point is to analogize. Right now, we’re at the “wings and feathers” stage of understanding the science of intelligence. So I find it unlikely that a solution can be engineered until we understand more of what intelligence is.

The impression one has here probably correlates with how much cognitive science one reads. If you read a lot, then it's hard not to think of all that we do know about intelligence. Plenty is unknown, but we don't know how much more needs to be known to build AI. It could be a little, it could be a lot -- we have to keep experimenting and trying to build general AI.

Now, once we understand intelligence, and if (and I think this is a big if), it can be reproduced in silicon, then the resulting AGI probably doesn’t necessarily have to look like the brain, anymore than a plane looks like a bird. But the fundamental principles still have to be addressed. And we’re just not there yet.

Yet formalisms of intelligence, like Solomonoff induction, are not particularly algorithmically complicated, just computationally expensive. Gigerenzer and colleagues have shown that many aspects of human decision making rely on "fast and frugal heuristics" that are so simple they can be described in pithy phrases like Take the Best and Take the First. Robyn Dawes has shown how improper linear models regularly outperform "expert" predictors, including medical doctors. Rather than possessing a surplus of cognitive tools for addressing problems and challenges, humans seem to just possess a surplus of overconfidence and arrogance. It is easy to invent problems that humans cannot solve without computer help. Humans are notoriously bad at paying attention to base rates, for instance, even though base rates tend to be the most epistemologically important variable in any reasoning problem. After you read about many dozens of experiments in heuristics and biases research where people embarrass themselves in spectacular fashion, you start to roll your eyes a bit more when people gloat about the primacy of human reasoning.
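To make the Gigerenzer point concrete, here is a minimal sketch of the Take the Best heuristic in Python. The cue names and city data are made up for illustration, not taken from his studies; the point is the shape of the procedure: check cues in descending order of validity and decide on the first one that discriminates, ignoring everything else.

```python
# A minimal sketch of Gigerenzer's "Take the Best" heuristic.
# Cue ordering and city data below are illustrative only (hypothetical).

CUES_BY_VALIDITY = ["has_major_league_team", "is_state_capital", "has_university"]

CITY_CUES = {
    "city_a": {"has_major_league_team": 1, "is_state_capital": 0, "has_university": 1},
    "city_b": {"has_major_league_team": 0, "is_state_capital": 1, "has_university": 1},
}

def take_the_best(a, b):
    """Pick the city judged larger, using only the first cue that discriminates."""
    for cue in CUES_BY_VALIDITY:
        va, vb = CITY_CUES[a][cue], CITY_CUES[b][cue]
        if va != vb:          # first discriminating cue decides; all remaining cues are ignored
            return a if va > vb else b
    return None               # no cue discriminates -> guess

print(take_the_best("city_a", "city_b"))  # -> "city_a", decided by the first cue alone
```

Despite throwing away most of the available information, heuristics of this shape held their own against weighted-sum models in many of the environments Gigerenzer's group studied -- which is exactly the point about how little machinery effective inference can require.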

I correspond with lots of neuroscientists. Virtually all of them tell me that the big questions remain unanswered and will for quite some time.

I correspond with neuroscientists who believe that the brain is complex but that exponentially better tools are helping quickly elucidate many of the important questions. Regardless, AI might be a matter of computer science, not cognitive science. Have you considered that possibility?

AIXI is a thought experiment, not an AI model. It’s not even designed to operate in a world with the constraints of our physical laws.

Sure it is. AIXI is "a Bayesian optimality notion for general reinforcement learning agents", a yardstick that finite systems can compare against. It may be that the only reason our brains work at all is that they are approximations of AIXI.
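For reference, Hutter's AIXI equation, stated roughly from memory (see his Universal Artificial Intelligence for the exact formulation): at each step the agent picks the action that maximizes expected future reward under a length-weighted mixture over every program consistent with the interaction history so far.

$$ a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

Here U is a universal Turing machine, ℓ(q) is the length of program q, the a's are actions, and the o's and r's are observations and rewards. The 2^{-ℓ(q)} weighting is the simplicity prior; the sum over all programs is what makes AIXI uncomputable and merely a yardstick rather than a buildable agent.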

My point is to recognize that the way machine intelligence operates, and will for the conceivable future, is in a manner that is complementary to human intelligence. And I’m fine with that. I’m excited by AI research. I just find it unlikely, given the restraints of physical laws as we understand them today, that an AGI can be expected in the near term, if ever.

"If ever"? You must be joking. That's like saying, "I just find it unlikely, given the restraints of physical laws as we understand them today, that a theory of the vital force that animates animate objects can be expected in the near term, if ever", or "I just find it unlikely, given the restraints of physical laws as we understand them today, that a theory of aerodynamics that can produce heavier-than-air flying machines can be expected in the near term, if ever". Why would science figure out how everything else works, but not the mind? You're setting the mind apart from everything else in nature in a semi-mystical way, in my view.

I am, however, excited at the prospect of using computers to free humans from grunt work drudgery that computers are better at, so humans can focus on the kinds of thinking that they’re good at.

To be pithy, I would argue that humans suck at all kinds of thinking, and any systems that help us approach Bayesian optimality are extremely valuable because humans are so often wrong and overconfident in many problem domains. Our overconfidence in our own reasoning even when it explicitly violates the axioms of probability theory routinely reaches comic levels. In human thinking, 1 + 1 really can equal 3. Probabilities don't add up to 100%. Events with base rates of ~0.00001%, like fatal airplane crashes, are treated as if their probabilities were thousands of times the actual value. Even the stupidest AIs have a tremendous amount to teach us.

The problem with humans is that we are programmed to violate Bayesian optimality routinely with half-assed heuristics that we inherited because they are "good enough" to keep us alive long enough to reproduce and avoid getting murdered by conspecifics. With AI, you can build a brain that is naturally Bayesian -- it wouldn't have to furrow its brow and try real hard to obey simple probability theory axioms.
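As a minimal illustration of the kind of base-rate arithmetic a "naturally Bayesian" system does effortlessly and human intuition botches (the screening-test numbers below are hypothetical):

```python
# Bayes' theorem on a hypothetical screening test, showing why base rates dominate.
base_rate = 0.001        # 0.1% of the population has the condition
sensitivity = 0.99       # P(positive test | condition)
false_positive = 0.05    # P(positive test | no condition)

p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
p_condition_given_positive = base_rate * sensitivity / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.3f}")  # ~0.019
```

In the classic studies of problems like this, most respondents -- including physicians -- estimate the answer at well over 90%; the low base rate drags it down to roughly 2%.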

23Jun/11

Responding to Alex Knapp at Forbes

From Mr. Knapp's recent post:

If Stross’ objections turn out to be a problem in AI development, the “workaround” is to create generally intelligent AI that doesn’t depend on primate embodiment or adaptations. Couldn’t the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

But Anissmov’s first point here is just magical thinking. At the present time, a lot of the ways that human beings think is simply unknown. To argue that we can simply “workaround” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence. Indeed, it’s become pretty clear that even human thinking and animal thinking is quite different. For example, it’s clear that apes, octopii, dolphins and even parrots are, to certain degrees quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different than that of humans. And I don’t mean on a different level — I mean actually different. On this point, I’d highly recommend reading Temple Grandin, who’s done some brilliant work on how animals and neurotypical humans are starkly different in their perceptions of the same environment.

My first point is hardly magical thinking -- all of machine learning works to create learning systems that do not copy the animal learning process, which is only known on a vague level anyway. Does Knapp know anything about the way existing AI works? It's not based around trying to copy humans, but often around improving this abstract mathematical quality called inference. (Sometimes just around making a collection of heuristics and custom-built algorithms, but again that isn't copying humans.) Approximations of Solomonoff induction work quite well on a variety of problems, regardless of the state of comparing human and machine intelligence. Many "AI would have to be exactly like humans to work, because humans are so awesome, so there" proponents, like Knapp and Stross, talk as if Solomonoff induction doesn't exist.
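The prior behind those approximations, stated roughly: every hypothesis is a program, and its weight falls off exponentially with its length.

$$ M(x) \;=\; \sum_{p \,:\, U(p) \,=\, x\ast} 2^{-\ell(p)} $$

where U is a universal prefix Turing machine, ℓ(p) is the length of program p in bits, and the sum runs over all programs whose output begins with the observed string x. Practical learners replace "all programs" with a tractable model class -- that restriction is the approximation, not any attempt to mimic human cognition.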

Answering how much or how little of the human brain is known is quite a subjective question. The MIT Encyclopedia of Cognitive Sciences is over 1,000 pages and full of information about how the brain works. The Bayesian Brain is another tome that discusses how the brain works, mathematically:

A Bayesian approach can contribute to an understanding of the brain on multiple levels, by giving normative predictions about how an ideal sensory system should combine prior knowledge and observation, by providing mechanistic interpretation of the dynamic functioning of the brain circuit, and by suggesting optimal ways of deciphering experimental data. Bayesian Brain brings together contributions from both experimental and theoretical neuroscientists that examine the brain mechanisms of perception, decision making, and motor control according to the concepts of Bayesian estimation.

After an overview of the mathematical concepts, including Bayes' theorem, that are basic to understanding the approaches discussed, contributors discuss how Bayesian concepts can be used for interpretation of such neurobiological data as neural spikes and functional brain imaging. Next, contributors examine the modeling of sensory processing, including the neural coding of information about the outside world. Finally, contributors explore dynamic processes for proper behaviors, including the mathematics of the speed and accuracy of perceptual decisions and neural models of belief propagation.

The fundamentals of how the brain works, as far as I see, are known, not unknown. We know that neurons fire in Bayesian patterns in response to external stimuli and internal connection weights. We know the brain is divided up into functional modules, and have a quite detailed understanding of certain modules, like the visual cortex. We know enough about the hippocampus in animals that scientists have recreated a part of it to restore rat memory.

Intelligence is a type of functionality, like the ability to take long jumps, but far more complicated. It's not mystically different than any other form of complex specialized behavior -- it's still based around noisy neural firing patterns in the brain. To say that we have to exactly copy a human brain to produce true intelligence, if that is what Knapp and Stross are thinking, is anthropocentric in the extreme. Did we need to copy a bird to produce flight? Did we need to copy a fish to produce a submarine? Did we need to copy a horse to produce a car? No, no, and no. Intelligence is not mystically different.

We already have a model for AI that is absolutely nothing like a human -- AIXI.

Being able to quantify the difference between human and machine intelligence would be helpful for machine learning, but I'm not sure why it would be absolutely necessary for any form of progress.

As for universal measures of intelligence, here's Shane Legg taking a stab at it:
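Roughly, the Legg-Hutter universal intelligence measure scores an agent π by its expected performance across all computable environments, with simpler environments weighted more heavily:

$$ \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu} $$

where E is the class of computable, reward-summable environments, K(μ) is the Kolmogorov complexity of environment μ, and V^π_μ is the expected total reward the agent achieves in μ.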

Even if we aren't there yet, Knapp and Stross should be cheering on the incremental effort, not standing on the sidelines and frowning, making toasts to the eternal superiority of Homo sapiens sapiens. Wherever AI is today, can't we agree that we should make a responsible effort towards beneficial AI? Isn't that important? Even if we think true AI is a million years away -- because if it were closer, then that would mean that human intelligence isn't as complicated and mystical as we had wished?

As to Anissmov’s second point, it’s definitely worth noting that computers don’t play “human-level” chess. Although computers are competitive with grandmasters, they aren’t truly intelligent in a general sense – they are, basically, chess-solving machines. And while they’re superior at tactics, they are woefully deficient at strategy, which is why grandmasters still win against/draw against computers.

This is true, but who cares? I didn't say they were truly intelligent in the general sense. That's what is being worked towards, though.

Now, I don’t doubt that computers are going to get better and smarter in the coming decades. But there are more than a few limitations on human-level AI, not the least of which are the actual physical limitations coming with the end of Moore’s Law and the simple fact that, in the realm of science, we’re only just beginning to understand what intelligence, consciousness, and sentience even are, and that’s going to be a fundamental limitation on artificial intelligence for a long time to come. Personally, I think that’s going to be the case for centuries.

Let's build a computer with true intelligence first, and worry about "consciousness" and "sentience" later, then.

23Jun/11

Forbes Blogger Alex Knapp on “What is the Likelihood of the Singularity?”

Alex Knapp over at Forbes is writing a series of blog posts around Charles Stross' recent Singularity criticisms. Knapp goes after my last post pretty enthusiastically, so check it out.

22Jun/11

Response to Charles Stross’ “Three arguments against the Singularity”

Stross:

super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

"Human-equivalent AI is unlikely" is a ridiculous comment. Human level AI is extremely likely by 2060, if ever. (I'll explain why in the next post.) Stross might not understand that the term "human-equivalent AI" always means AI of human-equivalent general intelligence, never "exactly like a human being in every way".

If Stross' objections turn out to be a problem in AI development, the "workaround" is to create generally intelligent AI that doesn't depend on primate embodiment or adaptations.

Couldn't the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

I don't get the point of the last couple sentences. Why not just pursue general intelligence rather than "enhancements to primate evolutionary fitness", then? The concept of having "motivations of its own" seems kind of hazy. If the AI is handing me my ass in Starcraft 2, does it matter if people debate whether it has "motivations of its own"? What does "motivations of its own" even mean? Does "motivations" secretly mean "motivations of human-level complexity"?

I do have to say, this is a novel argument that Stross is forwarding. Haven't heard that one before. As far as I know, Stross must be one of the only non-religious thinkers who believes human-level AI is "unlikely", presumably indefinitely "unlikely". In a literature search I conducted in 2008 looking for academic arguments against human-level AI, I didn't find much -- mainly just Dreyfus's What Computers Can't Do and the people who argued against Kurzweil in Are We Spiritual Machines? "Human-level AI is unlikely" is one of those ideas that Romantics and non-materialists find appealing emotionally, but backing it up is another matter.

(This is all aside from the gigantic can of worms that is the ethical status of artificial intelligence; if we ascribe the value inherent in human existence to conscious intelligence, then before creating a conscious artificial intelligence we have to ask if we're creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense "conscious"? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers — it's possible that just as destructive research on human embryos is tightly regulated and restricted, we may find it socially desirable to restrict destructive research on borderline autonomous intelligences ... lest we inadvertently open the door to inhumane uses of human beings as well.)

I don't think these are "showstoppers" -- there is no government on Earth that could search every computer for lines of code that are possibly AIs. We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don't want my self-driving car to argue with me about where we want to go today. I don't want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos.

All it takes is for some people to build a "volitional" AI and there you have it. Even if 99% of AIs are tools, there are organizations -- like the Singularity Institute -- working towards AIs that are more than tools.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on it's external "self" than you or I are to shoot ourselves in the head. And it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you.

YOU want AI to be like this. WE want AIs that do "try to bootstrap [themselves]" to a "higher level". Just because you don't want it doesn't mean that we won't build it.