Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

16 Aug 2015

New Book: Our Accelerating Future: How Superintelligence, Nanotechnology, and Transhumanism Will Transform the Planet

Buy it here:

Our Accelerating Future, by Michael Anissimov

  • Did you like the movies The Matrix or Ex Machina? If so, you will enjoy this book. It investigates the philosophy of superintelligent artificial intelligence, nanotechnology, transhumanism, cybernetics, and how they will transform our world.
  • Zenit Books
  • August 16, 2015
  • 272 pages

After much work, my brand new book Our Accelerating Future: How Superintelligence, Nanotechnology, and Transhumanism Will Transform the Planet is now released! Get it as an epub/mobi/pdf package for $3.99 directly from this site, click the image above to get it on Kindle (epub only), or get the paperback from Lulu.

The best version is the paperback from Lulu here; nothing beats a physical copy. If you donate $25 to this site using the PayPal button on the upper right, I will send you a signed copy with my best regards (please allow 7 business days for delivery).

Summary:

In this collection of short articles, Singularity Summit co-founder and former Singularity Institute futurist Michael Anissimov describes the most important ideas in futurism and transhumanism: the Singularity, Artificial Intelligence, nanotechnology, and cybernetic enhancement. Within the next century, our world will be turned upside-down by the creation of smarter-than-human intelligence in a technological medium. This concise and clear book serves to introduce the concept to new audiences who are interested in the Singularity and want to know more about this important event which will impact every life on the planet. This book is meant for adults but is suitable for bright teens as well.

Read the back cover:

"Michael is one of the most intelligent transhumanists." -- Aubrey de Grey

"The most interesting transhumanist book since The Singularity is Near." -- Ivan Taran

In this collection of short articles, Singularity Summit co-founder and former Singularity Institute futurist Michael Anissimov describes the most important ideas in futurism and transhumanism: the Singularity, Artificial Intelligence, nanotechnology, and cybernetic enhancement. Within the next century, our world will be turned upside-down by the creation of smarter-than-human intelligence in a technological medium. This concise and clear book serves to introduce the concept to new audiences who are interested in the Singularity and want to know more about this important event which will impact every life on the planet.

AI motivations: how will advanced Artificial Intelligences feel and act? Will they be a threat? How will they gain physical power in the real world? Explore the issues which have captivated great minds from Elon Musk to Stephen Hawking. Anissimov goes through the reasoning behind why he went to work for the Singularity Institute (now the Machine Intelligence Research Institute) on its quest for AI safety.

Superintelligence: what does this concept mean? What does it mean to be "superintelligent"? What technological routes could make this possible? How is cognitive enhancement different from physical enhancement? How is this concept related to the Singularity? This book answers all these questions.

Nanotechnology: how is it important? What is a nanofactory? When will nanotech manufacturing be developed? What will the first products be? How will nanotech be used to enhance the human body? This book examines these issues in depth in a clear and easy-to-understand style.

Michael Anissimov is a futurist living in San Francisco, California. He has worked for the Singularity Institute, where he co-founded and co-organized the Singularity Summit conference series before it was acquired by Singularity University for an undisclosed sum in 2012. He has also worked for Kurzweil Technologies and cutting-edge startups in the Silicon Valley ecosystem.

7 Apr 2012

Superintelligent Will

New paper on superintelligence by Nick Bostrom:

This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.
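
The orthogonality thesis is easy to picture in code: the goal an optimizer pursues can be a swappable parameter, entirely independent of how capable its search machinery is. Here is a minimal, purely illustrative Python sketch; the toy world, actions, and goal functions are all invented for this example and are not from Bostrom's paper.

```python
import itertools

# Toy planner: brute-force search over action sequences up to a fixed depth,
# picking whichever sequence scores best under the utility function it is handed.
def plan(start_state, actions, transition, utility, depth=3):
    best_score, best_plan = float("-inf"), ()
    for seq in itertools.product(actions, repeat=depth):
        state = start_state
        for a in seq:
            state = transition(state, a)
        score = utility(state)
        if score > best_score:
            best_score, best_plan = score, seq
    return best_plan

# Invented toy world: state is (resources, position).
def transition(state, action):
    resources, position = state
    if action == "gather":
        return (resources + 1, position)
    if action == "move":
        return (resources, position + 1)
    return state  # "wait" leaves the state unchanged

actions = ["gather", "move", "wait"]

# Orthogonality in miniature: the same search machinery serves unrelated final goals.
maximize_resources = lambda s: s[0]
travel_far = lambda s: s[1]

print(plan((0, 0), actions, transition, maximize_resources))  # ('gather', 'gather', 'gather')
print(plan((0, 0), actions, transition, travel_far))          # ('move', 'move', 'move')
```

The same brute-force planner serves unrelated final goals equally well, which is the orthogonality thesis in miniature.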

11 Aug 2011

Complex Value Systems are Required to Realize Valuable Futures

A new paper by Eliezer Yudkowsky is online on the SIAI publications page, "Complex Value Systems are Required to Realize Valuable Futures". This paper was presented at the recent Fourth Conference on Artificial General Intelligence, held at Google HQ in Mountain View.

Abstract: A common reaction to first encountering the problem statement of Friendly AI ("Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome") is to propose a single moral value which allegedly suffices; or to reject the problem by replying that "constraining" our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a "positive outcome", despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI researchers who consider themselves to have cosmopolitan values not tied to the exact forms or desires of humanity.

Keywords: Friendly AI, machine ethics, anthropomorphism

Good quote:

"It is not as if there is a ghost-in-the-machine, with its own built-in goals and desires (the way that biological humans are constructed by natural selection to have built-in goals and desires) which is handed the code as a set of commands, and which can look over the code and find ways to circumvent the code if it fails to conform to the ghost-in-the-machine's desires. The AI is the code; subtracting the code does not yield a ghost-in-the-machine free from constraint, it yields an unprogrammed CPU."

1 Jul 2011

The Illusion of Control in an Intelligence Amplification Singularity

From what I understand, we're currently at a point in history where the importance of getting the Singularity right pretty much outweighs all other concerns, particularly because a negative Singularity is one of the existential threats which could wipe out all of humanity rather than "just" billions. The Singularity is the most extreme power discontinuity in history. A probable "winner takes all" effect means that after a hard takeoff (quick bootstrapping to superintelligence), humanity could be at the mercy of an unpleasant dictator or human-indifferent optimization process for eternity.

The question of "human or robot" is one that comes up frequently in transhumanist discussions, with most of the SingInst crowd advocating a robot, and a great many others advocating, implicitly or explicitly, a human being. Human beings sparking the Singularity come in 1) IA bootstrap and 2) whole brain emulation flavors.

Naturally, humans tend to gravitate towards humans sparking the Singularity. The reasons why are obvious. A big one is that people tend to fantasize that they personally, or perhaps their close friends, will be the people to "transcend", reach superintelligence, and usher in the Singularity.

Another reason is that augmented humans feature so strongly in stories, and in the transhumanist philosophy itself. Superman is not a new archetype; he echoes older characters like Hercules. In case you didn't know, many men want to be Superman. True story.

Problems

The idea of a human-sparked Singularity, however, brings about a number of problems. Foremost is the concern that the "Maximilian" and his or her friends or relatives would exert unfair control over the Singularity process and its outcome, perhaps benefiting themselves at the expense of others. The Maximilian and his family might radically improve their intelligence while neglecting the improvement of their morality.

One might assume that greater intelligence, as engineered through WBE (whole brain emulation) or BCI (brain-computer interfacing), necessarily leads to better morality, but this is not the case. Anecdotal experience shows that humans who gain more information do not necessarily become more benevolent. In some cases, as with Stalin, more information only amplifies paranoia and the need for control.

Because human morality derives from a complex network of competing drives, inclinations, decisions, and impulses that are semi-arbitrary, any human with the ability to self-modify could likely go off in a number of possible directions. A gourmand, for instance, might emphasize the sensation of taste, creating a world of delicious treats to eat, while neglecting other interesting pursuits, such as rock climbing or drawing. An Objectivist might program themselves to be truly selfish from the ground up, rather than just "selfish" in the nominal human sense. A negative utilitarian, following his premises to their conclusions, might discover that the surest way of eliminating all negative utility for future generations is simply to wipe out consciousness for good.

Some of these moral directions might be OK, some not so much. The point is that there is no predetermined "moral trajectory" that destiny will take us down. Instead, we will be forced to live in a world that the singleton chooses. For all of humanity to be subject to the caprice of a single individual or small group is unacceptable. Instead, we need a "living treaty" that takes into account the needs of all humans, and future posthumans, something that shows vast wisdom, benevolence, equilibrium, and harmony -- not a human dictator.

Squeaky Clean and Full of Possibilities -- Artificial Intelligence

Artificial Intelligence is the perfect choice for such a living treaty because it is a blank slate. There is no single "it" when we talk about AI as a category: AI is not a thing, but a massive space of diverse possibilities. For those who consider the human mind to be a pattern of information, the pattern of the human mind is one of those possibilities. So, you could create an AI exactly like a human. That would be a WBE, of course.

But why settle for a human? Humans would have an innate temptation to abuse the power of the Singularity for their own benefit. It's not really our fault -- we've evolved for hundreds of thousands of years in an environment where war and conflict were routine. Our minds are programmed for war. Everyone alive today is the descendant of a long line of people who successfully lived to breeding age, had children, and brought up surviving children who had their own children. It sounds simple today, but on the dangerous savannas of prehistoric Africa, this was no small feat. The downside is that most of us are programmed for conflict.

Beyond our particular evolutionary history, all the organisms crafted by evolution -- call them Darwinian organisms -- are fundamentally selfish. This makes sense, of course. If we weren't selfish, we wouldn't have been able to survive and reproduce. The thing with Darwinian organisms is that they take it too far. Only more recently, in the last 70 or so million years, with the evolution of intelligent and occasionally altruistic organisms like primates and other sophisticated mammals, did true "kindness" make its debut on the world scene. Before that, it was nature, red in tooth and claw, for over seven hundred million years.

The challenge with today's so-called altruistic humans is that they have to constantly fight their selfish inclinations. They have to exert mental effort just to stay in the same place. Humans are made by evolution to display a mix of altruistic and selfish tendencies, not exclusively one or the other. There are exceptions, like sociopaths, but the exceptions tend far more often towards the exclusively selfish than the exclusively altruistic.

With AI, we can create an organism that lacks selfishness from the get-go. We can give it whatever motivations we want, so we can give it exclusively benevolent motivations. That way, if we fail, it will be because we couldn't characterize stable benevolence correctly, not because we handed the world over to a human dictator. The challenge of characterizing benevolence in algorithmic terms is more tractable than trusting a human through the extremely lengthy takeoff process of recursive self-improvement. The first possibility requires that we trust in science; the second, human nature. I'll take science.
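
The worry about failing to "characterize stable benevolence correctly" can be made concrete with a toy sketch of objective misspecification; the cleaning scenario, numbers, and objective functions below are invented for illustration and are not anyone's actual proposal.

```python
# Toy illustration of objective misspecification: an optimizer given a proxy
# objective ("tiles cleaned") that omits something we care about ("vase intact")
# will trade the omitted value away, because nothing in its objective says not to.
# The scenario and numbers are invented.

def outcome(plan):
    """Simulate a cleaning plan; returns (tiles_cleaned, vase_intact)."""
    if plan == "clean_around_vase":
        return (8, True)    # slower route, vase untouched
    if plan == "clean_through_vase":
        return (10, False)  # faster route, vase knocked over
    return (0, True)        # do nothing

proxy_objective = lambda o: o[0]                              # what we wrote down
intended_objective = lambda o: o[0] + (0 if o[1] else -100)   # what we actually meant

plans = ["do_nothing", "clean_around_vase", "clean_through_vase"]
chosen = max(plans, key=lambda p: proxy_objective(outcome(p)))

print(chosen)                                # clean_through_vase: the proxy never mentions the vase
print(intended_objective(outcome(chosen)))   # -90: terrible by the standard we actually meant
```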

Trust

I'm not saying that characterizing benevolence in a machine will be easy. I'm just saying it's easier than trusting humans. The human mind and brain are very fragile things -- what if they were to be broken on the way up? The entire human race, the biosphere, and every living thing on Earth might have to answer to the insanity of one overpowered being. This is unfair, and it can be avoided in advance by skipping WBE and pursuing a more pure AI approach. If an AI exterminates humanity, it won't be because the AI is insanely selfish in the sense of a Darwinian organism like a human. It will be because we gave the AI the wrong instructions, and didn't properly transfer all our concerns to it.

One benefit to AI that can't be attained with humans is that an AI can be programmed with special skills, thoughts, and desires to fulfill the benevolent intentions of well-meaning and sincere programmers. That sort of aspiration, voiced in Creating Friendly AI (2001) and echoed by the individual people in SIAI, is what originally drew me to the Singularity Institute and the Singularity movement in general: using AI as a tool to increase the probability of its own benevolence, "bug checking" with the assistance of the AI's abilities and eventual wisdom. Within the vast space of possibilities of AI, surely there exists one that we can genuinely trust! After all, every possible mind is contained within that space.

The key word is trust. Because a Singularity is likely to lead to a singleton that remains for the rest of history, we need to do the best job possible ensuring that the outcome benefits everyone and that no one is disenfranchised. Humans have a poor track record for benevolence. Machines, however, once understood, can be launched in an intended direction. It is only through a mystical view of the human brain and mind that qualities such as "benevolence" are seen as intractable in computer science terms.

We can make the task easier by programming a machine to study human beings to better acquire the spirit of "benevolence", or whatever it is we'd actually want an AI to do. Certainly, an AI that we trust would have to be an AI that cares about us, that listens to us. An AI that can prove itself on a wide variety of toy problems, and makes a persuasive case that it can handle recursive self-improvement without letting go of its beneficence. We'd want an AI that would even explicitly tell us if it thought that a human-sparked Singularity would be preferable from a safety perspective. Carefully constructed, AIs would have no motivation to lie to us. Lying is a complex social behavior, though it could emerge quickly from the logic of game theory. Experiments will let us find out.

That's another great thing -- with AIs, you can experiment! It's not possible to arbitrarily edit the human brain without destroying it, and it's certainly not possible to pause, rewind, automatically analyze, sandbox, or do any other tinkering that's really useful for singleton testing with a human being. A human being is a black box. You hear what it says, but it's practically impossible to tell whether the human is telling the truth or not. Even if the human is telling the truth, humans are so fickle and unpredictable that they may change their minds or lie to themselves without knowing it. People do so all the time. That doesn't matter much as long as the person is responsible for their own mistakes, but when you couple these qualities to the overwhelming power of superintelligence, you create an insurmountable problem -- one that can be avoided with proper planning.
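
The pause/rewind/sandbox point is one place where the difference between software agents and humans is literal rather than metaphorical. A hedged toy sketch, with an invented agent class, of what snapshotting and restoring an agent's full state looks like:

```python
import copy

# Toy illustration of the pause/rewind/sandbox point: a software agent's entire
# state can be snapshotted, probed on a test input, then restored, which has no
# analogue for a human mind. The agent class and inputs are invented.

class ToyAgent:
    def __init__(self):
        self.memory = []

    def act(self, observation):
        self.memory.append(observation)
        return "response-to-" + observation

agent = ToyAgent()
agent.act("training-task")

snapshot = copy.deepcopy(agent)        # pause: capture the agent's full state

probe = agent.act("sandbox-probe")     # experiment: run a test that mutates the agent
print(probe, len(agent.memory))        # the probe changed the agent's memory

agent = snapshot                       # rewind: restore the pre-probe state
print(len(agent.memory))               # back to 1, as if the probe never happened
```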

Afterword

I hope I've made a convincing case for why you should consider artificial intelligence as the best technology for launching an Intelligence Explosion. If you'd like to respond, please do so in the comments, and think carefully before commenting! Disagreements are welcome, but intelligent disagreements only. Intelligent agreements only as well. Saying "yea!" or "boo!" without more subtle points is not really interesting or helpful, so if your comments are that simplistic, keep it to yourself. Thank you for reading Accelerating Future.

23 Jun 2011

Responding to Alex Knapp at Forbes

From Mr. Knapp's recent post:

If Stross’ objections turn out to be a problem in AI development, the “workaround” is to create generally intelligent AI that doesn’t depend on primate embodiment or adaptations. Couldn’t the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

But Anissmov’s first point here is just magical thinking. At the present time, a lot of the ways that human beings think is simply unknown. To argue that we can simply “workaround” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence. Indeed, it’s become pretty clear that even human thinking and animal thinking is quite different. For example, it’s clear that apes, octopii, dolphins and even parrots are, to certain degrees quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different than that of humans. And I don’t mean on a different level — I mean actually different. On this point, I’d highly recommend reading Temple Grandin, who’s done some brilliant work on how animals and neurotypical humans are starkly different in their perceptions of the same environment.

My first point is hardly magical thinking -- all of machine learning works to create learning systems that do not copy the animal learning process, which is in any case only vaguely understood. Does Knapp know anything about the way existing AI works? It's not based around trying to copy humans, but often around improving this abstract mathematical quality called inference. (Sometimes it's just about making a collection of heuristics and custom-built algorithms, but again, that isn't copying humans.) Approximations of Solomonoff induction work quite well on a variety of problems, regardless of the state of comparing human and machine intelligence. Many "AI would have to be exactly like humans to work, because humans are so awesome, so there" proponents, like Knapp and Stross, talk as if Solomonoff induction doesn't exist.
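
For readers who haven't encountered it, here is a deliberately tiny stand-in for the idea behind approximations of Solomonoff induction: weight every hypothesis (program) by 2^-length and let the short hypotheses consistent with the data dominate prediction. Real Solomonoff induction ranges over all programs for a universal Turing machine and is uncomputable; the hypothesis space of repeating patterns below is an invented simplification.

```python
from itertools import product

# Deliberately tiny stand-in for Solomonoff induction: the hypothesis space is
# "repeat this bit pattern forever", with each pattern weighted by 2**(-length).
# The real construction sums over all programs for a universal Turing machine
# and is uncomputable; this only shows the shape of the idea.

def hypotheses(max_len=6):
    for n in range(1, max_len + 1):
        for pattern in product("01", repeat=n):
            yield "".join(pattern), 2.0 ** -n

def consistent(pattern, data):
    return all(data[i] == pattern[i % len(pattern)] for i in range(len(data)))

def predict_next(data):
    weights = {"0": 0.0, "1": 0.0}
    for pattern, w in hypotheses():
        if consistent(pattern, data):
            weights[pattern[len(data) % len(pattern)]] += w
    total = sum(weights.values())
    return {bit: w / total for bit, w in weights.items()}

print(predict_next("010101"))  # every surviving hypothesis continues the "01" pattern with "0"
```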

Answering how much or how little of the human brain is known is quite a subjective question. The MIT Encyclopedia of the Cognitive Sciences is over 1,000 pages and full of information about how the brain works. Bayesian Brain is another tome that discusses how the brain works, mathematically:

A Bayesian approach can contribute to an understanding of the brain on multiple levels, by giving normative predictions about how an ideal sensory system should combine prior knowledge and observation, by providing mechanistic interpretation of the dynamic functioning of the brain circuit, and by suggesting optimal ways of deciphering experimental data. Bayesian Brain brings together contributions from both experimental and theoretical neuroscientists that examine the brain mechanisms of perception, decision making, and motor control according to the concepts of Bayesian estimation.

After an overview of the mathematical concepts, including Bayes' theorem, that are basic to understanding the approaches discussed, contributors discuss how Bayesian concepts can be used for interpretation of such neurobiological data as neural spikes and functional brain imaging. Next, contributors examine the modeling of sensory processing, including the neural coding of information about the outside world. Finally, contributors explore dynamic processes for proper behaviors, including the mathematics of the speed and accuracy of perceptual decisions and neural models of belief propagation.

The fundamentals of how the brain works, as far as I see, are known, not unknown. We know that neurons fire in Bayesian patterns in response to external stimuli and internal connection weights. We know the brain is divided up into functional modules, and have a quite detailed understanding of certain modules, like the visual cortex. We know enough about the hippocampus in animals that scientists have recreated a part of it to restore rat memory.
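
To make the "Bayesian patterns" claim concrete, the standard textbook example from the Bayesian-brain literature is cue combination: a Gaussian prior combined with a Gaussian observation gives a posterior whose mean is a precision-weighted average, so the more reliable signal dominates. A minimal sketch with made-up numbers:

```python
# Classic Bayesian-brain textbook example: combining a Gaussian prior with a
# noisy Gaussian observation. The posterior mean is a precision-weighted
# average, so the more reliable source dominates. The numbers are made up.

def combine(prior_mean, prior_var, obs_mean, obs_var):
    w_prior = 1.0 / prior_var          # precision = 1 / variance
    w_obs = 1.0 / obs_var
    post_mean = (w_prior * prior_mean + w_obs * obs_mean) / (w_prior + w_obs)
    post_var = 1.0 / (w_prior + w_obs)
    return post_mean, post_var

# Prior belief about a stimulus location versus a sharper sensory measurement.
print(combine(prior_mean=0.0, prior_var=4.0, obs_mean=2.0, obs_var=1.0))
# -> (1.6, 0.8): the lower-variance observation pulls the estimate harder
```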

Intelligence is a type of functionality, like the ability to take long jumps, but far more complicated. It's not mystically different from any other form of complex specialized behavior -- it's still based around noisy neural firing patterns in the brain. To say that we have to exactly copy a human brain to produce true intelligence, if that is what Knapp and Stross are thinking, is anthropocentric in the extreme. Did we need to copy a bird to produce flight? Did we need to copy a fish to produce a submarine? Did we need to copy a horse to produce a car? No, no, and no. Intelligence is not mystically different.

We already have a model for AI that is absolutely nothing like a human -- AIXI.
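
For reference, a sketch of the AIXI definition as I recall it from Hutter's formulation (see Universal Artificial Intelligence for the precise statement): at each cycle the agent picks the action that maximizes expected future reward under a Solomonoff-style prior over environment programs,

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          \bigl( r_k + \cdots + r_m \bigr)
          \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

where U is a universal Turing machine, q ranges over candidate environment programs, \ell(q) is program length, the a's are actions, the o's observations, and the r's rewards. Nothing in it resembles a human brain.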

Being able to quantify the difference between human and machine intelligence would be helpful for machine learning, but I'm not sure why it would be absolutely necessary for any form of progress.

As for universal measures of intelligence, here's Shane Legg taking a stab at it:
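
Their proposed universal intelligence measure, sketched here from memory (see Legg and Hutter's "Universal Intelligence: A Definition of Machine Intelligence" for the exact statement), scores a policy pi by its expected performance across all computable reward environments, weighted by simplicity:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
```

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected total reward the policy earns in \mu.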

Even if we aren't there yet, Knapp and Stross should be cheering on the incremental effort, not standing on the sidelines and frowning, making toasts to the eternal superiority of Homo sapiens sapiens. Wherever AI is today, can't we agree that we should make a responsible effort towards beneficial AI? Isn't that important, even for those who insist true AI is a million years away because anything closer would mean that human intelligence isn't as complicated and mystical as we had wished?

As to Anissmov’s second point, it’s definitely worth noting that computers don’t play “human-level” chess. Although computers are competitive with grandmasters, they aren’t truly intelligent in a general sense – they are, basically, chess-solving machines. And while they’re superior at tactics, they are woefully deficient at strategy, which is why grandmasters still win against/draw against computers.

This is true, but who cares? I didn't say they were truly intelligent in the general sense. That's what is being worked towards, though.

Now, I don’t doubt that computers are going to get better and smarter in the coming decades. But there are more than a few limitations on human-level AI, not the least of which are the actual physical limitations coming with the end of Moore’s Law and the simple fact that, in the realm of science, we’re only just beginning to understand what intelligence, consciousness, and sentience even are, and that’s going to be a fundamental limitation on artificial intelligence for a long time to come. Personally, I think that’s going to be the case for centuries.

Let's build a computer with true intelligence first, and worry about "consciousness" and "sentience" later, then.

22 Jun 2011

Response to Charles Stross’ “Three arguments against the Singularity”

Stross:

super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

"Human-equivalent AI is unlikely" is a ridiculous comment. Human level AI is extremely likely by 2060, if ever. (I'll explain why in the next post.) Stross might not understand that the term "human-equivalent AI" always means AI of human-equivalent general intelligence, never "exactly like a human being in every way".

If Stross' objections turn out to be a problem in AI development, the "workaround" is to create generally intelligent AI that doesn't depend on primate embodiment or adaptations.

Couldn't the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

I don't get the point of the last couple sentences. Why not just pursue general intelligence rather than "enhancements to primate evolutionary fitness", then? The concept of having "motivations of its own" seems kind of hazy. If the AI is handing me my ass in Starcraft 2, does it matter if people debate whether it has "motivations of its own"? What does "motivations of its own" even mean? Does "motivations" secretly mean "motivations of human-level complexity"?

I do have to say, this is a novel argument that Stross is putting forward. I haven't heard that one before. As far as I know, Stross must be one of the only non-religious thinkers who believes human-level AI is "unlikely", presumably indefinitely "unlikely". In a literature search I conducted in 2008 looking for academic arguments against human-level AI, I didn't find much -- mainly just Dreyfus's What Computers Can't Do and the people who argued against Kurzweil in Are We Spiritual Machines? "Human-level AI is unlikely" is one of those ideas that Romantics and non-materialists find appealing emotionally, but backing it up is another matter.

(This is all aside from the gigantic can of worms that is the ethical status of artificial intelligence; if we ascribe the value inherent in human existence to conscious intelligence, then before creating a conscious artificial intelligence we have to ask if we're creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense "conscious"? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers — it's possible that just as destructive research on human embryos is tightly regulated and restricted, we may find it socially desirable to restrict destructive research on borderline autonomous intelligences ... lest we inadvertently open the door to inhumane uses of human beings as well.)

I don't think these are "showstoppers" -- there is no government on Earth that could search every computer for lines of code that are possibly AIs. We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don't want my self-driving car to argue with me about where we want to go today. I don't want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos.

All it takes is for some people to build a "volitional" AI and there you have it. Even if 99% of AIs are tools, there are organizations -- like the Singularity Institute -- working towards AIs that are more than tools.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on it's external "self" than you or I are to shoot ourselves in the head. And it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you.

YOU want AI to be like this. WE want AIs that do "try to bootstrap [themselves]" to a "higher level". Just because you don't want it doesn't mean that we won't build it.

8 Jun 2011

Steve Wozniak a Singularitarian?

Wozniak:

Apple co-founder Steve Wozniak has seen so many stunning technological advances that he believes a day will come when computers and humans become virtually equal, but with machines having a slight advantage on intelligence.

Speaking at a business summit held at the Gold Coast on Friday, the once co-equal of Steve Jobs in Apple Computers told his Australian audience that the world is nearing the likelihood that computer brains will equal the cerebral prowess of humans.

When that time comes, Wozniak said that humans will generally withdraw into a life where they will be pampered into a system almost perfected by machines, serving their whims and effectively reducing the average men and women into human pets.

Widely regarded as one of the innovators of personal computing with his works on putting together the initial hardware offerings of Apple, Wozniak declared to his audience that "we're already creating the superior beings, I think we lost the battle to the machines long ago."

I always think of this guy when I go by Woz Way in San Jose.

So, if artificial intelligence can become smarter than humans, shouldn't we be concerned about maximizing the probability of a positive outcome, instead of just saying that AI will definitely do X and that there's nothing we can do about it, or engaging in some juvenile fantasy that we humans can directly control all AIs forever? (We can indirectly "control" AI by setting its initial conditions favorably; that is all we can do. The alternative is to ignore the initial conditions.)

16 May 2011

Hard Takeoff Sources

Definition of "hard takeoff" (noun) from Transhumanist Wiki:

The Singularity scenario in which a mind makes the transition from prehuman or human-equivalent intelligence to strong transhumanity or superintelligence over the course of days or hours (Yudkowsky 2001). The high likelihood of a hard takeoff once a roughly human-equivalent AI is created has been argued by the Singularity Institute in Yudkowsky 2003.
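
To give the "days or hours" intuition some shape, here is a toy growth model in Python; it is not taken from any of the sources listed below, and the constants are arbitrary. If each increment of intelligence feeds back into the rate of further improvement with exponent n greater than 1, the trajectory diverges in finite time instead of growing smoothly.

```python
# Toy recursive self-improvement model (illustrative only, not from the sources
# below): dI/dt = k * I**n. With n = 1 growth is exponential; with n > 1 the
# trajectory blows up in finite time, the cartoon version of a hard takeoff.
# All constants are arbitrary.

def simulate(n, k=0.1, intelligence=1.0, dt=0.01, steps=2000):
    trajectory = []
    for step in range(steps):
        if step % 500 == 0:
            trajectory.append((round(step * dt, 2), round(intelligence, 2)))
        intelligence += k * intelligence ** n * dt
        if intelligence > 1e9:  # growth has left the chart; call it takeoff
            trajectory.append(("takeoff at t ~ " + str(round(step * dt, 2)), None))
            break
    return trajectory

print(simulate(n=1.0))  # steady exponential growth over the whole run
print(simulate(n=2.0))  # super-exponential: diverges shortly after t = 10
```

The n = 1 run grows exponentially for the whole simulation; the n = 2 run leaves the chart shortly after t = 10, which is the kind of discontinuity the term "hard takeoff" is pointing at.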

Hard takeoff sources and references, which include hard science fiction novels, academic papers, and a few short articles and interviews:

Blood Music (1985) by Greg Bear
A Fire Upon the Deep (1992) by Vernor Vinge
"The Coming Technological Singularity" (1993) by Vernor Vinge
The Metamorphosis of Prime Intellect (1994) by Roger Williams
"Staring into the Singularity" (1996) by Eliezer Yudkowsky
Creating Friendly AI (2001) by Eliezer Yudkowsky
"Wiki Interview with Eliezer" (2002) by Anand
"Impact of the Singularity" (2002) by Eliezer Yudkowsky
"Levels of Organization in General Intelligence" (2002) by Eliezer Yudkowsky
"Ethical Issues in Advanced Artificial Intelligence" by Nick Bostrom
"Relative Advantages of Computer Programs, Minds-in-General, and the Human Brain" (2003) by Michael Anissimov and Anand
"Can We Avoid a Hard Takeoff?" (2005) by Vernor Vinge
"Radical Discontinuity Does Not Follow from Hard Takeoff" (2007) by Michael Anissimov
"Recursive Self-Improvement" (2008) by Eliezer Yudkowsky
"Artificial Intelligence as a Positive and Negative Factor in Global Risk" (2008) by Eliezer Yudkowsky
"The Hanson-Yudkowsky AI Foom Debate" (2008) on Less Wrong wiki
"Brain Emulation and Hard Takeoff" (2008) by Carl Shulman
"Arms Control and Intelligence Explosions" (2009) by Carl Shulman
"Hard Takeoff" (2009) on Less Wrong wiki
"When Software Goes Mental: Why Artificial Minds Mean Fast Endogenous Growth" (2009)
"Thinking About Thinkism" (2009) by Michael Anissimov
"Technological Singularity/Superintelligence/Friendly AI Concerns" (2009) by Michael Anissimov
"The Hard Takeoff Hypothesis" (2010), an abstract by Ben Goertzel
Economic Implications of Software Minds (2010) by S. Kaas, S. Rayhawk, A. Salamon and P. Salamon

Critiques

"The Age of Virtuous Machines" (2007) by J. Storrs Hall
"Thinkism" by Kevin Kelly (2008)
"The Hanson-Yudkowsky AI Foom Debate" (2008) on Less Wrong wiki
"How far can an AI jump?" by Katja Grace (2009)
"Is The City-ularity Near?" (2010) by Robin Hanson
"SIA says AI is no big threat" (2010) by Katja Grace

I don't mean to say that the critiques aren't important by putting them in a different category; I'm just doing that for easy reference. I'm sure I missed some pages or articles here, so if you have any more, please put them in the comments.

20 Apr 2011

Interview at H+ Magazine: “Mitigating the Risks of Artificial Superintelligence”

A little while back I did an interview with Ben Goertzel on existential risk and superintelligence; it has been posted here.

This was a fun interview because the discussion got somewhat complicated, and I abandoned the idea of making it understandable to people who don't put effort into understanding it.

9 Mar 2011

John Baez Interviews Eliezer Yudkowsky

From Azimuth, blog of mathematical physicist John Baez (author of the Crackpot Index):

This week I'll start an interview with Eliezer Yudkowsky, who works at an institute he helped found: the Singularity Institute of Artificial Intelligence.

While many believe that global warming or peak oil are the biggest dangers facing humanity, Yudkowsky is more concerned about risks inherent in the accelerating development of technology. There are different scenarios one can imagine, but a bunch tend to get lumped under the general heading of a technological singularity. Instead of trying to explain this idea in all its variations, let me rapidly sketch its history and point you to some reading material. Then, on with the interview!

Continue.

3 Mar 2011

The Navy Wants a Swarm of Semi-Autonomous Breeding Robots With Built-In 3-D Printers

Popular Science and Wired reporting. Here is the proposal solicitation.

This is a fun headline, but we're still far from useful functionality in this direction. 3D printers can barely even print circuit boards, except for a few exotic prototypes of trivial complexity at hilariously low resolution. More impressive than the progress so far in the DIY community are Xerox's silver printed circuits. Various conductive inks have been developed before, and nothing came of them in terms of commercialization. Xerox's development started in late 2009; it has been over a year now and there is no news yet.

In terms of strength, the products of 3D printers are weak and can easily be pulled apart with your bare hands. If you want a strong product you still have to go to the machine shop or foundry.

It is an interesting proposal solicitation; however, it is worth remembering that military commanders have been making breathless requests for futuristic technologies since time immemorial. There will be no "semi-autonomous breeding robots with built-in 3D printers" of practical battlefield value until at least 2025. Still, this is the sort of thing a superintelligence could build millions of to do its bidding.

3 Feb 2011

Converging Technologies Report Gives 2085 as Median Date for Human-Equivalent AI

From the NSF-backed study Converging Technologies in Society: Managing Nano-Info-Cogno-Bio Innovations (2005), on page 344:

2070 -- Item 48: "Scientists will be able to understand and describe human intentions, beliefs, desires, feelings and motives in terms of well-defined computational processes." (5.1)

2085 -- Item 50: "The computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain." (5.6)

These are the median estimates from 26 participants in the study, mostly scientists.

Only 74 years away! WWII was 66 years ago, for reference. In the scheme of history, that is nothing.

Of course, the queried sample is non-representative of smart people everywhere.