Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

15Aug/08

Subtle Nuances

Protip: clicking on the image takes you to the first three chapters of Jaynes' book.

Filed under: AI 131 Comments
14Aug/08

Will the Real AI Critics Please Stand Up?

I'm having great trouble finding any citeable work that argues that artificial intelligence is completely impossible. People throw kiwis at AI theory in its current state, or the philosophy of functionalism, but every single argument I can find stops short of outright denunciation.

For instance, Gerald Edelman, winner of the 1972 Nobel Prize in Medicine and coiner of the term "Neural Darwinism", argues that "AI" is impossible, expelling much hot air on the subject, but then it turns out that he believes, "It seems reasonably feasible that, in the future, once neuroscientists learn much more about consciousness and its mechanism, why not imitate it?", and remarks "We construct what we call brain-based devices, or BBDs, which I think will be increasingly useful in understanding how the brain works and modeling the brain. But it also may be the beginning of the design of truly intelligent machines." So that's not very anti-AI. Edelman was also quoted in John Horgan's recent anti-Singularity piece in IEEE Spectrum, the "Consciousness Conundrum", in support of the idea that AI is difficult. But if he thinks AI is so difficult, why is he spending time and money on brain-based devices, which are steps towards AI?

According to his Wikipedia article, Hubert Dreyfus, author of "What Computers Can't Do: The Limits of Artificial Intelligence", argues "that we cannot now (and never will) be able to understand our own behavior in the same way as we understand objects in, for example, physics or chemistry: that is, by considering ourselves as things whose behaviour can be predicted via 'objective', context free scientific laws." But then the article also states, "he doesn't believe that AI is fundamentally impossible; only that the current research program is fatally flawed. Instead he argues that to get a device (or devices) with human-like intelligence would require them to have a human-like being in the world, which would require them to have bodies more or less like ours, and social acculturation (i.e. a society) more or less like ours."

Very confusing, but I'm not done yet. Next comes famous physicist and Hawking collaborator Roger Penrose and his poorly thought out theories on consciousness. Penrose argues that quantum decoherence in neural microtubules is essential to our intelligence and consciousness. This was decisively refuted by our friend Max Tegmark in 2000, who calculated that the timescale of neuron firing and of excitations in microtubules is slower than the decoherence timescale by a factor of at least 10,000,000,000. Still, although Penrose fusses about the alleged non-algorithmic nature of intelligence throughout his books on the topic, according to a review by Robin Hanson, Penrose grants that we may be able to artificially construct conscious intelligence, and that "such objects could succeed in actually superseding human beings," though he thinks "algorithmic computers are doomed to subservience." So Penrose is another thinker who objects to the mainstream AI philosophy and approach, but who never actually claims that AI is impossible -- only that we'll fail if we aren't creative enough.
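For the curious, the arithmetic behind that factor is straightforward. Taking the order-of-magnitude figures usually cited from Tegmark's paper (dynamical timescales for neuron firing and microtubule excitations of roughly $10^{-3}$ to $10^{-4}$ seconds, against computed decoherence times of $10^{-13}$ seconds or shorter):

$$\frac{t_{\mathrm{dynamics}}}{t_{\mathrm{decoherence}}} \gtrsim \frac{10^{-3}\,\mathrm{s}}{10^{-13}\,\mathrm{s}} = 10^{10}$$

In other words, any quantum coherence would be destroyed at least ten billion times faster than the brain's relevant dynamics unfold, which is why, on Tegmark's analysis, the brain behaves as a classical system.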

There's more stuff out there. Paul Churchland says, "Classical AI is unlikely to yield conscious machines; systems that mimic the brain might". More of the same. Copy a certain type of big-headed ape exactly, and intelligence will pop out, but if you try anything else, you'll fail. Even Searle, the king of AI criticism, acknowledges that "machines with internal causal powers equivalent to those of brains" could think. I'm not sure precisely what he means by this, but by bothering to say something besides humans, even Searle seems to believe that some form of Artificial Intelligence is possible.

Where are the people saying "AI will never happen" or "only human beings can think"? I can find hundreds of references made by laypeople on various forums, but they generally don't present coherent arguments; they just throw out their opinions.

If no philosopher, cognitive scientist, or computer scientist is willing to claim in public that true AI is impossible, then isn't this an important finding in and of itself? If it is, then I totally get the credit.

Filed under: AI 90 Comments
14Aug/08

Why Human-Level AI Won’t Change the World

One position I have difficulty wrapping my head around is that held by those who believe human-level AI is possible, but that it would lack the capability to quickly change the world. The reasons why AI would likely have that capability are frequently cited. To summarize just a few:

1) AI could quickly and easily be copied as many times as is computationally feasible.

2) Running on a flexible substrate, AIs could "overclock" their cognitive functions, leading to enhanced intelligence and capability.

3) Though robotics today is still maturing, it will be more sophisticated by the time AI arrives, and with AI's help, it isn't unreasonable to assume that AIs will have direct and broad access to the physical world through robotic means.

4) AIs would be able to share thoughts almost instantly, meaning that skills learned by one AI could be transferred to all other AIs very quickly.

5) AIs would be able to quickly and automatically perform tasks considered by humans to be "extremely boring", but still pragmatically useful.

6) AIs could routinely perform intellectually demanding tasks for just the cost of the computers they run on, plus electricity.

So, brainstorming the reasons why human-level AI would exist but lack the capability to quickly change the world:

1) Human-level AIs might possess human skills and intelligence but lack free will, making them incapable of modifying the world in any real sense.

2) Humans will deliberately prevent AI from doing so.

3) AIs would need to be embodied to do anything, and there currently isn't enough room on the planet for that many embodied AIs or the infrastructure to support the resources they would consume.

4) Some people object to the idea of human-level AI in general, so when the prospect of such AIs changing the world is brought up, they object to its feasibility while concealing that they reject the premise outright.

5) Humans are equivalent to the most intelligent entity possible, therefore AIs will never be smarter than humans, and will lack any huge impact. (Sometimes this is phrased as saying that humans and AIs are both Turing complete and will thus have the same capabilities.)

6) AIs will just exist on the virtual layer, and being virtual beings, will always have highly limited access to the physical layer.

Any others I'm missing? If there are any actual papers with people presenting points in this vein, that would be ideal.

Filed under: AI, singularity 23 Comments
14Aug/08

Missing: Robot Ethics Charter

Researching the current state of "roboethics" (a lame term that marginalizes "AI ethics", a more relevant superset of roboethics), I find a bunch of references to a South Korean project to draft a Robot Ethics Charter. All these references date from March 2007, promising that the charter would be released in April 2007 and subsequently adopted by the government. However, I can't find it anywhere. Anyone have a clue about where it went? One article summarized the effort as follows:

The prospect of intelligent robots serving the general public brings up an unprecedented question of how robots and humans should be expected to treat each other. South Korea's Ministry of Commerce, Industry and Energy has decided that a written code of ethics is in order.

Starting last November, a team of five members, including a science-fiction writer, have been drafting a Robot Ethics Charter to address and prevent "robot abuse of humans and human abuse of robots." Some of the sensitive subject areas covered in the charter include human addiction to robots, humans treating robots like a spouse, and prohibiting robots from ever hurting a human.

Critics of the charter say that the charter is premature and may not have a practical application once robots are really an integral part of society. Says Mark Tilden, the designer of the toy RoboSapien, "From experience, the problem is that giving robots morals is like teaching an ant to yodel. We're not there yet, and as many of Asimov's stories show, the conundrums robots and humans would face would result in more tragedy than utility."

"Asimov" refers to science-fiction author Isaac Asimov, who created a robot code of ethics for one of his stories. His Three Rules were: (1) a robot could not hurt a human or through inaction allow a human to be harmed, (2) a robot must obey human orders unless those orders would make it violate rule number one, and (3) a robot must protect itself unless that protection would violate the first two rules. These apparently served as inspiration for the South Korean Robot Ethics Charter.

However, South Korea's Ministry of Information and Communication plans to have a robot in every household by 2020. "Personally, I wish to accomplish that objective by 2010," said Oh Sang Rok, head of the ministry's project.

Personally, I think Asimov's Three Laws are a terrible inspiration for any roboethics code. The laws were created to be used as a plot device. When they disintegrated, a story came out of it. Unfortunately, they've actually been taken seriously as a possible solution to the problem of human-unfriendly robots and AI for many decades now. But Asimov himself said, "There was just enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the 61 words of the Three Laws."

Back in summer 2004, the Singularity Institute launched a website project, "Three Laws Unsafe", a critique of Asimov's Laws riding on the publicity of the "I, Robot" movie. Check out the articles section, which includes a submission of mine.

But yeah, anyone know where that Robot Ethics Charter is, or the names of anyone who was working on it? We need to get our magnifying glasses out and scrutinize that shit.

Filed under: AI, robotics 17 Comments
13Aug/08

Two Papers You Should Read

Some of you may have seen these papers already, as I mention them frequently, but they're important enough that I like to re-mention them regularly. They're "Artificial Intelligence as a Positive and Negative Factor in Global Risk" by Eliezer Yudkowsky and "The Basic AI Drives" by Steve Omohundro. The papers are 42 and 11 pages, respectively. There's no abstract for the first paper, but here's the abstract for the second:

"One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional systems which will want to modify their utility functions. We next discuss the drive toward self-protection which causes systems try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity."

Feel free to post your reactions here.

Filed under: AI 6 Comments
13Aug/08

Conscious Thought Leads to Better Decisions

From Eurekalert, a press release titled, "Complex decision? Don't sleep on it":

Neither snap judgements nor sleeping on a problem are any better than conscious thinking for making complex decisions, according to new research.

The finding debunks a controversial 2006 research result asserting that unconscious thought is superior for complex decisions, such as buying a house or car. If anything, the new study suggests that conscious thought leads to better choices.

Since its publication two years ago by a Dutch research team in the journal Science, the earlier finding had been used to encourage decision-makers to make "snap" decisions (for example, in the best-selling book Blink, by Malcolm Gladwell) or to leave complex choices to the powers of unconscious thought ("Sleep on it", Dijksterhuis et al., Science, 2006).

At stake in these conscious/unconscious thought experiments (literally) is a wider philosophical argument about the value of intuition and hunches. We want to think that hunches produce better decisions, and have been taught since we were children that this is an intelligent way to approach reality ("Use the Force, Luke.") However, it ain't so. Though hunches may be useful for simple decisions, like when to swing a bat to hit a ball, conscious thought appears to be superior for complex decisions, the ones that really matter.

It appears that the mysteriousness of unconscious thought may be part of its appeal. However, I find that conscious thought can be as mysterious as unconscious thought. Underlying every conscious thought is a bedrock of unconscious beliefs and assumptions. Only through deliberate questioning can we methodically dig up these beliefs and examine them for accuracy and relevance. Without regular housekeeping, things can get pretty messy down there. The great project of analyzing our beliefs with conscious thought is far more interesting than the plug-and-play automaticity and quick fix of unconscious thought.

Some arguments for the infeasibility of AI rest on the supposed mysteriousness and power of unconscious thought. But as I mentioned, conscious thoughts rest on unconscious ones, so this mysteriousness and power are still retained in consciousness. All that aside, cognition is way less mysterious than it was a few decades ago, and we now know a tremendous amount about the mind. It's only a matter of time before its structure becomes understood, just like our place in the cosmos, interactions between chemicals, the behavior of electromagnetic fields, and thousands of other phenomena that were once baffling but are now taught in high school.

Of course, investigating the structure of thought in greater detail and coming to understand it may frustrate people like Douglas Hofstadter, who would lose respect for humanity if we come to learn too much about ourselves too soon. According to Hofstadter, reaching the goal of AI in a few decades would make him "fear that our minds and souls were not deep".

Such spiritualistic language in reference to the human mind only discourages level-headed research and objective question-asking.

Filed under: philosophy 11 Comments
13Aug/08

Interview with Dr. Steel

Dr. Phineas Waldolf Steel is a mentally twisted but awe-inspiring figure whose interests span the production of propaganda, the construction of chronically malfunctioning robots, puppet shows, and an ongoing attempt to become World Emperor for the purpose of turning this planet into a Utopian Playland. His growing movement aims to move beyond conflict and war to forge a world that makes fun the top priority. His multi-faceted persona is an example of what people can do when they are highly creative in a variety of cultural areas.

Dr. Steel is an entertainer, leader, musician, artist, and thinker. If you're interested in finding out more about him, check out his website. The laboratory in the toyland section is particularly entertaining. As an independent artist, Dr. Steel hasn't sold out to any record companies yet, though I'm sure that he would accept a big contract if it were part of his master plan for world domination. His music has been described as "hip-hop industrial opera", which is correct in the abstract, though I'd also add "experimental". Imagine nerdcore intermixed with assorted sampling, nutty beatboxing, guitars, reed instruments, choirs, and an accordion.

Dr. Steel is no stranger to transhumanism, as you'll see in our interview. He has written songs called "Build the Robots" and "The Singularity" that serve as his odes to all things robotic and post-biological. You can get the mp3s from his latest album here, or all three here. Dr. Steel is one among a growing group of transhumanist musicians and artists.

In a world of pre-packaged, corporatized, formula-driven entertainment, I find Dr. Steel to be refreshingly different and rebellious. I was so intrigued by his self-presentation and approach to the world that I had to sit down and ask him a few questions. The interview begins below.

Accelerating Future (AF): Dr. Steel, have you heard of transhumanism? Do you consider yourself a transhumanist?

STEEL: Absolutely, I do consider myself a transhumanist. The desire to transcend biology, as Ray Kurzweil is known for saying, remains at the forefront of my consciousness. I am frequently frustrated with the limitations of my current, physical form and I foresee great possibilities as we evolve into electronic life.

AF: Some of us think that the fastest way to get really useful robots is to build an artificial intelligence that designs these robots for us. Given your checkered past in robot development, have you considered this option?

STEEL: Indeed, there will come a time when artificial super intelligence will be able to out-think and out-perform us. To see “the robots building robots building robots” is one way of refining the evolution and development of technology based life. However, I am most interested in integrating our consciousness into this technology. To be able to back-up one's brain and utilize this as the basis of such creations will allow us to integrate ourselves into the next step of existence. Our creativity is our greatest power, and in fact this is what I believe the true purpose of the universe is; to create. Humans have been able to harness this ability in unique ways and to build upon that by upgrading ourselves will be the key to moving into a new field of infinite possibilities.

AF: In your video on robotics you mention nanotechnology. If you could use advanced futuristic nanobots for one application, what would it be?

STEEL: Oh goodness, there are so many possibilities. Though, for the fun factor I would have to go with the use of foglets. These clusters of nanobots programmed to manifest in their designated form could prove very useful indeed. Be it the replication of a Tyrannosaurus Rex or a very comfortable arm chair, foglets could provide hours of maniacal entertainment.

AF: One of the biggest challenges of space travel would be isolation from the bulk of society and the absence of an Internet connection. What would you do to amuse yourself on a long journey through space?

STEEL: It's interesting, what you describe as a challenge reads a bit like a vacation in my book. I would, however, require a great deal of reading and writing material. Such a journey would certainly give me the time to complete my illustrated manifesto... oh, and I would need an accordion as well.

AF: Even those of us who are obsessed with the amazing potential of artificial intelligence and the coming Singularity are concerned about mankind being destroyed by AIs gone wrong. Is there anything we can do to avoid this negative outcome?

STEEL: It seems to me that if mankind successfully creates something that ends up wiping out the entire species, then we deserve such a demise. There is always a way to overcome a problem and it is this sort of creative thinking that makes us so very special. If we are not up to the task, then evolution has passed us by and electronic life would then inherit the Earth. It's also important to remember that when being chased by a robot, it's best to keep a garden hose and a bucket of magnets handy.

AF: With achievements in art, video, music, philosophy, and complete insanity, you have shattered the traditional boundaries of creativity, expression, and existence in general. Is there any way for us to become as awesome as you, Dr. Steel?

STEEL: Why thank you ever so much for the tremendous compliment. I must assure you, however, that I am but a simple carbon life form. Until I am able to transcend biology, I hold no more potential than any other human on the planet. Let us all reach beyond our grasp to obtain the title of “awesome”.

13Aug/08

Funding Secured for Diamondoid Mechanosynthesis Research

Finally, some serious research will experimentally explore the possibility of diamondoid mechanosynthesis (DMS). This research will be conducted in the UK. Here's the first paragraph of the press release:

Professor Philip Moriarty of the Nanoscience Group in the School of Physics at the University of Nottingham (U.K.) has been awarded a five-year £1.53M ($3M) grant by the U.K. Engineering and Physical Sciences Research Council (EPSRC) to perform a series of laboratory experiments designed to investigate the possibility of diamond mechanosynthesis (DMS). DMS is a proposed method for building diamond nanostructures, atom by atom, using the techniques of scanning probe microscopy under ultra-high vacuum conditions. Moriarty’s project, titled “Digital Matter? Towards Mechanised Mechanosynthesis,” was funded under the Leadership Fellowship program of EPSRC. Moriarty’s experiments begin in October 2008.

If reliable DMS is possible, it could eventually lead to full-fledged molecular nanotechnology, which would have diverse applications, many of them dangerous. Advocates of MNT traditionally overestimate the probability of MNT being possible at all while underestimating the negative applications of the technology.

I've been following the Foresight Institute, the leading molecular nanotechnology-oriented non-profit, for many years now. Looking back, I feel disappointed at the lack of emphasis on the dangers of MNT in the organization's message and online material. I call on the Foresight Institute to focus more on the potential downsides of molecular nanotechnology. (Note: the Foresight Institute's President, Christine Peterson, has indicated in the comments that there will indeed be a new policy focus on addressing potential downsides, through an initiative called Open Source Physical Security. She spoke about this at the 2007 Singularity Summit.)

Back to the research at hand. Here's a summary of what's been happening. For the last few years, Rob Freitas and Ralph Merkle have been putting together a minimal toolset for DMS. The press release describes this as a "comprehensive three-year project to computationally analyze a complete set of DMS reaction sequences and an associated minimal set of tooltips that could be used to build basic diamond and graphene (e.g., carbon nanotube) structures." Now, Philip Moriarty, along with one postdoc and four PhD students, will experimentally test many of the predictions presented by this study.

This research will have huge ramifications for the future of manufacturing and medical technology, whether it succeeds or not. Many of the most interesting cybernetics technologies would require atomically precise manufacturing to be implemented successfully. It's uncertain how we might get atomically precise manufacturing, but DMS is one possible route. Synthetic biology is another. If this research reveals that DMS is harder than the advocates think, then synthetic biology may start receiving more attention as a general-purpose manufacturing approach.

9Aug/08

What is the Singularity?

The Singularity has nothing to do with the acceleration of technological progress. It is only somewhat related to interdisciplinary convergence. The universe is not specially structured for the Singularity to happen. History has not been particularly leading up to it, except in the sense that inventing new technologies gets easier when civilization has more advanced building tools and knowledge. The Singularity is the creation of smarter-than-human intelligence, nothing less, and nothing more.

The Singularity is not a belief system. It is a likely (but by no means certain) future event with great potential for good and for ill. Sort of like nuclear technology, if nuclear technology could invent more advanced technologies on its own and have independent goals. Kind of scary, really.

The Singularity is a hurdle for the human species to jump, not a stairway to Heaven. It could fairly easily be avoided or delayed: by blowing up most of the major cities, by detonating H-bombs in the upper atmosphere (EMP), by someone taking over the world, and so on.

The Singularity is not mystical because intelligence is not mystical. The Singularity is just the development of a new type of intelligence. Intelligence operates according to the laws of physics and other rules, just like everything else. It's not magic, though intelligence can sometimes seem like magic when it's greater than our own.

Intelligence is what leads to people like Leonardo da Vinci and Albert Einstein, as well as the miracle of human intelligence in general. Remember that every so-called "genius" is still firmly within the bounds of the natural variation of the human species. And our species is more uniform than most. After all, we went through a population bottleneck around 70,000 years ago. Maybe if we were more genetically diverse, or had faced even more serious challenges as we were evolving (perhaps more vicious, intelligent predators that couldn't be brought down by simple weapons like spears?), we'd be way smarter than we are now. If that's how history had gone, our greater intelligence wouldn't seem "special" -- it'd just be the way things were.

Rather than looking at the Singularity as the culmination of complexity in the universe since the Big Bang, a highly dubious proposition, I look at it as a temporary thing we have to deal with before we can sit back and relax. A single intelligent species on a planet is not a stable state. It's only a matter of time before an intelligent species (like humans) figures out the principles underlying its own intelligence and exploits them to create new variants of itself. In the wider multiverse, this has probably already happened countless times.

Some people say, "you can't engineer intelligence -- it's too mysterious". These are the same people who said life was animated by élan vital, that organic chemicals could not be synthesized from inorganic precursors, that the Earth was the center of the universe, and so on. The Bible or a tendency to pat yourself on the back may have taught you that the principles of intelligence are unbelievably complex or subtle, but that's how things we don't understand often seem. Mysteriousness is cool, and if intelligence doesn't have mysteriousness, then how can it be cool?

Others say, "human minds are Turing-complete machines, so any other type of mind will have similar capabilities to our own". This is self-congratulatory conceit. Just because two machines are Turing complete does not mean that they can extract statistical regularities from sensory data and arrange them into concepts, inferences and decisions with equal ability. Depending on disparities in the knowledge base and processing structure of the mind, the amount of time it takes to learn something can vary by many orders of magnitude. It appears there are some things certain people just can't learn. Animals can't learn much that humans find simple, even though they obviously have some form of intelligence.
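A toy programming analogy (mine, not from any of the thinkers quoted above) makes the point. The two functions below are computationally equivalent, in that they return the same value for every input, yet their "processing structure" puts their running times many orders of magnitude apart:

```python
from functools import lru_cache
import timeit

def fib_naive(n):
    # Exponential time: recomputes the same subproblems over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # The same function with memoization: linear time.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(timeit.timeit(lambda: fib_naive(30), number=1))  # roughly a second
print(timeit.timeit(lambda: fib_memo(30), number=1))   # roughly microseconds
```

Both run on the same Turing-complete machine, but only one of them would ever finish computing the 200th Fibonacci number.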

In the same way that someone of average intelligence will never be able to make contributions to the cutting edge of particle physics, we humans will never be able to achieve certain feats with our limited brains. Instead of crying about it or going into a state of denial, we need to come up with a theory of intelligence and use it to boost our own, as well as instantiate intelligence in a nonbiological medium.

It is a mistake to think that the intelligence we create will be on our side automatically, for instance by integrating ourselves sufficiently closely with it, or by trusting that wisdom is inextricably connected to intelligence. This is optimistic fantasy. It makes a nice story, but the reality -- that we'll need to work our asses off to ensure that digital intelligence is aligned with our goals -- is far less pleasant. It means we need to reevaluate our conception of the future. The problem -- creating predictably benevolent intelligence -- is absolutely overwhelming once you realize its scope.

Most of the challenges we face as individuals and as organizations have to do with other humans: convincing them to do things, meeting their expectations, competing with other groups, ensuring structure in our organizations. This problem is totally different. There may be only one chance to get it right. It's not about humans, but about a complex structure that we are just beginning to really understand -- the relationship between cognition and "morality", shorthand for an extremely complex set of human-specific rules and tendencies that many of us mistakenly assume comes automatically prepackaged with any intelligence.

This should not be a religion or a movement. It's an engineering task. Much more mundane than you might think. The philosophy necessary for success may be complex, but just because the task is mundane doesn't mean that it won't be difficult, or that the benefits of success won't be sublime.

It's a difficult task, but it seems possible. We just need to do it. Even if you're not a programmer or AI theorist, intellectual, moral, and financial support means a lot.

Filed under: singularity 16 Comments
7Aug/08

Dr. Steel on Robotics

The views expressed in this video are solely those of Dr. Steel™.

Filed under: robotics, videos 11 Comments
6Aug/08

Vernor Vinge’s Latest Take on the Singularity

Vernor Vinge has an interesting and somewhat unique take on the Singularity, which is ironic, given that all the spinoff definitions are based on his original one. Even so, I regularly disagree with some of his points.

One of the points he frequently makes is that a hard takeoff (superintelligence nearly overnight) would necessarily be bad. I disagree -- there are likely to be bad hard takeoffs, and good hard takeoffs. If the superintelligence in question actually cares about human beings, then surely its "hard takeoff" could be orchestrated in such a manner that everyone benefits and no one has their life "flip turned upside down". On the other side of the coin, if the superintelligence didn't give a damn about human beings, then we'd likely have our constituent atoms rearranged into something it considers more "interesting", like a cosmic whiteboard for its beloved mathematical equations.

Favoring a hard or soft takeoff is not like picking between chocolate and vanilla ice cream. Instead of being a matter of human preference, it's likely that objective facts about the structure of cognition will dictate how quickly an AI or intelligence-enhanced human would be capable of improving its own intelligence and directing it towards the achievement of real-world goals. These facts include: how smart humans are relative to what's possible, how easy it is to use an abstract theory of intelligence to implement concrete improvements, what sorts of knowledge are necessary to implement these improvements, and so on. Though a soft takeoff may be possible, I tend to focus on the hard takeoff possibility, because it's the primary scenario you can benefit from preparing for in advance. Given a slow takeoff, there is a longer window of opportunity to guide circumstances towards beneficial ends.
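A crude toy model, my own illustration rather than anything from Vinge, shows why these objective facts matter so much. Suppose an AI's rate of self-improvement scales with some power of its current intelligence:

$$\frac{dI}{dt} = k I^{\,p}$$

For $p < 1$ growth is sub-exponential and we get a soft takeoff; for $p = 1$ it is exponential; and for $p > 1$ the solution $I(t) \propto (t_{*} - t)^{-1/(p-1)}$ diverges in finite time, the mathematical caricature of a hard takeoff. The point is that $k$ and $p$ stand for empirical facts about cognition, not matters of human preference.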

So, check out this table I threw together:

                 You prepared                    You didn't prepare
Soft takeoff     helpful, though not crucial     probably still okay
Hard takeoff     a real shot at a good outcome   terminally screwed

If there's a soft takeoff, preparation was probably less crucial all along, though it is still very likely to be helpful. If there's a hard takeoff, preparation was probably necessary, and if you didn't put in the necessary effort (say, because there wasn't any immediate monetary payoff), then you and the rest of mankind could be terminally screwed. By "preparation" here I mean setting the initial conditions of the intelligence explosion directly, either by picking who to test out the intelligence-enhancement machine on or by programming the AI that actually grows up to be the first superintelligence. Anything else, like stockpiling canned goods in your basement, is pretty useless.

Another problem I have with Vinge in this video is that he initially implies that it's impossible to prepare in advance if the Singularity is a hard takeoff. Well, no. The long-term behavior of a superintelligence could very well depend on its initial conditions. Superintelligence derived from an AI programmed just to pick stocks might be less sympathetic to our human plight than a superintelligence derived from an AI programmed specifically with philosophical and moral issues in mind. Though he claims early in the video that it would be useless to prepare for a hard takeoff, near the end he brings up the analogy of children and says that if we are wise in the way we build smarter-than-human intelligence, we might be doing ourselves a favor. This is a welcome change of emphasis in his positions, as in past years he has largely neglected the possibility that humans might be able to nudge the Singularity in more pleasant directions by manipulating the initial conditions.

I get a weird feeling from all this Singularity coverage by IEEE. Did they cover the topic because they think it might actually happen, or because it's just the hip new thing that all the intellectuals are talking about? Probably the latter, but I can't be sure.

H/t to Bob Mottram for the video.

Filed under: singularity 16 Comments
6Aug/08

Support “The Singularity” Documentary

The Singularity Institute is requesting donations to support the completion of a documentary on the Singularity by Doug Wolens. Wolens is an experienced filmmaker who filmed Singularity Summit 2006 and 2007. Filming is 80% done, and Doug needs an additional $45,000 to complete the documentary in time for this winter's film festivals. He has already interviewed figures such as Ray Kurzweil, David Chalmers, and Peter Norvig. Excerpts of the interviews are available on the donations page.

Here's the blurb for the movie and an explanation of how it helps the Singularity Institute:

"The Singularity" is an investigation into the frontiers of scientific progress. Many important disciplines are coming together to drive this progress – nanotechnology, artificial intelligence, molecular biology, and more. "The Singularity" explores the current boundaries of this research, showing where the trends are leading, and how smashing the intelligence barrier will affect society.

In "The Singularity," award-winning documentary director Doug Wolens addresses vital questions for all of us: Exactly what is likely within our lifetimes? How are things moving so quickly? Who is working to prepare us for the shifts to come? And what should we be doing?

This isn't science fiction. It is the future, and it may be here sooner than we think.

SIAI will directly benefit from "The Singularity" documentary because its purpose is to educate mainstream audiences about the profound changes that will occur in our lifetime as we develop powerful technologies. As SIAI's subjects become understandable to mainstream audiences and the public recognizes the changes that will result from them, SIAI's leadership role will be strengthened.

If you support this, perform the old reach-around and bust out the plastic. That way, when you see the movie, you'll know you supported something highly educational and useful, particularly from a utilitarian perspective.

Filed under: SIAI, singularity 4 Comments