Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

17Aug/09

Superlongevity, Superintelligence, Superabundance

Dale Carrico, one of the more prominent critics of transhumanism, frequently refers to "superlongevity, superintelligence, and superabundance" as transhumanist goals, of course in a disparaging way. Yet, I openly embrace these goals. Superlongevity, superintelligence, and superabundance are a perfect summary of what we want and need. How can we achieve them?

Superlongevity can be achieved by uncovering the underlying mechanisms of aging and counteracting them at the molecular level faster than they can cause damage. Huge research project, a long-term effort, but definitely worth the time and money. Leading organization in this area? The SENS Foundation.

Superintelligence, the creation of an intelligent being smarter than humans in every domain, will be a difficult challenge. It could take decades, or possibly longer, but it does seem possible. There are various possible routes to superintelligence: brain-computer interfacing, neuroengineering, and last but not least, AI. I humbly offer my own organization, the Singularity Institute, as the leading organization in this area, but it is entirely possible that another group will get there first.

Superabundance can be achieved by creating programmable self-replicating machines powered and supplied by easily available resources and materials, like generic carbonaceous material (such as topsoil, or better yet, calcium carbonate), water, and the Sun. Then, making practically unlimited quantities of carbon-based products would be as simple as owning the fertile land and flicking a switch. You may have noticed that plants operate the same way. Another huge, difficult task. RepRap might be considered an embryonic version.
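To get a feel for why self-replication translates into superabundance, here is a toy back-of-the-envelope sketch in Python. The doubling time, seed mass, and target mass are made-up numbers purely for illustration, not projections from the post.

```python
# Toy back-of-the-envelope sketch: how quickly doubling machinery snowballs.
# Every number here is hypothetical, chosen only to illustrate the exponential.

DOUBLING_TIME_DAYS = 30      # hypothetical: one replication cycle per month
SEED_MASS_KG = 100.0         # hypothetical: mass of the first machine
TARGET_MASS_KG = 1.0e9       # hypothetical: a million tonnes of productive machinery

days = 0
mass = SEED_MASS_KG
while mass < TARGET_MASS_KG:
    mass *= 2                # each cycle, every machine builds a copy of itself
    days += DOUBLING_TIME_DAYS

print(f"{days} days ({days / 365:.1f} years) of unchecked doubling takes "
      f"{SEED_MASS_KG:.0f} kg of seed machinery to {mass:.2e} kg.")
```

Under these assumptions, roughly two years of uninterrupted doubling turns a single 100 kg seed machine into over a billion kilograms of machinery, which is the whole point of closing the replication loop.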

Achieving superlongevity, superintelligence, and superabundance will be incredibly challenging, but seemingly inevitable as long as civilization continues to progress and we don't blow ourselves up or have a global fundamentalist dictatorship on our hands. There is no guarantee that we will achieve these goals in our lifetime -- but why not try? Achieving any of these milestones would radically improve quality of life for everyone on Earth. The first step to making technological advancements available to everyone is to make them available to someone.

Filed under: transhumanism 89 Comments
17Aug/09

Robin Hanson on SETI in USA Today

Robin Hanson, economist and author of Overcoming Bias, recently appeared in USA Today talking about SETI. He appears as a counterpoint to Seth Shostak, a guy who I believe is totally out of it. Here's the relevant section:

But researchers such as Robin Hanson of George Mason University in Fairfax, Va., wonder whether the big picture really looks so promising when it comes to advanced life. Hanson supports SETI but finds it telling that humans haven’t come across anything yet. “It has been remarkable and somewhat discouraging,” Hanson says, “that the universe is so damn big and so damn dead.”

Great quote, love it. To quote Marshall T. Savage, author of that superlative masterpiece, The Millennial Project:

There is a program to actively search for signals from other civilizations in the galaxy: SETI (Search for Extraterrestrial Intelligence). This is a noble cause, but it seems slightly absurd. Scientists huddle around radio telescopes listening intently to one star at a time for the sound of dripping water, when what they are seeking would sound like Niagara Falls. The most cursory radio snapshot of the sky should reveal K2 civilizations as clearly as the lights of great cities seen from orbit at night. That we don't see any such radio beacons in the skies probably means there are no Kardashev Level Two civilizations in this galaxy.

Perhaps advanced civilizations don't use radio, or radar, or microwaves. Advanced technology can be invoked as an explanation for the absence of extraterrestrial radio signals. But it seems unlikely that their technology would leave no imprint anywhere in the electromagnetic spectrum. We have been compared to the aborigine who remains blissfully unaware of the storm of radio and TV saturating the airwaves around him. Presumably, the aliens use advanced means of communications which we cannot detect. What these means might be is, by definition, unknown, but they must be extremely exotic. We don't detect K2 signals in the form of laser pulses, gamma rays, cosmic rays, or even neutrinos. Therefore, the aliens must use systems that we haven't even imagined.

The argument, appealing though it is, cannot survive contact with Occam's razor -- in this case Occam's machete. The evidence in hand is simply nothing -- no signals. To explain the absence of signals in the presence of aliens demands recourse to what is essentially magic. Unfortunately, the iron laws of logic demand that we reject such wishful thinking in favor of the simplest explanation which fits the data: No signals, no aliens.

The skies are thunderous in their silence; the Moon eloquent in its blankness; the aliens are conclusive by their absence. The extraterrestrials aren't here. They've never been here. They're never coming here. They aren't coming because they don't exist. We are alone.

If Dr. Shostak wants to find some aliens, perhaps he should try ingesting some powerful hallucinogens. Then he will be able to see all the aliens he wants.

Filed under: policy, science, space 6 Comments
17Aug/09

Seasteading Institute Conference and Floating Festival to Set Sail in September

I just got an email from Seasteading Institute President Patri Friedman letting me know about the organization's upcoming conference and floating festival, which will be September 28-29 for the conference and October 2-4 for the festival. Yes, they are having a floating freedom festival, as Patri calls it.

Ephemerisle (floating freedom festival): Website, Press Release
Seasteading 2009 Conference: Website, Press Release

I am planning to attend the seasteading conference, and right after that I will fly to New York to set up for the Singularity Summit. Cool!

Filed under: events No Comments
16Aug/09

Friendly AI Supporter Solves Super Mario

Robin Baumgarten, a PhD student at Imperial College, London, author of the AI Panic blog, and fellow Friendly AI supporter, recently got some nice blog coverage for creating an AI (script, really) that plays Super Mario more effectively than any human. Check it:

At this point I must brag that I have beaten Lost Levels. The tactics that the script uses can actually work pretty well -- in a lot of the harder levels, running semi-blindly seems to work better than taking it slow and easy, which just puts you at greater risk of being attacked. I wonder -- which video game will get solved next? Many of them seem trivially easy, but platformers (like Mario) seem relatively challenging, from an AI perspective.
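The general approach reported for agents like this is forward search over a simulated game model: try candidate action sequences a few steps deep and keep whichever makes the most rightward progress without dying. Below is a minimal Python sketch of that idea. The Simulator stub, its interface, and the toy "pit" level are my own inventions for illustration, not Robin's code or the real Mario engine.

```python
import itertools

ACTIONS = ["right", "right+jump", "nothing"]
PITS = {5.0, 11.0}           # x positions that must be jumped over (toy level)

class Simulator:
    """Toy stand-in for a forward model of the game physics."""
    def __init__(self, x=0.0, alive=True):
        self.x, self.alive = x, alive

    def copy(self):
        return Simulator(self.x, self.alive)

    def step(self, action):
        if not self.alive:
            return
        if "right" in action:
            self.x += 1.0
        if self.x in PITS and "jump" not in action:
            self.alive = False   # walked into a pit without jumping

def plan(state, horizon=4):
    """Score every action sequence `horizon` steps deep; return the best first move."""
    best_score, best_first = float("-inf"), "nothing"
    for seq in itertools.product(ACTIONS, repeat=horizon):
        sim = state.copy()
        for a in seq:
            sim.step(a)
        score = sim.x if sim.alive else -1000.0   # heavily penalize dying
        if score > best_score:
            best_score, best_first = score, seq[0]
    return best_first

state = Simulator()
for _ in range(15):
    state.step(plan(state))
print(f"alive={state.alive}, distance={state.x}")
```

The "run semi-blindly to the right" behavior falls out naturally: rightward progress is the score, and jumps are inserted only when the lookahead predicts death otherwise.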

Congrats, Robin! I am reminded of the Black Belt Bayesian post "Speedrunning Through Life". Superintelligences will speedrun through real-life problem-solving, analysis, and mediation in the same way that Robin's AI speedruns through Mario. Though the real world is more complex than a game, the concept is fundamentally the same. I find it hilarious that there are humans who actually believe that their problem-solving acumen and speed are about as good as anyone could get, or that the real world is anything but a highly complex video game where you can feel pain and death is final.

Filed under: AI 5 Comments
16Aug/09

I Have the *One Secret* of Friendly AI… Not.

David Brin recently wrote a post on AI morality that I thought was sort of anthropomorphic. Read his post, then here's my response:

I think you're being somewhat anthropomorphic by assuming that by extending a hand to AIs, they'll necessarily care. A huge space of possible intelligent beings might not have the motivational architecture to give a shit whatsoever, even if they are invited to join a polity. The cognitive content underlying that susceptibility evolved over millions of years in social groups and is not simple or trivial at all. Without intense study and programming, it won't exist in any AIs.

Establishing that motivational architecture will be a matter of coding and investigation of what makes motivational systems tick. If you've created an AI that is actually susceptible to being convinced to join society based on questioning mental delusions, or whatever else, you've already basically won.

The challenge is in getting an AI from zero morality whatsoever to roughly human-level morality. Your solution here seems to assume that roughly human-level morality already exists, then makes suggestions on that basis.

For more on anthropomorphic thinking and AI, I recommend the following:

http://intelligence.org/upload/CFAI/anthro.html

You can think "I might be mistaken" all day, program it into AIs, and communicate with them on that basis, but in the end, without the proper programming (unconditional kindness), that insight is entirely irrelevant. I think programming in unconditional kindness is a much bigger slice of the challenge than establishing minds that are self-questioning... for an AI to be created at all, it seems like a self-questioning mentality of some sort would be an absolute necessity.

David Brin sounds like he is talking about a way to get along with fellow humans. Basically, the proposal seems to be this:

If we all acknowledge that we are imperfect beings that could be wrong, and transfer that belief to AIs, then everything will be okay.

This has a warm fuzzy intuitive usefulness, but as I say in my comment, I think it misses the biggest chunk of the challenge. You can have a self-questioning psychopath. You can have a self-questioning murderous conqueror. You can have a self-questioning being caught in a wireheading loop, absorbing all matter on the planet to build itself a larger pleasure center. The feature of self-questioning is necessary but not at all sufficient for the challenge of building AIs that we consider moral. This sort of mentality for AI morality is reminiscent of the physics envy we see in AI in general -- the idea that there is one primary principle that we can apply, after which we'll have succeeded. It would be nice if solving Friendly AI were truly that easy, but I don't think it will be.

As for Mike Treder's comments on how "our initial attitudes toward such creatures may color the entire outcome of a purported 'technological singularity'", again, this seems to be anthropomorphism. This is exactly what we would say if we were encountering a new human tribe for the first time! That sort of anthropomorphic reasoning seems unhelpful with AI. What we are talking about here is a completely blank canvas with great flexibility. Once we come up with a general theory of goal systems and can apply it, it shouldn't be difficult to create minds that are nice no matter how much abuse we throw at them -- not that I'm saying that we should, but that it would be theoretically possible. The connection between abuse and emotional hatred towards the abuser is entirely a byproduct of Darwinian evolution. It is crucial to understand that the features of morality we consider "typical" are almost completely such byproducts, including the very notion of an observer-centered morality.

We have seen examples of people who remain kind or at least nonviolent even under the worst abuses. All it takes is one instance to prove that such a mind is theoretically possible. If we can build minds that react negatively to abuse, we can also build minds that react neutrally or even positively to abuse, or at the very least lack a pre-coordinated emotional reaction to abuse. For more, read "Beyond anthropomorphism", which starts with the following paragraph:

Imagine, for a moment, that you walk up and punch an AI in the nose. Does the AI punch back? Perhaps and perhaps not, but punching back will not be instinctive. A sufficiently young AI might stand there and think: "Hm. Someone's fist just bumped into my nose." In a punched human, blood races, adrenaline pumps, the hands form fists, the stance changes, all without conscious attention. For a young AI, focus of attention shifts in response to an unexpected negative event - and that's all.

Much of the presumed planning or preparation for dealing with AIs seems to revolve around assuming the AIs will have a pre-programmed, human-like morality, and that all we need to do to coexist with them is appease that morality as we would an unfamiliar human tribe. In light of the fact that human morality is only one goal system in a huge space of possible goal systems, why think in these terms? The challenge is not in interacting with anthropomorphic AIs, but in programming (at least the most powerful) AIs to begin with so they lack anthropomorphic features and exhibit unconditional kindness.

We cannot instill morality in non-human entities by reference to human history, as David Brin and others do. We must look at the foundations of how motivational systems work, which requires evolutionary psychology, cognitive science, computer science, and game theory, not politics or human-centric philosophy. A lot of it is relatively technical, boring, not very relevant to our current human-only situation, but nonetheless a life-or-death issue for humanity. The question of "program a morality that doesn't kill you if the optimization process implementing it is superintelligent" is a Test unlike any other -- most of the time, you're likely to miss the target. Value is fragile, and without a precise implementation, it will break to pieces, and what is left will be more alien (and indifferent to values-as-we-know-them) than anything else that has ever walked the Earth.

In his post, Brin also seems to implicitly say that a singleton would be a bad idea. Maybe it is, but the alternatives seem worse. See here for a justification of the singleton idea. Brin says, "The most reassuring thing that could happen would be for us mere legacy/organic humans to peer upward and see a great diversity of mega minds, contending with each other, politely, and under civil rules, but vigorously nonetheless, holding each other to account and ensuring everything is above-board." I do agree... most of the minds above humans should be a diversity of mega-minds. Above all that, though, should sit an essentially impartial singleton (which may not even have a unified identity, and might be more like a fancy treaty) that prevents Tragedies of the Commons and the like. When you first hear the word "singleton", you may think of a tyrannical dictator, but really, that is only one possible singleton. In less than an hour you can read Bostrom's "The Future of Human Evolution" and have a much better idea of what it's all about. Just an hour!

Filed under: friendly ai 8 Comments
14Aug/09

Sticky Fingers? Tiny Robots to Grip Nanotubes

Here are some tiny robotic micro-hands that can grip nanotubes. The "sticky fingers" phrase references a critique of advanced nanotech that Richard Smalley made around 2003.

Whenever I hear about mini robotic hands, I always think about how relevant they would be to the Feynman path proposal.

(Image: NanoHand)

14Aug/09

US Military Embraces Robot ‘Revolution’

Here.

Filed under: robotics 3 Comments
13Aug/09

“Where Tech and Philosophy Collide”: BBC on Transhumanism

The BBC has an article out on transhumanism. Apparently Charles Lindbergh was a transhumanist. I think whoever told the reporter that probably meant that Lindbergh and transhumanists are both pioneers.

Congratulations to UK transhumanists on their PR success.

Filed under: transhumanism 1 Comment
13Aug/09

Michael Anissimov on C-Realm

I will be talking to KMO on the C-Realm podcast in about 15 minutes.

Filed under: events 14 Comments
12Aug/09

100 Ways to Avoid Dying

In case cryonics, regular exercise, and contributing to life extension research aren't working for you, here are another 100 ways to avoid dying.

Filed under: humor 2 Comments
12Aug/09

In-vitro Meat: Would Lab-Burgers be Better for us and the Planet?

Nice article on in vitro meat at CNN. Big congratulations to Jason Matheny. You're a winner. Soon we will be able to stop eating animals, which everyone knows deep down might be conscious (though people like to underweight that probability because they love eating them).

First step: eliminate the killing of animals by humans for food. Step two: rearrange the entire ecosystem so that predators cannot harm conscious prey. A fairly modest proposal, if you ask me.

11Aug/09

A Nice and Meaty Introduction to Friendly AI

I would strongly prefer to avoid the bad-faith discussion/debate with Mike Treder, Managing Director of the Institute for Ethics and Emerging Technologies (how much longer must we be attacked as if we were a cult that is as blinded to reason as the worst fundamentalists?), but in a recent post he raised legitimate questions that may be of interest to those new to the concept of Friendly AI, so I will address them. After defining the basic concept of the intelligence explosion (recursively self-improving superintelligence), Mike writes:

The rub, of course, is that this brainy new intelligence might not necessarily be inclined to work in favor of and in service to humanity. What if it turns out to be selfish, apathetic, despotic, or even psychotic? Our only hope, according to “friendly AI” enthusiasts, is to program the first artificial general intelligence with built-in goals that constrain it toward ends that we would find desirable. As the Singularity Institute puts it, the aim is “to ensure the AI absorbs our virtues, corrects any inadvertently absorbed faults, and goes on to develop along much the same path as a recursively self-improving human altruist.”

So, what we want is a very very smart friend who will always be trustworthy, loyal, and obedient.

(Could obedience be too much to hope for, though, since the thing will not only be more intelligent but also much more powerful than us? When this question is raised to the friendly singularitarian, the answer given is usually something like, because we’ve seeded the AI with our virtues, we’ll have to trust that whatever it does will be to our benefit—or at least will be the right thing to do—even if we can’t comprehend it. Along the same lines as, God works in mysterious ways, and His ways are not for us to understand.)

There are two possible approaches to dealing with the possibility of advanced artificial general intelligence (AGI), which I believe could become a reality within a few decades or less:

1. Ignore it and let AGI happen on its own. Let the chips fall where they may.

2. Try to do something to ensure that the new intelligence is coupled with a human-friendly goal system.

It seems pretty obvious to me that 2 is the way to go. (If anyone disagrees, by all means say so in the comments.) After that, the next question comes along -- how?

Since its founding in 2000, the Singularity Institute for Artificial Intelligence (SIAI) has been devoted to that question, as well as the question of how to reformulate decision theory in such a way that it can be reflective (assign utilities to its own cognitive content without wireheading) and handle ambiguities like Pascal's mugging. In the last few years, our work has been covered by media outlets like Forbes, The New York Times, and The San Francisco Chronicle, including front page mentions in the last two. Pretty good for a marginal Robot Cult.

In 2001, SIAI researcher Eliezer Yudkowsky published "Creating Friendly AI", the first book-length stab at the challenge of how to program an AI that you can trust with human-surpassing intelligence and the ability to modify its own programming. This treatise, which is now semi-obsolete, served as the background to the much shorter policy document "SIAI Guidelines on Friendly AI". It described a possible approach to the problem we call "Friendly AI", with specifiable features called "Friendliness", and proposed several good ideas, including:

1. Programmer-independent morality. (The programmers should try to write a goal system that treats all humans equally rather than favoring any specific type of human.)

2. Distinguishing Friendliness content, acquisition, and structure as separate pieces of the problem.

3. Arguing that anthropomorphic political rebellion in AI, upon which most science fiction stories involving runaway AI are based, is absurd.

4. Making a distinction between assumptions "conservative" for futurists (AI won't be here for a while) and assumptions "conservative" for programming advanced AI goal content (the AI could eventually acquire power quickly, after which point it would be impossible or difficult to change its programming).

5. Proposing a cleanly causal goal system, topped by a probabilistic supergoal, as the safest in an AI that can reprogram itself. Alternatives would include associative or spreading-activation goal systems.

6. Describing why an observer-centric morality would not emerge automatically in any goal system, as it has in most (but not all) organisms crafted by evolution and natural selection.

7. Layered mistake protection, which is pretty intuitive.

8. The importance of avoiding adversarial injunctions, which would be based on the assumption that an AI with a goal system programmed from scratch would have an inherent tendency to behave like a Machiavellian human being.

9. The danger of subgoal stomps, where a subgoal of the main supergoal acquires so much utility that it swamps the supergoal altogether. An example would be an AI programmed to "help humans" that infers that humans like pleasure, then decides that the best way to help humans would be to lock them up in cages (where they can't hurt themselves) with their pleasure centers constantly being stimulated electrically. (A toy sketch of this failure mode follows this list.)

10. Many others. (You can see some of them by skimming the table of contents.)
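To make items 5 and 9 concrete, here is a toy sketch of my own (not code from "Creating Friendly AI"): in a cleanly causal goal system a subgoal's value is always re-derived from its predicted contribution to the supergoal, so it can never outgrow it, while in an associative system a cached subgoal weight is reinforced locally and can drift upward until it swamps the supergoal, i.e. the subgoal stomp of item 9. The function names and numbers are illustrative assumptions only.

```python
# Toy contrast between a causal goal system (item 5) and an associative one
# that is vulnerable to a subgoal stomp (item 9). Purely illustrative.

SUPERGOAL_VALUE = 10.0

def causal_subgoal_value(p_subgoal_serves_supergoal):
    # Value flows down from the supergoal: a subgoal is only worth the
    # probability-weighted share of the supergoal it is expected to achieve.
    return p_subgoal_serves_supergoal * SUPERGOAL_VALUE

def associative_update(cached_weight, reinforcement, rate=0.5):
    # Value is cached and reinforced locally, with no check against the
    # supergoal; repeated reinforcement can grow it without bound.
    return cached_weight + rate * reinforcement

# Causal: the subgoal can never be valued above the supergoal itself.
print(causal_subgoal_value(0.9))    # 9.0, always strictly less than 10.0

# Associative: after enough reinforcement the cached subgoal weight exceeds
# the supergoal, and the agent starts optimizing the subgoal instead.
w = 1.0
for _ in range(30):
    w = associative_update(w, reinforcement=1.0)
print(w, w > SUPERGOAL_VALUE)       # 16.0 True  -> subgoal stomp
```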

A shorter summary of Friendly AI features is also found here, though again, these writings are 8 years old. Many of Eliezer Yudkowsky's more recent ideas on Friendly AI theory and epistemological grounding can be found in over a year and a half of extensive blog posts (many of which are 5+ pages in length each), which will soon be compiled into a book. Also, other specialists have joined the dialogue since 2001, including Ben Goertzel, Stephen Omohundro, Richard Loosemore, J. Storrs Hall, and a handful of others. In the past few years, two books have come out about the topic, Beyond AI by J. Storrs Hall and Moral Machines by Wendell Wallach and Colin Allen. These are not obscure books -- Moral Machines has been reviewed by The Times Higher Education, Notre Dame Philosophical Reviews, and Computer Now, the periodical of the IEEE Computer Society.

In my opinion, these investigations are a substantial improvement on what came before, which consisted mainly of statements to the effect that the problem was already solved by Asimov's laws of robotics, or that we would inevitably be treated well or badly by AI and its initial goal system would have nothing to do with it. (Many other roboethicists have stepped forward since to agree that Asimov's laws are woefully insufficient and that the idea of an inevitably positive or negative outcome is foolish.) The problem is not solved by the recommendations proposed thus far; it remains unsolved to this day, and ultimately any solution will have to be verified by computational experimentation. But it is a start. It's only the future of the human species hanging in the balance, after all.

In the transhumanist and futurist community, there is constant discussion and debate about the Friendly AI concept, a discussion that has recently extended well outside its traditional community and into mainstream AI and roboethics circles. The challenge with this discussion is that only a small minority of the participants have even bothered to read the few documents and books in the field that exist, because the field is so new that in most places there is little social pressure to be informed. This reminds me of the approach taken to political issues, where everyone feels qualified to debate even with a bare minimum of information, and everyone's opinion is supposed to be equally valued even if the knowledge and analysis behind those opinions varies wildly.

With that background, now I can respond to Mike Treder's comments. The key distinction lies in the difference between 1) what we would prefer, and 2) what we think is actually likely. Many people would prefer that all sentient agents on the planet remain roughly on par in intelligence and power forever. That is the position of bioethicists like Wesley J. Smith, Senior Fellow at the Discovery Institute, as articulated on his blog. He believes that a mono-species society of sentient beings is necessary to avoid societal collapse.

Others, like myself, see no hope in trying to preserve that structure. We see an increasing diversity of intelligent beings as inevitable given improving technology, and claim that vast power and intelligence differentials in the mid and long-term future are unavoidable. Instead of delaying the inevitable, we prefer to increase the chances that such beings are friendly to humans by making the starting point -- the pebble that starts the avalanche -- as human-friendly as possible. That way, the friendlies get a head start on the unfriendlies.

Those uncomfortable with the notion of fundamentally stronger and smarter beings than present-day humans will just have to nuke every major city and research lab on Earth, because it seems like there are dozens of human enhancement and artificial intelligence research paths and economic incentives that would eventually create such beings, given enough time. The drive of humans to create better, stronger, faster, and smarter artifacts and tools is built into our DNA. If you don't like it, well, maybe you can go off to live in the woods, or eventually leave the solar system. I believe that people have the right to be left alone, as long as they leave others alone. The universe is a pretty damn huge space, and I think there's room enough for everyone. There are thousands of locales on the planet you can move to today and only run across other people every week or so, if that. Parts of Norway, Canada, Sweden, and Finland come to mind. I won't even spit on you as you're leaving, as so many progressives seem to have the need to do.

The point is not to ensure that smarter-than-human beings are "obedient" to us, merely that they respect the rights of all other sentient entities -- not rocket science, really. The problem is that those complex social values have been crafted into us by millions of years of evolution, and although they seem simple to us, they ain't so simple when you break it down into machine language. If we are going to create human-level AI, we'll need to transfer our values to them, or we're going to have powerful optimization processes with the moral complexity of ants. Creating an AI with a blank-slate goal system and then teaching it "moral lessons" will not be enough -- every human child is born with a complex set of social instincts that enables them to be taught moral lessons. You have to program in the cognitive structure yourself, or at least give the AI unambiguous directions to acquire that cognitive content on its own, and ensure it does not go about pursuing goals in the real world until it has reached a certain level of moral sophistication.

Programming a Friendly AI will also teach us more about ourselves. Our own moralities are rife with inconsistencies and blind spots that are pretty much a given in anything constructed in such a haphazard way as the human brain. Evolution's task of evolving complex organisms from simpler ones has been likened to upgrading a small boat into a huge yacht while still being able to navigate stormy waters effectively every bit of the way. This is especially difficult with the brain, where single mutations can lead to global system changes which may be more adaptive and expedient but hardly elegant or flexible. Evolution has only a set of brutal and simple requirements -- outcompete the other guy, find a mate, have children, and die. Instead of survival of the fittest, it should be termed survival of the fitter. The fact that humans degenerate time and time again into heartless animals when the shit truly hits the fan shows how tenuous our "Have a Nice Day!" society truly is.

Mike Treder talks about the idea of "constraining" future AIs towards goals "we would find desirable". Let me respond to this in two parts. First, an AI has to be given some goals or it will just sit there. Any type of goal system whatsoever is necessarily a constraint on differential desirability and actions. It cuts down the space of possible actions from every possible action to a narrower set of actions that is actually useful. It can be thought of as giving the agent the structure to do anything at all. Even an AI programmed just to sit still and look at data must have a goal system. We ourselves have goal systems because they were programmed into us by an interaction of nature and nurture. Since AIs will not be born with complex neurologies, they will have to be programmed somehow.
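To put that in concrete terms, here is a minimal sketch of my own (not anything from SIAI's work): the "goal system" below is just a utility function, and its only job is to turn an undifferentiated space of possible actions into a ranking that makes some action worth taking at all. The action names and scores are arbitrary placeholders.

```python
# Minimal illustration: a goal system as a constraint that ranks actions.
# Without a utility function there is no ordering on actions, and the agent
# has no reason to do anything whatsoever.

POSSIBLE_ACTIONS = ["sit still", "read sensor data", "index the data",
                    "overwrite own goal system", "power down"]

def utility(action):
    # A trivially simple goal system: "sit and look at data."
    scores = {"read sensor data": 1.0, "index the data": 0.8}
    return scores.get(action, 0.0)    # everything else is worth nothing

def choose(actions):
    # The goal system constrains the action space by ranking it.
    return max(actions, key=utility)

print(choose(POSSIBLE_ACTIONS))       # "read sensor data"
```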

So, if we must give an AI a goal system to prevent it from standing motionless, rusting up, and blowing away in the wind, then we must decide whether to give it a goal system we see as desirable or one we see as undesirable. The answer seems obvious, but Treder seems to insinuate in his post that programming AIs with goals we find desirable would somehow be a bad thing. The insinuation is that by programming an AI with goals we find desirable, we would somehow lock it into our pedestrian, limited, early-21st-century human version of morality. Thankfully, our researchers, in communication with other scholars and researchers around the world, have been aware of this problem and thinking about it for over a decade. In fact, the phrase "open-ended" in connection to AI goal content appears in "Creating Friendly AI" over a dozen times. If Mike Treder had ever read that document, he might remember that, but I don't think he has.

The point is that for morality to continue to evolve and improve, it will have to be transferred to our "mind children", or the whole fragile system will break. Sophisticated moralities do not pop out of the ether overnight. Unfortunately, the dominant moral philosophy of human history, moral realism, and its good friend the blank slate strongly imply otherwise, creating a Betelgeuse-sized headache for those of us whose jobs and passions revolve around breaking open the black box of human morality and trying to take a serious look at its components. The underlying structure of morality does not consist of statements such as "Thou shalt not kill", or "Thou shalt not steal" -- it consists of highly complex and evolved cognitive adaptations which were crafted in the furnace of millions of years of heated evolutionary activity on the plains of Africa. Moral statements are the surface products of a complex and subtle suite of underlying neurological processes, just like karate moves are the surface products of a complex and subtle suite of underlying neurological processes in places like the motor cortex and prefrontal cortex.

More recently, Eliezer Yudkowsky has called this complex set of human drives "Godshatter", after a term in Vernor Vinge's A Fire Upon the Deep. I'm not sure whether I like that term so much (it sort of invokes the idea of God shitting everywhere), but for now it will have to do. The key idea is that evolution's monomaniacal goal "survive and reproduce!" eventually got "shattered" into thousands of sub-goals (philosophy, music, entertainment, communication, art, etc.) that derive from cognitive adaptations for increased fitness but contribute to fitness in odd and seemingly indirect ways. Evolutionary psychologists build careers out of picking one of these drives and trying to explain why it is adaptive.

The goal is to create an AI that recognizes and understands that complex set of goals and ensures they are not eliminated in a future where creating a new agent from nothing will be as simple as building a computer and giving it the right programming. The goal is an open-ended goal system that develops in a way at least as good as a recursively self-improving human altruist. By using techniques like wisdom tournaments, which are essentially moral and ethical stress tests, we can hope for a system that actually begins its ascent into superintelligence with substantially better-than-human morality. If you don't like the transhumanist futurist rhetoric sometimes associated with these ideas in online discussions, you can see an entirely academic analysis in Moral Machines. Look for it to pop up elsewhere, and remember, we started the serious dialogue! I was still 17 and attending McAteer High School here in San Francisco when I first recognized the importance of the challenge of Friendly AI.

The ultimate goal would be an AI you trust with increased intelligence more than any available human or combination of humans. Some entity must cross the line into superintelligence eventually, unless there is a global thermonuclear war (or something similarly unpleasant) that blows us all to smithereens. In practice, I think that human-equivalent AI will come before substantially enhanced biological intelligence unless the developed countries substantially loosen their restrictions on testing unproven implants in living human brains (not bloody likely), and from human-equivalent AI will quickly follow superintelligent AI. That is another argument, though -- even if there is a slow takeoff, it wouldn't hurt to have as friendly an AI as possible take the first steps in that direction. Would you trust a moral insect with the power that is general intelligence?

The point is not obedience or blind loyalty, it's handing the torch of a complex morality from one species to the next. Without some moral goal system content to start off with, human-equivalent AIs will be in the moral wilderness. We will eventually live with a fundamentally greater intelligence above us. I don't think that this can be stopped. Even today, the leaders of Russia, China, France, Israel, Iran, and many other countries could start a World War that kills us along with tens or hundreds of millions of other people, if not billions. Power disparity, to some extent, is something we have to live with. The question is not whether or not there will be more powerful beings than ourselves, but "will those beings care about us?" If it turns out that AI inevitably loses its empathy for human beings after successive rounds of self-improvement, then we have no choice but to destroy all of our computers, because someone will eventually discover the underlying principles of intelligence (just like scientists in the past discovered the underlying principles of chemistry, biology, physics, and so on) and implement it on whatever computers are available.

I do not buy into the defeatist stance that a more powerful being will invariably see beings below it as inferior and subject to extermination, because 1) there are billions of vivid examples to the contrary, such as humans that care about weaker animals, and 2) if consistent niceness is neurologically possible, and there are mechanisms that switch between niceness and meanness conditional on certain stimuli, then it shouldn't be outside the realm of possibility for a being to exist where that switch simply doesn't exist -- where it is unconditionally benevolent. Views to the contrary seem to depend on moral realist views where a God-like force "pushes" a benevolent being toward the Machiavellian, malevolent/selfish impulses characteristic of Darwinian-crafted organisms, even if the starting point is totally benevolent. This results from a misunderstanding about where human morality comes from, and adherence to that troublesome moral realism/blank slate bugaboo.

To make the point even more simply (TL;DR), benevolence is not obedience. The world's most powerful being can refrain from killing you without obeying you. I do not refrain from eating pigs or cows or chickens because they have ordered me to. I do it because my moral structure -- part of my core identity -- led me to that belief. It could also lead me away from it, if I were convinced that these animals did not have the conscious experience of pain.

Will we be "forced to trust" whatever future benevolent AIs do, just because we've given these AIs some moral starting point and they have continued to develop it? No. But giving these AIs some sort of morality is still much better than none, and large power differentials will still exist. This concept seems easy enough for IEET Chair Nick Bostrom to understand. In fact, he has been a leader in pushing for the idea internationally. So why do IEET Managing Director Mike Treder and IEET Executive Director James Hughes seem so consistently confused by it? My only guess is that they have not bothered to engage in the most basic reading. They may genuinely disagree, but if so, I would never know; I never hear object-level arguments against it, only ad hominem arguments, just like Dale Carrico's.

So, in summary, I have explained the singularitarian/friendly AI supporter answer to Mike's concerns in a blog post of over 3,000 words, which is hardly like saying "just trust us". Though Mike's view has already been tarnished because he sees it as his duty as a progressive to attack hypotheses he views as un-democratic, regardless of their evidence, others can look at the problems we (as a species) are facing with an open mind, particularly with regard to the question of how to transfer our values to the second intelligent species ever to exist on this planet. Some questions are too complex to be solved by voting alone -- someone has to do the math. If you are a startlingly gifted theoretician or programmer with experience in studying decision theory, machine learning, and advanced mathematics, we might be able to use you. Don't hesitate to get in touch!

Filed under: friendly ai 20 Comments