Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

16Jul/09

Making Sense of the Singularity

Mike Treder, Executive Director of the IEET, who will be attending the upcoming Singularity Summit 2009 in New York on October 3-4 (register now!), has commented on the whole idea of the Singularity over at the IEET blog.

Like many Singularity commentaries, Mike's begins by conflating three totally different concepts that all have unfortunately come to be called "Singularity", depending on who you're talking to -- the idea of "a theorized future point of discontinuity" (Event Horizon), "when events will accelerate at such a pace" (Accelerating Change), "that normal unenhanced humans will be unable to predict or even understand the rapid changes occurring in the world around them" (Event Horizon again). Then, quoting the IEET Encyclopedia of Terms and Ideas, Mike writes, "It is assumed, usually, that this point will be reached as a result of a coming "intelligence explosion," most likely driven by a powerful recursively self-improving AI." (Intelligence Explosion.) Three definitions in one -- not good for understanding what you are analyzing. If the convention includes all three definitions, the convention should be discarded.

Later on, Mike writes:

Insisting on the possibility -- or, even more strongly, asserting the inevitability -- of an uncertain and debatable but incredibly momentous event leaves proponents vulnerable to a charge that they lack rigor and discipline in their thinking, that they have fallen prey to hopefulness and shed any semblance of healthy skepticism. If they cannot restrain themselves from heartily endorsing an unprovable proposition, then what credibility have they for other declarations or recommendations they might make?

This is a slightly odd paragraph. Firstly... "insisting on the possibility..." of an event? A possibility can have an arbitrarily low probability -- there's nothing very insistent about it. If there were no possibility, there would be no need for debate. Remember, it isn't even clear which possibility is being discussed -- is it the Event Horizon, Accelerating Change, or the Intelligence Explosion?

Indeed, no Singularitarian should assert the inevitability of the Singularity -- and ten years ago, five years ago, and today, only a small minority ever have. Ray Kurzweil seems to be one of the very few Singularitarians who even implies it. To dispel a couple of other possible myths about Singularitarians:

1. The Singularity is not necessarily a good thing: a "Singularity" could mean that everyone dies at the hands of a recursively self-improving AI or upload that is indifferent to our welfare.
2. The Singularity is not at all inevitable. We could easily blow ourselves up first.

Other Singularitarians who agree with the above two (seemingly obvious) statements: feel free to say so in the comments if you consider it useful.

More troubling is the suggestion by some (all?) singularitarians that the outcome they seek is not only possible but desirable.

To me, the Singularity is nothing more than creating greater-than-human intelligence... oddly enough, this is Vinge's original definition. Is greater-than-human intelligence necessarily desirable? Absolutely not. If such intelligence(s) have values indifferent to humanity, we could all die, as we've emphasized again, and again, and again. How many more times do we need to say it?

Given the substantial amount of uncertainty -- which they themselves admit -- surrounding the nature and impacts of such an occurrence, it seems imprudent to stamp the Singularity as unquestionably a "good thing."

In "a good thing", he links The Singularitarian Principles as a source. But the top of the page says, "This document has been marked as wrong, obsolete, deprecated by an improved version, or just plain old." Really, truly. Yudkowsky, the author of that document, dropped the idea of the Singularity as an unqualified "good thing" around 2002. He's mentioned this in practically every single talk since then. Every "Singularitarian" mentions this practically every time they talk about the Singularity. Are we Pollyannish optimists or psychotic doom-mongers? We can't be both simultaneously.

Mike writes:

Worse yet, some who proudly say they're working to bring about the Singularity have the temerity to proclaim that they alone hold the keys to making it a "friendly" event.

Absolutely not! If anyone ever claimed that, they were wrong. We (as in the SIAI) have only claimed that we focus on the issue of making AI friendly more enthusiastically than any other group working towards AGI, which is true. But we are in contact with many other groups that care about friendliness too, and that is wonderful. It's a research field, pioneered by our group, and we can only hope that it will continue to extend far beyond its origins, as it already has -- far more successfully, thankfully, than Mike's belief that attempts to engineer friendliness into seed AI are hubristic would suggest. People at Google care about it, people at Hanson Robotics care about it, many independent researchers care about it, and many institutional academic researchers care about it. So, far from claiming that we alone hold any exclusive keys, we welcome anyone to contribute to the field.

Besides sounding childishly naive, such a claim also invokes the specter of technocracy: if only all the big issues of this world were left to the few really smart people to solve, everything would turn out fine.

The need to work towards Friendly AI comes from simple assumptions:

1) Some possible future superintelligences could kill us all, as almost happens in The Matrix and Terminator.
2) Some possible future superintelligences could make the world a lot better, as is portrayed in some sci-fi books.
3) The first seed AI to cross a given threshold could bootstrap itself to superintelligence.
4) The way we design that seed AI could influence the way it subsequently develops.

The same points are made, of course, in an essay by Nick Bostrom (coincidentally, the Chair of the IEET, Mike's host organization), "Ethical Issues in Advanced Artificial Intelligence":

The option to defer many decisions to the superintelligence does not mean that we can afford to be complacent in how we construct the superintelligence. On the contrary, the setting up of initial conditions, and in particular the selection of a top-level goal for the superintelligence, is of the utmost importance. Our entire future may hinge on how we solve these problems.

Who will construct the superintelligence "seed"? Probably a relatively small group, perhaps a few thousand people at most, more likely a few dozen. Is Nick Bostrom "childishly naive" because he acknowledges the importance of initial motivations in a superintelligence and says that the "entire future may hinge on how we solve these problems"? This is one of the only times I have ever seen the Executive Director of an organization criticize the public statements of its Chair so explicitly.

It's implied, moreover, that meddling democrats and pesky government regulation will only slow things down and might even prevent the smart singularitarians from saving the day.

Negative. Where is this implied? It is extremely unlikely that any significant portion of society will take the challenge of superintelligence seriously before it is too late, anyway. That is not a political statement; it's simply a logistical forecast.

It's not an all-or-nothing thing. Participants in democratic politics (like myself) can analyze and vote on issues, government regulation can be responsibly formulated, and singularitarians can work on AI friendliness. Why does there have to be any inherent antagonism between these motivations? I'm a living example that someone can care about all of them at once. You can too.

Those who promote the idea that a Technological Singularity is not only possible and desirable but that its advent can be hastened through our efforts must be aware of the obvious parallels between their own beliefs and those of Christian Millenarians.

Does the same apply to those who think that nuclear fusion is possible and desirable and that its advent can be hastened through our efforts? Why not -- nuclear fusion promises a huge abundance of cheap power, and its proponents think it could render fossil fuels obsolete -- a "superlative" view if there ever was one. Actually, it seems that many widely discussed technologies, if not the majority, are considered possible, desirable, and able to be hastened through the efforts of the researchers working towards them -- if they weren't, why would anyone bother with them?

Where is the parallel with Millenarians? There are so many differences. You can read this page for a summary -- it's the first Google result for "Rapture of the Nerds". I am awaiting a rebuttal to the points in that post -- if Singularitarianism really is parallel to Millenarianism, then shouldn't it be easy to knock down those arguments?

Finally, the proposal that a Singularity can be managed for "friendliness" seems hopelessly hubristic.

Again, your chairman, Nick Bostrom, argues this. Why be the Executive Director of an organization whose Chair is "hopelessly hubristic" or "childishly naive"?

Bostrom writes:

It seems that the best way to ensure that a superintelligence will have a beneficial impact on the world is to endow it with philanthropic values. Its top goal should be friendliness. How exactly friendliness should be understood and how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures is a matter that merits further consideration.

Friendliness in AI is necessary -- the alternative is building it according to some non-friendly optimization criteria. And who could argue that an altruistic seed AI or nascent superintelligence wouldn't be more likely to develop into a benevolent superintelligence than one built without the "hopelessly hubristic" attitude that Mike denigrates? The attitude that morality in AI is necessary is not unique to Singularitarians -- it is shared by serious academics like Wendell Wallach and Colin Allen, the authors of Moral Machines.

Would you rather have a seed AI based on Gandhi or Hitler? Is it "hopelessly hubristic" to think that a seed AI (or enhanced human intelligence) based on Gandhi would be much more likely to lead to a positive future for humanity than one based on Hitler? Morality is inherent to the structure of the brain -- it must exist as motivations in the cognitive architecture of the agent. It will not be instilled through mere osmosis, as it is with some children. A psychopath can spend a thousand years around the nicest guy in the world and not change a bit, because his neural structure is that way, and barring neuroengineering, that isn't going to change.

Perhaps the attitude that friendliness is necessary is actually required to ensure our future survival, as Nick Bostrom concludes in his paper:

It seems that the best way to ensure that a superintelligence will have a beneficial impact on the world is to endow it with philanthropic values. Its top goal should be friendliness.

Pretty simple.

16Jul/09

Garett Jones on IQ as a Social Multiplier

Roko, author of Transhuman Goodness, who I just met in person for the first time at the 2009 Singularity Institute Summer Intern Program in Santa Clara, pointed me to an interesting economist, Garett Jones, assistant professor at George Mason University and colleague of Robin Hanson. His work is quite provocative:

As a macroeconomist, I investigate both long-term economic growth and short-term business cycles. My current research explores why IQ and other cognitive skills appear to matter more for nations than for individuals.

For example: A two standard deviation rise in an individual person's IQ predicts only about a 30% increase in her wage. But the same rise in a country's average IQ score predicts a 700% increase in the average wage in that country. I want to understand why IQ appears to have such a large social multiplier.

The story is much the same for math and science scores: A person's individual score predicts little about how she'll do in the job market, but the richest and fastest-growing countries in the world tend to do much better on math and science tests. If the IQ multiplier is even half as large as it appears to be, then health, nutrition, and education policies in developing countries should be targeted at raising the brain health of the world's poorest citizens.

An even more important implication of my research is that low-skilled immigrants should be allowed to migrate to the world's richest countries: Low-skilled immigrants have little or no net effect on the wages of the citizens of rich countries, but their lives massively improve when they immigrate to these countries.

In the past, I've worked on Capitol Hill and I've studied the monetary transmission mechanism. I speak on policy topics regularly via Mercatus' Capitol Hill Campus program and in other forums. Recent media include Forbes.com, Fortune.com, Wisconsin Public Radio, and CNN.com.

I've heard the idea before that improving IQ among the world's poorest might be the best way to improve their lot in life, but it's nice to see an economist primarily focused on the concept. For starters, we need to encourage the addition of iodine to salt and iron to bread in developing countries, as has been done in developed countries since WWII. According to the WHO, in 2007 nearly 2 billion individuals had insufficient iodine intake, a third of them of school age. The numbers are similar for iron deficiency. Implementing these relatively cheap measures would be easy, if it weren't for the controversy around acknowledging that IQ exists and can be improved.
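To get a feel for how large the gap Jones describes actually is, here is a minimal back-of-the-envelope sketch in Python. The 30% and 700% figures per two standard deviations of IQ come straight from his summary above; the assumption that the gain compounds log-linearly across standard deviations is mine, added purely for illustration, and is not part of his model.

    # Back-of-the-envelope illustration of Jones's "social multiplier".
    # The 30% (individual) and 700% (national) wage gains per +2 SD of IQ
    # are taken from the quoted summary; the log-linear compounding is an
    # illustrative assumption, not Jones's own specification.

    def wage_multiple(iq_gain_sd, pct_gain_per_2sd):
        """Predicted wage multiple for an IQ gain measured in standard
        deviations, assuming the quoted gain per 2 SD compounds log-linearly."""
        return (1 + pct_gain_per_2sd / 100.0) ** (iq_gain_sd / 2.0)

    individual = wage_multiple(2, 30)   # ~1.3x for one person's wage
    national = wage_multiple(2, 700)    # ~8x for a country's average wage

    print("Individual effect: %.1fx" % individual)
    print("National effect:   %.1fx" % national)
    print("Implied social multiplier: %.1fx" % (national / individual))

The gap between roughly 1.3x and 8x is the multiplier Jones is trying to explain.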

Filed under: intelligence
16Jul/09

Announcing Singularity Summit 2009

For the last couple months, I've been working intensely on laying the groundwork for the Singularity Summit 2009, to be held in New York October 3-4. Now that it's been announced on KurzweilAI.net, I can finally talk about it.

This is the first Singularity Summit to be held on the East Coast. For that, and other reasons, it's a huge deal. The lineup of speakers is fantastic, including David Chalmers, Ray Kurzweil, Aubrey de Grey, and Peter Thiel, among many others. Like the epic Singularity Summit 2007 that landed on the front page of the San Francisco Chronicle, this Summit will be a two-day event.

The speaker lineup is very diverse, definitely the most diverse out of any Summit thus far. To quote Michael Vassar, President of SIAI, on KurzweilAI.net, "Moving to New York opens up the Singularity Summit to the East Coast and also to Europe. This Summit will extend the set of Singularity-related issues covered to include deeper philosophical issues of consciousness such as mind uploading, as well as life extension, quantum computing, cutting-edge human-enhancement science such as brain-machine interfaces, forecasting methodologies, and the future of the scientific method."

You can register here. A page with banners for promotion is here.

With discussion about the Singularity heating up like never before, this could be the most exciting Summit yet. SIAI is stepping outside its comfort zone in Silicon Valley and into an entirely new area. It will be thrilling to jumpstart discussion on the Singularity in New York City and the East Coast.

Filed under: SIAI, singularity
13Jul/09

New Research from Joshua Greene: ‘Neuroimaging suggests that truthfulness requires no act of will for honest people’

Joshua Greene, author of one of the most important papers for understanding the need for Friendly AI, brings us new research:

CAMBRIDGE, Mass. -- A new study of the cognitive processes involved with honesty suggests that truthfulness depends more on absence of temptation than active resistance to temptation.

Using neuroimaging, psychologists looked at the brain activity of people given the chance to gain money dishonestly by lying and found that honest people showed no additional neural activity when telling the truth, implying that extra cognitive processes were not necessary to choose honesty. However, those individuals who behaved dishonestly, even when telling the truth, showed additional activity in brain regions that involve control and attention.

The study is published in Proceedings of the National Academy of Sciences, and was led by Joshua Greene, assistant professor of psychology in the Faculty of Arts and Sciences at Harvard University, along with Joe Paxton, a graduate student in psychology.

"Being honest is not so much a matter of exercising willpower as it is being disposed to behave honestly in a more effortless kind of way," says Greene. "This may not be true for all situations, but it seems to be true for at least this situation."

Read more at Eurekalert.

Filed under: ethics
12Jul/09

George Dvorsky on the End of Science

See here.

I generally disagree with George here. We can defeat aging, run cars on renewable fuels, address climate change, and develop a sustainable energy source without a fundamental breakthrough like quantum mechanics. It could be the end of scientific revolutions as we know them -- it's hard to tell. We could still have extreme (incremental) progress in science without discrete, paradigm-shifting revolutions.

I really need to read John Horgan's book, The End of Science. Even though I found him rude in our exchange, I have sympathy for some of his ideas, like the notion that war might be eliminated or that fundamental scientific revolutions may be over.

At the very least, it could be that human-facilitated paradigm shifts are over, and that superintelligence is necessary to tear further holes in the fabric of the Veil of Maya.

But ultimately...

"Nobody actually lives in external reality, and we couldn't understand it if we did; too many quarks flying around." -- Eliezer Yudkowsky

Filed under: science