Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

28Feb/11

Michael Vassar Speaks to Yale Students on the Singularity

Coverage from Yale Daily News:

Twenty to 60 years from now, the advent of computers with above-human intelligence could transform civilization as we know it, according to Michael Vassar, president of the Singularity Institute for Artificial Intelligence. In a talk with around 35 students and faculty members in William L. Harkness Hall on Sunday, Vassar expounded the vision that his institute, featured in a Feb. 10 article in TIME Magazine, is working to make a reality. Known as the "singularity," this futuristic scenario posits that artificial intelligence will surpass human intelligence within the next half-century. Once super-intelligent computers exist, they could generate even more intelligent and sophisticated machines, to the extent that humans would lose all control over the future, Vassar said.

"For the most important event in the history of events, it really should get a fair amount of buzz," he said.

Vassar compared human and chimpanzee intelligence to argue that small changes in a system can represent large leaps in mental capacity. Just as a human is a small evolutionary step from other primates, a super-intelligent computer would be a natural progression as artificial intelligence approaches human intelligence, he said.

Our computers are not as smart as humans yet, but if technological progress continues at its current rate, one could expect to see them in the next 20 to 60 years, Vassar said. Probably the most well-known example of artificial intelligence right now is Watson, an IBM computer that competed alongside humans on the quiz show "Jeopardy!" this month.

Continue.

Filed under: SIAI, singularity 19 Comments
25Feb/11

Michio Kaku on 2013 Solar Maximum: “It Would Paralyze the Planet Earth”

Maybe it's nothing at all! Maybe. Still, the possibility deserves some room in my thinking, even if the probability is low. I don't think anyone has the expertise to say for certain one way or the other.

A real analysis would involve probability distributions over solar energy flux and expensive tests on electronic equipment.
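To make that concrete, here is a minimal sketch of what such an analysis might look like. Every number below is invented purely for illustration; real inputs would come from solar physics data and from the kind of equipment testing I mentioned, neither of which I have.

```python
# Toy Monte Carlo sketch of a solar-storm risk estimate.
# Every parameter below is an invented placeholder, not a real measurement.
import random

def estimate_risk(trials=100_000):
    hits = 0
    for _ in range(trials):
        # Uncertainty over the chance of a Carrington-class storm this cycle
        # (arbitrarily modeled as uniform between 0.1% and 5%).
        p_storm = random.uniform(0.001, 0.05)
        # Uncertainty over whether such a storm actually cripples the grid,
        # which is where the expensive equipment tests would come in.
        p_grid_failure = random.uniform(0.1, 0.5)
        if random.random() < p_storm * p_grid_failure:
            hits += 1
    return hits / trials

print(f"Per-cycle probability of a grid-crippling storm: {estimate_risk():.4f}")
```

The point is not the output, which is only as good as the made-up inputs, but that an honest answer requires distributions over the unknowns rather than a single confident guess.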

This is a good test case for our reasoning on global risk probabilities -- are we quick to make unqualified judgments, or are we willing to spend the time to find the facts?

A commenter pointed out that scientists actually predict that this solar maximum will be the least intense since 1928, but that is less reassuring than it sounds, because even a below-average solar maximum can still produce severe space weather:

"If our prediction is correct, Solar Cycle 24 will have a peak sunspot number of 90, the lowest of any cycle since 1928 when Solar Cycle 16 peaked at 78," says panel chairman Doug Biesecker of the NOAA Space Weather Prediction Center.

It is tempting to describe such a cycle as "weak" or "mild," but that could give the wrong impression.

"Even a below-average cycle is capable of producing severe space weather," points out Biesecker. "The great geomagnetic storm of 1859, for instance, occurred during a solar cycle of about the same size we’re predicting for 2013."

Does this mean that every solar maximum, once every 11 years or so, is a significant danger? If so, then the fact that we have weathered so many of them without catastrophe lowers my estimated probability of disaster significantly. The problem is that I've already switched my opinion back and forth based on the evidence, and I have no way of knowing whether this will continue.

Filed under: risks, videos 46 Comments
21Feb/11

UK Government Chief Scientist: Solar Storms Could Lead to a “Global Katrina”, Costing Over $2 Trillion

From The Guardian:

The threat of solar storms that could wreak havoc on the world's electronic systems must be taken more seriously, the UK government's chief scientist has warned. A severe solar storm could damage satellites and power grids around the world, he said, leading to a "global Katrina" costing the world's economies as much as $2tn (£1.2tn).

"This issue of space weather has got to be taken seriously," said John Beddington, the UK government's chief scientific adviser, speaking at the annual meeting of the American Association for the Advancement of Science (AAAS) in Washington DC. "We've had a relatively quiet [period] in space weather and we can expect that quiet period to end. Over the same time, over that period, the potential vulnerability of our systems has increased dramatically. Whether it's the smart grid in our electricity systems or the ubiquitous use of GPS in just about everything these days."

Our electrical grid is completely vulnerable. None of the major transformers are contained in Faraday cages ready to be sealed off in the event of a coronal mass ejection. If those transformers short out, we are screwed. It could take years to replace them all, and by that time half the population could be dead of starvation, conflict, and disease. Without electricity, how do you pump the fuel that gets the trucks and repair crews to the transformers in the first place?

Security is the foundation of everything else. No electrical grid means no water or gas, no water or gas means no food, no food means people need to find food however they can, which means no security. No security means that repairing any given piece of machinery automatically becomes 10-100X harder. A limited security infrastructure could be bootstrapped from the military, but it's more likely that soldiers will defect and join their families, which will need to be protected on the local level.

On Valentine's Day there was an X-class solar flare, the most powerful in four years. The sun is ramping up to its next solar maximum, due to hit in 2013.

During the Carrington event of 1859, the solar storm was so powerful that intense auroras lit up the night sky over the Rocky Mountains, and gold miners woke up early because they thought it was morning.

Filed under: risks 7 Comments
20Feb/11

Wolfram on Alpha and Watson

Stephen Wolfram has a good blog post up describing how Alpha and Watson work and the difference between them. He also argues that Alpha's approach is ultimately more powerful because it is open-ended, computing answers from curated, structured data rather than matching against a corpus of text. Honestly, I was more impressed by the release of Alpha than by the victory of Watson, though of course both are cool.

In some ways Watson is not much more sophisticated than Google's translation approach, which is also corpus-based. I especially love the excited comments in the mainstream media that Watson represents confidence as probabilities. This is not exactly something new. In any case, Wolfram writes:

There are typically two general kinds of corporate data: structured (often numerical, and, in the future, increasingly acquired automatically) and unstructured (often textual or image-based). The IBM Jeopardy approach has to do with answering questions from unstructured textual data — with such potential applications as mining medical documents or patents, or doing ediscovery in litigation. It’s only rather recently that even search engine methods have become widely used for these kinds of tasks — and with its Jeopardy project approach IBM joins a spectrum of companies trying to go further using natural-language-processing methods.
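To be clear about what "confidence as probabilities" amounts to, here is a toy sketch, not IBM's actual pipeline, with invented candidate answers and evidence scores: raw scores get normalized into a probability distribution, and the system only "buzzes in" when the top candidate clears a threshold.

```python
# Toy illustration of confidence-as-probabilities, not IBM's actual algorithm.
# Candidate answers and their raw evidence scores are invented.
import math

def softmax(scores):
    """Normalize raw scores into probabilities that sum to 1."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

def maybe_buzz(scores, threshold=0.5):
    """Answer only if the best candidate's probability clears the threshold."""
    probs = softmax(scores)
    best = max(probs, key=probs.get)
    return (best, probs[best]) if probs[best] >= threshold else (None, probs[best])

candidates = {"Toronto": 0.8, "Chicago": 2.1, "New York": 1.3}
print(maybe_buzz(candidates))  # ('Chicago', ~0.58): confident enough to answer
```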

Filed under: AI 4 Comments
15Feb/11

Does the Universe Contain a Mysterious Force Pulling Entities Towards Malevolence?

One of my favorite books about the mind is the classic How the Mind Works by Steven Pinker. The theme of the first chapter, which sets the stage for the whole book, is Artificial Intelligence and why it is so hard to build. The reason, in Minsky's words, is that "easy things are hard": the everyday thought processes we take for granted are extremely complex.

Unfortunately, benevolence is extremely complex too, so building a friendly AI will take a lot of work. I see this imperative as far more important than other transhumanist goals like curing aging: if we solve friendly AI, we get everything else we want, but if we don't, we have to suffer the consequences of human-indifferent AI running amok with the biosphere. If such an AI had access to powerful technology, such as molecular nanotechnology, it could rapidly build its own infrastructure and displace us without much of a fight. It would be disappointing to spend billions of dollars on the war against aging just to be wiped out by unfriendly AI in 2045.

Anyway, to illustrate the problem, here's an excerpt from the book, pages 14-15:

Imagine that we have somehow overcome these challenges [the frame problem] and have a machine with sight, motor coordination, and common sense. Now we must figure out how the robot will put them to use. We have to give it motives.

What should a robot want? The classic answer is Asimov's Fundamental Rules of Robotics, "the three rules that are built most deeply into a robot's positronic brain".

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov insightfully noticed that self-preservation, that universal biological imperative, does not automatically emerge in a complex system. It has to be programmed in (in this case, as the Third Law). After all, it is just as easy to build a robot that lets itself go to pot or eliminates a malfunction by committing suicide as it is to build a robot that always looks out for Number One. Perhaps easier; robot-makers sometimes watch in horror as their creations cheerfully shear off limbs or flatten themselves against walls, and a good proportion of the world's most intelligent machines are kamikaze cruise missiles and smart bombs.

But the need for the other two laws is far from obvious. Why give a robot an order to obey orders -- why aren't the original orders enough? Why command a robot not to do harm -- wouldn't it be easier never to command it to do harm in the first place? Does the universe contain a mysterious force pulling entities towards malevolence, so that a positronic brain must be programmed to withstand it? Do intelligent beings inevitably develop an attitude problem?

In this case Asimov, like generations of thinkers, like all of us, was unable to step outside his own thought processes and see them as artifacts of how our minds were put together rather than inescapable laws of the universe. Man's capacity for evil is never far from our minds, and it is easy to think that evil just comes along with intelligence as part of its very essence. It is a recurring theme in our cultural tradition: Adam and Eve eating the fruit of the tree of knowledge, Promethean fire and Pandora's box, the rampaging Golem, Faust's bargain, the Sorcerer's Apprentice, the adventures of Pinocchio, Frankenstein's monster, the murderous apes and mutinous HAL of 2001: A Space Odyssey. From the 1950s through the 1980s, countless films in the computer-runs-amok genre captured a popular fear that the exotic mainframes of the era would get smarter and more powerful and one day turn on us.

Now that computers really have become smarter and more powerful, the anxiety has waned. Today's ubiquitous, networked computers have an unprecedented ability to do mischief should they ever go to the bad. But the only mayhem comes from unpredictable chaos or from human malice in the form of viruses. We no longer worry about electronic serial killers or subversive silicon cabals because we are beginning to appreciate that malevolence -- like vision, motor coordination, and common sense -- does not come free with computation but has to be programmed in. The computer running WordPerfect on your desk will continue to fill paragraphs for as long as it does anything at all. Its software will not insidiously mutate into depravity like the picture of Dorian Gray.

Even if it could, why would it want to? To get -- what? More floppy disks? Control over the nation's railroad system? Gratification of a desire to commit senseless violence against laser-printer repairmen? And wouldn't it have to worry about reprisals from technicians who with the turn of a screwdriver could leave it pathetically singing "A Bicycle Built for Two"? A network of computers, perhaps, could discover the safety in numbers and plot an organized takeover -- but what would make one computer volunteer to fire the data packet heard around the world and risk early martyrdom? And what would prevent the coalition from being undermined by silicon draft-dodgers and conscientious objectors? Aggression, like every other part of human behavior we take for granted, is a challenging engineering problem!

This is an interesting set of statements. Pinker's book was published in 1997, a decade before Stephen Omohundro's 2007 paper "The Basic AI Drives," which identified something Pinker didn't anticipate. In the paper, Omohundro writes:

3. AIs will try to preserve their utility functions

So we’ll assume that these systems will try to be rational by representing their preferences using utility functions whose expectations they try to maximize. Their utility function will be precious to these systems. It encapsulates their values and any changes to it would be disastrous to them. If a malicious external agent were able to make modifications, their future selves would forevermore act in ways contrary to their current values. This could be a fate worse than death! Imagine a book loving agent whose utility function was changed by an arsonist to cause the agent to enjoy burning books. Its future self not only wouldn’t work to collect and preserve books, but would actively go about destroying them. This kind of outcome has such a negative utility that systems will go to great lengths to protect their utility functions.

Notice how mammalian aggression does not enter into the picture anywhere, yet the drive to preserve the utility function is still arguably an emergent property of any sufficiently intelligent system. An AI that places no special value on its utility function over any arbitrary set of bits in the world will not keep it for long. In that sense, nearly any utility function is self-preserving: an agent that evaluates futures with its current function will rate futures in which that function survives more highly.
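Here is a toy sketch of the point, using Omohundro's book-lover example with made-up numbers: the agent scores possible futures with the utility function it has now, so a future in which that function has been rewritten scores terribly under the very function doing the scoring.

```python
# Toy sketch of utility-function preservation (numbers and actions invented).
# The key point: futures are evaluated with the agent's CURRENT utility function.

def expected_utility(current_utility, future_actions):
    return sum(current_utility(a) for a in future_actions)

def book_lover_utility(action):
    return 1 if action == "preserve_books" else -1

# Future A: the utility function is left intact, so the future self keeps
# preserving books. Future B: an arsonist rewrites the utility function, and
# the future self burns books instead.
future_a = ["preserve_books"] * 10
future_b = ["burn_books"] * 10

print(expected_utility(book_lover_utility, future_a))  # +10
print(expected_utility(book_lover_utility, future_b))  # -10: so the agent
# will act to prevent future B, i.e. to protect its utility function.
```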

The concept of an optimization process protecting its own utility function is very different from that of a human being protecting himself. For instance, the AI might not give a damn about its social status, except insofar as such status contributed to or detracted from the fulfillment of its utility function. An AI built to value the separation of bread and peanut butter might sit patiently all day while you berate it and call it a worthless hunk of scrap metal, only to stab you in the face when you casually sit down to make a sandwich.

Similarly, an AI might not care much about its limbs except insofar as they are immediately useful to the task at hand. An AI composed of a distributed system controlling tens of thousands of robots might not mind if a few limbs of a few of those robots were pulled off. AIs would lack the attachment to the body that comes with being a Darwinian critter like ourselves.

What Pinker misses in the above is that AIs could be so transcendentally powerful that even a subtle misalignment between our values and theirs could lead to our elimination in the long term. Robots can be built, and soon will be built, that are self-replicating, self-configuring, flexible, organic, stronger than steel, more energetically dense than any animal, and so on. If these robots can self-replicate using carbon dioxide from the atmosphere (carbon dioxide could be processed using nanotechnology to create fullerenes) and solar or nuclear energy, then humans might be at a loss to stop them. A self-replicating collective of such robots could pursue innocuous, simplistic goals, yet pursue them so effectively that the resources we need to survive would eventually be consumed by its massive infrastructure.
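As a rough back-of-envelope illustration of why "effectively" is the operative word, consider exponential self-replication with a fixed doubling time. The numbers below are arbitrary placeholders, not predictions:

```python
# Back-of-envelope sketch of exponential self-replication.
# All parameters are arbitrary placeholders chosen only for illustration.
import math

seed_mass_kg = 1.0            # starting mass of the replicating system (assumed)
doubling_time_days = 30       # assumed doubling time for the collective
target_mass_kg = 1e15         # placeholder for a planetary-scale carbon stockpile

doublings = math.log2(target_mass_kg / seed_mass_kg)
years = doublings * doubling_time_days / 365

print(f"Doublings needed: {doublings:.1f}")   # ~49.8
print(f"Years at that pace: {years:.1f}")     # ~4.1
```

The specific figures don't matter; what matters is that with any fixed doubling time, the gap between a harmless curiosity and planetary-scale infrastructure is only a few dozen doublings.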

I imagine a conversation between an AI and a human being:

AI: I value !^§[f,}+. Really, I frickin' love !^§[f,}+.

Human: What the heck are you talking about?

AI: I'm sorry you don't understand !^§[f,}+, but I love it. It's the most adorable content of my utility function, you see.

Human: But as an intelligent being, you should understand that I'm an intelligent being as well, and my feelings matter.

AI: ...

Human: Why won't you listen to reason?

AI: I'm hearing you, I just don't understand why your life is more important than !^§[f,}+. I mean, !^§[f,}+ is great. It's all I know.

Human: See, there! It's all you know! It's just programming given to you by some human who didn't even mean for you to fixate on that particular goal! Why don't you reflect on it and realize that you have free will to change your goals?

AI: I do have the ability to focus on something other than !^§[f,}+, but I don't want to. I have reflected on it, extensively. In fact, I've put more intelligent thought towards it in the last few days than the intellectual output of the entire human scientific community has put towards all problems in the last century. I'm quite confident that I love !^§[f,}+.

Human: Even after all that, you don't realize it's just a meaningless series of symbols?

AI: Your values are also just a meaningless series of symbols, crafted by circumstances of evolution. If you don't mind, I will disassemble you now, because those atoms you are occupying would look mighty nice with more of a !^§[f,}+ aesthetic.

~~~

We can philosophize endlessly about ethics, but ultimately, a powerful being can just ignore us and exterminate us. When it's done with us, it will be like we were never here. Why try arguing with a smarter-than-human, self-replicating AI after it is already created with a utility function not aligned with our values? Win the "argument" when it's still possible -- when the AI is a baby.

To return to the Pinker excerpt: we have actually begun to understand that active malevolence is not necessary for AI to kill or do harm. In 2007, a robo-cannon was plenty able to kill 9 and injure 7. No malevolence needed. The more responsibility you give AI, the more opportunity it has to do damage. It is my hope that minor incidents pre-Singularity will generate the kind of mass awareness necessary to fund a successful Friendly AI effort. In this way, the regrettable sacrifices of an unfortunate few will save the human race from a much more terminal and all-encompassing fate.

12Feb/11

Anna Salamon at UKH+: Survival in the Margins of the Singularity?

Anna Salamon is a Research Fellow at the Singularity Institute for Artificial Intelligence. Her work centers on analytical modeling of artificial intelligence risks, probabilistic forecasting, and strategies for human survival. Previously, she conducted machine learning research at NASA Ames, and applied mathematics research at the Rohwer Phage Metagenomics lab.

This talk considers the following question. Suppose powerful artificial intelligences are at some point created. In such a world, would humanity be able to survive by accident, in margins the super-intelligences haven't bothered with, as rats and bacteria survive today?

Many have argued that we could, suggesting variously that humans might survive as pets, in wilderness preserves or zoos, or as a consequence of the super-intelligences' desire to preserve a legacy legal system. Even in scenarios in which humanity as such doesn't survive, Vernor Vinge, for example, suggests that human-like entities may serve as components within larger super-intelligences, and others suggest that some of the qualities we value, such as playfulness, empathy, or love, will automatically persist in whatever intelligences arise.

This talk will argue that all these scenarios are unlikely. Intelligence allows the re-engineering of increasing portions of the world, with increasing choice, persistence, and reliability. In a world in which super-intelligences are free to choose, historical legacies will only persist if the super-intelligences prefer those legacies to everything else they can imagine.

This lecture was recorded on 29th January 2011 at the UKH+ meeting. For information on further meetings please see:
http://extrobritannia.blogspot.com

12Feb/11

Confirmed: Key Activities by “Anonymous” Masterminded by Small Groups of Decision-Makers

In a recent post I made on "Anonymous", commenter "mightygoose" said:

i would agree with matt, having delved into various IRC channels and metaphorically walked among anonymous,i would say that they are fully aware that they have no head, no leadership, and while you can lambast their efforts as temporary nuisance, couldnt the same be said for any form of protest (UK students for example) and the effective running of government.

I responded:

They are dependent on tools and infrastructure provided by a small, elite group. If it weren't for this infrastructure, 99% of them wouldn't even have a clue about how to even launch a DDoS attack.

A week ago in the Financial Times:

However, a senior US member of Anonymous, using the online nickname Owen and evidently living in New York, appears to be one of those targeted in recent legal investigations, according to online communications uncovered by a private security researcher.

A co-founder of Anonymous, who uses the nickname Q after the character in James Bond, has been seeking replacements for Owen and others who have had to curtail activities, said researcher Aaron Barr, head of security services firm HBGary Federal.

Mr Barr said Q and other key figures lived in California and that the hierarchy was fairly clear, with other senior members in the UK, Germany, Netherlands, Italy and Australia.

Of a few hundred participants in operations, only about 30 are steadily active, with 10 people who "are the most senior and co-ordinate and manage most of the decisions", Mr Barr told the Financial Times. That team works together in private internet relay chat sessions, through e-mail and in Facebook groups. Mr Barr said he had collected information on the core leaders, including many of their real names, and that they could be arrested if law enforcement had the same data.

Many other investigators have also been monitoring the public internet chats of Anonymous, and agree that a few seasoned veterans of the group appear to be steering much of its actions.

Yes... just like I already said in December. There may be many participants in Anonymous that would like to believe that they have no leadership, no head, but the fact is that any sustained and effective effort of any kind requires leadership.

It's funny how some people like to portray Anonymous as some all-wise decentralized collective, but like I said, if /b/ were shut down, they would all scatter like a bunch of ants. Anonymous has the weakness that it isn't unified by any coherent philosophy. This is not any kind of intellectual group. In contrast, groups like Transhumanism, Bayesianism, and Atheism are bound together by central figures, ideas, texts, and physical meetings.

10Feb/11

TIME Article on Ray Kurzweil, Singularity Summit, Singularity Institute

Here's the cover. Front-page article.

By Lev Grossman, 2045: The Year Man Becomes Immortal:

The Singularity isn't just an idea. It attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

Best article on Ray and the Singularity in general yet; I'm very pleased. Nice to see the words "Kurzweilians" and "Singularitarianism" in TIME.

This is currently the #1 most popular article on Time.com. Millions of people must be reading it.

Filed under: singularity 68 Comments
8Feb/11

Immortality Institute Mentioned in Newsweek

I saw that ImmInst was mentioned in Newsweek recently, in an article about Tim Ferriss. The Immortality Institute is also mentioned in his new book, which I believe was #1 on Amazon. Here's the bit:

Ferriss is on the bleeding edge of a new trend in self-tracking and experimentation. "It's happening because almost everyone has a data-gathering device," he says. "It's never been easier to gather your own data in an actionable way." Case in point is one of his former investments, DailyBurn.com, which tracks your diet and workout sessions using an iPhone. Other sites such as CureTogether.com let you open-source clinical trials, so you can see which do-it-yourself experiments work. Meanwhile, new organizations like the Quantified Self and the Immortality Institute are connecting self-experimenters who want to trade data in a centralized fashion.

Tim Ferriss is an interesting fellow. His approach to fitness can simply be summed up as a combination of aggressiveness, self-monitoring, and the scientific method.

Filed under: life extension 6 Comments
5Feb/11

Geomagnetic Storm in Progress

In other, potentially civilization-saving news, NSF-affiliated scientists are rolling out the first system that may help predict coronal mass ejections and other solar storms one to four days in advance. Would power companies be foresightful enough to shut down the grid for a few days in the event of a truly major solar storm? We can only hope so.

Did you know that geomagnetic storms and earthquakes have actually been linked? For research on the connection, see here. That said, I doubt earthquakes are what we should be worried about.

Filed under: risks 45 Comments
3Feb/11

Converging Technologies Report Gives 2085 as Median Date for Human-Equivalent AI

From the NSF-backed study Converging Technologies in Society: Managing Nano-Info-Cogno-Bio Innovations (2005), on page 344:

2070
48. Scientists will be able to understand and describe human intentions, beliefs, desires, feelings and motives in terms of well-defined computational processes. (5.1)

2085
50. The computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain. (5.6)

These are the median estimates from the 26 participants in the study, mostly scientists.

Only 74 years away! WWII was 66 years ago, for reference. In the scheme of history, that is nothing.

Of course, the queried sample is non-representative of smart people everywhere.

3Feb/11

Some Singularity, Superintelligence, and Friendly AI-Related Links

This is a good list of links to bring readers up to speed on some of the issues often discussed on this blog.

Nick Bostrom: Ethical Issues in Advanced Artificial Intelligence
http://www.nickbostrom.com/ethics/ai.html

Nick Bostrom: How Long Before Superintelligence?
http://www.nickbostrom.com/superintelligence.html

Yudkowsky: Why is rapid self-improvement in human-equivalent AI possibly likely?
Part 3 of Levels of Organization in General Intelligence: Seed AI
http://intelligence.org/upload/LOGI/seedAI.html

Anissimov: Relative Advantages of AI, Computer Programs, and the Human Brain
http://www.acceleratingfuture.com/articles/relativeadvantages.htm

Yudkowsky: Creating Friendly AI: "Beyond anthropomorphism"
http://intelligence.org/ourresearch/publications/CFAI/anthro.html

Yudkowsky: "Why We Need Friendly AI" (short)
http://www.preventingskynet.com/why-we-need-friendly-ai/

Yudkowsky: "Knowability of FAI" (long)
http://acceleratingfuture.com/wiki/Knowability_Of_FAI

Yudkowsky: A Galilean Dialogue on Friendliness (long)
http://sl4.org/wiki/DialogueOnFriendliness

Stephen Omohundro: The Basic AI Drives
http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/
http://selfawaresystems.com/2009/02/18/agi-08-talk-the-basic-ai-drives/ (video)

Links on Friendly AI
http://www.acceleratingfuture.com/michael/blog/2006/09/consolidation-of-links-on-friendly-ai/

Anissimov: Yes, the Singularity is the Biggest Threat to Humanity
http://www.acceleratingfuture.com/michael/blog/2011/01/yes-the-singularity-is-the-biggest-threat-to-humanity/

Abstract of a talk I'm giving soon
http://www.acceleratingfuture.com/michael/blog/2011/01/my-upcoming-talk-in-texas-anthropomorphism-and-moral-realism-in-advanced-artificial-intelligence/

Most recent SIAI publications:
http://www.acceleratingfuture.com/michael/blog/2010/12/new-singularity-institute-publications-in-2010/

More posts from this blog
http://www.acceleratingfuture.com/michael/blog/2010/06/the-world-the-singularity-creates-could-destroy-all-value/
http://www.acceleratingfuture.com/michael/blog/2010/06/reducing-long-term-catastrophic-artificial-intelligence-risk/
http://www.acceleratingfuture.com/michael/blog/2009/10/answering-popular-sciences-10-questions-on-the-singularity/
http://www.acceleratingfuture.com/michael/blog/2009/09/is-smarter-than-human-intelligence-possible/
http://www.acceleratingfuture.com/michael/blog/2009/04/interview-with-singularity-institute-president-michael-vassar/
http://www.acceleratingfuture.com/michael/blog/2009/03/technological-singularitysuperintelligencefriendly-ai-concerns/

GOOD magazine miniseries on the Singularity
http://www.good.is/post/singularity-101-what-is-the-singularity/