Michael Vassar Speaks to Yale Students on the Singularity

Coverage from Yale Daily News:

Twenty to 60 years from now, the advent of computers with above-human intelligence could transform civilization as we know it, according to Michael Vassar, president of the Singularity Institute for Artificial Intelligence. In a talk with around 35 students and faculty members in William L. Harkness Hall on Sunday, Vassar expounded the vision that his institute, featured in a Feb. 10 article in TIME Magazine, is working to make a reality. Known as the “singularity,” this futuristic scenario posits that artificial intelligence will surpass human intelligence within the next half-century. Once super-intelligent computers exist, they could generate even more intelligent and sophisticated machines, to the extent that humans would lose all control over the future, Vassar said.

“For the most important event in the history of events, it really should get a fair amount of buzz,” he said.

Vassar compared human and chimpanzee intelligence to argue that small changes in a system can represent large leaps in mental capacity. Just as a human is a small …

Read More

Michio Kaku on 2013 Solar Maximum: “It Would Paralyze the Planet Earth”

Maybe it’s nothing at all! Maybe. Still, I have enough room in my thoughts to consider this, even if the probability is low. I don’t think anyone has the expertise to say for sure one way or the other.

A real analysis would involve probability distributions over solar energy flux and expensive tests on electronic equipment.
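As a toy illustration of what such a probabilistic treatment might look like, here is a minimal Monte Carlo sketch of expected-loss reasoning. Every number in it is an assumption for illustration only: the log-normal flux distribution, the grid-failure threshold, and the $2 trillion damage figure (the much-quoted “global Katrina” estimate) are placeholders, not real space-weather data.

```python
import random

random.seed(0)

# Toy Monte Carlo estimate of expected damage from a solar-storm scenario.
# All parameters below are illustrative assumptions, not real data.
N = 100_000
threshold = 4.0        # peak flux level (arbitrary units) above which grids fail
damage_cost = 2e12     # the "global Katrina" $2tn figure, used as a round number

exceedances = 0
for _ in range(N):
    flux = random.lognormvariate(0.0, 1.0)   # assumed log-normal peak flux
    if flux > threshold:
        exceedances += 1

p_failure = exceedances / N                  # estimated P(flux exceeds threshold)
expected_loss = p_failure * damage_cost      # expected cost per solar maximum
print(f"P(flux > {threshold}) is roughly {p_failure:.4f}")
print(f"Expected loss is roughly ${expected_loss:,.0f}")
```

The point of even a crude model like this is that a small probability multiplied by a trillion-dollar loss can still justify serious preparation.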

This is a good test case for our reasoning on global risk probabilities — are we quick to make unqualified judgments, or are we willing to spend the time to find the facts?

A commenter pointed out that scientists actually predict that this solar maximum will be the least intense since 1928, but that prediction offers little reassurance, because even below-average solar maxima can be extremely intense:

“If our prediction is correct, Solar Cycle 24 will have a peak sunspot number of 90, the lowest of any cycle since 1928 when Solar Cycle 16 peaked at 78,” says panel chairman Doug Biesecker of the NOAA Space Weather Prediction Center.

It is tempting to describe such a cycle as “weak” or “mild,” but that …

Read More

UK Government Chief Scientist: Solar Storms Could Lead to a “Global Katrina”, Costing Over $2 Trillion

From The Guardian:

The threat of solar storms that could wreak havoc on the world’s electronic systems must be taken more seriously, the UK government’s chief scientist has warned. A severe solar storm could damage satellites and power grids around the world, he said, leading to a “global Katrina” costing the world’s economies as much as $2tn (£1.2tn).

“This issue of space weather has got to be taken seriously,” said John Beddington, the UK government’s chief scientific adviser, speaking at the annual meeting of the American Association for the Advancement of Science (AAAS) in Washington DC. “We’ve had a relatively quiet [period] in space weather and we can expect that quiet period to end. Over the same time, over that period, the potential vulnerability of our systems has increased dramatically. Whether it’s the smart grid in our electricity systems or the ubiquitous use of GPS in just about everything these days.”

Our electrical grid is completely vulnerable. None of the major transformers are contained in Faraday cages ready to be sealed off in …

Read More

Wolfram on Alpha and Watson

Stephen Wolfram has a good blog post up describing how Alpha and Watson work and the difference between them. He also argues that Alpha's approach is ultimately more powerful because it is open-ended, computing answers from structured knowledge rather than matching text against a corpus. Honestly, I was more impressed by the release of Alpha than by the victory of Watson, though of course both are cool.

In some ways Watson is not much more sophisticated than Google’s translation approach, which is also corpus-based. I especially love the excited comments in the mainstream media that Watson represents confidence as probabilities. This is not exactly something new. In any case, Wolfram writes:

There are typically two general kinds of corporate data: structured (often numerical, and, in the future, increasingly acquired automatically) and unstructured (often textual or image-based). The IBM Jeopardy approach has to do with answering questions from unstructured textual data — with such potential applications as mining medical documents or patents, or doing ediscovery in litigation. It’s only rather recently that even search engine methods have become …
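On the point above that representing confidence as probabilities is nothing new: turning raw evidence scores for candidate answers into a normalized confidence distribution is a standard technique. Here is a minimal sketch using a softmax; the candidate answers and scores are made up for illustration, and this is not Watson's actual pipeline.

```python
import math

def softmax(scores):
    """Map arbitrary real-valued scores to probabilities summing to 1."""
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate answers with made-up evidence scores
candidates = ["Toronto", "Chicago", "Boston"]
scores = [2.1, 0.3, -1.0]
confidences = softmax(scores)
for city, p in zip(candidates, confidences):
    print(f"{city}: {p:.2f}")
```

The interesting engineering in a system like Watson is not the probabilistic output format but how the evidence scores are produced in the first place.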

Read More

Does the Universe Contain a Mysterious Force Pulling Entities Towards Malevolence?

One of my favorite books about the mind is the classic How the Mind Works by Steven Pinker. The theme of the first chapter, which sets the stage for the whole book, is Artificial Intelligence and why it is so hard to build. The reason, in Minsky's words, is that “easy things are hard”: the everyday thought processes we take for granted are extremely complex.

Unfortunately, benevolence is extremely complex too, so to build a friendly AI, we have a lot of work to do. I see this imperative as much more important than other transhumanist goals like curing aging, because if we solve friendly AI, then we get everything else we want, but if we don’t solve friendly AI, we have to suffer the consequences of human-indifferent AI running amok with the biosphere. If such AI had access to powerful technology, such as molecular nanotechnology, it could rapidly build its …

Read More

Anna Salamon at UKH+: Survival in the Margins of the Singularity?

Anna Salamon is a Research Fellow at the Singularity Institute for Artificial Intelligence. Her work centers on analytical modeling of artificial intelligence risks, probabilistic forecasting, and strategies for human survival. Previously, she conducted machine learning research at NASA Ames, and applied mathematics research at the Rohwer Phage Metagenomics lab.

This talk considers the following question. Suppose powerful artificial intelligences are at some point created. In such a world, would humanity be able to survive by accident, in margins the super-intelligences haven’t bothered with, as rats and bacteria survive today?

Many have argued that we could, suggesting variously that humans could survive as pets, in wilderness preserves or zoos, or because the super-intelligences would want to preserve a legacy legal system. Even in scenarios in which humanity as such doesn't survive, Vernor Vinge, for example, suggests that human-like entities may serve as components within larger super-intelligences, and others suggest that some of the qualities we value, such as playfulness, empathy, or love, will automatically persist in whatever intelligences arise.

This talk will argue that all these scenarios are unlikely. …

Read More

Confirmed: Key Activities by “Anonymous” Masterminded by Small Groups of Decision-Makers

In a recent post I made on “Anonymous”, commenter “mightygoose” said:

I would agree with Matt. Having delved into various IRC channels and metaphorically walked among Anonymous, I would say that they are fully aware that they have no head and no leadership, and while you can lambast their efforts as a temporary nuisance, couldn't the same be said for any form of protest (UK students, for example) and the effective running of government?

I responded:

They are dependent on tools and infrastructure provided by a small, elite group. If it weren't for this infrastructure, 99% of them wouldn't have a clue how to launch a DDoS attack.

A week ago in the Financial Times:

However, a senior US member of Anonymous, using the online nickname Owen and evidently living in New York, appears to be one of those targeted in recent legal investigations, according to online communications uncovered by a private security researcher.

A co-founder of Anonymous, who uses the nickname Q after the character in …

Read More

TIME Article on Ray Kurzweil, Singularity Summit, Singularity Institute

Here’s the cover. Front-page article.

By Lev Grossman, 2045: The Year Man Becomes Immortal:

The Singularity isn't just an idea. It attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

Not all of them are Kurzweilians, not by a long chalk. There’s room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won’t happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you’re walking around living your life and watching TV as if the artificial-intelligence revolution were not about to …

Read More

Immortality Institute Mentioned in Newsweek

I saw that ImmInst was mentioned in Newsweek recently, in an article about Tim Ferriss. Immortality Institute is also mentioned in his new book, which I believe was #1 on Amazon. Here's the bit:

Ferriss is on the bleeding edge of a new trend in self-tracking and experimentation. “It’s happening because almost everyone has a data-gathering device,” he says. “It’s never been easier to gather your own data in an actionable way.” Case in point is one of his former investments, DailyBurn.com, which tracks your diet and workout sessions using an iPhone. Other sites such as CureTogether.com let you open-source clinical trials, so you can see which do-it-yourself experiments work. Meanwhile, new organizations like the Quantified Self and the Immortality Institute are connecting self-experimenters who want to trade data in a centralized fashion.

Tim Ferriss is an interesting fellow. His approach to fitness can simply be summed up as a combination of aggressiveness, self-monitoring, and the scientific method.

Read More

Geomagnetic Storm in Progress

In other, potentially civilization-saving news, NSF-affiliated scientists are rolling out the first system that may help predict coronal mass ejections and other solar storms one to four days in advance. Would power companies be foresightful enough to shut down the grid for a few days in the event of a truly major solar storm? We can only hope so.

For science on the connection between solar storms and earthquakes, see here. The two have actually been linked, though I doubt earthquakes are what we should be worried about.

Read More

Converging Technologies Report Gives 2085 as Median Date for Human-Equivalent AI

From the NSF-backed study Converging Technologies in Society: Managing Nano-Info-Cogno-Bio Innovations (2005), on page 344:

2070 48. Scientists will be able to understand and describe human intentions, beliefs, desires, feelings and motives in terms of well-defined computational processes. (5.1)

2085 50. The computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain. (5.6)

These are the median estimates from the study's 26 participants, mostly scientists.

Only 74 years away! WWII was 66 years ago, for reference. In the scheme of history, that is nothing.

Of course, the queried sample is non-representative of smart people everywhere.

Read More

Some Singularity, Superintelligence, and Friendly AI-Related Links

This is a good list of links to bring readers up to speed on some of the issues often discussed on this blog.

Nick Bostrom: Ethical Issues in Advanced Artificial Intelligence http://www.nickbostrom.com/ethics/ai.html

Nick Bostrom: How Long Before Superintelligence? http://www.nickbostrom.com/superintelligence.html

Yudkowsky: Why is rapid self-improvement in human-equivalent AI plausible? Part 3 of Levels of Organization in General Intelligence: Seed AI http://intelligence.org/upload/LOGI/seedAI.html

Anissimov: Relative Advantages of AI, Computer Programs, and the Human Brain http://www.acceleratingfuture.com/articles/relativeadvantages.htm

Yudkowsky: Creating Friendly AI: “Beyond anthropomorphism” http://intelligence.org/ourresearch/publications/CFAI/anthro.html

Yudkowsky: “Why We Need Friendly AI” (short) http://www.preventingskynet.com/why-we-need-friendly-ai/

Yudkowsky: “Knowability of FAI” (long) http://acceleratingfuture.com/wiki/Knowability_Of_FAI

Yudkowsky: A Galilean Dialogue on Friendliness (long) http://sl4.org/wiki/DialogueOnFriendliness

Stephen Omohundro: The Basic AI Drives http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ http://selfawaresystems.com/2009/02/18/agi-08-talk-the-basic-ai-drives/ (video)

Links on Friendly AI http://www.acceleratingfuture.com/michael/blog/2006/09/consolidation-of-links-on-friendly-ai/

Anissimov: Yes, the Singularity is the Biggest Threat to Humanity http://www.acceleratingfuture.com/michael/blog/2011/01/yes-the-singularity-is-the-biggest-threat-to-humanity/

Abstract …

Read More