Interviewed by The Rational Future

Here’s a writeup.

Embedded below is an interview conducted by Adam A. Ford at The Rational Future. Topics covered included:

- What is the Singularity?
- Is there a substantial chance we will significantly enhance human intelligence by 2050?
- Is there a substantial chance we will create human-level AI before 2050?
- If human-level AI is created, is there a good chance vastly superhuman AI will follow via an "intelligence explosion"?
- Is acceleration of technological trends required for a Singularity? (Moore's Law and hardware trajectories, AI research progressing faster?)
- What convergent outcomes in the future do you think will increase the likelihood of a Singularity? (e.g., the emergence of markets, the evolution of eyes?)
- Does AI need to be conscious or have human-like "intentionality" in order to achieve a Singularity?
- What are the potential benefits and risks of the Singularity?

Read More

Michio Kaku on 2013 Solar Maximum: “It Would Paralyze the Planet Earth”

Maybe it’s nothing at all! Maybe. Still, I have enough room in my thoughts to consider this, even if the probability is low. I don’t think anyone has the expertise to say for sure one way or the other.

A real analysis would involve probability distributions over solar energy flux and expensive tests on electronic equipment.

This is a good test case for our reasoning on global risk probabilities — are we quick to make unqualified judgments, or are we willing to spend the time to find the facts?

A commenter pointed out that scientists actually predict that this solar maximum will be the least intense since 1928, but this prediction is meaningless because below-average solar maxima can still be extremely intense:

“If our prediction is correct, Solar Cycle 24 will have a peak sunspot number of 90, the lowest of any cycle since 1928 when Solar Cycle 16 peaked at 78,” says panel chairman Doug Biesecker of the NOAA Space Weather Prediction Center.

It is tempting to describe such a cycle as “weak” or “mild,” but that …

Read More

Anna Salamon at UKH+: Survival in the Margins of the Singularity?

Anna Salamon is a Research Fellow at the Singularity Institute for Artificial Intelligence. Her work centers on analytical modeling of artificial intelligence risks, probabilistic forecasting, and strategies for human survival. Previously, she conducted machine learning research at NASA Ames, and applied mathematics research at the Rohwer Phage Metagenomics lab.

This talk considers the following question. Suppose powerful artificial intelligences are at some point created. In such a world, would humanity be able to survive by accident, in margins the super-intelligences haven’t bothered with, as rats and bacteria survive today?

Many have argued that we could, arguing variously that humans could survive as pets, in wilderness preserves or zoos, or as consequences of the super-intelligences’ desire to preserve a legacy legal system. Even in scenarios in which humanity as such doesn’t survive, Vernor Vinge, for example, suggests that human-like entities may serve as components within larger super-intelligences, and others suggest that some of the qualities we value, such as playfulness, empathy, or love, will automatically persist in whatever intelligences arise.

This talk will argue that all these scenarios are unlikely. …

Read More

Josh Tenenbaum Video Again: Bayesian Models of Human Inductive Learning

I posted this only a month ago, but here's the link to the video again. People sometimes say there's been no progress in AI, but the kind of results Tenenbaum obtains are amazing, and they open up a whole approach to AI that uses fast and frugal heuristics for reasoning while requiring only minimal inspiration from the human brain.

Abstract:

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, hidden properties of objects, or the existence of causal relations from just one or a few relevant observations — far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? I will argue that people’s everyday inductive leaps can be understood as approximations to Bayesian computations operating over structured representations of the world, what cognitive scientists have called “intuitive theories” or “schemas”. For each of several everyday learning tasks, I will consider how appropriate knowledge representations are structured …
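The core idea in the abstract, inferring a concept from just a few positive examples by Bayesian inference over candidate hypotheses, can be illustrated with a toy version of Tenenbaum's "number game". This is a hedged sketch, not his actual code: the hypothesis space, the uniform prior, and the ranges are my own illustrative assumptions. The key ingredient is the "size principle": under strong sampling, each example has likelihood 1/|h|, so smaller hypotheses consistent with the data are favored, and the preference sharpens rapidly with more examples.

```python
# Toy Bayesian concept learning in the style of the "number game".
# Hypotheses are candidate concepts (sets of numbers); the size
# principle makes smaller consistent hypotheses more probable.

hypotheses = {
    "even numbers":    {n for n in range(1, 101) if n % 2 == 0},
    "multiples of 10": {n for n in range(1, 101) if n % 10 == 0},
    "powers of 2":     {2 ** k for k in range(1, 7)},  # 2, 4, ..., 64
}

def posterior(examples):
    """P(h | examples) with a uniform prior and size-principle likelihood."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in examples):
            # Strong sampling: each example drawn uniformly from h,
            # so the likelihood is (1/|h|) per example.
            scores[name] = (1.0 / len(extension)) ** len(examples)
        else:
            scores[name] = 0.0  # inconsistent hypotheses are ruled out
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

print(posterior([16]))        # smaller consistent sets already favored
print(posterior([16, 8, 2]))  # "powers of 2" dominates after 3 examples
```

Even a single example like 16 shifts belief toward the smallest consistent concept, and three examples make the generalization nearly certain, which mirrors the human-like one-shot inductive leaps the talk describes.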

Read More

Some Singularity Summit 2010 Videos Now Online

Tooby, Goertzel, Yudkowsky & Legg panel: Narrow and General Intelligence from Singularity Institute on Vimeo.

We're starting to upload edited videos from Singularity Summit 2010. The talks by Greg Stock and Ramez Naam are currently being edited and should be uploaded soon, followed by as many others as we can. Hopefully all of them will be online within the next two weeks. Singularity Hub has additional coverage.

This was a great conference, and as always my favorite conference to attend. I really think ours is the best: we include scientists doing the most groundbreaking work in converging technologies and human enhancement. Pretty soon we'll begin planning for the 2011 Singularity Summit. It's a pleasure to work on a conference that actually means something for the future of humanity. The way the world looks in 2100 depends very much on the crucial choices we make in the first half of this century.

Read More

Skype Co-Founder Jaan Tallinn on His Life: “Soviets and the Singularity”

Jaan Tallinn From Soviet to Singularity from Aaltoes on Vimeo.

Jaan Tallinn tells us a story about his life, and how he’s come to see the Singularity as an important challenge facing mankind.

The video was posted at the Aalto Entrepreneur Society website. Jaan also recently did an interview with the second-largest newspaper in Estonia; here's Google's English translation.

Read More