Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

16Apr/12

Interviewed by The Rational Future

Here's a writeup.

Embedded below is an interview conducted by Adam A. Ford at The Rational Future. Topics covered included:

- What is the Singularity?
- Is there a substantial chance we will significantly enhance human intelligence by 2050?
- Is there a substantial chance we will create human-level AI before 2050?
- If human-level AI is created, is there a good chance vastly superhuman AI will follow via an "intelligence explosion"?
- Is acceleration of technological trends required for a Singularity? For example, are Moore's Law (hardware trajectories) and AI research progressing fast enough? (A toy extrapolation follows this list.)
- What convergent outcomes in the future do you think will increase the likelihood of a Singularity? (e.g., the emergence of markets, the evolution of eyes?)
- Does AI need to be conscious or have human-like "intentionality" in order to achieve a Singularity?
- What are the potential benefits and risks of the Singularity?
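
On the Moore's Law point, the naive trend argument is easy to make concrete. Below is a toy extrapolation assuming a fixed two-year doubling time; the 2011 baseline figure is an illustrative assumption, not a measurement:

```python
# Toy Moore's Law extrapolation: transistor counts doubling every 2 years.
# Baseline and doubling time are illustrative assumptions, not data.

BASELINE_YEAR = 2011
BASELINE_TRANSISTORS = 2.6e9   # assumed: roughly a 2011 high-end CPU
DOUBLING_TIME_YEARS = 2.0      # assumed: constant doubling period

def projected_transistors(year: float) -> float:
    """Project transistor count at `year` under pure exponential growth."""
    return BASELINE_TRANSISTORS * 2 ** ((year - BASELINE_YEAR) / DOUBLING_TIME_YEARS)

for year in (2020, 2030, 2040, 2050):
    print(f"{year}: ~{projected_transistors(year):.2e} transistors")
```

Whether a curve like this can legitimately be extrapolated for decades, and whether hardware is even the binding constraint, is exactly what the question is probing.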

5Aug/11

Eliezer Yudkowsky at the Winter Intelligence Conference at Oxford: “Friendly AI: Why It’s Not That Simple”

Winter Intelligence Conference 2011 - Eliezer Yudkowsky from Future of Humanity Institute on Vimeo.

25Feb/11

Michio Kaku on 2013 Solar Maximum: “It Would Paralyze the Planet Earth”

Maybe it's nothing at all! Maybe. Still, I can spare some room in my thoughts for the possibility, even if the probability is low. I don't think anyone has the expertise to say for sure one way or the other.

A real analysis would involve probability distributions over solar energy flux and expensive tests on electronic equipment.
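
As a gesture at what such an analysis might look like, here is a minimal Monte Carlo sketch. Every parameter in it (the intensity distribution, its values, and the damage threshold) is an assumption chosen for illustration, not an estimate:

```python
import random

# Minimal Monte Carlo sketch of per-solar-maximum storm risk.
# Every parameter here is an illustrative assumption, not a fitted value.

random.seed(0)

N_TRIALS = 1_000_000

# Assumption: the peak storm intensity (|Dst|, in nT) of a solar maximum
# is lognormally distributed with the made-up parameters below.
MU, SIGMA = 4.5, 0.8

# Assumption: storms above this threshold cause widespread grid damage.
# Carrington-class storms are often estimated at |Dst| ~ 850 nT or more.
DAMAGE_THRESHOLD = 850.0

exceedances = sum(
    random.lognormvariate(MU, SIGMA) > DAMAGE_THRESHOLD
    for _ in range(N_TRIALS)
)
print(f"P(damaging storm per maximum) ~ {exceedances / N_TRIALS:.4f}")
```

The output itself means nothing; the point is that the answer turns entirely on the tail of the intensity distribution, which is precisely what nobody has pinned down.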

This is a good test case for our reasoning on global risk probabilities -- are we quick to make unqualified judgments, or are we willing to spend the time to find the facts?

A commenter pointed out that scientists actually predict this solar maximum will be the weakest since 1928, but that prediction offers little reassurance, because even below-average solar maxima can produce extremely intense storms:

"If our prediction is correct, Solar Cycle 24 will have a peak sunspot number of 90, the lowest of any cycle since 1928 when Solar Cycle 16 peaked at 78," says panel chairman Doug Biesecker of the NOAA Space Weather Prediction Center.

It is tempting to describe such a cycle as "weak" or "mild," but that could give the wrong impression.

"Even a below-average cycle is capable of producing severe space weather," points out Biesecker. "The great geomagnetic storm of 1859, for instance, occurred during a solar cycle of about the same size we’re predicting for 2013."

Does this mean that every solar maximum, roughly once every 11 years, poses a significant danger? If so, that actually lowers my estimated probability of disaster, since we've already weathered many such maxima without catastrophe. The problem is that I've switched my opinion back and forth already based on the evidence, and I have no way of knowing whether that will continue.

Filed under: risks, videos
12Feb/11

Anna Salamon at UKH+: Survival in the Margins of the Singularity?

Anna Salamon is a Research Fellow at the Singularity Institute for Artificial Intelligence. Her work centers on analytical modeling of artificial intelligence risks, probabilistic forecasting, and strategies for human survival. Previously, she conducted machine learning research at NASA Ames, and applied mathematics research at the Rohwer Phage Metagenomics lab.

This talk considers the following question. Suppose powerful artificial intelligences are at some point created. In such a world, would humanity be able to survive by accident, in margins the super-intelligences haven't bothered with, as rats and bacteria survive today?

Many have argued that we could, suggesting variously that humans might survive as pets, in wilderness preserves or zoos, or because the super-intelligences wish to preserve a legacy legal system. Even in scenarios in which humanity as such doesn't survive, Vernor Vinge, for example, suggests that human-like entities may serve as components within larger super-intelligences, and others suggest that some of the qualities we value, such as playfulness, empathy, or love, will automatically persist in whatever intelligences arise.

This talk will argue that all these scenarios are unlikely. Intelligence allows the re-engineering of increasing portions of the world, with increasing choice, persistence, and reliability. In a world in which super-intelligences are free to choose, historical legacies will only persist if the super-intelligences prefer those legacies to everything else they can imagine.

This lecture was recorded on 29th January 2011 at the UKH+ meeting. For information on further meetings please see:
http://extrobritannia.blogspot.com

11Jan/11

Josh Tenenbaum Video Again: Bayesian Models of Human Inductive Learning

I posted this only a month ago, but here's the link to the video again. People sometimes say there's been no progress in AI, but the kind of results Tenenbaum obtains are amazing, and they open up a whole approach to AI, one based on Bayesian inference over structured representations, that requires only minimal direct inspiration from the human brain.

Abstract:

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, hidden properties of objects, or the existence of causal relations from just one or a few relevant observations -- far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? I will argue that people's everyday inductive leaps can be understood as approximations to Bayesian computations operating over structured representations of the world, what cognitive scientists have called "intuitive theories" or "schemas". For each of several everyday learning tasks, I will consider how appropriate knowledge representations are structured and used, and how these representations could themselves be learned via Bayesian methods. The key challenge is to balance the need for strongly constrained inductive biases -- critical for generalization from very few examples -- with the flexibility to learn about the structure of new domains, to learn new inductive biases suitable for environments which we could not have been pre-programmed to perform in. The models I discuss will connect to several directions in contemporary machine learning, such as semi-supervised learning, structure learning in graphical models, hierarchical Bayesian modeling, and nonparametric Bayes.
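
To make the abstract's central idea concrete, here is a toy version of Bayesian concept learning in the spirit of Tenenbaum's "number game." The size-principle likelihood is standard in his papers; the particular hypothesis space is my own illustrative pick:

```python
# Toy Bayesian concept learning over the numbers 1-100, in the spirit of
# Tenenbaum's "number game": given a few positive examples, infer which
# concept generated them. The hypothesis space is an illustrative subset.

HYPOTHESES = {
    "even":            {n for n in range(1, 101) if n % 2 == 0},
    "odd":             {n for n in range(1, 101) if n % 2 == 1},
    "squares":         {n * n for n in range(1, 11)},
    "powers of two":   {2 ** k for k in range(1, 7)},
    "multiples of 10": set(range(10, 101, 10)),
}

def posterior(examples):
    """P(hypothesis | examples) under a uniform prior and the 'size
    principle': each example is assumed drawn uniformly from the concept,
    so smaller consistent concepts are favored exponentially in the
    number of examples."""
    scores = {
        name: (1.0 / len(h)) ** len(examples) if all(x in h for x in examples) else 0.0
        for name, h in HYPOTHESES.items()
    }
    z = sum(scores.values())
    if z == 0:
        return {}  # no hypothesis in our small space explains the data
    return {name: s / z for name, s in scores.items()}

print(posterior([16]))            # mass spread over consistent concepts
print(posterior([16, 8, 2, 64]))  # sharply concentrated on "powers of two"
```

Four examples are enough to lock onto "powers of two": strong inductive biases plus Bayesian updating give confident generalization from tiny samples, which is the abstract's point.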

Filed under: AI, science, videos
26Dec/10

Open Ecology Video

Global Village Construction Set in 2 Minutes from Marcin Jakubowski on Vimeo.

16Dec/10

Singularity Summit 2010 Videos: Shane Legg on Universal Measures of Intelligence

Shane Legg at The Singularity Summit 2010 -- Universal measures of intelligence from Singularity Institute on Vimeo.

This was my favorite talk of the Summit.
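
For those who haven't seen it: the measure Legg developed with Marcus Hutter scores an agent by its expected performance across all computable environments, weighting simpler environments more heavily. In rough form (this is the published definition; the talk's notation may differ):

```latex
% Legg-Hutter universal intelligence: agent \pi's expected value V in
% environment \mu, summed over the set E of computable environments and
% weighted by 2^{-K(\mu)}, where K(\mu) is Kolmogorov complexity.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

A single scalar summarizing performance over every environment an agent could face is both the appeal and the controversy of the definition.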

Shaky camera... :( Next Summit I'll have to watch the live video feed the entire time to make sure it stays steady.

Filed under: AI, singularity, videos
15Dec/10

Singularity Summit 2010 Videos: Michael Vassar on The Darwinian Method

Michael Vassar at Singularity Summit 2010 -- The Darwinian Method from Singularity Institute on Vimeo.

15Dec/10

Singularity Summit 2010 Videos: Eliezer Yudkowsky on Simplified Humanism and Positive Futurism

Eliezer Yudkowsky at Singularity Summit 2010 -- Simplified Humanism and Positive Futurism from Singularity Institute on Vimeo.

15Dec/10

Some Singularity Summit 2010 Videos Now Online

Tooby, Goertzel, Yudkowsky & Legg panel: Narrow and General Intelligence from Singularity Institute on Vimeo.

We're starting to upload the videos from Singularity Summit 2010 as they're edited. The Greg Stock and Ramez Naam talks are being edited now and should go up soon, then as many others as we can. Hopefully everything will be online within the next two weeks. Singularity Hub has additional coverage.

This was a great conference, and as always my favorite conference to attend. I really think ours is the best: we bring in scientists doing the most groundbreaking work in converging technologies and human enhancement. Pretty soon we'll begin planning 2011's Singularity Summit. It's a pleasure to work on a conference that actually means something for the future of humanity. The way the world looks in 2100 depends very much on the crucial choices we make in the first half of this century.

13Dec/10

Skype Co-Founder Jaan Tallinn on His Life: “Soviets and the Singularity”

Jaan Tallinn From Soviet to Singularity from Aaltoes on Vimeo.

Jaan Tallinn tells the story of his life and how he came to see the Singularity as an important challenge facing mankind.

The video was posted on the Aalto Entrepreneur Society website. Jaan also recently did an interview with the second-biggest newspaper in Estonia; here's Google's English translation.

10Dec/10

Antikythera Mechanism Fully Rebuilt with Legos

Found on Thoughtware.tv.

Filed under: technology, videos