Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.


The Uncertain Future: Now in Beta

The Uncertain Future, a webapp that I worked on with Steve Rayhawk, Anna Salamon, Tom McCabe, and Rolf Nelson during the Singularity Institute for Artificial Intelligence Summer 2008 Research Program, with helpful discussions with a few others, is now in beta and ready for public announcement.

The Uncertain Future represents a new kind of futurism -- futurism with heavy-tailed, high-dimensional probability distributions. In fact, that's the name of the paper presented at the European Conference on Computing and Philosophy that unveiled the project: "Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions".

Most futurism is about telling a story -- more like marketing than an honest attempt at uncovering the possible range of what the future may hold. Scenario building is better than telling a single story, but it falls short as well. Constructing scenarios comes naturally to us, yet it leaves us susceptible to anchoring effects, in which we overestimate the probability of vivid scenarios. To quote "Cognitive Biases Potentially Affecting Judgment of Global Risks", page 6:

The conjunction fallacy similarly applies to futurological forecasts. Two independent sets of professional analysts at the Second International Congress on Forecasting were asked to rate, respectively, the probability of "A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983" or "A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983". The second set of analysts responded with significantly higher probabilities. (Tversky and Kahneman 1983.)

The conjunction fallacy means that people overestimate the probability of vivid, detailed scenarios even though each additional detail necessarily decreases the probability that the event will occur.
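The arithmetic behind the fallacy is simple: since P(A and B) = P(B) × P(A|B), and P(A|B) can never exceed 1, a conjunction can never be more probable than either conjunct alone. A minimal sketch, using hypothetical probabilities rather than any figures from the Tversky and Kahneman study:

```python
# Hypothetical probabilities for illustration only -- not figures
# from the Tversky and Kahneman (1983) experiment.
p_suspension = 0.10                  # P(suspension of diplomatic relations)
p_invasion_given_suspension = 0.50   # P(invasion | suspension)

# P(invasion AND suspension) = P(suspension) * P(invasion | suspension)
p_both = p_suspension * p_invasion_given_suspension

# Adding the vivid detail can only shrink the probability.
assert p_both <= p_suspension
print(p_both)  # 0.05
```

Yet subjects rate the detailed conjunction as more probable than the bare event, because the added detail makes the story more plausible-sounding.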

To combat the conjunction fallacy and storytelling fallacies in our particular area of futurism, which includes intelligence enhancement, AI, and global catastrophic risk, we created an interactive system that allows users to input their own probability distributions for different variables potentially associated with the future of AI and humanity -- including a probability distribution for how much computing power would be required to create human-level AI, a probability distribution for the likelihood of global thermonuclear war in the next century, and many other variables. Our toy model includes variables for the creation of AI, the possible success of intelligence amplification technology, and the potential extinction of the human species by technological mishap before either of these occurs.
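To give a flavor of what combining probability distributions over such variables looks like, here is a toy Monte Carlo sketch in the same spirit. Every distribution and parameter below is a made-up placeholder, not the model or the numbers used in The Uncertain Future:

```python
import random

random.seed(0)  # reproducible draws

def sample_year_of_ai():
    """One draw from a toy, heavy-tailed model of when human-level AI arrives.

    Hypothetical: the orders of magnitude of compute still needed (relative
    to a ~2008 supercomputer) follow a wide lognormal distribution, and
    Moore's-law-style growth delivers about one order of magnitude per
    five years."""
    oom_needed = random.lognormvariate(mu=1.5, sigma=1.0)
    return 2008 + oom_needed * 5.0

def sample_year_of_catastrophe():
    """Toy constant-hazard model of an extinction-level technological
    mishap: a hypothetical 0.5%/year hazard rate."""
    return 2008 + random.expovariate(0.005)

trials = 100_000
ai_first_by_2100 = 0
for _ in range(trials):
    ai = sample_year_of_ai()
    doom = sample_year_of_catastrophe()
    if ai < doom and ai <= 2100:
        ai_first_by_2100 += 1

frac = ai_first_by_2100 / trials
print(f"P(AI arrives before catastrophe, by 2100) ~ {frac:.3f}")
```

Because each input is a distribution rather than a point estimate, the output is a probability, and widening any input distribution propagates into wider uncertainty in the answer -- which is the point of the exercise.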

Our system is built on the assumption that breaking down a challenging prediction task into its constituent parts can be beneficial because it forces us to think about the task in greater detail and avoid obvious biases associated with specific scenarios we may be anchoring on. Some people may criticize such a view for being excessively reductionist, but many prediction tasks really can be broken down into component pieces. The alternative is making "expert" guesses based on a holistic evaluation of the prediction task, which leaves us open to many well-documented biases.

Here is the opening blurb for the webapp, by Tom McCabe:

The Uncertain Future is a future technology and world-modeling project by the Singularity Institute for Artificial Intelligence. Its goal is to allow those interested in future technology to form their own rigorous, mathematically consistent model of how the development of advanced technologies will affect the evolution of civilization over the next hundred years. To facilitate this, we have gathered data on what experts think is going to happen, in such fields as semiconductor development, biotechnology, global security, Artificial Intelligence and neuroscience. We invite you, the user, to read about the opinions of these experts, and then come to your own conclusion about the likely destiny of mankind.

Interested? It's not perfect, but we think our system might be a seed for looking at futurism in a different way -- providing an alternative to storytelling and scenario building. This sort of "probabilistic futurism" encourages would-be seers to widen their confidence bounds when confronted with uncertainty, instead of irrationally making overconfident guesses to seem like "experts". The particular issues we focus on are controversial -- human-equivalent AI, biotechnology used to select gametes with genes associated with intelligence, the probability of planet-ending catastrophe -- but we chose these issues specifically because there is disagreement about what degree of uncertainty is warranted in evaluating these scenarios from our present position.

We envision this tool being used among futurists to specify their quantitative background assumptions regarding the technologies discussed. This might be used to clear aside straw men and zoom in on the core disagreements. It might also be used to evaluate the degree to which respective futurists have considered the technological prerequisites and other assumptions underlying their scenarios.

Go ahead and try the quiz now. Take it slowly, thinking carefully about each question. Scroll down to see predictions from experts; where applicable, you can click a button to load a probability distribution that I estimated to roughly match the quote we provided. After taking a look at what the experts say, think about your own position on the issues and input a probability distribution accordingly.

If you like the system or find it useful, be sure to post a link to it on Facebook or suggest it to your friends. The system still has quite a few bugs: the Java applets we use for the probability distributions make calls to the surrounding HTML, and these calls may fail on some combinations of OS and browser. If you use a Mac, use Safari; on Linux or Windows, use Opera or Firefox.

Filed under: AI, futurism

Comments:
  1. Congrats, a nice little app.

    One thing: if intelligence (scientific progress) is being enhanced by other means, then besides speeding up research on AGI and other tech, it would also move the bar on what "human-equivalent AI" is.

    If bioenhanced brains have an average IQ of 1000, then human-equivalent AI has a higher target. That only shifts things out a few years, but it is something to include for a more accurate projection.

    I think the biggest accelerator would be for proto-AGI companies and products to gain significant markets and revenue.

    If, say, Novamente becomes as big as EA Games or Google, they would have a lot more money to fund AGI research. It would also show that they are developing valuable things that are superior to other methods.

