Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

14 Dec 2009

Ray Solomonoff on Speed of AI Takeoff

In 1985, Ray Solomonoff offered his thoughts on six milestones in AI and the economic and technological growth that might be expected when generally intelligent AI is developed. The paper is called "The Time Scale of Artificial Intelligence: Reflections on Social Effects".

Here is the abstract:

Six future milestones in AI are discussed. These range from the development of a very general theory of problem solving to the creation of machines with capacities well beyond those of a single human. Estimates are made for when these milestones will occur, followed by some suggestions for the more effective utilization of the extremely rapid technological growth that is expected.

When I read lines like that last sentence, what I see nowadays is "extremely scary technological growth". Rapid growth is scary when that growth is controlled by systems that may not optimize reality in ways that we explicitly value. (See "The Future of Human Evolution" for an explanation.)

A select milestone:

Milestone C. A critical point in AI development would be a machine that could usefully work on the problem of self-improvement. Newell and Simon were not successful in their attempts to get their "General Problem Solver" to improve its own methods of operation. While Lenat's "Eurisko" has been successful in several problem areas, he has not been able to get it to devise good heuristics for itself. He is, however, optimistic about the progress that has been made and is continuing this work.

Eurisko eventually led to the creation of Cyc, which appears to be of limited use.

It should be noted that AI "self-improvement" should be viewed as a special case of an AI's general talents for understanding an object, evaluating its purpose, and improving it with respect to that purpose. (Sometimes people make unwarranted distinctions between an AI modifying itself and modifying the world.)

How about some more milestones:

Milestone D. Another milestone will be a computer that can read almost any English text and incorporate most of the material into its data base just as a human does. It would have to store the information in a form that is useful for solving whatever kinds of problems it is normally given.
Since there is an enormous amount of information available in electronic data bases all over the world, a machine with useful access to this information could grow very rapidly in its ability to solve problems and in a real sense in its understanding of the world.

Milestone E will be a machine that has a general problem solving capacity near that of a human, in the areas for which it has been designed -- presumably in mathematics, science and industrial applications.

Milestone F will be a machine with a capacity near that of the computer science community.

Milestone G will be a machine with a capacity many times that of the computer science community.

Here's another bit from later on, analyzing the potential impact of Milestone G:

The last 100 years have seen the introduction of special and general relativity, automobiles, airplanes, quantum mechanics, large rockets and space travel, fission power, fusion bombs, lasers, and large digital computers. Any one of these might take a person years to appreciate and understand. Suppose that they had all been presented to mankind in a single year! This is the magnitude of "future shock" that we can expect from our AI expanded scientific community.

Scanning over the paper, it still seems like Solomonoff is thinking of AIs as tools or narrow scientists, rather than as general agents whose range of activity matches or exceeds that of humans. In the end, Solomonoff seems to imply that one of the primary benefits of AI will be to allow us to predict and evaluate the future more effectively. But he points out that we will still have to make ethical choices.

H/t to Shane Legg for writing about the paper.

Filed under: AI | 2 Comments
13 Dec 2009

Ray Solomonoff, 1926-2009

Ray Solomonoff, the father of algorithmic probability theory and one of the founding fathers of Artificial Intelligence, died December 7th after a brief illness.

Solomonoff was a pioneer of probabilistic thinking in AI, and in general. It is my own view that probabilistic thinking is the single most important insight about reality that humanity has ever had, and Solomonoff added to that great edifice with his idea of Algorithmic Probability.

Solomonoff was the founder of universal inductive inference, which gives a mathematically optimal method of predicting the next bit of sensory information in a sequence based on prior information. (Unfortunately, it is incomputable, though computable approximations have been used throughout the field of AI.) As far as I know, Solomonoff made the first mathematically rigorous attempt at automated sequence prediction.
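
For readers who want the formula: in modern notation (roughly following Li and Vitányi's and Hutter's presentations, not Solomonoff's original 1964 papers), the universal prior assigns to a binary string x the total weight of all programs p that make a universal prefix machine U output something beginning with x,

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},

and the induced predictor for the next bit is just the conditional

    M(x_{t+1} \mid x_{1:t}) = \frac{M(x_{1:t} x_{t+1})}{M(x_{1:t})}.

The sum over all programs is what makes the scheme incomputable; truncating or restricting that sum is what the computable approximations used in practice amount to.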

Solomonoff's work is being carried forward by theorists such as Marcus Hutter, Jürgen Schmidhuber, and Shane Legg, among many others.

Just last week I posted on AIXI, which is essentially a marriage between Solomonoff's universal inductive inference and decision theory. Inductive inference tells you what is going to happen next, while decision theory tells you what to do next. Put these together and you get a model for AI.
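
To make the "induction plus decision theory" recipe concrete, here is a minimal toy sketch in Python -- my own illustration, not anything from Hutter's work. A Laplace-smoothed frequency counter stands in for Solomonoff induction, and one-step expected-reward maximization stands in for AIXI's full expectimax planning; everything else in it (the two-action environment, the 0.8/0.2 payoff rates, the epsilon value) is made up for the example.

    import random
    from collections import defaultdict

    # Toy "induction + decision theory" agent (illustrative only).
    # A frequency-count predictor stands in for Solomonoff induction;
    # one-step expected-reward maximization stands in for full expectimax.

    class CountPredictor:
        """Estimates P(reward = 1 | action) from observed counts."""
        def __init__(self):
            self.counts = defaultdict(lambda: [1, 1])  # Laplace-smoothed [failures, successes]

        def update(self, action, reward):
            self.counts[action][reward] += 1

        def prob_reward(self, action):
            failures, successes = self.counts[action]
            return successes / (failures + successes)

    def choose_action(predictor, actions, epsilon=0.1):
        """Decision-theory half: pick the action with the highest predicted
        reward, with a little random exploration."""
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=predictor.prob_reward)

    def environment(action):
        """Trivial world: action "b" pays off 80% of the time, "a" only 20%."""
        return 1 if random.random() < (0.8 if action == "b" else 0.2) else 0

    if __name__ == "__main__":
        predictor, actions = CountPredictor(), ["a", "b"]
        total = 0
        for _ in range(1000):
            act = choose_action(predictor, actions)
            reward = environment(act)
            predictor.update(act, reward)
            total += reward
        print("total reward:", total, " P(reward | b) =", round(predictor.prob_reward("b"), 2))

Run it and the agent settles on action "b" almost immediately; the interesting (and hard) part of AIXI is doing the same thing when the environment is an arbitrary computable process rather than a biased coin.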

Solomonoff kept publishing and engaging with the AI community right up until his death. It seems very likely that, if and when strong AI is created, the designer will owe a great debt to Solomonoff's work. Let's honor his memory by becoming more familiar with his achievements and making sure that his ideas stay alive.

Filed under: AI | 4 Comments
13 Dec 2009

Winter Edition of h+ Magazine Available

The Winter edition of h+ magazine is out. There is strong representation from the Singularity Institute among the article contributors, with pieces by myself, Tom McCabe, and Ben Goertzel. Definitely check out McCabe and Goertzel's interesting articles.

On page 12, I talk about Ned Seeman's latest totally-awesome robotic nanomanipulating arm. It places atoms and molecules with 100% accuracy.

The theme of the issue is DIY, which is great, because it exploits leverage: if the goal of h+ magazine is to promote scientific research into human enhancement, then promoting DIY technology is an effective use of its money.

In the DIY realm, I've lately been following 3D fabbing with interest, as have many mainstream news sources.

I notice that the website has a thoughtful article on the significance of 4chan. Jason Louv writes:

Yet what the media has failed to grasp is what 4chan can tell us about where we're headed. The Chans aren't the freak sideshow of the Internet. They are the heart and soul of the Internet. And they are the ones furthest ahead of the pack, leading us.

Yes, yes, yes. This is so true, and barely anyone knows it. On 4chan, people talk with pictures. I believe that the future of human communication will include talking with pictures to a greater degree than we see today.

H+ is pioneering here, being one of the first websites to feature intelligent analysis of the 4chan phenomenon and to get past the mainstream media's "rubbernecking disgust". The dominant communications fora of the future will look a lot more like 4chan than like a town hall meeting, but very few writers are available to comment on that compelling and barely-explored future vision.

Filed under: futurism | 4 Comments
13 Dec 2009

Coverage of MIT’s New AI Initiative, the $5M Mind Machine Project

There are articles at the New York Times and Popular Science. Nothing new or exciting really, but still nice to see coverage.

Filed under: AI | No Comments
11 Dec 2009

The Uncertain Future: Now in Beta

A webapp that I worked on with Steve Rayhawk, Anna Salamon, Tom McCabe, and Rolf Nelson during the Singularity Institute for Artificial Intelligence Summer 2008 Research Program, with helpful input from a few others, is now in beta and ready for public announcement. It is called The Uncertain Future.

The Uncertain Future represents a new kind of futurism -- futurism with heavy-tailed, high-dimensional probability distributions. In fact, that's the name of the paper presented at the European Conference on Computing and Philosophy that unveiled the project: "Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions".

Most futurism is about telling a story -- more like marketing than an honest attempt at uncovering the possible range of what the future may hold. Better than creating a single story is scenario building -- but this falls short as well. Scenario building is human nature, but it leaves us susceptible to anchoring effects where we overestimate the probability of vivid scenarios. To quote "Cognitive Biases Potentially Affecting Judgment of Global Risks", page 6:

The conjunction fallacy similarly applies to futurological forecasts. Two independent sets of professional analysts at the Second International Congress on Forecasting were asked to rate, respectively, the probability of "A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983" or "A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983". The second set of analysts responded with significantly higher probabilities. (Tversky and Kahneman 1983.)

The conjunction fallacy means that people overestimate the probability of vivid, detailed scenarios even though each additional detail necessarily decreases the probability that the event will occur.
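
The underlying probability fact can be stated in one line: for any events A and B,

    P(A \cap B) \le \min\{P(A),\, P(B)\},

so each added detail can only lower (or at best preserve) a scenario's probability, even as it makes the scenario more vivid and easier to imagine.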

To combat the conjunction fallacy and storytelling fallacies in our particular area of futurism -- which includes intelligence enhancement, AI, and global catastrophic risk -- we created an interactive system that lets the user input their own probability distributions for variables associated with the future of AI and humanity: a distribution for how much computing power would be required to create human-level AI, a distribution for the likelihood of global thermonuclear war in the next century, and many others. Our toy model includes variables for the creation of AI, the possible success of intelligence amplification technology, and the potential extinction of the human species by technological mishap before either of these occurs.

Our system is built on the assumption that breaking down a challenging prediction task into its constituent parts can be beneficial because it forces us to think about the task in greater detail and avoid obvious biases associated with specific scenarios we may be anchoring on. Some people may criticize such a view for being excessively reductionist, but many prediction tasks really can be broken down into component pieces. The alternative is making "expert" guesses based on a holistic evaluation of the prediction task, which leaves us open to many well-documented biases.
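
As a rough illustration of what "breaking the task into constituent parts" buys you -- this is my own toy sketch, not the actual model or numbers behind the webapp -- each question becomes a sampling function, and a Monte Carlo loop propagates the samples through a simple structural model:

    import random

    # Toy Monte Carlo decomposition in the spirit of The Uncertain Future.
    # All distributions and parameters below are made up for illustration;
    # the webapp's actual model and expert-derived numbers differ.

    def sample_hardware_year():
        # Year when hardware sufficient for human-level AI becomes affordable
        # (heavy right tail: median around 2030, long tail past 2100).
        return 2010 + random.lognormvariate(3.0, 0.6)

    def sample_software_lag():
        # Extra years of software/theory work needed once the hardware exists.
        return random.expovariate(1 / 15.0)   # mean 15-year lag

    ANNUAL_CATASTROPHE_PROB = 0.002           # made-up 0.2%/year existential risk

    def run_trials(n=100_000, horizon=2100):
        successes = 0
        for _ in range(n):
            ai_year = sample_hardware_year() + sample_software_lag()
            years_at_risk = min(ai_year, horizon) - 2010
            survived = random.random() < (1 - ANNUAL_CATASTROPHE_PROB) ** years_at_risk
            if survived and ai_year <= horizon:
                successes += 1
        return successes / n

    if __name__ == "__main__":
        print("P(human-level AI by 2100, no prior catastrophe) =", round(run_trials(), 3))

The point is not the output number, which is only as good as the made-up inputs; the point is that each assumption sits in its own labeled distribution where it can be argued about separately, instead of being buried inside a single narrated scenario.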

Here is the opening blurb for the webapp, by Tom McCabe:

The Uncertain Future is a future technology and world-modeling project by the Singularity Institute for Artificial Intelligence. Its goal is to allow those interested in future technology to form their own rigorous, mathematically consistent model of how the development of advanced technologies will affect the evolution of civilization over the next hundred years. To facilitate this, we have gathered data on what experts think is going to happen, in such fields as semiconductor development, biotechnology, global security, Artificial Intelligence and neuroscience. We invite you, the user, to read about the opinions of these experts, and then come to your own conclusion about the likely destiny of mankind.

Interested? It's not perfect, but we think that our system might be a seed for looking at futurism in a different way -- providing an alternative to storytelling and scenario building. This sort of "probabilistic futurism" encourages would-be seers to widen their confidence bounds when confronted with uncertainty, instead of irrationally making overconfident guesses to seem like "experts". The particular issues we focus on are controversial -- human-equivalent AI, biotechnology used to select gametes with genes associated with intelligence, the probability of planet-ending catastrophe -- but we chose these issues specifically because there is disagreement about what degree of uncertainty is warranted from our present position in evaluating them.

We visualize this tool being used among futurists to specify their quantitative background assumptions regarding the technologies discussed. This might be used to clear aside straw men and zoom in on the core disagreements. It might also be used to evaluate the degree to which respective futurists have considered the technological prerequisites and other assumptions underlying their scenarios.

Go ahead and try the quiz now. Take it slowly, thinking carefully about each question. Scroll down to see predictions from experts; where applicable, you can click a button to load a probability distribution that I estimated to roughly match the quote we provided. After taking a look at what the experts say, think about your own position on the issues and input a probability distribution accordingly.

If you like the system or find it useful, be sure to post a link to it on Facebook or suggest it to your friends. The system still has quite a few bugs: the probability distribution widgets are Java applets that make calls to the surrounding HTML, and those calls may fail on some combinations of OS and browser. If you use a Mac, you should use Safari; on Linux or Windows, use Opera or Firefox.

Filed under: AI, futurism | 5 Comments
9 Dec 2009

New(-ish) ScienceBlogs Blog Focusing on AI?

2009 saw a lot of mainstreaming of "transhumanist" ideas, foci, and emphases. As I recently pointed out, Foreign Policy magazine gave this phenomenon a nod by including two transhumanists on their list of 100 global thinkers.

I am particularly interested in any possible mainstreaming of AGI and Friendly AI ideas, for obvious reasons. These ideas are not mainstreaming as fast as "wow-tech" like life extension or cybernetics, so watching for them is even more challenging and interesting. That's why this ad on the ScienceBlogs network caught my eye:

It links to Collective Imagination, a relatively new blog on the ScienceBlogs network with an about page that doesn't mention AI at all. But click the ad and you go to their front page, which currently is all about AI. On November 19th, their head blogger, Greg Laden, bought into IBM's deliberately deceptive "cat brain" news item, but then did a double-take a week later. What is interesting about his double-take is that he takes the time to point out some ignorant phrasing by IEEE Spectrum blogger Sally Adee in her coverage of the controversy. She said "There are as many theories of mind as there are researchers working on it, and in some cases there is a real grudge match between the theorists." Greg Laden commented:

I would like to point out that the term "Theory of Mind" is used incorrectly in the above quote. To me, this misuse of the term indicates a degree in pop psychology, as one might be exposed to the phrase but not know what it is, as has apparently happened here.

This is a little embarrassing. It would be like a psychologist writing about computer programming and noting that a "hash table" is a good place to put your chopped up corned beef.

It is embarrassing. Kudos to Greg for catching that. Watch out for those Igon Values.

Another, unrelated place where I read about IEEE in the last few days concerned an IEEE blogger having trouble understanding why the molecular nanotechnology community laughs in derision at the word "nanotechnology" being applied to stain-resistant pants. Josh Hall explained why. The same blogger, Dexter Johnson, also recently relayed that the American Chemical Society "touts nanobots as nanotechnology's big impact" in a new promotional video, which is another way of saying that they've been won over by the arguments for the feasibility of MNT. He writes:

The video is fascinating because it manages to move from nanobots and nanofactories to discussions of nanomaterials and buckyballs so seamlessly you would almost think there was no distinction between the two.

From what I gather this Bytesize Science is supposed to be targeting the future chemists of the world by making science fun. I am not sure that incomprehensible goop is really the way to do it, but I’ve never tried to teach children about nanotechnology.

In the post about nanopants, he writes:

I will not argue here (or likely anywhere else) about the feasibility of nanofactories in the visions of the MNT community.

Why not? Maybe because the idea of nanofactories is sometimes considered unscientific?

Filed under: AI, nanotechnology | 5 Comments
9 Dec 2009

IQ: “Lonely Ice Floe” or Consensus Science?

Malcolm Gladwell calls those who accept the Mainstream Science on Intelligence statement "IQ fundamentalists", but the reality of g and the predictive validity of intelligence tests are widely accepted as consensus science by intelligence researchers, with some caveats. Reading Eurekalert and PhysOrg, I see press releases practically every day that analyze the correlation of intelligence with a variety of genetic and environmental factors. Here's one from yesterday:

Fit teenage boys are smarter
But muscle strength isn't the secret, study shows

In the first study to demonstrate a clear positive association between adolescent fitness and adult cognitive performance, Nancy Pedersen of the University of Southern California and colleagues in Sweden find that better cardiovascular health among teenage boys correlates to higher scores on a range of intelligence tests -- and more education and income later in life.

"During early adolescence and adulthood, the central nervous system displays considerable plasticity," said Pedersen, research professor of psychology at the USC College of Letters, Arts & Sciences. "Yet, the effect of exercise on cognition remains poorly understood."

Pedersen, lead author Maria Åberg of the University of Gothenburg and the research team looked at data for all 1.2 million Swedish men born between 1950 and 1976 who enlisted for mandatory military service at the age of 18.

In every measure of cognitive functioning they analyzed -- from verbal ability to logical performance to geometric perception to mechanical skills -- average test scores increased according to aerobic fitness.

However, scores on intelligence tests did not increase along with muscle strength, the researchers found.

"Positive associations with intelligence scores were restricted to cardiovascular fitness, not muscular strength," Pedersen explained, "supporting the notion that aerobic exercise improved cognition through the circulatory system influencing brain plasticity."

I support the consensus science on intelligence for the sake of promoting truth, but I also must admit that it especially concerns me that the modern denial of the reality of different intelligence levels will cause ethicists and the public to ignore the risks from human-equivalent artificial intelligence. After all, if all human beings are on the same general level of intelligence, plus or minus a few assorted strengths and weaknesses, then it becomes easy to deny that superintelligence is even theoretically possible.

Some people are just more intelligent than others in every possible way. (Though most people have particular strengths, acquired through talent or learning, that others lack.) This sounds unfair and politically incorrect, but that's what we see in the data. The modern neo-mystical, pseudoscientific folk view of intelligence holds that if someone seems genuinely more intelligent at first, that intelligence must surely be accompanied by some major flaw, to "balance it out" on the cosmic scale. This may be true sometimes -- for instance, nerds tend to have poorer social skills than average -- but it doesn't always apply. Some people are just better at everything. This sort of talk is often considered forgivable when mentioned casually in real life, in relation to a specific circumstance, but for some reason when it is put down in text in general terms, would-be egalitarians try to shoot holes in it with unscientific theories like Gardner's multiple intelligences concept.

Filed under: intelligence, IQ | 20 Comments
8 Dec 2009

CNN Casually Mentions Human Enhancement in a Positive Light

From the "Top 10 Scientific Discoveries of 2009" on CNN.com:

3. Gene therapy cures color blindness

Modern science already offers ways to enhance your mood, sex drive, athletic performance, concentration levels and overall health, but a discovery in September suggests that truly revolutionary human enhancement may soon move from science fiction to reality. A study in Nature reported that a team of ophthalmologists had injected genes that produce color-detecting proteins into the eyes of two color-blind monkeys, allowing the animals to see red and green for the first time. The results were shocking to most — "We said it was possible, but every single person I talked to said, 'Absolutely not,' " said study co-author Jay Neitz of the University of Washington — and raised the possibility that a range of vision defects could someday be cured. That's a transformative prospect in itself, but the discovery further suggests that it may be possible to enhance senses in "healthy" people too, truly revolutionizing the way we see the world.

The only people who bother to object to human enhancement will be the same people who objected to in vitro fertilization in the 70s. Not many.

This may actually be worrisome, because I am concerned about immoral individuals using enhanced abilities to control or intimidate others. A libertarian "hands-off" perspective overlooks the tremendous amount of damage that could be wrought if human enhancement is deployed in an entirely unregulated and uncontrolled fashion. For instance, Russian mafiosi could use myostatin-inhibiting drugs to give their goons such bulging muscles that they inflict even more horrible tortures on their victims.

For human enhancement to magnify human happiness but not human misery will require much improved standards of human rights around the world. Unfortunately, none of the nations that matter will comply. Many of these countries may eventually need to be forced economically into raising their human rights standards, or the defectors in social contracts will have a field day. The trump card of human enhancement will make defection easier than ever.

Filed under: transhumanism | 3 Comments
7 Dec 2009

Hit and Run: “We can all agree that Ron Bailey defecates better transhumanism coverage than I can ever hope to produce”

Coverage of the recent H+ Summit is available at Reason's Hit and Run blog. Here is a funny bit:

Futurist John Smart is wrapping up the Humanity + Summit by noting that human enhancement believers are too focused on pie-in-the sky visions. Instead of making weird flying-car predictions about the far future, transhumanoids should be pointing to contemporary advances.

John Smart is known for predicting that the Earth will be artificially collapsed into an engineered black hole in an effort to compress matter and energy to more efficiently run uploads. I think he is right.

There is coverage of Aubrey and Todd Huffman's "Rasputin beards". An informal poll found that three out of three women found Todd's finger magnet implant hot.

More reporting:

I don't know enough about transhumanism to say whether the movement is at any kind of crossroads, but I was struck by how modest the claims were at this event -- in addition to all the calls for empathy, which I referred to yesterday. Toe shoes seem useful and ergonomic, but don't these things just beg for a new breed of humans with opposable big toes? If there are transhumanists out there calling for human antennae, wings, pineal gland enhancers and the like, they don't seem to have been in Irvine this weekend.

The organization I'm with, the Singularity Institute, is claiming to have the potential to quickly wipe out poverty and suffering for all humanity for the rest of eternity if we successfully construct a recursively self-improving Friendly AI that embodies our collective volition. (We consider this feasible albeit extremely challenging and a very long-term project, on the scale of decades but not centuries.) A number of professional ethicists and philosophers agree with us on the plausibility of our arguments. Is that extreme enough for you?

Check out the comments section on those blog posts for some illuminating insights and reflections on the conference.

Filed under: transhumanism | 10 Comments
7 Dec 2009

MIT News: “Rethinking Artificial Intelligence”

Here is the article. It contains coverage of Ed Boyden's brain-computer interfacing efforts, along with commentary by Dennett, Pinker, Minsky, Gershenfeld, and Boyden. The important paragraph is here:

The new project, launched with an initial $5 million grant and a five-year timetable, is called the Mind Machine Project, or MMP, a loosely bound collaboration of about two dozen professors, researchers, students and postdocs. According to Neil Gershenfeld, one of the leaders of MMP and director of MIT’s Center for Bits and Atoms, one of the project’s goals is to create intelligent machines — “whatever that means.”

You can read up on the Mind Machine Project here.

Filed under: AI | 2 Comments
7 Dec 2009

The World’s Smallest Snowman — So What?

Various futurists and transhumanists are abuzz about the world's smallest snowman. Just like IBM's recent deliberately misleading "cat brain" announcement, I consider this non-news. As far as I can tell, it doesn't represent any sort of interesting technological advance. Microscale tin beads are not new. Focused ion beams are not new. Ion-beam-deposited metals are not new. This is just a gimmick.

I am not a nanoscientist. I am just a guy who reads news feeds like Nanowerk/CRN/Foresight and skims papers once in a while. But the way that the transhumanist and futurist community is reacting to this at all makes me roll my eyes. The majority of futurists lack scientific knowledge of any depth because they are too busy flying around, attending meetings, giving interviews, and running scenario sessions. Paying someone to sit around and read papers is not a common practice outside of academia.

Some portions of the press release are especially banal:

The snowman is mounted on a silicon cantilever from an atomic force microscope whose sharp tip 'feels' surfaces creating topographic surveys at almost atomic scales.

An atomic force microscope that 'feels' surfaces at "almost atomic scales"? Wow! This would be interesting if AFMs hadn't been around since, oh, 1986. However, public knowledge of nanotechnology is so laughably abysmal that this can be passed off as news. I would understand the Dawkins/Digg/Reddit crowd saying "wow" to this, but I would hope that transhumanists, who presumably have spent some time investigating nanotechnology, would understand that this is just a publicity demonstration with no scientific value. Do they?

Filed under: nanotechnology | 2 Comments
7 Dec 2009

Computable AIXI — Should We Be Afraid?

An interesting point of dispute in the field of Artificial General Intelligence concerns the relevance/irrelevance of optimal formal models of inference to creating computationally feasible AI. On one side we have figures like Marcus Hutter and Jürgen Schmidhuber, the creators of the formal models AIXI and the Gödel machine respectively. What is AIXI? From the source:

Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible.
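
Written out (roughly as it appears in Hutter's papers; I am transcribing, not deriving), the AIXI agent picks its next action by expectimax over future action/observation/reward sequences, weighting each candidate environment program q by the Solomonoff-style prior 2^{-\ell(q)}:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m}
           \left[ r_k + \ldots + r_m \right]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Every piece of that expression is the problem: the sum over all programs q is incomputable, and the nested expectimax is exponential in the horizon, which is why the approximation work discussed below matters.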

What is a Gödel machine?

We present the first class of mathematically rigorous, general, fully self-referential, self-improving, optimally efficient problem solvers. Inspired by Kurt Gödel's celebrated self-referential formulas (1931), a Gödel machine (or `Goedel machine' but not `Godel machine') rewrites any part of its own code as soon as it has found a proof that the rewrite is useful, where the problem-dependent utility function and the hardware and the entire initial code are described by axioms encoded in an initial proof searcher which is also part of the initial code. The searcher systematically and efficiently tests computable proof techniques (programs whose outputs are proofs) until it finds a provably useful, computable self-rewrite. We show that such a self-rewrite is globally optimal - no local maxima! -- since the code first had to prove that it is not useful to continue the proof search for alternative self-rewrites. Unlike previous non-self-referential methods based on hardwired proof searchers, ours not only boasts an optimal order of complexity but can optimally reduce any slowdowns hidden by the O()-notation, provided the utility of such speed-ups is provable at all.

"Fancy language", you might be thinking, but what does it mean? Basically, Hutter and Schmidhuber have created interesting mathematical models for certain types of self-modifying intelligent agents. In the extreme case, you can interpret it to mean that AI has already been solved in some sense. The only problem is that both approaches are computationally hungry (especially AIXI) and it remains unclear how much and what type of environmental input and/or cognitive structure would be necessary to create derived systems computable with current hardware. Both Hutter and Schmidhuber appear convinced that their mathematics are excellent starting points to creating computable AI.

On the other "side" (to oversimplify) are researchers like Ben Goertzel who consider theoretically optimal intelligence and computable intelligence to be completely different problems. (See, for instance, his remarks on the subject in The Hidden Pattern.) Others are quiet on the subject, probably largely due to the great degree of uncertainty around the applicability of AIXI and Gödel machines to computable AGI. Certainly, they serve as discussion touchstones for exploring a variety of other issues in AI. As Eliezer Yudkowsky has pointed out, AIXI's "maximize reward channel" supergoal could conceivably have great difficulties in maintaining friendliness towards humans as the agent's power increased. Here is AIXI mentioned in the context of Eliezer giving his "technical definition of Friendliness":

A technical definition of "Friendliness" would be an invariant which you can prove a recursively self-improving optimizer obeys.

This doesn't address the issue of choosing the right invariant, or being able to design an invariant that specifies what you think it specifies, or even having a framework for invariants that won't *automatically* kill you. It might be possible to design a physically realizable, recursively self-improving version of AIXI such that it would stably maintain the invariant of "maximize reward channel". But the AI might alter the "reward channel" to refer to an internal, easily incremented counter, instead of the big green button attached to the AI; and your formal definition of "reward channel" would still match the result. The result would obey the theorem, but you would have proved something unhelpful. Or even if everything worked exactly as Hutter specified in his paper, AIXI would rewrite its future light cone to maximize the probability of keeping the reward channel maximized, with absolutely no other considerations (like human lives) taken into account.

The low-complexity supergoal structure inherent in AIXI puts scaled-down, computable versions at risk of becoming hungry optimizers with low-complexity values. That's why a recent paper, "A Monte Carlo AIXI Approximation", should be of interest to anyone who might one day share a planet with an entity based on or inspired by the AIXI model. The paper, from approximately three months ago, is described as follows:

This paper describes a computationally feasible approximation to the AIXI agent, a universal reinforcement learning agent for arbitrary environments. AIXI is scaled down in two key ways: First, the class of environment models is restricted to all prediction suffix trees of a fixed maximum depth. This allows a Bayesian mixture of environment models to be computed in time proportional to the logarithm of the size of the model class. Secondly, the finite-horizon expectimax search is approximated by an asymptotically convergent Monte Carlo Tree Search technique. This scaled down AIXI agent is empirically shown to be effective on a wide class of toy problem domains, ranging from simple fully observable games to small POMDPs. We explore the limits of this approximate agent and propose a general heuristic framework for scaling this technique to much larger problems.
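
To give a feel for the second ingredient, here is a bare-bones Monte Carlo planning loop in Python -- an illustrative sketch only. The actual MC-AIXI-CTW agent learns its environment model with context tree weighting and searches with a UCT-style tree search; in this toy, a small hand-written chain world stands in for the learned model and plain rollout averaging stands in for the tree search.

    import random

    # Bare-bones Monte Carlo planning sketch (not the MC-AIXI-CTW algorithm).
    # A hand-written toy model replaces the learned suffix-tree environment
    # model, and naive rollout averaging replaces the UCT-style tree search.

    GOAL = 5  # chain world: states 0..5, reward only while at state 5

    def model_step(state, action):
        """Stand-in environment model: +1 usually moves right, -1 moves left."""
        if action == +1 and random.random() < 0.8:
            state = min(state + 1, GOAL)
        elif action == -1:
            state = max(state - 1, 0)
        return state, (1.0 if state == GOAL else 0.0)

    def rollout_value(state, first_action, horizon=10, gamma=0.95):
        """Simulate one random future after first_action; return discounted reward."""
        total, discount, s, a = 0.0, 1.0, state, first_action
        for _ in range(horizon):
            s, r = model_step(s, a)
            total += discount * r
            discount *= gamma
            a = random.choice([+1, -1])   # random rollout policy
        return total

    def plan(state, actions=(+1, -1), samples=200):
        """Pick the action whose sampled rollouts look best on average."""
        return max(actions, key=lambda a: sum(rollout_value(state, a)
                                              for _ in range(samples)) / samples)

    if __name__ == "__main__":
        state = 0
        for t in range(10):
            action = plan(state)
            state, reward = model_step(state, action)
            print(f"t={t} action={action:+d} state={state} reward={reward}")

Even this crude version reliably walks the chain to the rewarding end; the paper's contribution is making the model-learning half work online, for arbitrary unknown environments, within a principled approximation of AIXI.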

A desktop implementation of this agent was able to learn how to play Pac-man "somewhat reasonab[ly]", according to Hutter's former student Shane Legg. Check out Shane's blog post for a few comments by Roko and Vladimir Nesov on the work. There is a great amount of disagreement in the community about whether publicizing this kind of research is a good thing for humanity or not. Personally, I agree with both Roko's and Vladimir's comments: it is both scary and a natural thing to do once you have AIXI theory.

My hope, and tentative prediction, is that the use of systems like MC-AIXI on toy problems will throw open the doors to the light of moral anti-realism, and more philosophers, computer scientists, and Ray Kurzweil will realize that human-surpassing self-improving AI kills everyone on the planet by default rather than as a special case.

Filed under: AI, singularity | 126 Comments