Further Steps Toward an AGI Roadmap


At the AGI-09 post-conference workshop's Roadmap Panel, Itamar Arel of the University of Tennessee announced the founding of a wiki at agi-roadmap.org that will support the creation of an AGI Roadmap. Drawing on related technology projects as examples, J. Storrs Hall described the Foresight Institute's work on the Technology Roadmap for Productive Nanosystems, and Ben Goertzel discussed his participation in writing the Metaverse Roadmap.

The following transcript of the AGI-09 AGI Roadmap Panel has not been approved by the speakers.  Video is also available.


Ben Goertzel:  I think Tom did a thorough job of reviewing the various human, social, and organizational factors that have been affecting, and in some cases impeding, progress toward general intelligence.  Itamar is pushing for something I think is also valuable, which is the construction of a roadmap for AGI development.

I just wanted to mention a couple of other roadmap projects that I have either been involved with or have watched make progress. One is a nanotechnology roadmap, which Josh has had a lot to do with. The other, more recent one is the Metaverse Roadmap, which had to do with virtual worlds technology.

I was more involved with the Metaverse Roadmap, although I was not one of the key architects.  What we did there was pool together people from various companies, along with some academics, and try to get some sense of what the end goal is.  In that case, with the Metaverse Roadmap, the end goal is a virtual world that is an analog of the real world, but in a computer.  This would have different towns, cities, and countrysides, where anyone can log in and control a human avatar, buy and sell things, teach each other, get married, and do everything else you can do in regular life, in the virtual world.

They set out a target for where the metaverse should be in ten or fifteen years, and looked at what steps needed to be taken to go from our current technology and the current economics of the virtual worlds industry to that state of a richly featured virtual world. In terms of artificial general intelligence, there is nothing remotely like that which the AI community agrees on.  I think that would be worthwhile, and it does not require a consensus on “fundamentally, what is general intelligence?”

An analogy I look at there is biology.  It’s not like biologists start with everyone agreeing on a precise definition of “what is life?”  There are various conceptualizations of life and there is a general, common understanding of it.  Trying to formalize the question may actually be a valuable thing, but I don’t think everyone needs to agree on that to agree that the human genome project is an interesting idea. I do think defining things precisely has a valuable role to play, but I do not think it is a prerequisite. Also, I don’t think it is a prerequisite for everyone to agree on a common architecture or even to agree on common terminology for everything that happens inside the mind.

I think that a group of researchers could agree on a roadmap for development, in spite of having a very healthy disagreement about a lot of other things that are involved in carrying out the roadmap.  What Itamar called “axioms,” I prefer to think of as functional requirements, to use software engineering terms, although I don’t necessarily want to say that the solution is software rather than hardware.  I think if the community could agree on what functional requirements we would like AGI systems to fulfill n years from now, and agree on a series of incremental milestones, that would be very helpful for a number of reasons—one being better coordination of work among different people, another being making the field seem more substantial to funding sources and people in other fields.

I think the roadmap project is an important one.  What I saw from the Metaverse Roadmap is that it was not terribly easy to put together.  It took a couple of years of in-person meetings with various key people in the field before they finally hashed out something they all thought was meaningful.  Can you say something about the Nanotechnology Roadmap project?

J. Storrs Hall:  Sure.  I am the president of the Foresight Institute. Foresight and the Battelle Memorial Institute, which manages several of the national laboratories, collaborated to produce the Technology Roadmap for Productive Nanosystems.  The idea being that, unlike the vast majority of current nanotechnology research, which is really focused on objects that can be measured in nanometers, we had a vision of producing machines that can build objects at the atomic scale, in particular to atomically precise specifications.

One of the major things that came out of that roadmap was having a fairly distinguished panel of scientists and others in the field agree that it was a worthy goal and useful to pursue.  Another was getting as many researchers together as possible and having them point out places we could get to, short of the goal.  There was a certain amount of ontology building: we decided what sorts of things we might actually be able to do, as well as reviewing our current state of knowledge. We also tried to assign some figures of merit and ways of measuring how far along we had gotten.  The result is a nice big document that you can get from the Foresight website.


Hugo de Garis and J. Storrs Hall at the 2009 AGI Roadmap Panel


Goertzel:  Do you think this is actually going to be valuable in getting work done?

Hall:  I certainly hope so.  I think the fact that it is there legitimizes some of that research a lot.  The other thing is that it does in fact give us things to shoot for.  Researchers can point to the roadmap and say, “This is how far I’ve gotten. I’ve advanced this figure of merit this far.”  My experience has been that it is very difficult to get these figures of merit out of researchers.

Goertzel: You mean metrics for evaluating progress?

Hall: Right. How far you’ve gone and so forth. At the same time, once you have done it, it is the most valuable part of the whole effort.

Goertzel: In terms of AGI, I feel like first there is the question of “What is the goal?”  I mean, are we looking for something that holds a humanlike conversation, like the Turing test?  Is it a humanoid robot that acts somewhat like a human given certain situations?  There’s that question.  Then there is the question of how to make metrics to evaluate incremental progress toward that goal, which is probably a lot harder.

Hall:  I think it’s clear that if you get the right core of a system that is robust and learns from its own experience, you can adapt it into any of the kinds of roles that you are talking about.  I think that one of the things we should look at in terms of the roadmap is what that core is.  What is it in building an AGI that we cannot get away without?  That’s sort of the ontology.

I have an interesting story.  A couple of years ago I was at an AAAI fall symposium, and at the reception I was talking to someone who was a mainstream university AI researcher, and the DARPA Grand Challenge had just been won.  Actually, several of these cars had completed a 130-mile course, where the previous year they had not been able to get more than one mile.  I was saying, “Wow, that’s a big advance in just a short amount of time.”  And he was saying, “Nah, that’s nothing.  They didn’t discover any new techniques.  They just took all the stuff that they knew how to do and put it together to make it work.”

To me, that’s progress.  The fact is, if you look at the field of AI as a whole, fractionated as it is all the way from mathematical machine learning to data mining, it may well be that we are getting closer to a point where we are on thin ice and it might collapse, the same way we already had all the techniques for automated driving.  When I say “collapse,” I mean finding out how to integrate all of that.  It might just be susceptible to the same process of integration that we saw in the DARPA Grand Challenge.  In basically the course of two or three years, people took all the stuff they actually knew, put together working teams, and built cars that could win the prize.

I think it is obviously going to be a bit harder in the case of AGI, with all the pieces of knowledge out there in the field of AI and computer science in general, as well as in related fields such as neuroscience.  I think one of the keys is just to identify all the stuff we already know that could go into a solution, and try to understand how hard it would be to put any two of those pieces together.

Goertzel:  That is actually kind of a different approach.  Laying out a roadmap, as I understood it, would be to identify more functionally what needs to be done and what the incremental stages are.  There are so many different opinions on what needs to go into a system in order to achieve machine intelligence.  Getting agreement on that is going to be much harder.

Hall:  For a map you have the towns, but you also have the roads that connect the towns.


Itamar Arel:  I want to try to be pragmatic about this, because I think that is important.  If we want to make progress, we have got to have some charter here.  I think what Josh was alluding to is that there are really two steps here.  There is the first step of agreeing on what the axioms or core functional AGI attributes are, and once we agree on those, the next step would be to create this roadmap with milestones leading toward human-level intelligence.  That, I think, is the overall goal.  I would argue, let’s first discuss these core attributes.  I feel that if we agree on those and make progress on them, then that’s a good first step.

Stephen Reed:  Just to back up what Itamar says, why don’t we just consider those the goals, add natural language understanding to those axioms, and leave it at that.

Arel:  I’m sure some people would disagree and say natural language understanding is not necessarily the first, second or third goal on this AGI roadmap.  I’m not saying I agree with that.  Maybe the first step should be to define whether an AGI system must have it on a functional level.

Reed:  I’m not proposing ordering the goals, whatsoever.  Just to have a bag of goals.

Mark Waser:  I think we need to commit to a consensus in a fairly short time frame.

Goertzel:  Getting people in this room to commit to a consensus does not necessarily make that much progress.  Itamar has created a wiki at agi-roadmap.org.

Arel:  That may be the first step, agreeing on these core functions for AGI.
