Ben Goertzel and J. Storrs Hall at the AGI-09 post-conference workshop
Following in the footsteps of AGI-08, the Future of AI workshop was held in conjunction with AGI-09. This year’s workshop, held Monday, March 9th, 2009, at the main conference venue of the Crowne Plaza National Airport in Arlington, Virginia, featured a slate of invited talks as well as contributed papers and posters. The event was hosted by J. Storrs Hall, president of the Foresight Institute, who introduced the topic of the economics of advanced AI.
The following transcript of J. Storrs Hall’s introduction to the AGI-09 post-conference workshop has not been approved by the speaker. Video is also available.
Welcome to the Future of AGI
I’m Josh Hall. I’ve been doing futuristic stuff for a while and have written a couple of books, including one called Beyond AI. I’m now president of the Foresight Institute, which I am hoping to steer in the direction of looking at the future of AI.
Without further ado, let’s look a little tiny bit at the subject of what the future of AI might bring. One concern, the same worry people have had about automation ever since there has been automation, is that if AIs get as smart as we are, and continue to get cheaper the same way that Moore’s law has made computers get cheaper, they will economically crowd us out of the labor market, even for intellectual labor. People look at this and they say this is a horrible thing; no one will have jobs anymore.
On the other hand, if I built a robot to do my own work, I would want it to do my work. The whole point would be so that I did not have to do my work. Why can’t we, the human race as a whole, do the same thing?
If AIs are smarter than we are, won’t they take over and end the human era, as Vernor Vinge puts it? On the other hand, if they are really smarter than we are, shouldn’t they be the ones in charge? Why have dumb humans in charge if we have smart robots?
To look at it from another point of view, if you can build a machine that is smarter than you are, why can’t you build a machine that is morally better than you are? As Ron Arkin put it last year at this conference, “It’s a low bar.” If we can in fact build machines that are morally better than we are, don’t we have a moral duty to actually do that?
A few other things that we might try to think about today: are the robots themselves going to have rights? After all, we are talking about machines that are as intelligent as we are. By owning them, are we not just reintroducing slavery? If not, should they be able to vote? After all, they are going to be getting cheaper and cheaper, and more and more numerous. Near-term concerns such as legal issues: at what point do you start assigning blame to the actual robot, as opposed to the manufacturer who built it? That makes a big difference.
People claim, for example, that the German companies that had self-driving cars back in the ’90s never followed up and introduced them to the public at large because they were afraid of being sued for every accident the cars might have. Even if the rate of accidents on the whole was considerably lower than the rate of accidents by human drivers, the company would stand to have its socks sued off.
Finally, the public perception of AI and robots has been very strongly influenced by alarmism in the movie industry. You see these horrible, evil scientists with these giant labs and so forth.
Marcus Hutter: If only!
J. Storrs Hall: That is the same reaction with the robots: if only we could have robots that can do the stuff they do in the movies. In fact, the reality and the perception are quite different. In order to get any good public decision-making on the issue, a certain amount of education is necessary, I think.
With that, I will turn the stand over to Jim Albus, who is one of America’s leading roboticists. We have two really excellent speakers on the economics of AI this morning: Jim Albus and Robin Hanson. Let’s give a welcome to them.