What Do You Mean by “AI”?

[Photo: Stan Franklin and Pei Wang at AGI-08]

Many problems in the field of artificial intelligence can be traced back to the confusion created by differing research goals. In his presentation at AGI-08, Pei Wang clarified and compared five typical ways to define AI. He argued that though all of them are legitimate research goals, they lead research in very different directions, and most of them have trouble giving AI a proper identity.

The following transcript of Pei Wang’s AGI-08 presentation “What Do You Mean by ‘AI’?” has been corrected and approved by the speaker. Video is also available. A PDF of the paper is available on the speaker’s selected publications page.


What Do You Mean by “AI”?

[Slide 1]


As you can tell from the title, my paper is about the definition of AI. I know that this question, the text in red on the following slide, is what many of you have in mind.

[Slide 2]

Why do I not talk about my concrete AGI research, but about this general topic? Because it is a very important question. By a “working definition” I am not really talking about the sense of a word, but about the research goal. It is something each of us has to face, and it determines almost everything in the research. So no one can really be neutral about what intelligence is.

[Slide 3]

My disagreement with traditional AI is not really about its solutions, but about its problems. To a large extent, what makes AGI different from traditional AI is first the research goal, and only then the solutions proposed. Furthermore, among the existing AGI projects, the differences largely come from our different understandings and conceptions of “intelligence”. I agree with many things the previous speakers said, but I want to add: Why do people take different paths? Why do people evaluate progress differently? To a large extent it is because we actually have different goals in mind, even though we all call it “AI” or “AGI.”

[Slide 5]

We do agree on some things. We agree that the best example of intelligence, so far, is human intelligence. Whatever we do, we have to learn from it. On the other hand, I also assume we all agree that we cannot really duplicate all the details of the human mind. That would be an artificial person, which may be an interesting project, but it is a different story.

[Slide 6]

As far as the human mind is concerned, which part do we want to duplicate, and which part do we want to ignore? In my paper I analyzed five different ways of deciding which aspects of human intelligence a computer should reproduce. The difference is that each category defines a distinct, valid research goal. All of them have value, but they have different values. They do not lead to the same place.

[Slide 7]

I know that many people here believe that, but I do not. I don’t think all those things are different trails to the same summit, or different parts of the same whole. To me, they are actually different goals, even though they are clearly related to each other. That is why we can all meet in the same room and listen to each other.

[Slide 8]

The situation to me is like this: we have an object that we want to describe, or to draw a picture of, but depending on how you make this abstraction, you end up with different pictures. To ask which one is the “correct” one is the wrong question. Each of them captures the object, but in a different way.

[Slide 9]

Why not pursue all of them together? To me there are two reasons. One is a technical reason. All those aspects I described (structure, behavior, function, etc.) are unified in the human mind. But when you try to reproduce them using computer technology, they become separate, to the extent that the best way to approach one is usually not the best way to approach the others. That is the technical reason why we will not have all of them together.

To me, there is also a deeper reason. Even if technically we could do that, it is not really what we want. At least to me, a motivation behind AI is to explore other ways to produce intelligence. Personally, if in the end we realize that the only way to get intelligence is to accurately duplicate this [the brain], I will be very disappointed.

What we really want is to be as faithful to the human mind as possible in certain respects, while in other respects we say that it does not matter: the human mind is just one way to do it, and we have other ways.

[Slide 10]

To summarize, I am not suggesting that we should keep debating definitions until we find the best one. Nor am I suggesting that, even if we cannot find a perfect definition, we can at least find one we all agree on. No, that is not going to happen. I am also not saying that since all of them have value, they have the same value.

[Slide 11]

What I am saying is: select your research goal carefully. That matters, because different goals do not lead to the same end. Also, be clear about your goal when talking about your research. Do not just call it “AI” and assume everyone will take the same meaning from it. They won’t. And when you evaluate other people’s work, keep in mind that by “AI” they may mean something different, which may also have value.

For the field of AGI, I guess in the near future we will have to deal with this diversity. We will have to live with multiple goals, multiple theories, multiple technologies, and so on. At the same time, we should compare and discuss them. We do not say that anything goes. Thank you.

