AGI from AI

Cross-posted from SIAI blog.

Will AGI emerge from a preexisting narrow AI field, incrementally improving?

In my opinion, the answer is likely no, but people working in narrow AI like to tell me that their work will eventually give rise to the Friendly AI I want to see.

Should the idea of AGI emerging from narrow AI be dismissed outright? Probably not. Let’s say AGI does indeed emerge from AI. If so, what are possible routes?

Can you think of any others? Different paths have different advantages from both a FAI and an AGI perspective. Some of these “narrow” applications, such as Novamente’s, are in fact built on an AGI-oriented architecture. Could the first AGI blindside us by superficially appearing to be narrow AI?

Comments

  1. Customer Support AI. In their struggle to keep support costs down, companies are already spending huge amounts of money on voice recognition solutions that try to reduce the need for employees. These solutions are still stupid, but they’re getting better.

  2. “Robotic” manufacturing technology, as it increasingly automates trial-and-error handling of problems at the mini-, micro-, and nano-levels, constantly building (evolving, most likely) and testing models of real phenomena. I’m including the manufacture of molecules: drug design, materials science, and so on.
    You (and Waleed) are talking about systems that can evolve by interacting with humans; I’m talking about systems that can evolve by interacting with events, at an immensely faster pace. Interaction with humans comes later.

    (Or maybe not.)
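    A minimal sketch of the kind of closed build-and-test loop I mean, using nothing beyond the standard library; the fitness function here is a made-up stand-in for an automated physical measurement, not any real process:

    ```python
    import random

    def evaluate(params):
        # Stand-in for a real measurement: in practice this would be an
        # automated physical test of a manufactured sample.
        target = [0.3, 0.7, 0.1]
        return -sum((p - t) ** 2 for p, t in zip(params, target))

    def evolve(generations=200, pop_size=20, sigma=0.05):
        # Random initial designs, then repeated mutate-and-select cycles.
        population = [[random.random() for _ in range(3)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population, key=evaluate, reverse=True)
            parents = scored[: pop_size // 4]  # keep the best quarter
            population = [
                [p + random.gauss(0, sigma) for p in random.choice(parents)]
                for _ in range(pop_size)
            ]
        return max(population, key=evaluate)

    print(evolve())  # converges toward the target parameters
    ```

    The point is the pace: each cycle is limited by how fast events can be generated and measured, not by how fast a human can respond.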

  3. Bob Mottram

    I don’t think an incremental path from AI to AGI can be entirely ruled out, and some narrow AI systems are narrower than others. For example, a chess-playing program developed along narrow AI lines is unlikely ever to be able to play sudoku, but the kind of intelligence used in the recent DARPA Urban Challenge could perhaps be extended to include further navigational or perceptual skills until you have a vehicle that is really quite smart and could be used in a wide variety of situations.

  4. If I had to hazard a guess, I would say that it will be a convergent/syncretic set of less-narrow AIs that ‘coalesce’ into a full-blown AGI.

    That is to say, some sort of narrow goal-setter (perhaps a customer-service or tactical-combat AI) in combination with something like Novamente’s pattern-recognition machinery. When you get down to it, intelligence is, in my opinion, composed of three major factors: the ability to recall, the ability to recognize patterns, and the ability to set goals. It may not be pretty, and it may not even be something we notice at first, but I strongly suspect that this will be the format of the first AGI. Whether it will be human-level is another story (I personally doubt it, if it happens spontaneously). If it is engineered, it very well could be.
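    To make that three-factor claim concrete, here is a toy sketch (every name in it is invented for illustration; this is not anyone’s actual architecture) wiring recall, pattern recognition, and goal setting together:

    ```python
    from collections import Counter

    class ToyAgent:
        """Illustrative only: recall + pattern recognition + goal setting."""

        def __init__(self):
            self.memory = []  # recall: a raw log of observed events

        def observe(self, event):
            self.memory.append(event)

        def recognize(self):
            # Pattern recognition: the most frequent remembered event.
            counts = Counter(self.memory)
            return counts.most_common(1)[0][0] if counts else None

        def set_goal(self):
            # Goal setting: act on the dominant recognized pattern.
            pattern = self.recognize()
            return f"seek more of {pattern!r}" if pattern else "explore"

    agent = ToyAgent()
    for e in ["noise", "signal", "signal"]:
        agent.observe(e)
    print(agent.set_goal())  # -> "seek more of 'signal'"
    ```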

  5. I’ve always thought that the first AGI might arise out of something Google is doing. They have essentially all the ingredients for developing AGI (a significant slice of the world’s supply of brilliant people, nearly limitless resources, a premier computing infrastructure, the need/desire for very capable intelligent agents, etc.).

  6. It will be from a narrow field, of course. In science, the axioms are simple, yet their effects are astounding. I see no reason why it should be any different here.

    The hypothetical complexity at the root of AGI should be explainable in simpler terms.

    I guess that the initial configuration of a soon-to-be-smart machine must be surprisingly non-complex.

    I guess.

  7. I don’t agree, Michael. I find you to be a person with a remarkably correct worldview, seldom wrong even about non-essential things.

    The irony is that you may be wrong about the most important thing there is: how to make the AI. The same goes for Yudkowsky.

  8. Warren Bonesteel

    I’m not a scientist or an expert, here. However, over the years I’ve read and studied in a variety of disciplines. I’ve also attended numerous classes and seminars in many fields of endeavor.

    What always happens in the end is that reality turns out to be different from the one proposed by the experts and their theories.

    This will hold true for the ‘creation’ of AGI.

    In view of past events: when AGI ‘takes off,’ what we will hear from the experts will sound something like this: “We’re astounded! Our theories did not predict this!” (The theories, however, won’t be changed in view of the facts. The real-world facts will be ‘adjusted’ to fit the theories. This happens each and every time, whatever the discipline.)

    Currently, the normal commentary about AI and AGI, and even most of the white papers and other peer-reviewed materials, is often anthropomorphic in both content and tone. In the end, we’re not talking about something biological or hominid. We’re talking about something artificial. Alien. Unknown, perhaps unknowable.

    Currently (according to the theories), everything needed to generate an AGI is already out there. It just needs to be put on the same network.

    …and, every day, more and more computers – and more computing power – *are* being networked.

    According to current theories, put the right hardware and software on the same network and you should ‘find’ an AGI. However, it probably won’t work the way you think it should. (Bohm, Pribram, et al. did have a legitimate point.)

    I think Mike Johnson (above) is very close to being correct. When the right researcher puts his software on the right network at just the right time, you could end up with a ‘hard take-off.’ (GOOG, by itself, has more than enough processing power to make it happen… if even a portion of it gets networked.)

    (At the moment, I think an accidental hard take-off is a more likely scenario than I originally gave it credit for.)

  9. Svante

    According to Marcus Hutter, AGI will emerge from really clever compression algorithms. It is only a matter of time before PAQ takes over the world…

    Okay, maybe not really true, but the people in the Hutter Prize sphere seem to regard the Shannon Game as an alternative to the Turing Test.
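    For readers unfamiliar with it, the Shannon Game scores a model by how many ordered guesses it needs to predict each next character of a text; better prediction means better compression. A toy bigram version (the corpus and scoring code here are invented for illustration):

    ```python
    from collections import Counter, defaultdict

    def train_bigram(text):
        # Count which character tends to follow which.
        model = defaultdict(Counter)
        for prev, nxt in zip(text, text[1:]):
            model[prev][nxt] += 1
        return model

    def shannon_game(model, text):
        """Average number of ranked guesses needed per character."""
        guesses = 0
        for prev, actual in zip(text, text[1:]):
            ranked = [c for c, _ in model[prev].most_common()]
            guesses += ranked.index(actual) + 1 if actual in ranked else len(ranked) + 1
        return guesses / (len(text) - 1)

    model = train_bigram("the theory that the machine learns the theme")
    print(shannon_game(model, "the machine"))  # lower is better
    ```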

  10. > According to Marcus Hutter, AGI will emerge from really clever compression algorithms.

    What else do we do when we see and recognize the things around us? What else do we do when we explain something? Every intellectual advance is nothing more than some further data compression.

    There are several routes to A(G)I that are in fact equivalent. Data compression _is_ one of them.
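    One standard way to make “recognition as compression” concrete is the normalized compression distance (Cilibrasi and Vitányi), which treats an off-the-shelf compressor as a crude pattern recognizer. A minimal sketch using zlib:

    ```python
    import random
    import zlib

    def clen(data: bytes) -> int:
        """Length of the zlib-compressed form: a crude complexity estimate."""
        return len(zlib.compress(data))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance: near 0 for similar inputs."""
        cx, cy, cxy = clen(x), clen(y), clen(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    a = b"the cat sat on the mat " * 20
    b = b"the cat sat on the hat " * 20
    noise = bytes(random.randrange(256) for _ in range(len(a)))

    print(ncd(a, b))      # small: shared structure compresses away
    print(ncd(a, noise))  # larger: nothing shared to exploit
    ```

    The compressor “recognizes” the shared pattern in exactly the sense described above: a good representation is one that exploits relationships it has already seen.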

  11. Thanks for not mentioning ThoughtTrail there. ;D

    I think it’s inevitable that a conversational structure will be required in modeling any of the first AGIs, whatever the AI algorithm / data-structure substrates beneath it. This is because self-reflectivity comes from low-level insights being built up and compared against each other, and that will remain true until we reach the second wave of smarter, more rationally grounded AGIs.

    Data compression is a good fit because it is about using shared relationships (low-level insights) to determine how to represent the information, which is, after all, what a relationship is anyway.

    Either way would be pretty reasonable, but I have a specific way in mind, shhhh ;)

