Cognitive Architectures: Where Do We Go From Here?


Cognitive architectures play a vital role in providing blueprints for building future intelligent systems that support a broad range of capabilities similar to those of humans. How useful are existing architectures for creating artificial general intelligence? At AGI-08, Wlodzislaw Duch presented a critical survey of the state of the art in cognitive architectures, co-authored with Richard Oentaryo and Michel Pasquier, which provides useful insight into possible frameworks for general intelligence.

The following transcript of Wlodzislaw Duch’s presentation from AGI-08 on the paper “Cognitive Architectures: Where Do We Go From Here?” has been corrected and approved by the speaker. Video is also available.

Cognitive Architectures: Where Do We Go From Here?

cogn_arch_1.png

This is a kind of overview. If you want to do something new, you have to know what has been done, and that is surprisingly uncommon: people jump on a new idea and work on it, ignoring whatever has been done before.

cogn_arch_2.png

As you know, AI has failed in many ways; Ben talked about that. Eduardo Caianiello in 1961 had his mnemonic equations, and he claimed they helped to understand all kinds of behaviors. Of course the theory was too general, and it has not been used much. Then we had the fifth-generation computer projects. These projects failed, and we are still not sure exactly why.

cogn_arch_3.png

Why have they failed? Was the approach too naïve? Should we have focused on applications, for example? People come up with general theories, like the mnemonic equations, which are not useful in any particular case. Or maybe it is because we do not address the main challenges.

cogn_arch_4.png

There have been very ambitious approaches, as you know. Cyc has been in development since 1984 and was commercialized in 1995. By now Cycorp has published two and a half million assertions linking over 150,000 concepts, using thousands of micro-theories. CycNL (natural language) has been a potential application for many years. Every year I look there and hope that something has happened. Nothing has happened yet with CycNL.

There have been other very interesting language-related projects. For example, I remember John Taylor getting very excited about the HAL baby brain, saying that now the system is going to learn like a baby. It has been around for maybe six or seven years, and again, nothing has happened. The Open Mind Common Sense project at MIT has been collecting lots of assertions.

cogn_arch_5.png

Maybe one of the problems has been that people have been too ambitious, instead of going the Google way: doing simple things first, but doing them quite well. We already have quite a number of ambitious projects. If we do not learn from them, we are going to make the same mistakes.

Well, language is one form of intelligence. You have the Turing test and the Loebner Prize; the programs that have won it have all been based on template or contextual pattern matching. It shows that cheating can get you quite far, because these programs sometimes fool the judges. Naive judges, of course.
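
To see what template or contextual pattern matching amounts to, here is a minimal ELIZA-style sketch in Python; the patterns and replies are invented for illustration, and actual Loebner entries use far larger rule sets:

```python
import re

# Each template pairs a regex with a response; the captured group is
# echoed back, which is the classic ELIZA trick of reflecting input.
TEMPLATES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*\b(mother|father)\b.*", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, reply in TEMPLATES:
        match = pattern.match(utterance)
        if match:
            return reply.format(*match.groups())
    return "Please go on."  # default reply keeps the conversation moving

print(respond("I feel tired"))         # -> Why do you feel tired?
print(respond("My mother called me"))  # -> Tell me more about your mother.
```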

What other ambitious tests should we define to see if there is progress? One is the personal Turing test proposed by Carpenter and Freeman, where a program tries to impersonate a real person known to the judge, who has to find out whether it is the real person or not. That is also very difficult.

There are the question-answering system competitions at the text retrieval conferences, and that is something you can measure quite well; in the language domain, we can do that. My favorite, which has not yet been tried in a large-scale competition, is word games. You can have things like the twenty-questions game, where the machine tries to guess what you have in mind. I think it is much better than games like chess, because in chess people immediately say, "This is because the machine is so fast and has such a good memory."

Now, why should the speed of the machine help it guess what you have in mind better than humans do? Can we make a competition out of something like the twenty-questions game with humans? That would be very interesting, and simpler than the other competitions, like the Loebner Prize or the personal Turing test, because all you have to know about is objects and their properties. You do not have to know about complex relations, and I think this shows that there will be many applications where a full understanding of the text is not really needed. You can approach it step by step.
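
This is also why twenty questions is computationally tractable: given a table of objects and Boolean properties, the machine can simply ask about the property that best splits the remaining candidates. A minimal sketch, with a toy hand-built knowledge base (a real system would learn such a table from data):

```python
# Toy knowledge base: objects and Boolean properties (illustrative only).
OBJECTS = {
    "dog":     {"alive": True,  "bigger_than_cat": True,  "has_wheels": False},
    "sparrow": {"alive": True,  "bigger_than_cat": False, "has_wheels": False},
    "car":     {"alive": False, "bigger_than_cat": True,  "has_wheels": True},
    "pencil":  {"alive": False, "bigger_than_cat": False, "has_wheels": False},
}

def best_question(candidates):
    """Pick the property whose yes/no split is most balanced."""
    props = next(iter(candidates.values())).keys()
    return min(props, key=lambda p: abs(
        sum(v[p] for v in candidates.values()) * 2 - len(candidates)))

def play(oracle):
    candidates = dict(OBJECTS)
    while len(candidates) > 1:
        q = oracle.__self__ if False else best_question(candidates)
        answer = oracle(q)  # the oracle answers True/False for the hidden object
        candidates = {o: v for o, v in candidates.items() if v[q] == answer}
    return next(iter(candidates))

# The user is thinking of a sparrow:
print(play(lambda q: OBJECTS["sparrow"][q]))  # -> sparrow
```

Picking the most balanced split is a greedy approximation of maximizing information gain, so with N objects roughly log2(N) questions suffice.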

cogn_arch_6.png

More recently, super-experts in narrow domains have been proposed, but we still need a lot of general intelligence to be flexible and to communicate with people. Feigenbaum has proposed that they should reason in mathematics, bioscience or law, and human experts would pose problems and probe the understanding of the system. If we could really have a super-expert that, say, a lawyer could not beat, it could be used as an artificial lawyer. That would be a very nice challenge, and quite specific to work on.
In the same direction, there are automatic theorem-proving competitions at some conferences, where systems try to prove as many theorems as possible. That works very well, but of course these systems do not have any intelligent interface: you cannot talk or argue with them as you would with mathematicians, but at least this is something feasible. In the end we can talk about making programs that would be partners and give advice to humans in their work. Many mathematicians now check whether their proofs are correct using automatic programs, but here we want to generate lots of creative ideas and find interesting associations.

A general AI that serves as a general theorem prover would perhaps use meta-learning techniques, specializing in many subfields, as the automatic theorem provers at these competitions do today. It would be a meta-approach in which we have lots of specialized modules, but overall the whole system becomes a generally intelligent mathematician.
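
A minimal sketch of that meta-approach, with specialized provers tried in an order learned from past successes; the module names and the win-count heuristic here are invented for illustration:

```python
from collections import defaultdict

class MetaProver:
    """Route each problem to specialized provers, preferring those that
    succeeded on the same problem class before (a toy selection scheme,
    not any actual competition system)."""

    def __init__(self, provers):
        self.provers = provers                     # name -> callable(problem)
        self.wins = defaultdict(lambda: defaultdict(int))

    def prove(self, problem, problem_class):
        ranked = sorted(self.provers,
                        key=lambda n: -self.wins[problem_class][n])
        for name in ranked:
            proof = self.provers[name](problem)    # returns a proof or None
            if proof is not None:
                self.wins[problem_class][name] += 1
                return name, proof
        return None, None

# Hypothetical specialist modules:
solver = MetaProver({
    "equational":  lambda p: "proof" if "=" in p else None,
    "first_order": lambda p: "proof",              # fallback specialist
})
print(solver.prove("a*b = b*a", "algebra"))        # -> ('equational', 'proof')
```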

A big challenge is in the genomic and pathway databases for understanding biological systems, because this really is something that the human mind is not able to do. I have colleagues working in bioinformatics who are experts in one, two, maybe three proteins, while there are millions of proteins in the cell to talk about. How do we handle that? People try to build symbolic models of different metabolic and genetic processes and search for interesting papers. There are people curating these databases, adding the missing information. This can be automated, and has been automated to some degree.

cogn_arch_7.png

These challenges point the way to real artificial general intelligence. What would a real artificial general intelligence be? General-purpose systems could be taught the skills needed to perform human jobs, and then we could measure which fraction of these jobs can be regularly done by an AI system. This has been proposed by Nilsson, but in fact Turing already talked about a "child machine" that would learn things. Knowledge-based information processing, for example, can be automated and progress measured by passing a series of examinations, for example in accounting.

DARPA has announced a number of interesting challenges. In humanoid robotics they are studying perception, attention, learning of causal models from observations, and hierarchical learning at different temporal scales. The DARPA call on "personal assistants that can learn" has been quite interesting. The Stanford Research Institute and 21 other institutions are participating in the program, a five-year effort to create personal assistants rather than complete replacements for human workers. There are a lot of programs associated with this. Indeed, people are going in the direction of what could be called AGI.

cogn_arch_8.png

We have different architectures, and we try to categorize them. Newell did that in his 1990 Unified Theories of Cognition, where he talks about twelve criteria for cognitive systems: behavioral, adaptive, dynamic, flexible, development, evolution, learning, knowledge integration, vast knowledge base, natural language, real-time performance and brain realization. Only a few architectures have been analyzed in this way, and since we do not have enough space here (there is a much longer version of this paper), we will look at different cognitive architectures and categorize them as symbolic, emergent and hybrid, considering the type of memory: whether it is local, global, or rule-based.

cogn_arch_9.png

I will go through symbolic, emergent and hybrid architectures very quickly; we have many of the originators of these architectures among us. The architectures are obviously related to the type of problems we work on. The majority of symbolic cognitive architectures are based on Newell and Simon's idea of a physical symbol system. In symbolic architectures we have centralized control over the information flow from sensory inputs through memory to motor outputs, and usually logical reasoning, rule-based representations of perception-action memory, working memory and executive functions, and of course semantic memory.
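
That centralized control cycle can be illustrated with a tiny production system: match rules against working memory, choose one, fire it, repeat. A minimal sketch of the recognize-act loop (not the actual matcher of any architecture mentioned here):

```python
# Minimal recognize-act cycle: rules are (condition, action) pairs over a
# working memory of facts; a central loop picks one matching rule per
# cycle and fires it until no rule applies.
working_memory = {"percept:light_red"}

rules = [
    (lambda wm: "percept:light_red" in wm,
     lambda wm: wm - {"percept:light_red"} | {"goal:stop"}),
    (lambda wm: "goal:stop" in wm and "motor:brake" not in wm,
     lambda wm: wm | {"motor:brake"}),
]

while True:
    for condition, action in rules:
        if condition(working_memory):
            working_memory = action(working_memory)
            break                 # conflict resolution: first match wins
    else:
        break                     # no rule matched: quiescence, stop

print(working_memory)             # {'goal:stop', 'motor:brake'}
```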

cogn_arch_10.png
cogn_arch_11.png

Graph-based representations are fairly common: semantic networks or variants of them, conceptual graphs, frames and schemata, reactive action packages. Analytic and inductive learning techniques are used. In analytical learning we infer other facts that the known facts entail logically, for example by explanation-based learning or analogical learning. Inductive learning goes from examples to general rules, for example in knowledge-based inductive learning or delayed reinforcement learning.
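
As a small illustration of the inductive direction, here is a find-S-style sketch that generalizes labelled examples into a conjunctive rule by keeping only the attribute values shared by all positive examples (toy data, not from the talk):

```python
def induce_rule(examples):
    """Generalize positive examples into a conjunctive rule: keep the
    attribute=value pairs common to all positives (find-S style)."""
    positives = [attrs for attrs, label in examples if label]
    rule = dict(positives[0])
    for attrs in positives[1:]:
        rule = {k: v for k, v in rule.items() if attrs.get(k) == v}
    return rule

# Toy data: when does the agent's action succeed? (illustrative only)
examples = [
    ({"surface": "flat",  "wet": False, "light": "day"},   True),
    ({"surface": "flat",  "wet": False, "light": "night"}, True),
    ({"surface": "rocky", "wet": False, "light": "day"},   False),
]
print(induce_rule(examples))  # -> {'surface': 'flat', 'wet': False}
```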

cogn_arch_13.png

The second group covers the emergent cognitive architectures. Some people think that intelligence might emerge from connectionist models, networks of simple processing elements. They are either globalist or localist: either we have networks in which every parameter influences all the outputs, or we have local types of expansions, like radial basis function expansions. We may also have modular organization, with connectionist models creating subgroups of processing elements that react in local ways. Learning methodologies are diverse: heteroassociative supervised and reinforcement learning, competitive learning (winner-takes-all or winner-takes-most), and correlation-based learning using Hebbian rules that build general models.
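
As a concrete instance of the correlation-based rules mentioned above, here is a single Hebbian unit with Oja's normalization; a toy sketch assuming NumPy, not any specific architecture's learning rule:

```python
import numpy as np

def hebbian_step(w, x, lr=0.01):
    """One correlation-based (Hebbian) update with Oja's normalization:
    plain Hebb, dw = lr*y*x, grows without bound; Oja's extra term keeps
    the weight vector near unit length."""
    y = w @ x                          # linear unit activation
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
w = rng.normal(size=2)
# Inputs correlated along the (1, 1) direction: Hebbian learning should
# align the weights with the dominant correlation in the data.
for _ in range(2000):
    x = rng.normal() * np.array([1.0, 1.0]) + 0.1 * rng.normal(size=2)
    w = hebbian_step(w, x)
print(w / np.linalg.norm(w))           # approximately +/-(0.707, 0.707)
```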

In complex reasoning, emergent systems lag behind symbolic architectures, but they may be closer to natural perception, and to reasoning based on perception. We have a kind of schema of the brain: we know which parts interact with which, although we do not know many details. For example, two days ago I heard about a subtle difference between fear and apprehension that has now been mapped to specific locations in the amygdala.

cogn_arch_14.png

These things are coming slowly, but at least we know enough to formulate some interesting architectures. For example, O'Reilly and Munakata wrote quite a nice book, supported by the PDP++ simulator. They have formulated an integrative, biologically based cognitive architecture which uses three kinds of memory: posterior cortex memory, with overlapping, distributed organization; frontal cortex memory, with non-overlapping, recurrent organization; and the hippocampus, with sparse, conjunctive organization. They have defined a learning algorithm called LEABRA, which combines Hebbian learning, for learning the structure of the environment, with task-based, error-driven learning.
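
The flavor of that combination can be sketched as a weighted mixture of a Hebbian term, which picks up the correlational structure of the inputs, and an error-driven term, which learns the task; this is only the idea in miniature, not the actual LEABRA algorithm:

```python
import numpy as np

def mixed_update(w, x, y, target, lr=0.05, k_hebb=0.01):
    """Weight update mixing Hebbian and error-driven learning, in the
    spirit of combining environment structure with task learning
    (toy sketch, not O'Reilly's algorithm)."""
    hebbian = y * (x - y * w)          # Oja-style: learn input structure
    error_driven = (target - y) * x    # delta rule: learn the task
    return w + lr * error_driven + k_hebb * hebbian

# One linear unit learning the task y = x1:
rng = np.random.default_rng(1)
w = np.zeros(2)
for _ in range(1000):
    x = rng.normal(size=2)
    y = w @ x
    w = mixed_update(w, x, y, target=x[0])
print(w)  # close to (1, 0): the task term dominates, the Hebbian term regularizes
```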

cogn_arch_15.png

Higher-level cognition emerges from activation-based processing, and there are lots of psychophysical tasks that the system is capable of, but it is not clear how to scale it up to a really large artificial general intelligence. The Blue Brain project is using quite detailed models; they now have a model of a whole cortical column with 10,000 neurons in it. They can connect tens of millions of synapses in a correct way in 3D from the neuroanatomical data, and they have a really fantastic tool for doing that. Can we abstract some principles from that, so we can understand what it is doing? Can we scale it up? It is not clear that anything like that will come out of the project.

cogn_arch_16.png
cogn_arch_17.png
cogn_arch_18.png
cogn_arch_19.png

Simpler approaches include Edelman's NOMAD, which has been in development for over twenty years. Lots of interesting architectures are basically hybrid architectures; ACT-R is perhaps the most famous of these. These architectures have been used in many smaller-scale projects that are not aimed at general applications. Polyscheme is very interesting because it integrates multiple representations, reasoning and inference schemes in problem solving; specialist modules capture different aspects of the world. There are very interesting things you can do with this.

cogn_arch_20.png
cogn_arch_21.png

4CAPS is a very interesting architecture, which not only tries to explain what happens in the brain when we reason, but also tries to map it to fMRI images and tell you which parts of the brain are going to be active. A very interesting architecture called DUAL, which Boris Kokinov defined in 1994, was inspired by Minsky's Society of Mind. Shruti was defined by Shastri in 1993, but not much has happened with it since then. I think it is a very interesting architecture, in which dynamic binding is represented by the synchronized firing of nodes.
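
The dynamic-binding idea can be illustrated very simply: nodes fire in phase slots within a cycle, and a role is bound to whichever entity fires in the same phase. A toy sketch of binding by temporal synchrony (names and phases invented; not Shastri's actual model):

```python
# Entity nodes and role nodes of give(x, y) each fire in a phase slot;
# a role is bound to the entity that shares its phase.
PHASES = {
    "John":  0, "Mary":      1,   # entity nodes
    "giver": 0, "recipient": 1,   # role nodes
}

def bindings(phases, roles, entities):
    return {r: e for r in roles for e in entities
            if phases[r] == phases[e]}

print(bindings(PHASES, ["giver", "recipient"], ["John", "Mary"]))
# -> {'giver': 'John', 'recipient': 'Mary'}
```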

cogn_arch_22.png
cogn_arch_23.png

So, where do we go? We need some grand challenges, I believe, and smaller steps that will lead us to human and superhuman levels of competence; these should be formulated to focus our research. We can extend small demonstrations, in which a cognitive system reasons in a trivial subdomain, to results that may be of interest to experts, or that could act as an assistant to human experts. But what type of intelligence do we need? Gardner proposed in 1993 that we have at least seven kinds of intelligence: logical-mathematical, linguistic, spatial, musical, bodily-kinesthetic, interpersonal and intrapersonal.

cogn_arch_25.png

If you want an intelligent jazzman improvising with you, that does not have to be based on general intelligence; it has to be more specialized. An AGI does not have to be maximally general, but it has to be sufficiently broad to accommodate these different types of intelligence. Should the system embody behavioral intelligence? Brooks wrote that elephants don't play chess. Maybe they are wise in their own specific way, but they won't play chess.

cogn_arch_26.png

In 2005 there was an evaluation called the Agent-Based Modeling and Behavior Representation (AMBR) Model Comparison, which compared the performance of humans and cognitive architectures in a simplified air traffic control environment. So there are some tools to evaluate how far we are. In 2007, the AAAI Workshop "Evaluating Architectures for Intelligence" proposed several ideas, for example creating in-city driving environments and measuring the incrementality and adaptivity components of generally intelligent behavior.

cogn_arch_27.png
cogn_arch_28.png
cogn_arch_29.png

We can look at brain-inspired cognitive architectures as approximations, in many different ways, of what brains are doing. We try to use this in language problems by accumulating knowledge from different sources. The last thing I will show is the ICD-9 coding challenge we had last year, run at a conference I was co-chairing in Honolulu. Basically, five groups beat three commercial companies in the accuracy of assigning codes to real hospital discharge summaries, which are very messy texts. ICD-9 codes are used to bill insurance companies, and their assignment is a fairly difficult task. That shows that we can define tasks where natural language processing systems compete with humans.
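
In spirit, such systems map free text to codes; even a keyword-rule baseline fits in a few lines. A minimal sketch with invented keyword lists (the actual challenge systems typically combined hand-built rules with statistical classifiers):

```python
import re

# Toy rule-based coder: map discharge-summary phrases to ICD-9 codes.
# The keyword lists here are invented for illustration.
RULES = {
    "786.2":  ["cough"],
    "780.6":  ["fever", "febrile"],
    "493.90": ["asthma", "wheezing"],
}

def assign_codes(summary: str) -> set:
    text = summary.lower()
    return {code for code, keywords in RULES.items()
            if any(re.search(rf"\b{kw}\b", text) for kw in keywords)}

note = "Two-year-old admitted with persistent cough and wheezing, afebrile."
print(assign_codes(note))  # -> {'786.2', '493.90'}  ('afebrile' != 'febrile')
```

Even at this trivial level, word boundaries matter: "afebrile" must not trigger the fever code, which hints at why real discharge summaries, with negations and abbreviations, make the task fairly difficult.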

cogn_arch_33.png
