The China-Brain Project

china_brain_project.jpg

Professor Hugo de Garis has been given a grant by Xiamen University in Fujian Province, China, to build an artificial brain consisting of 10,000 to 15,000 neural net circuit modules evolved on an accelerator board 50 times faster than on a PC. He is scheduled to head a conference session on artificial brains in May at AGI-09, the second conference on artificial general intelligence, after which he will be teaching at the first AGI Summer School in Xiamen, China, in June.

The following transcript of Hugo de Garis’s AGI-08 presentation “The China-Brain Project” has not been approved by the speaker. Video is also available.


The China-Brain Project

Thirty seconds on "Why China?" Some of the cities in the southeastern part of China are growing at 15% or 20% a year, and the richest city in China, Shenzhen, is growing at 30% a year. There is tremendous growth and, hence, tremendous opportunity. I can now use myself as an example of what is possible: I just walked in and said I want an EE guy, I want a robot guy, post-docs and professors… and they just gave them all to me. When word gets out…

So, what am I trying to do? I use genetic algorithms to evolve neural network modules. You can buy fairly cheap accelerator boards for about $1,000, and with those you can evolve these neural nets about fifty times faster than on a PC. You then evolve large numbers of these neural net modules, tens of thousands, probably up to about fifty thousand, and download them into the PC. The whole approach is very cheap, literally a few thousand dollars, because it is PC-based. The artificial brain itself is situated in the PC, and that brain then controls robots, for example, over a two-way radio link.
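Conceptually, each module is a small neural net whose weights are tuned by a genetic algorithm against some fitness measure. The sketch below is only a minimal illustration of that idea in Python; the module size, fitness task, and GA parameters are invented placeholders, and the real modules are evolved in hardware on the accelerator board rather than in software.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 16     # neurons per module (placeholder size)
POP_SIZE = 30      # GA population size (placeholder)
GENERATIONS = 100  # GA generations (placeholder)

def step(weights, state, inp):
    """One update of a fully connected recurrent module."""
    return np.tanh(weights @ state + inp)

def fitness(weights):
    """Toy task: drive the first neuron's output toward +1.
    A real module would be evolved against a behavioral target."""
    state = np.zeros(N_NEURONS)
    inp = np.full(N_NEURONS, 0.1)  # constant drive so the net has something to shape
    for _ in range(50):
        state = step(weights, state, inp)
    return -abs(1.0 - state[0])

def evolve_module():
    """Generational GA over weight matrices: keep the best half,
    refill the population with mutated copies of the survivors."""
    pop = [rng.normal(0, 1, (N_NEURONS, N_NEURONS)) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]
        children = [p + rng.normal(0, 0.1, p.shape) for p in parents]
        pop = parents + children
    return pop[0]

best = evolve_module()
print("best module fitness:", fitness(best))
```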

The Chinese have given me three million RMB, but then you need to consider the purchasing power. It is equivalent to a nice fat three million dollar grant in purchasing power terms, but the pressure is on because I’m being paid about six times what Chinese professors would get, so I really have to deliver in the next two years. I’m told if I do a good job there will be more millions.

Putting somewhere between 10,000 and 15,000 individually evolved neural networks together, connected in interesting ways: that, of course, is the research challenge. What can you do with an artificial brain that is, by definition, a network of networks, a network of evolved neural networks? The accelerator board contains a chip with about three million logic gates, and the next version doubles that, so Moore's law is definitely working. The whole approach is very cheap, so I hope it will be popular.
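The "network of networks" itself is just a wiring diagram over evolved modules: each module's outputs feed the inputs of selected other modules, and the whole collection is stepped together. A toy sketch of that bookkeeping follows; the module names and connections are invented for illustration.

```python
from collections import defaultdict

# Each module is identified by name; a connection routes one module's
# output into another module's inputs. All names here are invented.
connections = [
    ("edge_detector", "object_recognizer"),
    ("object_recognizer", "approach_behavior"),
    ("sound_detector", "turn_toward_sound"),
]

inputs_of = defaultdict(list)
for src, dst in connections:
    inputs_of[dst].append(src)

def brain_step(modules, outputs):
    """Advance every module once, feeding it the previous outputs
    of the modules wired into it."""
    new_outputs = {}
    for name, module in modules.items():
        fan_in = [outputs.get(src, 0.0) for src in inputs_of[name]]
        new_outputs[name] = module(fan_in)
    return new_outputs

# Stand-in modules: in the real system each would be an evolved neural net.
names = {n for pair in connections for n in pair}
modules = {n: (lambda fan_in: sum(fan_in) + 1.0) for n in names}

outputs = {n: 0.0 for n in names}
for _ in range(3):
    outputs = brain_step(modules, outputs)
print(outputs)
```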

china_brain_04.png

Here is the actual board itself, made by Celoxica, a British company. You do not need to be an EE: you simply write in a version of C called Handel-C that is about 80% the same as C, and that gets hardware-compiled into the FPGA. Anyone can do it: you write your code, it gets compiled to the chip, and it runs at electronic speeds. Hence the fifty-fold speed-up; in other applications, people are getting 200 or 300 times the speed-up.

china_brain_05.png

I have been trying to do this kind of thing for a long time. I spent eight years in Japan in the '90s, where the Japanese built a very expensive machine for half a million dollars. This board costs $1,500, that machine cost half a million dollars, and they do the same job. That's Moore's law for you.

How many people would you need to make a minimum artificial brain of ten thousand modules? Take a very conservative estimate: a four-year period, fifty working weeks per year, five days a week, and "brain architects," as I call them, who conceive and evolve only two neural net modules per day. How many people would you need over that four-year frame to produce a brain of 10,000 modules? Do the math, it's pretty simple: about five. That is the number I have at the moment, although I could get as many Masters students as I want, and probably two or three PhDs.
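The arithmetic behind "about five," using the figures as stated in the talk:

```python
modules_per_day = 2      # per brain architect
days_per_week = 5
weeks_per_year = 50
years = 4

per_architect = modules_per_day * days_per_week * weeks_per_year * years
print(per_architect)           # 2000 modules per architect over four years
print(10_000 / per_architect)  # 5.0 architects for a 10,000-module brain
```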

Should I emphasize that this is a method, or should I put more emphasis on the architectures? The method, I think, is important. It’s cheap, it’s fast, and it works, so hopefully other groups will do it. Common sense says each group will have its own architecture. There will be many different architectures, so should I focus on my group’s particular architecture? The big challenge now is what could we do with several tens of thousands of evolved neural networks.

That is our research challenge. If we can come up with something that is interesting and that has literally hundreds of different behaviors and thousands of different pattern recognizer circuits, that is persuasive. Hopefully, a year from now at AGI-09, I will have something interesting.

If you download each evolved module from the board into the PC, and you do that tens of thousands of times, it is tedious in that sense. What is the maximum number of modules that your PC can signal in real time? Every neuron is signaling at 25 hertz. We tested that empirically and found that, depending on the PC, the limit lies somewhere between 10,000 and 50,000 neural nets, each signaled sequentially. The big question now is what you can do with that number of modules. My intuition says probably a lot: many hundreds of different behaviors.
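The real-time constraint in that test is simple to state: with every neuron signaling at 25 Hz and the modules updated one after another, the PC must finish a full pass over all modules within each 1/25-second tick. A rough sketch of that loop, with stub modules standing in for the evolved nets:

```python
import time

TICK_HZ = 25           # every neuron signals at 25 Hz
TICK = 1.0 / TICK_HZ   # 40 ms budget per full pass over the modules

def run_brain(modules, outputs, seconds=1.0):
    """Step every module sequentially once per 25 Hz tick. If one pass
    over all modules takes longer than a tick, the brain can no longer
    run in real time; that is what bounds the module count on a given PC."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        start = time.monotonic()
        for name, module in modules.items():
            outputs[name] = module(outputs)   # sequential update
        elapsed = time.monotonic() - start
        if elapsed < TICK:
            time.sleep(TICK - elapsed)        # wait out the rest of the tick
    return outputs

# Stub modules just to exercise the loop; real ones are evolved nets.
stubs = {f"m{i}": (lambda out: 0.0) for i in range(1000)}
run_brain(stubs, {name: 0.0 for name in stubs}, seconds=0.2)
```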

What approach will we take? I'm reading neuroscience like crazy for ideas and, frankly, it is not very helpful. What is a thought, how do you make a decision, what is memory? These are basic questions, and neuroscience still has to contribute a lot more than it has. I will take an incremental approach: start very simply by having twenty modules connected, and then work up: 20, 50, 100, like the denominations of bank notes.

The brain itself you will not see; it will be in the PC, as large numbers of interconnected neural net modules. What you will see is the behavior of the robot. The artificial brain in the PC will send control signals out to the robot by radio antenna. I have money to build a more interesting robot than the one I have now, a more ambitious robot with vision, hearing, and arms.
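The PC-to-robot link he describes amounts to a control loop: step the brain, map the outputs of a few behavior modules to actuator commands, and transmit them over the radio. A schematic sketch with the transport stubbed out (the actual link, message format, and actuator set are not specified in the talk):

```python
def decide_motor_commands(brain_outputs):
    """Map a few (hypothetical) behavior-module outputs to wheel speeds."""
    forward = brain_outputs.get("approach_behavior", 0.0)
    turn = brain_outputs.get("turn_toward_sound", 0.0)
    return {"left_wheel": forward - turn, "right_wheel": forward + turn}

def send_over_radio(commands):
    """Stand-in for the two-way radio link; here it just prints the packet."""
    print("TX:", commands)

# One control cycle: take the brain's current outputs, then transmit.
brain_outputs = {"approach_behavior": 0.6, "turn_toward_sound": 0.1}
send_over_radio(decide_motor_commands(brain_outputs))
```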

I will not be talking about architectures because I have not done the work yet. With Moore’s law, pretty soon we will be getting up to possibilities like a hundred thousand modules, even a million. It is just not practical to evolve one module after another in these large numbers. That is going to put pressure on the research topic of simultaneous multi-module evolution.

Jumping ahead a bit, just for fun, I'm anticipating that there will be national brain-building organizations in the not-too-distant future. Look at the history of the rocket: in the '20s, Goddard's MIT colleagues called him "the moon man," and it was a derogatory term. He genuinely believed that liquid-fuel rockets would, in principle, allow you to send rockets to the moon. By the '40s, you had the V2. When the German rocket scientists were captured by the Americans and the Russians, the Americans were surprised at what the Germans had achieved, and the Germans were surprised by the Americans' surprise, because, they said, "We learned it all from Goddard."

I see an analogy between all this and brain-building projects. I’m contracted to write a book, if you are interested, but you will have to wait two years.

de-garis-bio.png

One thought on "The China-Brain Project"

  1. I am sorry, but funding provided a few years back for Professor Hugo de Garis in SkyLab has been revoked after I asked a simple question: how will he prove that his kitten is smarter than a real one? Now I have a further question for him: if he believes that our brain is an information-processing unit, could he explain how that information reaches the brain?

    It would be better to provide grants to me if you would like to build an artificial system capable of demonstrating reasonable behavior.
    We could have a working system in 3-5 years.

    I could deliver.

    Best regards, Michael
