On Digg today:
Turns out that it’s from the gossip blog Valleywag. Its author recently attended the Singularity Summit. Looks like the topic is staying in his mind a bit! Of course the recent George Dyson article (“Turing’s Cathedral”) likely contributed as well.
Is the idea plausible? For Google to have a chance at reaching real AI, they would have to make it a priority: that is, create a project devoted specifically to it and nothing else. They would need to put a dozen or more supergeniuses to work full-time for years on end, at a likely cost of tens of millions of dollars, with no substantial return in the foreseeable future. Do any Singularity-watchers think this could happen? Not before some other group has made substantial progress already, is my guess.
What are the necessary conditions for any group having even a chance at AGI (artificial general intelligence) in the next couple decades, or before nanocomputing, whichever comes first? Here’s what I’m throwing out there:
Deep pockets. Enough funding that the project can proceed without having to worry about commercial spin-offs. So we’re talking about a government, corporation, startup, or non-profit with at least a couple million dollars in the bank, perhaps more.
Exceptional brains. To get there first, core team members will need to be the best there is, close to the upper boundaries of what is possible with human intelligence. We’re talking about people who are one in 100,000, not one in 10,000 or one in 1,000.
Education in the right fields. Universities don’t offer degrees in Artificial General Intelligence. The knowledge set necessary to create AGI successfully is not known, but it will likely encompass all the following fields: cognitive science, math, programming, probability theory, traditional AI techniques, information theory, and maybe more. It is not a knowledge set that an employee at Google will just happen to be familiar with, even if they are well-educated.
Math and programming talent. It’s one thing to have a high IQ and be educated in the multiple necessary fields. But if you’re going to do successful engineering, you’ll likely need a specific talent for implementing your ideas in code. There are plenty of really smart, really well-educated interdisciplinary scientists out there who are simply not engineers and can only program at an average level. They write brilliant papers and give brilliant lectures, but when it comes to actually building a huge program, the spark is just not there. And last but not least…
A correct theory. For every 10,000 theories of general intelligence that sound great and feel great, there may be only one (or zero) that can be implemented on available hardware without complexity overload or centuries of debugging. It doesn’t matter if you’re the smartest person in the world: if you settle on the wrong theory and try to implement it, it just won’t work. Then you’ll need to tear everything down and start over, most likely from scratch. And if only one person is in charge of the overall theory, it takes only that one person being wrong for everyone’s time to be wasted.
As far as I can tell, no group yet fulfills all of the above requirements. They could be fulfilled before the decade is out, though, which would lay the groundwork for real progress.
Tangentially: a couple days ago, Bruce Klein, President of Novamente LLC, claimed here that “we estimate it will take 6 years w/ a full-time staff (about a dozen programmers) to reach human-level AI”. How can they be so sure about timeframes? Because they are convinced they already have a theory that will work, and are just implementing it. Should we be skeptical of this claim? I certainly think so. More often than not, when people think they have the right theory, it turns out they don’t. This doesn’t mean it will never happen… just that I doubt anyone can be so sure so far in advance.
In case you’re wondering, a pdf describing Novamente’s theory of intelligence can be found here.
Also, here is a comprehensive list of projects working towards AGI today, in June 2006. I think this was also put together by Bruce Klein.