From Mr. Knapp’s recent post:
If Stross’ objections turn out to be a problem in AI development, the “workaround” is to create generally intelligent AI that doesn’t depend on primate embodiment or adaptations. Couldn’t the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?
But Anissimov’s first point here is just magical thinking. At the present time, a lot of the ways that human beings think are simply unknown. To argue that we can simply “workaround” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence. Indeed, it’s become pretty clear that even human thinking and animal thinking are quite different. For example, it’s clear that apes, octopuses, dolphins and even parrots are, to certain degrees, quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different from that of humans. And I don’t mean on a different level; I mean actually different. On this point, I’d highly recommend reading Temple Grandin, who’s done some brilliant work on how animals and neurotypical humans are starkly different in their perceptions of the same environment.
My first point is hardly magical thinking: all of machine learning works to create learning systems that do not copy the animal learning process, which is in any case only understood at a vague level. Does Knapp know anything about the way existing AI works? It’s not based around trying to copy humans, but often around improving this abstract mathematical quality called inference. (Sometimes it’s just around making a collection of heuristics and custom-built algorithms, but again, that isn’t copying humans.) Approximations of Solomonoff induction work quite well on a variety of problems, regardless of the state of comparing human and machine intelligence. Many “AI would have to be exactly like humans to work, because humans are so awesome, so there” proponents, like Knapp and Stross, talk as if Solomonoff induction doesn’t exist.
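To make the idea concrete, here is a toy sketch of the principle behind Solomonoff induction: weight every hypothesis by 2^-(description length), discard the ones inconsistent with the data, and predict by posterior-weighted vote. The hypothesis class below (repeating bit patterns) is my own simplification for brevity, not how practical approximations are actually built:

```python
# Toy illustration of the idea behind Solomonoff induction:
# weight hypotheses by 2^-(description length), keep those consistent
# with the observed data, and predict by posterior-weighted vote.
# A minimal sketch, not a real-world approximation; the hypothesis
# class (repeating bit patterns) is chosen purely for brevity.

from itertools import product

def hypotheses(max_len):
    """All repeating bit patterns up to max_len, as (pattern, prior weight)."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), 2.0 ** -n  # shorter patterns get more prior mass

def predict_next(observed, max_len=8):
    """Posterior-weighted prediction of the bit that follows `observed`."""
    votes = {"0": 0.0, "1": 0.0}
    for pattern, weight in hypotheses(max_len):
        # Likelihood is 1 if repeating the pattern reproduces the data, else 0.
        stream = pattern * (len(observed) // len(pattern) + 2)
        if stream.startswith(observed):
            votes[stream[len(observed)]] += weight
    total = votes["0"] + votes["1"]
    return {bit: v / total for bit, v in votes.items()}

print(predict_next("010101"))  # strongly favors "0": the short pattern "01" dominates
```

Notice that nothing in that loop refers to neurons or primate embodiment; it is pure inference over a simplicity-weighted hypothesis space.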
Answering how much or how little of the human brain is known is quite a subjective question. The MIT Encyclopedia of the Cognitive Sciences is over 1,000 pages and full of information about how the brain works. Bayesian Brain is another tome that discusses, mathematically, how the brain works:
A Bayesian approach can contribute to an understanding of the brain on multiple levels, by giving normative predictions about how an ideal sensory system should combine prior knowledge and observation, by providing mechanistic interpretation of the dynamic functioning of the brain circuit, and by suggesting optimal ways of deciphering experimental data. Bayesian Brain brings together contributions from both experimental and theoretical neuroscientists that examine the brain mechanisms of perception, decision making, and motor control according to the concepts of Bayesian estimation.
After an overview of the mathematical concepts, including Bayes’ theorem, that are basic to understanding the approaches discussed, contributors discuss how Bayesian concepts can be used for interpretation of such neurobiological data as neural spikes and functional brain imaging. Next, contributors examine the modeling of sensory processing, including the neural coding of information about the outside world. Finally, contributors explore dynamic processes for proper behaviors, including the mathematics of the speed and accuracy of perceptual decisions and neural models of belief propagation.
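The “normative predictions about how an ideal sensory system should combine prior knowledge and observation” have a simple closed form in the Gaussian case. Here is a minimal sketch of that textbook calculation; the scenario and numbers are made up for illustration:

```python
# Optimal (Bayesian) combination of a Gaussian prior with a Gaussian
# observation: the posterior mean is a precision-weighted average.
# A minimal illustration of the "ideal observer" math the book describes;
# the numbers below are invented for the example.

def combine(prior_mean, prior_var, obs_mean, obs_var):
    """Posterior mean and variance for a Gaussian prior times a Gaussian likelihood."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs_mean)
    return post_mean, post_var

# Prior: the hand is probably near 0 degrees. Observation: a noisy visual
# cue says 10 degrees. The estimate lands in between, pulled toward
# whichever source is more reliable (lower variance).
print(combine(prior_mean=0.0, prior_var=4.0, obs_mean=10.0, obs_var=1.0))
# -> (8.0, 0.8): the sharper observation dominates
```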
The fundamentals of how the brain works, as far as I can see, are known, not unknown. We know that neurons fire in response to external stimuli and internal connection weights, in patterns well described by Bayesian models. We know the brain is divided into functional modules, and we have a quite detailed understanding of certain modules, like the visual cortex. We know enough about the hippocampus in animals that scientists have recreated part of it artificially to restore memory in rats.
Intelligence is a type of functionality, like the ability to take long jumps, but far more complicated. It’s not mystically different than any other form of complex specialized behavior — it’s still based around noisy neural firing patterns in the brain. To say that we have to exactly copy a human brain to produce true intelligence, if that is what Knapp and Stross are thinking, is anthropocentric in the extreme. Did we need to copy a bird to produce flight? Did we need to copy a fish to produce a submarine? Did we need to copy a horse to produce a car? No, no, and no. Intelligence is not mystically different.
We already have a model for AI that is absolutely nothing like a human — AIXI.
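For the curious, AIXI is defined by a single expectimax expression. Roughly, following Hutter (2005) and writing it from memory, the agent picks the action that maximizes total reward out to horizon m, averaged over all programs q (run on a universal machine U) that are consistent with its interaction history of actions, observations, and rewards, each weighted by the universal prior 2^(-length of q):

```latex
% AIXI's action choice at time t: expectimax over possible futures,
% with environments (programs q on a universal machine U) weighted
% by the universal prior 2^{-\ell(q)}, where \ell(q) is q's length
% and m is the agent's horizon.
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \left[ r_t + \cdots + r_m \right]
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

It is uncomputable, but it is a fully specified definition of a generally intelligent agent that makes no reference whatsoever to brains.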
Being able to quantify the difference between human and machine intelligence would be helpful for machine learning, but I’m not sure why it would be absolutely necessary for any form of progress.
As for universal measures of intelligence, here’s Shane Legg taking a stab at it:
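The proposal in Legg and Hutter’s “Universal Intelligence” paper, if I’m recalling it correctly, scores an agent π by the expected value V it achieves across every computable environment μ, weighted by each environment’s Kolmogorov complexity K(μ), so that simple environments count for more:

```latex
% Legg & Hutter's universal intelligence of an agent \pi:
% expected value V_\mu^\pi achieved in each computable environment \mu,
% weighted by the environment's Kolmogorov complexity K(\mu).
\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```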
Even if we aren’t there yet, Knapp and Stross should be cheering on the incremental effort, not standing on the sidelines frowning and toasting the eternal superiority of Homo sapiens sapiens. Wherever AI is today, can’t we agree that we should make a responsible effort towards beneficial AI? Isn’t that important? Even if we think true AI is a million years away, because anything closer would mean that human intelligence isn’t as complicated and mystical as we had wished?
As to Anissimov’s second point, it’s definitely worth noting that computers don’t play “human-level” chess. Although computers are competitive with grandmasters, they aren’t truly intelligent in a general sense; they are, basically, chess-solving machines. And while they’re superior at tactics, they are woefully deficient at strategy, which is why grandmasters still win or draw against computers.
This is true, but who cares? I didn’t say they were truly intelligent in the general sense. That’s what is being worked towards, though.
Now, I don’t doubt that computers are going to get better and smarter in the coming decades. But there are more than a few limitations on human-level AI, not the least of which are the actual physical limitations coming with the end of Moore’s Law and the simple fact that, in the realm of science, we’re only just beginning to understand what intelligence, consciousness, and sentience even are, and that’s going to be a fundamental limitation on artificial intelligence for a long time to come. Personally, I think that’s going to be the case for centuries.
Let’s build a computer with true intelligence first, and worry about “consciousness” and “sentience” later, then.