Responding to Alex Knapp at Forbes

From Mr. Knapp’s recent post:

If Stross’ objections turn out to be a problem in AI development, the “workaround” is to create generally intelligent AI that doesn’t depend on primate embodiment or adaptations. Couldn’t the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

But Anissimov’s first point here is just magical thinking. At the present time, a lot of the ways that human beings think are simply unknown. To argue that we can simply “workaround” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence. Indeed, it’s become pretty clear that even human thinking and animal thinking are quite different. For example, it’s clear that apes, octopuses, dolphins and even parrots are, to certain degrees, quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different from that of humans. And I don’t mean on a different level — I mean actually different. On this point, I’d highly recommend reading Temple Grandin, who’s done some brilliant work on how animals and neurotypical humans are starkly different in their perceptions of the same environment.

My first point is hardly magical thinking — all of machine learning works to create learning systems that do not copy the animal learning process, which is in any case only known at a vague level. Does Knapp know anything about the way existing AI works? It’s not based around trying to copy humans, but often around improving this abstract mathematical quality called inference. (Sometimes it’s just about making a collection of heuristics and custom-built algorithms, but again, that isn’t copying humans.) Approximations of Solomonoff induction work quite well on a variety of problems, regardless of the state of comparing human and machine intelligence. Many “AI would have to be exactly like humans to work, because humans are so awesome, so there” proponents, like Knapp and Stross, talk as if Solomonoff induction doesn’t exist.
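
To make that concrete, here is a toy sketch of my own (not any production system, and far cruder than real approximations like context-tree weighting) of the basic move behind approximating Solomonoff induction: keep a Bayesian mixture over a bounded class of models, give each a prior weight of roughly 2^-length, and reweight by likelihood as data arrives. The hypothesis class and its “description lengths” below are invented for illustration. Nothing in it copies a human.

    # Toy approximation of Solomonoff induction: a Bayesian mixture over a
    # small, hand-picked hypothesis class, weighted by a 2^-complexity prior.
    from fractions import Fraction

    # Each hypothesis: (description length in bits, predictor).
    # predictor(history) returns P(next bit = 1).
    hypotheses = [
        (2, lambda h: Fraction(1, 2)),    # fair coin
        (4, lambda h: Fraction(9, 10)),   # coin biased toward 1
        (6, lambda h: Fraction(1) if len(h) % 2 == 0 else Fraction(0)),  # 1010...
    ]

    # Prior weight 2^-length for each hypothesis.
    weights = [Fraction(1, 2 ** length) for length, _ in hypotheses]

    def predict(history):
        """Mixture probability that the next bit is 1."""
        total = sum(weights)
        return sum(w * p(history) for w, (_, p) in zip(weights, hypotheses)) / total

    def update(history, bit):
        """Reweight each hypothesis by the likelihood it gave the observed bit."""
        for i, (_, p) in enumerate(hypotheses):
            q = p(history)
            weights[i] *= q if bit == 1 else 1 - q

    history = []
    for bit in [1, 0, 1, 0, 1, 0]:
        print(f"P(next=1) = {float(predict(history)):.3f}")
        update(history, bit)
        history.append(bit)
    # The alternating-pattern hypothesis quickly dominates the mixture.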

How much or how little of the human brain is understood is quite a subjective question. The MIT Encyclopedia of the Cognitive Sciences is over 1,000 pages long and full of information about how the brain works. Bayesian Brain is another tome that discusses how the brain works, mathematically:

A Bayesian approach can contribute to an understanding of the brain on multiple levels, by giving normative predictions about how an ideal sensory system should combine prior knowledge and observation, by providing mechanistic interpretation of the dynamic functioning of the brain circuit, and by suggesting optimal ways of deciphering experimental data. Bayesian Brain brings together contributions from both experimental and theoretical neuroscientists that examine the brain mechanisms of perception, decision making, and motor control according to the concepts of Bayesian estimation.

After an overview of the mathematical concepts, including Bayes’ theorem, that are basic to understanding the approaches discussed, contributors discuss how Bayesian concepts can be used for interpretation of such neurobiological data as neural spikes and functional brain imaging. Next, contributors examine the modeling of sensory processing, including the neural coding of information about the outside world. Finally, contributors explore dynamic processes for proper behaviors, including the mathematics of the speed and accuracy of perceptual decisions and neural models of belief propagation.

The fundamentals of how the brain works, as far as I see, are known, not unknown. We know that neurons fire in Bayesian patterns in response to external stimuli and internal connection weights. We know the brain is divided up into functional modules, and have a quite detailed understanding of certain modules, like the visual cortex. We know enough about the hippocampus in animals that scientists have recreated a part of it to restore rat memory.
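
As a minimal illustration of the kind of normative prediction that literature makes (my own toy example, not taken from the book): when prior knowledge and a noisy observation are both Gaussian, the ideal observer’s estimate is a precision-weighted average of the two, and psychophysics experiments test human percepts against exactly this sort of formula. The numbers below are invented.

    # Toy "Bayesian brain" calculation: optimally combine a Gaussian prior
    # with a noisy Gaussian observation.
    def combine(prior_mean, prior_var, obs, obs_var):
        """Posterior mean and variance for a Gaussian prior times a Gaussian likelihood."""
        prior_precision = 1.0 / prior_var
        obs_precision = 1.0 / obs_var
        post_var = 1.0 / (prior_precision + obs_precision)
        post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs)
        return post_mean, post_var

    # Prior: the object is around 10 cm away; a noisy sensor reads 14 cm.
    mean, var = combine(prior_mean=10.0, prior_var=4.0, obs=14.0, obs_var=1.0)
    print(f"posterior mean = {mean:.2f} cm, variance = {var:.2f}")
    # posterior mean = 13.20 cm -- pulled toward the more reliable cue.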

Intelligence is a type of functionality, like the ability to take long jumps, but far more complicated. It’s not mystically different from any other form of complex specialized behavior — it’s still based around noisy neural firing patterns in the brain. To say that we have to exactly copy a human brain to produce true intelligence, if that is what Knapp and Stross are thinking, is anthropocentric in the extreme. Did we need to copy a bird to produce flight? Did we need to copy a fish to produce a submarine? Did we need to copy a horse to produce a car? No, no, and no. Intelligence is not mystically different.

We already have a model for AI that is absolutely nothing like a human — AIXI.
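
For the curious, Hutter defines AIXI by an expectimax expression over all programs consistent with the agent’s interaction history, each weighted by a 2^-length prior; sketching his standard formulation in LaTeX:

    a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
      \left[ r_k + \cdots + r_m \right]
      \sum_{q \,:\, U(q, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Each action maximizes expected future reward under a Solomonoff-style mixture over every environment the universal machine U can express, which is exactly why AIXI is uncomputable and exactly why it looks nothing like a brain.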

Being able to quantify the difference between human and machine intelligence would be helpful for machine learning, but I’m not sure why it would be absolutely necessary for any form of progress.

As for universal measures of intelligence, here’s Shane Legg, together with Marcus Hutter, taking a stab at it in their “Universal Intelligence” work:
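
Their measure weights an agent’s performance in every computable environment by that environment’s complexity; in their notation:

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the class of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected total reward agent \pi earns in \mu. An agent scores highly by doing well across many environments, with simple environments counting the most.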

Even if we aren’t there yet, Knapp and Stross should be cheering on the incremental effort, not standing on the sidelines and frowning, making toasts to the eternal superiority of Homo sapiens sapiens. Wherever AI is today, can’t we agree that we should make a responsible effort towards beneficial AI? Isn’t that important? Even if we think true AI is a million years away, because if it were closer, that would mean that human intelligence isn’t as complicated and mystical as we had wished?

As to Anissimov’s second point, it’s definitely worth noting that computers don’t play “human-level” chess. Although computers are competitive with grandmasters, they aren’t truly intelligent in a general sense – they are, basically, chess-solving machines. And while they’re superior at tactics, they are woefully deficient at strategy, which is why grandmasters can still beat or draw against computers.

This is true, but who cares? I didn’t say they were truly intelligent in the general sense. That’s what is being worked towards, though.
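
The tactics-versus-strategy split Knapp mentions falls straight out of how classical engines work: deep search sees every tactic within its horizon and nothing beyond it. Here is a bare sketch of that core idea, with legal_moves, apply, and evaluate left as placeholders for a real game implementation:

    # Depth-limited negamax, the heart of classical chess engines.
    # Within its search depth the engine sees every tactic; past that
    # horizon it leans on a crude static evaluation, which is where the
    # strategic weakness comes from.
    def negamax(position, depth, legal_moves, apply, evaluate):
        """Best achievable score for the side to move, searching `depth` plies."""
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)  # static guess at the horizon
        best = float("-inf")
        for move in moves:
            # Opponent's best reply, negated to our point of view.
            score = -negamax(apply(position, move), depth - 1,
                             legal_moves, apply, evaluate)
            best = max(best, score)
        return best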

Now, I don’t doubt that computers are going to get better and smarter in the coming decades. But there are more than a few limitations on human-level AI, not the least of which are the actual physical limitations coming with the end of Moore’s Law and the simple fact that, in the realm of science, we’re only just beginning to understand what intelligence, consciousness, and sentience even are, and that’s going to be a fundamental limitation on artificial intelligence for a long time to come. Personally, I think that’s going to be the case for centuries.

Let’s build a computer with true intelligence first, and worry about “consciousness” and “sentience” later, then.

Comments

  1. Matt

    Your post would read a lot better without that last paragraph. It goes from addressing the arguments to a lazy personal attack and argumentum ad populum.

  2. Alexander Kruel

    Nobody doubts that, in principle, you don’t need to imitate human intelligence to get artificial general intelligence. The point is rather that a useful approximation of AIXI may be much harder to achieve than understanding human intelligence.

    AIXI is as far from real-world human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn’t get you anywhere in terms of real-world general intelligence, just as showing that in some abstract sense you can simulate every physical process won’t let you upload yourself into the Matrix.

  3. > Solomonoff induction works quite well on a variety of problems

    Solomonoff induction is a formalism, a mathematical model. It works perfectly on every problem (defining “problem” in a certain way). But it is not a machine or an implementable algorithm; it is not even computable.
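
    To make the formalism explicit (a standard statement of it, added here for reference): the Solomonoff prior of a binary string x is

        M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

    the sum over every program p whose output on a universal prefix machine U begins with x, each weighted by two to the minus its length \ell(p). The sum ranges over infinitely many programs, which is exactly why it cannot be computed, only approximated.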

    Solomonoff induction is not an alternative to human thought, but rather helps us understand all thought, including the human kind. It formalizes Occam’s razor and Epicurus’ principle that one should consider all possibilities.

    Of course, humans are often irrational, and some machine learning techniques apply Solomonoff-based algorithms more directly, perhaps, than humans do.

  4. peterpicklepecker

    “My first point is hardly magical thinking — all of machine learning works to create learning systems that do not copy the animal learning process, which is in any case only known at a vague level. (snipped ignorant comment) It’s not based around trying to copy humans, but often around improving this abstract mathematical quality called inference. (snipped ignorant comment)”

    First, this is not correct: some of the most successful machine learning algorithms are based on human-inspired or animal-inspired designs. The idealized models you point out have their place, but they have yet to solve the complex problems humans solve every day.
    http://www-formal.stanford.edu/leora/egg-studialogica.ps
    Here is a great example of a simple problem; look at the issues with using formal models to solve it. (You probably shouldn’t try to read the paper; it is really over your head.)

    I would advise you look beyond AIXI and investigate other AI work.

    “How much or how little of the human brain is understood is quite a subjective question. The MIT Encyclopedia of the Cognitive Sciences is over 1,000 pages long and full of information about how the brain works. Bayesian Brain is another tome that discusses how the brain works, mathematically:”

    Bayesian brain theory is one of many attempts to understand the brain. The MIT Encyclopedia of the Cognitive Sciences is hardly proof of current understanding in neuroscience. This poor excuse for a pretense at being an expert impresses nobody. People have made everything from Bayesian logic to combinators fit what happens in the brain. You need to do your homework, starting off by going to school.

    “The fundamentals of how the brain works, as far as I see, are known, not unknown. (snipped ignorant commentary about what “we know”)”

    Snip rest of post unread.

    Hmmm… perhaps a better read of current neuroscience would help, since you have obviously failed to see all the current debates on this very issue. There are still many unanswered questions about what is going on inside neurons, as far as specific neuronal computation and how said computations are chosen. It would also be wise not to make such statements when you yourself have no credentials in the field (did you even go to college?).

    Does SIAI really support this? This kind of ignorant commentary really makes SIAI look stupid, not to mention making an ass of yourself.

    This is yet another swing and a miss. Insert some belittling remark here, along with the academic community’s ridicule of you and your tiny little mind.

    • Luke

      Maybe I’m just wasting my time addressing you, but even assuming all of your commentary is correct (and I have no experience in most of these fields, so I am ill-equipped to determine whether it is), how could you possibly expect anyone to take you seriously when you not only use a relentless ad hominem sledgehammer, but state outright that you didn’t read the whole post?

      Am I being helpful, or is this guy a troll?

  5. Alex Knapp

    Does Knapp know anything about the way existing AI works? It’s not based around trying to copy humans, but often around improving this abstract mathematical quality called inference.

    I think you missed my point. My point is not that AI has to emulate how the brain works, but rather that before you can design a generalized artificial intelligence, you have to have at least a rough idea of what you mean by that. Right now, the mechanics of general intelligence in humans are, actually, mostly unknown. Two fascinating strands of neuroscience have emerged as areas of study in the past two decades. The first is that animal brains and intelligence are much more capable and more complicated than we thought even in the 80s.

    The second is that humans, on a macro level, think very differently from animals, even the smartest problem-solving animals. We haven’t begun to scratch the surface.

    To use an analogy with flight, the principles of how birds flew through the air were known for centuries before Kitty Hawk. And scientists knew a great deal about lift, airflow, etc. well before the first plane was built by studying birds. Sure, planes don’t solve the flight problem the way birds do, but they rely on the same fundamental scientific principles.

    But before scientists knew anything about birds, we basically knew: (a) they can fly, (b) it has something to do with wings and (c) possibly the feathers, too. At that stage, you couldn’t begin to design a plane.

    It’s the same way with human intelligence. Very simplistically, we know that (a) humans have generalized intelligence, (b) it has something to do with the brain and (c) possibly the endocrine system as well.

    The above paragraph is a vast oversimplification, obviously, but the point is to analogize. Right now, we’re at the “wings and feathers” stage of understanding the science of intelligence. So I find it unlikely that a solution can be engineered until we understand more of what intelligence is.

    Now, once we understand intelligence, and if (and I think this is a big if) it can be reproduced in silicon, then the resulting AGI doesn’t necessarily have to look like the brain, any more than a plane looks like a bird. But the fundamental principles still have to be addressed. And we’re just not there yet.

    How much or how little of the human brain is understood is quite a subjective question. The MIT Encyclopedia of the Cognitive Sciences is over 1,000 pages long and full of information about how the brain works.

    I correspond with lots of neuroscientists. Virtually all of them tell me that the big questions remain unanswered and will for quite some time.

    We already have a model for AI that is absolutely nothing like a human — AIXI.

    AIXI is a thought experiment, not an AI model. It’s not even designed to operate in a world with the constraints of our physical laws.

    Even if we aren’t there yet, Knapp and Stross should be cheering on the incremental effort, not standing on the sidelines and frowning, making toasts to the eternal superiority of Homo sapiens sapiens.

    My point is to recognize that the way machine intelligence operates, and will for the foreseeable future, is in a manner that is complementary to human intelligence. And I’m fine with that. I’m excited by AI research. I just find it unlikely, given the constraints of physical laws as we understand them today, that an AGI can be expected in the near term, if ever.

    I am, however, excited at the prospect of using computers to free humans from grunt work drudgery that computers are better at, so humans can focus on the kinds of thinking that they’re good at.

  6. Intelligence in Black and White.
    The term intelligence has its Latin root in the verb intelligere, which is composed of intus, meaning “within,” and legere, meaning “to read.” A literal translation would then mean to read within, or to realize, understand, be aware of, or know. Intelligence can be defined as the understanding of information in the form of thought, conclusion, or solution to a problem; the logical and intuitive discernment of the sum of the parts, having an abstract rearrangement in the mind leading to a new thought of a different order.

    There is intelligence, and there is survival instinct, which is not the same thing. Survival instinct can be found in any living form that possesses a mind stream or awareness of self and others. Intelligence, as defined, can only be found in the human species, as we are a creation of a higher level of consciousness.

    Societies and civilizations are created through cognitive thought, as the human mind is driven to form structures for organization and a set of social and moral rules to sustain such structures and create a sense of order in which the mind thrives and finds instinct-driven security. Abraham Maslow defined the hierarchy of human needs as a pyramid model with physiological needs at the bottom, followed up by safety needs, love and belonging, esteem, and self-actualization at the top. This is a model of psychological needs in which humans find our sense of achievement, leading to a feeling of what we describe as happiness.

    Elaborating further on the definition of intelligence, it can be classified into different subgroups: psychological intelligence, the cognitive capability of learning and relating to others; biological intelligence, the innate capability of adapting to new circumstances; and operational intelligence, the capability to discern functioning, build blocks, and rearrange them when given a set of conditions. The classification of the types of intelligence is a vast subject of study. There are other classifications, such as the one proposed by the American psychologist Howard Gardner with his model of multiple intelligences: visual, bodily-kinesthetic, musical, naturalistic, interpersonal, intrapersonal, linguistic, and logical-mathematical.

    It is important to mention, as an aside, the concept of artificial intelligence, conceived by the human mind as the process of developing a logic system (not alive) that is capable of rational discernment based on a set of Boolean instructions to maximize results when given a problem or task. It is a design of process that follows procedures given an input (in the human world called stimuli) and is capable of building new logic blocks that can become part of its architecture (in the human world called learning). In other words, it is a system capable of executing behavior analogous to that of a human. It is interesting to note that such a system returns a response (output) after searching all possible scenarios and solutions based on formal logic. (Could chaos be writable and “booleanized”? I venture to raise the question.)

    It is worth mentioning that one of the greatest achievements in artificial intelligence is said to be the passing of the “Turing Test,” in which a human cannot tell whether he is talking to another human or an A.I. system. In order for that to happen, the factors of unpredictability and randomness, as well as emotional cues, must be present.

    Given my definition, I would like to raise the question: given the fact that there have been remarkable minds in human history, genius minds in their particular field or in several fields simultaneously, would a superhuman intelligence consist of taking such brilliant, unique minds a step further with nanotechnology enhancements, or could a person with above-average intelligence be enhanced to equal such a genius?

    laura sfiat @Belleartworks
