Replying to Alex Knapp, July 2nd

Does Knapp know anything about the way existing AI works? It’s not based around trying to copy humans, but often around improving this abstract mathematical quality called inference.

I think you missed my point. My point is not that AI has to emulate how the brain works, but rather that before you can design a generalized artificial intelligence, you have to have at least a rough idea of what you mean by that. Right now, the mechanics of general intelligence in humans are, actually, mostly unknown.

Two fascinating strands of neuroscience have become interesting areas of study in the past two decades. The first is that animal brains and animal intelligence are far more capable and more complicated than we thought even in the 80s.

The second is that humans, on a macro level, think very differently from animals, even the smartest problem-solving animals. We haven’t begun to scratch the surface.

Based on the cognitive science reading I’ve done up to this point, this is false. Every year, scientists discover cognitive abilities in animals that were previously thought to be uniquely human, such as episodic memory or the ability to deliberately trigger traps. Chimps have a “near-human understanding of fire” and complex planning abilities. Articles such as this one in Discover, “Are Humans Really Any Different from Other Animals?”, and this one in New Scientist, “We’re not unique, just at one end of the spectrum”, are typical of scientists who compare human and chimp cognition. It’s practically become a trope for the (often religious) person to say humans and animals are completely different, and for the primatologist or cognitive scientist to reply, “not nearly as much as you think…”

One primate biologist says this:

“If we really want to talk about the big differences between humans and chimps — they’re covered in hair and we’re not,” Taglialatela told LiveScience. “Their brains are about one-third the size of humans’. But the major differences come down to ones of degree, not of kind.”

There’s a really good paper out there on cognitive capacities in humans and chimps, arguing that human cognitive abilities seem to be exaggerations of chimp abilities rather than different in kind, but I can’t find it.

Arguments that chimps and humans are fundamentally different tend to be found more often on Christian apologetics sites than in scientific papers or articles. The overall impression I get is that scientists think chimp cognition and human cognition are different in degree, not in kind. There are humans out there so dumb that chimps are probably more clever than they are in many important dimensions. Certainly if Homo heidelbergensis and Neanderthals were still walking around, we would have even more evidence that the difference between humans and chimps is one of degree, not kind.

Another point is that even if humans did think in a radically different way than animals do, why would that automatically mean AI is more difficult? We already have AI that utterly defeats humans in narrow domains traditionally seen as representative of complex thought, no magical insights necessary.

Yet another possibility is an AI that very effectively gathers resources and builds copies of itself, yet does not do art or music. An AI that lacks many dimensions of human thought could still be a major concern with the right competencies.

But before scientists knew anything about birds, we basically knew: (a) they can fly, (b) it has something to do with wings and (c) possibly the feathers, too. At that stage, you couldn’t begin to design a plane. It’s the same way with human intelligence. Very simplistically, we know that (a) humans have generalized intelligence, (b) it has something to do with the brain and (c) possibly the endocrine system as well.

I should think that many tens of thousands of cognitive scientists would object to the suggestion that we only know a “few basic things” about intelligence. However, it’s quite subjective and under some interpretations I would agree with you.

The above paragraph is a vast oversimplification, obviously, but the point is to analogize. Right now, we’re at the “wings and feathers” stage of understanding the science of intelligence. So I find it unlikely that a solution can be engineered until we understand more of what intelligence is.

The impression you have here probably correlates with how much cognitive science you read. If you read a lot, then it’s hard not to think of all that we do know about intelligence. Plenty is unknown, but we don’t know how much more needs to be known to build AI. It could be a little, it could be a lot — we have to keep experimenting and trying to build general AI.

Now, once we understand intelligence, and if (and I think this is a big if) it can be reproduced in silicon, then the resulting AGI doesn’t necessarily have to look like the brain, any more than a plane looks like a bird. But the fundamental principles still have to be addressed. And we’re just not there yet.

Yet formalisms of intelligence, like Solomonoff induction, are not particularly algorithmically complicated, just computationally expensive. Gigerenzer and colleagues have shown that many aspects of human decision making rely on “fast and frugal heuristics” that are so simple they can be described in pithy phrases like Take the Best and Take the First. Robyn Dawes has shown how improper linear models regularly outperform “expert” predictors, including medical doctors. Rather than possessing a surplus of cognitive tools for addressing problems and challenges, humans seem to just possess a surplus of overconfidence and arrogance. It is easy to invent problems that humans cannot solve without computer help. Humans are notoriously bad at paying attention to base rates, for instance, even though base rates tend to be the most epistemologically important variable in any reasoning problem. After you read about many dozens of experiments in heuristics and biases research where people embarrass themselves in spectacular fashion, you start to roll your eyes a bit more when people gloat about the primacy of human reasoning.
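
To make the “fast and frugal” point concrete, here is a minimal sketch of Take the Best in Python. The cities and cue values are invented for illustration; the point is just how little machinery the heuristic needs: walk down the cues in order of validity and decide on the first one that discriminates, ignoring everything else.

    # Minimal sketch of Gigerenzer's "Take the Best" heuristic for the
    # classic "which city is larger?" task. The cities and cue values
    # are hypothetical, invented for illustration.

    # Cues ordered from most to least valid.
    CUES = ["has_major_airport", "is_national_capital", "has_university"]

    # 1 = cue present, 0 = cue absent.
    CITIES = {
        "Ashford":  {"has_major_airport": 1, "is_national_capital": 0, "has_university": 1},
        "Brookton": {"has_major_airport": 0, "is_national_capital": 0, "has_university": 1},
    }

    def take_the_best(a, b):
        """Judge which city is larger: decide on the first cue that
        discriminates between the two options and ignore all the rest."""
        for cue in CUES:
            va, vb = CITIES[a][cue], CITIES[b][cue]
            if va != vb:
                return a if va > vb else b
        return None  # no cue discriminates; the heuristic says guess

    print(take_the_best("Ashford", "Brookton"))  # -> Ashford

In Gigerenzer’s studies, rules this frugal often held their own against multiple regression on real prediction tasks, which says more about how little information good decisions usually require than about any brilliance in the rule.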

I correspond with lots of neuroscientists. Virtually all of them tell me that the big questions remain unanswered and will for quite some time.

I correspond with neuroscientists who believe that the brain is complex but that exponentially better tools are helping quickly elucidate many of the important questions. Regardless, AI might be a matter of computer science, not cognitive science. Have you considered that possibility?

AIXI is a thought experiment, not an AI model. It’s not even designed to operate in a world with the constraints of our physical laws.

Sure it is. AIXI is “a Bayesian optimality notion for general reinforcement learning agents”, a yardstick that finite systems can be compared against. It may be that the only reason our brains work at all is because they are approximations of AIXI.
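
For the mathematically inclined, here is a sketch of what that optimality notion looks like, in what I believe is Hutter’s standard notation (check his papers for the precise details). At each step k, AIXI picks the action that maximizes expected future reward, where the expectation runs over every program q that could be generating its observations and rewards, weighted by a Solomonoff prior:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \big[ r_k + \cdots + r_m \big]
           \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here U is a universal Turing machine, the a’s are actions, the o’s and r’s are observations and rewards, m is the planning horizon, and ℓ(q) is the length of program q. The alternating max/sum is just expectimax planning, and the 2^{-ℓ(q)} weighting is what makes it “Bayesian” over all computable environments. No finite machine can compute this exactly, which is precisely why it serves as a yardstick rather than a blueprint.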

My point is to recognize that the way machine intelligence operates, and will for the conceivable future, is in a manner that is complementary to human intelligence. And I’m fine with that. I’m excited by AI research. I just find it unlikely, given the restraints of physical laws as we understand them today, that an AGI can be expected in the near term, if ever.

“If ever”? You must be joking. That’s like saying, “I just find it unlikely, given the restraints of physical laws as we understand them today, that a theory of the vital force that animates animate objects can be expected in the near term, if ever”, or “I just find it unlikely, given the restraints of physical laws as we understand them today, that a theory of aerodynamics that can produce heavier-than-air flying machines can be expected in the near term, if ever”. Why would science figure out how everything else works, but not the mind? You’re setting the mind apart from everything else in nature in a semi-mystical way, in my view.

I am, however, excited at the prospect of using computers to free humans from grunt work drudgery that computers are better at, so humans can focus on the kinds of thinking that they’re good at.

To be pithy, I would argue that humans suck at all kinds of thinking, and any systems that help us approach Bayesian optimality are extremely valuable because humans are so often wrong and overconfident in many problem domains. Our overconfidence in our own reasoning even when it explicitly violates the axioms of probability theory routinely reaches comic levels. In human thinking, 1 + 1 really can equal 3. Probabilities don’t add up to 100%. Events with base rates of ~0.00001%, like fatal airplane crashes, are treated as if their probabilities were thousands of times the actual value. Even the stupidest AIs have a tremendous amount to teach us.

The problem with humans is that we are programmed to violate Bayesian optimality routinely with half-assed heuristics that we inherited because they are “good enough” to keep us alive long enough to reproduce and avoid getting murdered by conspecifics. With AI, you can build a brain that is naturally Bayesian — it wouldn’t have to furrow its brow and try real hard to obey simple probability theory axioms.
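
To illustrate with a toy example (the base rate and test accuracies below are made-up numbers, chosen only to show the effect), here is the entire calculation a naturally Bayesian system would perform without effort, and that humans famously botch when asked to do it in their heads:

    # Bayes' rule for a rare event: P(condition | positive test).
    # All numbers are hypothetical, chosen to illustrate base-rate neglect.

    def posterior(prior, true_positive_rate, false_positive_rate):
        """Posterior probability of the condition given a positive test."""
        p_positive = (true_positive_rate * prior
                      + false_positive_rate * (1 - prior))
        return true_positive_rate * prior / p_positive

    # A "99% accurate" test for a condition with a 0.1% base rate.
    print(posterior(prior=0.001, true_positive_rate=0.99, false_positive_rate=0.01))
    # -> ~0.09: despite the impressive-sounding accuracy, a positive
    #    result means only about a 9% chance of having the condition.

In the heuristics-and-biases literature, most people (including trained physicians in some studies) answer something north of 90% for problems shaped like this; the arithmetic itself is three lines.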

Comments

  1. koreyel

    “It’s practically become a trope for the (often religious) person to say humans and animals are completely different…”

    The trope is part of a larger one: The Copernican principle. And the very species conceit that humans brought to their explanations of the cosmos is quite naturally transferred to all things mundane. Of course it is packaged up in a learned dogma, whereby most of us were scolded in school to never anthropomorphize critters. They do not feel pain, they do not know joy, they do not think. Yoke them and cage them as you please…

    Here is a heads-up: if a conceit permeates our thinking about both the cosmos and the mundane, it can’t help but point to a fundamental flaw in us, one that drenches both our sciences and our religions and, most importantly, our politics and our economics.

  2. Matthew Fuller

    WOW!

    Can you elaborate on this point?

    “It may be that the only reason our brains work at all is because they are approximations of AIXI.”

    It’s mathy, but why not elaborate? Given the difficulty involved, why even say it here? Maybe it’s worth a post on lesswrong? This really sounds like interesting speculation, but it certainly isn’t very convincing from the skeptic’s side of the equation.

    The “if ever” aspect of Alex’s reasoning is a convenient cop-out so he doesn’t have to think very hard about the real issues.

  3. Alexander Kruel

    The real problem is general intelligence. That expert systems are better at certain tasks does not imply that you can combine them into a coherent agency.

    The noisiness of the human brain might be one of the important features that allow it to exhibit general intelligence. Yet the same noise might be the reason that no task a human can accomplish is executed with maximal efficiency. An expert system that features a single stand-alone ability can reach the unique equilibrium for that ability, whereas systems that have not fully relaxed to equilibrium have the characteristics required to exhibit general intelligence. In this sense a decrease in efficiency is a side effect of general intelligence. If you externalize a certain ability into a coherent framework of agency, you decrease its efficiency dramatically. That is the difference between a tool and the ability of the agent that uses the tool.

    Another problem is that general intelligence is largely a result of an interaction between an agent and its environment. It might in principle be possible to arrive at various capabilities by means of induction, but it is only a theoretical possibility given unlimited computational resources. To achieve real-world efficiency you need to rely on slow environmental feedback and make decisions under uncertainty.

    There are many question marks when it comes to the possibility of superhuman intelligence, and many more about the possibility of recursive self-improvement. Most of the arguments in favor of those possibilities solely derive their appeal from vagueness.

  4. Michelle Waters

    But would a brain that’s naturally Bayesian outcompete those who use heuristics? If it is slow, requires gigawatts of power, breaks down after taking a hundred cosmic ray hits, or has other characteristics that don’t work in the real world, it won’t get far.

  5. I think you’re missing my point. I don’t think that AI is impossible, or even that computers are incapable of outperforming humans at certain things.

    What I question is the scientific basis from which artificial general intelligence can be developed. More specifically, my primary criticism of AGI is that we don’t actually know how the mechanism of intelligence works within the human brain (see e.g. Bradley Voytek or PZ Myers). Since we don’t know the underlying physical principles of generalized intelligence, the likelihood that we’ll be able to design an artificial one is pretty small.

    Moreover, electric circuits and silicon chemistry are limited in ways in which the neurochemistry found in the brain is not. The way that computers “think” now is quite different from how humans do, as a necessary consequence of the hardware involved.

    Now, if you want to argue that computers will get smart at things humans are bad at, and therefore be a complement to human intelligence, not only will I not disagree with you, I will politely point out that that’s what I’ve been arguing THE WHOLE TIME.

  6. Sorry. Maybe I’m dumb, but I can’t figure out who’s saying what. You might consider inserting something like [MA] and [AK].

