Is Smarter-than-Human Intelligence Possible?

Florian Widder, who often sends me interesting links, pointed me to an interview that Russell Blackford recently conducted with Greg Egan. The excerpt he mentioned concerns the issue of smartness and whether qualitatively smarter-than-human intelligence is possible:

I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.
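Egan's appeal to Turing completeness can be made concrete with a toy simulator: a fixed finite control driving an unbounded tape (his "very large notebook"). The sketch below is purely illustrative and not from the interview; the `run_tm` helper and the transition table are my own. This particular table increments a binary number, but the same simulation loop can run any transition table you hand it, which is the point of universality.

```python
# A minimal Turing machine simulator: a small fixed control loop plus an
# unbounded tape. Illustrative sketch only; names are not from the interview.

def run_tm(transitions, tape, state="right", blank="_", halt="done"):
    tape = dict(enumerate(tape))   # sparse tape, unbounded in both directions
    head = 0
    while state != halt:
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Transition table for binary increment: scan right to the end of the
# number, then propagate the carry back to the left.
INCREMENT = {
    ("right", "0"): ("right", "0", "R"),
    ("right", "1"): ("right", "1", "R"),
    ("right", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("done",  "1", "L"),
    ("carry", "_"): ("done",  "1", "L"),
}

print(run_tm(INCREMENT, "1011"))  # 1011 + 1 = 1100
```

The simulator itself knows nothing about arithmetic; all the "smarts" live in the table it is given, which is roughly the distinction the later comments draw between the finite control (the person) and the notebook (the program).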

I regard this as garden-variety anthropocentrism and basically the heliocentrism of cognitive science. It dovetails perfectly with theological notions of humanity. The simplest assumption is that humans are not the center of the cognitive universe. The notion that we primitive humans are basically equal to all higher forms of intelligence, even if they are Jupiter Brains with quintillions of times our computational capacity that can think individual thoughts with more Kolmogorov complexity than the entire human race, is pretty silly.

The transition from early hominids to humans produced a qualitative change in smartness — why should we assume we’re the end of the road? Just as there are optical illusions that our minds aren’t sophisticated enough to see through (though I’m sure we can come up with dumb excuses), there are cognitive illusions that humans are programmed to be fooled by. There are so many of them that a huge field of study is devoted to them — heuristics and biases.

Without qualitative improvements to the structure of intelligence, we will just keep making the same mistakes, only faster. Experiments have shown that you cannot train humans to avoid certain measurable, predictable statistical errors in reasoning. They just keep making them again and again. In the best case, they can avoid them only when they are using a computer program set up to integrate the data without making the mistake. These basic findings prove that qualitative improvements in intelligence are possible, and that all minds are not created equal.

A person with an IQ of 100 cannot understand certain concepts that people with an IQ of 140 can understand, no matter how much time and how many notebooks they have. Intelligence means being able to get the right answer the first time, not after a million tries. Even if you could program a human being to ape the understanding of superintelligent thoughts, they wouldn’t be able to come up with equivalent thoughts on their own or compare those thoughts to other, similarly complex thoughts.

Comments

  1. “I think there’s a limit to this process of Copernican dethronement:”

    —> translates to

    “I hope there’s a limit to this process of Copernican dethronement! Let me see what arguments I can find that support that conclusion.”

    See Motivated Cognition

  2. bob

    Just like a human, you can program a computer to be fooled. Any higher intelligence is going to be looking at the same physical laws we are. We are not the center of the universe, but the universe is the same wherever you look. So it is possible that we would be able to understand anything that that “higher” intelligence could come up with… given enough time and processing power. And that is all we really can talk about right now: processing power.

    The fact of the matter is that we have not made even the first step toward the singularity. There are no intelligent machines, and until there are, we are only arguing about the ability of humans to utilize more processing power. There are machines programmed, by people, to look intelligent. Until a computer wakes up and programs another computer completely independently of human interaction, there really is only processing power, which may double exponentially, but you only need to play a video game to see the current intellectual reach of computers.

  3. Michael, you can’t attack a vague statement like this. There is a tremendous chance that some of the assumptions (disambiguations) you have to make in the process are incorrect about what Egan meant, and so in specific arguments you are slaying a strawman rather than something anybody considers seriously, even if overall your point is valid against the intuitive impression that his remark leaves.

    For example, if you do have a really big notebook and enough time, you can just be the finite-automaton part of a Turing machine, storing all the complexity in the notebook — it’s hardly an optimal way of doing this, but I’m sure it’s possible in principle, because you can reduce the programming of a computer and the later execution of the program to this process. The trick, of course, is that the notebook is going to be the actual mind behind the process, not the finite machine that is you. And Egan clearly doesn’t consider the notebook to be made of paper.

    There are a lot of appeals to intuition in this post to counterbalance Egan’s remark. The appeals are no less anthropomorphic, and in light of the perspective I described in the paragraph above, they are made on the wrong level. To top things off, you are referring to a ridiculously obsolete and, from the current perspective, dangerously naive (even if inspiring) “Staring into the Singularity”.

  4. A person with an IQ of 100 cannot understand certain concepts that people with an IQ of 140 can understand, no matter how much time and how many notebooks they have.

    That’s an interesting statement, and on immediate consideration I thought it might be plausible, but I can’t think of any examples, or even evidence that it might be true.

    Do you have any?

  5. Benjamin Abbott

    I too question that notion, as well as the validity of IQ tests in general. Both the nature of intelligence and how to measure it remain contentious topics.

  6. Improbus

    “Is Smarter-than-Human Intelligence Possible?”

    One would hope so. Most of the humans I know put very little effort into being smart. I blame it on our primate heritage. If it weren’t for the minority of us who seek new and novel experiences, we, as a race, would still be living in caves, wearing skins, and banging rocks together.

  7. The mysticism that someone with an IQ of 140 can grasp something that somebody with an IQ of 139 can’t … is a symptom of a lower IQ, I would say.

    Even a man with a lower IQ CAN understand _this_ argument and see that his view was wrong.

  8. It takes a certain amount of additional explanation for someone with a one-point-lower IQ, but he will understand it eventually if he wants to.

    We can extend this chain all the way down to Egan, and he will understand it too. If he wants to.

  9. >I regard this as garden-variety anthropocentrism and basically the heliocentrism of cognitive science.

    Surely that’s “the geocentrism of cognitive science”, though I could see a case for either.

