IA vs. AI, Again

My work was recently cited by Remi Sussan at the Greek online journal Re-public, in an article “Transhumanism and Hermetism”. The relevant passage says:

Are there any cybermarcionists? We see them taking shape in the various currents of “singularity” thought, which suspect that the real birth of transhumanity will occur with the creation of an intelligence superior to the human being. This superior intelligence could be a mutant human being, but for some,[10] the human brain is structurally too defective to allow the passage to a superior level. Only an artificial intelligence, maximized from the beginning, can enable this “singularity”.

The citation, [10], refers to a footnote that says, “See, for example, Michael Anissimov, Forecasting Superintelligence: the Technological Singularity.” I want to point out that nowhere in my article do I say that “the human brain is structurally too defective to allow the passage to a superior level.” That claim is completely false. I only say that it seems likely that AI will cross the line into superintelligence before intelligence amplification (IA) does, for various reasons listed in the article. I absolutely do think that the structure of the human brain is amenable to intelligence enhancement, but enhancing the incredibly complex biological brain is a difficult challenge that would be most easily and safely approached with the assistance of strong AI.

I would further like to add that I believe there is a very strong bias that leads people to exaggerate the potential of IA over AI in their minds, because IA 1) is more personal, 2) has greater positive affect, 3) is more easily imaginable, 4) is more flattering to humanity, and 5) is featured more heavily in science fiction, to name a few reasons. I consider it possible that IA will lead to superintelligence before AI, and I am in favor of ethically cautious IA research, but it seems (to me, anyway) like seed AI is more likely to lead to strong superintelligence before any IA technologies do.

Let me also point out, though, that there may be people in favor of IA over AI that have successfully sidestepped the above biases. Such people should feel free to share their arguments.

Comments

  1. Thuris

    Certainly the ceiling for AI, when looking out several decades, would seem to be higher than that for brain enhancement. Still, it might be hasty to declare outright that a machine will be first to attain the equivalent of a 220 IQ (S.D.), to pick a goal just out of our current reach.

    I just hope we can get ourselves a bit smarter and a bit more “right with the world” by the time a recursively improving AI shows up, because our guidance of it, example for it and decisions about it would be that much wiser.
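    For scale, the IQ 220 target can be located on the normal curve with a one-line calculation (assuming the conventional mean of 100 and a standard deviation of 15, which is presumably what the “(S.D.)” refers to; these parameters are my assumption, not stated in the comment):

    ```python
    from math import erfc, sqrt

    def upper_tail(iq, mean=100.0, sd=15.0):
        """P(IQ >= iq) under a normal model of the IQ distribution."""
        z = (iq - mean) / sd
        return 0.5 * erfc(z / sqrt(2))

    p = upper_tail(220)  # IQ 220 sits 8 standard deviations above the mean
    ```

    At eight standard deviations the tail probability is on the order of 10^-16, far rarer than one person in the world’s population, which is one way of reading “just out of our current reach.”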

  2. > seems (to me, anyway) like seed AI is more likely to lead to strong superintelligence before any IA technologies.

    Man, once you get into trying to design AI, you realize just how dauntingly tough it is!

  3. I will once again put in my two cents: I am of the opinion that the development of near-human-equivalent AGI is all that is required to develop viable means of augmenting a given human being.

    If the goal is simply a greater-than-human intelligence, IA seems the more likely route, as it simply requires less total labor to accomplish than the creation of a greater-than-human AGI (and the roadmaps for both are eerily similar).

    After all, how is a recursive intelligence that starts out *less* intelligent than us even a concern? There is nothing in this statement which indicates it could recurse to “post-human” levels.

    There is indeed a great deal of bias here. I admit to mine — Michael, can you admit to yours?

  4. Ian, I’m not sure I’m biased — I have no particular reason to favor AI over IA ideologically. It would be great if it turned out that IA was definitively easier. I am more receptive to the idea of IA than I used to be. I am willing to totally change my mind if given the right evidence. I also don’t claim that every IA advocate is biased, just that strong biases tend to exist. You may very well have an unbiased evaluation that IA is easier than AI.

    I realize that AI is extremely difficult, but I think that substantially enhancing human intelligence would also be quite difficult. The only nearly sure-fire way of doing it that I can think of would be iterated embryo selection, which could take 50 or more years to roll out.

    Having a position doesn’t necessarily entail bias — I mean bias here in the sense of information that shouldn’t be entering into the evaluation. There is a body of evidence for or against both propositions, and our disagreement may stem from possessing different parts of that evidence, not from any fundamental bias. If we were to share the same evidence, our positions might converge.

  5. Arnie Christianson

    I understand that AI, as envisioned by Michael and others, would necessarily be the ultimate end-product “last invention” superintelligence, but I don’t fully understand why IA, which to me seems to be pretty ubiquitous even now (and can only continue to improve through ongoing hardware and software augmentation), is not looked upon as a “gateway” to true general AI. I understand its inherent limitations in the big picture, but it seems as if many in the AI field dismiss it out of hand.

    My point, which I am sure has been made before so apologies for the redundancy, is that it would appear that at least in the near term it is going to be easier to augment existing meat-based intelligence as a “bridge” to full-on non-biological AI.

    Neurologically, at least, there doesn’t seem to be anything overly miraculous or mysterious about amplifying our existing intelligence quite a bit. I would say that we’ve done so exponentially already just in the last 15 years or so. Not in pure IQ, perhaps, but in pure speed and power.

    My knowledge of neurology exceeds my knowledge of AI/CompSci, so apologies if this question seems elementary to some.

  6. > “I realize that AI is extremely difficult, but I think that substantially enhancing human intelligence would also be quite difficult. The only nearly sure-fire way of doing it that I can think of would be iterated embryo selection, which could take 50 or more years to roll out.”

    Actually, it may be possible to do iterated embryo selection quite quickly. You can give a child an IQ test fairly early, and women can give birth at 12, right? So in 50 years, you could have (with a $100 billion total budget) 5 generations with a population size of maybe a few thousand… This could work especially well if you combined artificial selection with genetic engineering.

  7. > I realize that AI is extremely difficult, but I think that substantially enhancing human intelligence would also be quite difficult. The only nearly sure-fire way of doing it that I can think of would be iterated embryo selection, which could take 50 or more years to roll out.

    I can think of another route which would be rather simpler, and has the added benefit of already being actively worked on by at least one researcher.

    Neuroprosthetic integration at the hippocampus. We already know that the brain tends to use tools as extensions of the body; we also know that the brain will adapt sensory regions of the brain to adjust for sensory inputs. We know, too, that the brain routinely integrates new cells throughout its entire lifespan. Therefore, the only real stumbling blocks are the creation of a “translation” device that can translate between digital and “neuronal” inputs/outputs, and suppressing the formation of scar tissue around the implant.

    This is a workable road-map. This is theoretically fully understandable. By contrast, with AI we still don’t know enough to even know what it is that we don’t know about AGI. Dr. Berger, for example, is working on his artificial neurons for people with localized brain damage.

    And before you try to stop me by saying that this would not be “truly” general augmentation of human-level intelligence… I’ll just point out that a human being capable of memorizing at nearly-instantaneous rates, who is also capable of developing rote skills at an almost instantaneous rate, and is possessed of near-perfect recall, as well as computer-fast computational ability and logic processing… is indeed a transhuman intelligence.

    I cannot, personally, see the technological blocks to this lasting longer than Dr. Berger’s projected time to completion of his project. Or, failing that, longer than the CTO of Intel’s promised date of the Singularity. (Which, I’ll remind you, is just 34 years away: 2043.)

    But I’ll stop now; ’cause all I’m doing is grinding an axe. :)
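    On the “translation” device itself: as a purely illustrative toy (this is not Dr. Berger’s actual approach, and the scheme and numbers are my own assumptions), the simplest digital-to-neuronal coding scheme is rate coding, where a scalar value is carried by spike frequency:

    ```python
    import random

    def encode_rate(value, max_rate=100.0, duration=1.0, dt=0.001, seed=0):
        """Encode a scalar in [0, 1] as a Poisson spike train (rate coding).

        Returns a list of spike times in seconds.
        """
        rng = random.Random(seed)
        rate = value * max_rate  # target firing rate in Hz
        steps = int(duration / dt)
        return [i * dt for i in range(steps) if rng.random() < rate * dt]

    def decode_rate(spikes, max_rate=100.0, duration=1.0):
        """Recover the scalar from the observed spike count."""
        return len(spikes) / (duration * max_rate)

    spikes = encode_rate(0.7)
    estimate = decode_rate(spikes)  # close to 0.7, up to Poisson noise
    ```

    A real interface would face everything listed above on top of this (electrode placement, scar-tissue suppression, cortical adaptation), but the encoding/decoding problem itself is at least well posed.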

  8. jordan

    My intuition says that evolution has likely pushed in multiple directions to increase intelligence. I wouldn’t be surprised if there are dozens of different genes responsible for increased intelligence, of which no one person has even a majority. If that’s the case then genetic engineering could yield a huge spike in intelligence.

    At Roko: why wait for the girls to reach puberty? My understanding is that women are born with all their eggs. You could extract them before puberty and use a surrogate mother. If the IQ test is at age 5, you can get in 10 generations in 50 years. Also, while still incredibly unethical, it actually seems plausible to me that some country would carry this out; I can’t imagine any group impregnating 12-year-olds in giant batches.
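    The generation counts being traded in this thread follow from a one-line calculation (the cycle lengths are the commenters’ assumptions, not established figures: roughly 13 years per cycle if each mother gives birth at 12, versus roughly 5 years if eggs are extracted young and carried by surrogates):

    ```python
    def selected_generations(horizon_years, cycle_years):
        # Number of complete selection cycles that fit in the horizon;
        # the founding generation is not counted.
        return horizon_years // cycle_years

    birth_at_12 = selected_generations(50, 13)    # 3 selected generations
    egg_extraction = selected_generations(50, 5)  # 10 selected generations
    ```

    Shortening the cycle, not the horizon, is what does the work here; how much IQ each cycle of selection actually buys is a separate question.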

  9. This one deserves a bump.
