Friendly AI — May I Check Your Philosophical Baggage?

Say that you’re a venture capitalist and some researcher-entrepreneur comes up to you, pitching an idea for a new battery that costs the same as current lithium-ion batteries but holds 50% more charge. You’d evaluate that technical proposal on technical grounds — reviewing your own knowledge of the physics of batteries as well as consulting others who are scientifically well-versed in the matter to determine whether the proposal is possible. Near-term feasibility would also enter into the equation — if the battery took a very long time to make, then you might prefer to invest in something else with a near-term payoff.

Now consider the proposal that many transhumanists are putting forth — that we should create Friendly, roughly human-equivalent AI with the capacity for recursive self-improvement. Obviously, this is a more ambitious project than creating a better battery. Still, like the battery, it deserves an evaluation on technical grounds to the extent that this is possible. Unfortunately for advocates of Friendly AI, the proposal is also subject to obstacles that the battery proposal is not: evaluations based on philosophical baggage. Batteries aren't loaded down with much philosophical baggage, owing to their minimal political relevance and general absence of moral valence.

The philosophical baggage around the possibility of self-improving Friendly AI is abundant. Even the very notion of human-equivalent AI offends human sensibilities, not to mention the idea that morality is something that can just be… programmed. To scientific materialists who believe in functionalism, many of these questions are non-issues, but Friendly AI is still subjected to scrutiny above and beyond a straightforward technical proposal, much of it justified: ethical questions like “would it be right to create something smarter than us?”, feasibility questions like “can a being of intelligence N create a being of intelligence N+1?”, and so on. These numerous questions make arguing that Friendly AI is feasible and a good idea very challenging and multi-faceted. A philosopher might do well to take up the issue as an interesting challenge even if they had no personal attachment to it.

Exploring the philosophical obstacles around acceptance of Friendly AI is a topic worth 1,000 posts at least, so I won’t go into detail here; I will merely call attention to their existence. I will just mention the second challenge in the battery proposal as applied to Friendly AI — the near-term feasibility issue. As Ray Kurzweil argues in The Singularity Is Near, if brain-scanning resolution and computing power continue to improve at the rate they have for decades, then we will have the technology to upload the human brain in the form of detailed functional algorithms by around 2030. Arguing against this upper limit requires claiming that 1) brain scans and computers will halt their exponential improvement at some point in the next 20 years, 2) extremely subtle and complex low-level features must be duplicated to duplicate human intelligence, or 3) human intelligence cannot be simulated in a computer, even in principle.
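To make the shape of that extrapolation concrete, here is a minimal back-of-envelope sketch in Python. The starting figures (today’s available compute, the compute assumed sufficient for functional brain emulation, and the doubling time) are illustrative assumptions of mine, not numbers taken from Kurzweil or from this post; the only point is how quickly sustained doubling closes a gap of several orders of magnitude.

```python
import math

# Back-of-envelope extrapolation in the spirit of Kurzweil's argument.
# All three starting figures are illustrative assumptions, not sourced values.
current_flops = 1e15          # assumed compute available to a well-funded project today
brain_equiv_flops = 1e18      # assumed compute sufficient for functional brain emulation
doubling_time_years = 1.5     # assumed doubling period for price-performance

doublings_needed = math.log2(brain_equiv_flops / current_flops)
years_needed = doublings_needed * doubling_time_years

print(f"Doublings needed: {doublings_needed:.1f}")                              # ~10.0
print(f"Years to close the gap (under these assumptions): {years_needed:.1f}")  # ~15
```

Under these toy numbers the gap closes in roughly fifteen years; the argument here is only that if the exponential trend holds, the relevant timescale is a couple of decades, not centuries.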

I feel that arguments #2 and #3 can be dismissed rather readily. I also think that much of the motivation for standing behind #2 comes from a milder version of the sentiment behind #3, namely that even if human intelligence can in fact be simulated in a computer, we should expect the task to take hundreds of years. That is the position of Doug Hofstadter. Those who take this line of argument would be very disappointed if humans were “so simple” that simulating our minds in computers by 2030 were possible. Rebutting this argument comprehensively would take numerous posts on its own — I hope you realize why I’m bringing up issues without exploring each of them in detail. (I would be writing this post for hours and have no time for anything else.) I will just say that the complexity and awe of the human mind would be in no way diminished if it turned out that analyzing the brain “merely” at the level of its 100 billion neurons were a sufficient level of description. Arguments from the way evolution works and from cognitive science experiments make it implausible that exabytes of specific sub-neuron biochemical detail play an absolutely necessary and indispensable role in minds-in-general.
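For a rough sense of the scales being compared, here is a small hedged calculation of how much data a neuron-and-synapse-level description might involve. The synapse count per neuron and the bytes per synapse are illustrative assumptions, not measured figures; only the 100-billion-neuron count comes from the paragraph above.

```python
# Rough storage estimate for a neuron/synapse-level description of a brain.
# Per-unit figures below are illustrative assumptions, not measured values.
neurons = 1e11                 # ~100 billion neurons, the figure cited above
synapses_per_neuron = 1e4      # assumed average number of synapses per neuron
bytes_per_synapse = 8          # assumed bytes to record one synapse's state

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"Neuron/synapse-level description: ~{total_bytes / 1e15:.0f} petabytes")
# Even under these crude assumptions the total sits in the petabyte range,
# far below the exabytes implied by sub-neuron biochemical detail.
```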

I think that a lot of the ideological fuel behind #2 and #3 comes from the misconception that AI would need to be a straightforward copy of the human brain to work at all. This argument is on par with the idea that someone would need to perfectly copy a bird to make a flying machine, or perfectly copy a mole or earthworm to make a digging machine. The reason people consider this demand acceptable for intelligence but not for digging is the mystical association of intelligence with a specific Gift From God, a unique cosmic quality bestowed only on Homo sapiens sapiens due to our unique specialness and spiritual importance. For some who have been raised to believe this since they could understand language, changing their minds might be outright impossible. They might also act like the targets in whack-a-mole, avoiding specific claims about their deeply held view of human exceptionalism and instead advancing arguments that sound superficially more plausible but are in fact put forward only because of deeper reservations about the entire enterprise and its philosophical implications.

Still, say that I granted that intelligence was too complex to instantiate in computers for hundreds of years. Even then, I would argue that the near-term feasibility requirements usually considered so important for research proposals and venture capital-funded projects would be absolutely inapplicable in this case. Due to the earth-shaking implications of creating another intelligent species, especially one with the quality of “instant intelligence, just add computing power”, pursuing a benevolent course for the technology would be justified even if we didn’t think it would arrive for hundreds of years. Failure would mean the extinction of our species and its replacement by AI (a highly undesirable outcome in the eyes of many), while success would mean a tremendous injection of intelligence and wisdom into our society, wisdom that could be turned to humanitarian ends. Perhaps others are visualizing a smaller impact from “instant intelligence, just add computing power” than I am, in which case ignoring it if it were centuries out might make sense, but I think the “we might as well ignore this” attitude more often comes from evaluating Friendly AI as if it were a business idea with a 3-5 year profitability horizon. Viewing such a momentous and transformational possibility in that light is obviously inappropriate.

Comments

  1. I’m entirely comfortable with functionalism: what I’m interested to hear is what your definition of the word “intelligence” is, and why you think it’s so important.

    You talk a lot about “superintelligence” as if intelligence itself was already well defined and understood.

    Your argument seems to be that there are many different classes of “intelligence” – I accept this point – the design space of all possible minds is presumably colossally big, and there is no reason to assume the particular brand of intelligence instantiated within the human brain-body system is anything but one local maximum.

    But what guarantee do you have that we will be able to find an intelligence that is *qualitatively* better than the human mind at Getting Things Done? That is: accomplishing goals and learning about the world.

    Note that I’m specifying actual accomplishments rather than just mental or cognitive ability.

    There is a difference between the qualities required to solve the world’s problems and improve the human condition and the qualities required to do a cryptic crossword in under thirty seconds.

    My point is basically this – when did intelligence become the sole or even prime arbiter of success, even within the narrow fold of human experience?

    The “intelligence explosion” hypothesis may even occur: but what good is it having the equivalent of a trillion, trillion, trillion Einsteins *thinking* about how to solve the problem of conflict in Israel and Palestine?

    The focus on “intelligence” on its own is the key problem of the idea of the “intelligence explosion”-style singularity.

    I would suggest that the role of transhumanists should be to actively brainstorm ways to cope with the monumental social problems that will ensue once human augmentation really takes off, rather than simply hang around waiting for the guys in the AI lab to sort everything out.

  2. Intelligence is what humans have a lot of and other animals have less of: ability to communicate, imagine, invent, etc. It’s what’s let us take over the world in just 250,000 years.

    There are longer arguments for why qualitatively smarter-than-human intelligence is likely, but try this one — you’d have to assume something special for there not to be a greater level. There seem to be several distinct levels of qualitative intelligence between us and nematodes, so it would take a special assumption for there not to be any levels above us. This is applying Occam’s razor — making theories as simple as possible given the apparent evidence. Saying that the scale goes up and up and then suddenly stops right where we happen to be is adding a special (unnecessary) detail to our theories about intelligence.

    When I use the word “intelligence”, I mean interspecies-style intelligence differences.

    There’s nothing about the Friendly AI analysis that prevents you from brainstorming ways to cope with the monumental social problems of the future. Go right ahead.

  3. Anderson

    At this point, it’s less important that everyone agrees on the specifics (technology will advance regardless) than it is for people to just be interested.

    Until then, it would be neat if there was some way to change the format of the dialogue in the comment sections on blogs. You know, to streamline the conversations. For instance, what if there were separate comments sections (agreements/criticisms), or collapsible comment boxes? And add the ability to comment on someone else’s comments (which would appear directly under their post). Just some thoughts...
    Take a look at this article, for example: http://www.overcomingbias.com/2007/09/even-when-contr.html

    Yet another well-written piece, even well commented in parts, but you get the picture. :/

    Now for laughs, which become more and more scarce for those who often deal with heavier concepts. It’s understandable, but also a shame, because comic relief leads to positive thinking and enhanced clarity in one’s imagination.

    http://celtics.fandome.com/video/109548/Amazing-Dance-Caught-On-The-Jumbotron/?q=c

  4. @Michael Anissimov:

    Fair enough. I’m pretty sure I could keep bickering over definitions and precise meanings of words until the milk-creating robots are created.

    I think the points we disagree on, concerning the importance of intelligence to human progress and the precise nature of superintelligence, can be put to one side for the time being; lots more work to be done… :)

    @Anderson:

    Yes – something like what they have over at Slashdot.

  5. Anderson

    That’s close to what I was picturing, but this current format has its advantages as well. One flaw that I see in Slashdot’s is that the comments must be clicked on and maximized, rather than letting you collapse the individual comments that don’t pertain to whatever response a user is dreaming up. Both formats in their present state are functionally similar to maneuvering through rush hour traffic while talking on a cell phone.

  6. MCP2012

    The discussion so far has (appropriately) highlighted the fact that we need to pin the underlying concept(s) and conception(s) of “intelligence” down before we can go about instantiating it in a robotic system. And we also have to realize—as Ben (Goertzel), e.g., surely does—that intelligence *develops* and is keyed to *environmental* factors as well as “innate” s/w. (Not that I’m especially [neo]Thorndikean, much less Skinnerian, mind.) If one cloned Mozart or Bach, and yet he developed in a feral (or even semi/quasi-feral) environment, one sure as hell wouldn’t get concertos out of that entity! Same for potential (F)AGI entities.

    Three core aspects of intelligence qua intelligence, it seems to me, are: (1) the ability to abstract and form concepts, (2) the ability to model the environment, model changes in the environment, model actions and plans, and then implement them, and (3) the ability to learn (from 1 and 2) and then, as needed, revise one’s plans/actions. Now this may not be the whole shebang—it may not be sufficient to even conceptually encompass, much less instantiate, intelligence—but, I should think, it is a core of necessary aspects or characteristics.

    One way of focusing on all this—which may very well, e.g., have already occurred to, say, Ben (his being a rather philosophically well-read young fella)—is to concentrate, at least in part, on the necessary-&-sufficient conditions to instantiate an entity that can form and implement PLANS. The go-to guy for the philosophical analysis of plans is Mike Bratman of, if I’m not mistaken, Stanford Philosophy, although there are many others who’ve made significant contributions over the last 20ish yrs.

    Just my off-the-cuff 2 cents… ;)

Trackbacks for this post

  1. Links for 18th February 2009 | Velcro City Tourist Board
