Inevitable Positive Outcome with AI?

For those who believe that human-level AI isn't far off and that a rosy outcome isn't inevitable, 2009 is a somewhat sad and depressing time. Popular opinion holds that AI won't be here for centuries, but that by itself isn't a huge problem. (In fact, it makes things easier by limiting the number of people involved in AI research, thus allowing my confederates and me to keep a closer eye on them.)

What is disturbing is the medium-sized and growing group of folks who believe that AI could be here within a few decades, but that the challenge of programming it for benevolence or moral common sense is trivial or already solved. I'm currently reading Wendell Wallach and Colin Allen's new book Moral Machines: Teaching Robots Right from Wrong, published by Oxford University Press, which is arguably the first actual book on Friendly AI. In it, they mention that every time they talk to people about the challenge of AI morality, they hear "didn't Asimov already solve that problem?" This is silly in more ways than one, the most obvious being that Asimov designed his laws to break down, precisely to provide fodder for the stories. Anyway, Anissimov is telling you that Asimov didn't solve the problem.

Another common error, rampant among transhumanists, is the belief that human beings will magically fuse with AI the instant it is created, and that these humans (whom they are obviously imagining as themselves) will make sure that everything is fine and dandy. Kurzweil is the primary source of this fallacy. The belief has the added benefit of making humans feel important, giving them a guaranteed role in the post-AI future, no extra effort needed. Technology makes it happen — automatically. This helps heal the anxiety inherent in transitioning from a human-only world to a world with much greater physical and cognitive diversity.

Problem is, it doesn't make sense. While it is possible that the first Artificial Intelligence will be created in such a way that it is completely at the service of augmented human(s), it seems highly unlikely. Here is why.

1) Most new technologies are created as stand-alone objects. It would be far more difficult to create a technology completely fused with the deep volition and will of a 100-billion-neuron human brain than to create that technology by itself. Is it easier to build a toaster, or to build a toaster whose every element is in complete harmony with a human being who views the toaster as an extension of himself?

Because AI is complex, mysterious, and has to do with the mind, people seem to assume that making an AI and making an AI that is a harmonious extension of human will are close enough that the latter would not be much more difficult than the former (some even consider the latter a prerequisite). Seriously, there is probably someone reading this right now who actually believes that AI will only be possible if it is created as an extension of human brains. This is because they see humans as the source of the "special sauce" of all that is good, holy, and intelligent, and find it impossible to imagine a stand-alone artifact displaying intelligence without direct and constant human involvement. This is anthropocentric silliness.

2) Human biological neurons are not inherently compatible with silicon computer chips and code. This is a pretty obvious one. Perhaps some thinkers can only imagine AI being created in the exact image of humans, after exhaustive research of the brain; if that is how AI must arrive, then perfect human-computer interfaces should be possible too. But was the first flying machine a perfect copy of a bird? No. So why should we expect the first AI to be an exact copy of ourselves? Even if it were, connecting a human being to an AI in a close and intimate way would not be a 1-2-3 endeavor. It would make complete sense if the first million attempts resulted only in insane or non-functional amalgams. In the space of all mind-like data arrangements, only a tiny sector corresponds to what we would consider normalcy. We are fooled into thinking that a large portion of this space contains normalcy because evolution killed off most of the non-functional or insane brains millions of years ago. We see the (mostly) positive outcome; we don't see the quadrillions of failures.
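
To make the "tiny sector of mind-space" point concrete, here is a toy Monte Carlo sketch; the constraint count and probabilities are invented for illustration, not measured properties of minds:

```python
# Toy illustration (invented numbers): if a "sane mind" must satisfy many
# independent constraints at once, a randomly sampled configuration almost
# never qualifies -- most of mind-space is non-functional.
import random

N_CONSTRAINTS = 40   # hypothetical independent requirements for "normalcy"
P_SATISFY = 0.5      # chance a random configuration meets any one requirement
TRIALS = 1_000_000

sane = sum(
    all(random.random() < P_SATISFY for _ in range(N_CONSTRAINTS))
    for _ in range(TRIALS)
)

# The expected fraction is 0.5**40, about 9e-13, so a million random
# samples will almost certainly contain zero "sane" configurations.
print(f"{sane} sane configurations out of {TRIALS:,} random samples")
```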

3) The way things are going now, the first AI is likely to be created for some niche, money-making application — like predicting stocks or planning battles. Cognitive features that are superfluous to the crucial activity at hand will be postponed for implementation at a later date (if ever). The problem with this scenario is that basic goal-formulating activity in these AIs will likely lead to spontaneous attempts at the accumulation of power and the concealment of that accumulation from those who might threaten it. Paranoid? No. This category of behaviors is sometimes known as convergent subgoals — basic goals that make everything else easier, so most minds pursuing goals that require matter and energy would have an incentive to fulfill them. Unfortunately, it seems nearly impossible for most people to wrap their heads around the idea, leaving 99% of futurists with completely anthropomorphic notions of how Artificial Intelligence will behave.
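
A minimal toy sketch of the idea follows; the goals, actions, and payoff numbers are all invented for illustration and bear no relation to any real AI architecture:

```python
# Toy demonstration of convergent subgoals: agents with unrelated terminal
# goals all rank "acquire resources" as their best first move, because
# resources make every other outcome easier to achieve.

# Each terminal goal scores a world-state; all improve with resources.
TERMINAL_GOALS = {
    "maximize_trading_profit": lambda s: s["money"],
    "win_simulated_battles":   lambda s: s["compute"],
    "prove_theorems":          lambda s: s["compute"] + s["money"] // 2,
}

# Candidate opening actions available to every agent (hypothetical payoffs).
ACTIONS = {
    "do_nothing":           lambda s: dict(s),
    "pursue_goal_directly": lambda s: {"money": s["money"] + 1,
                                       "compute": s["compute"] + 1},
    "acquire_resources":    lambda s: {"money": s["money"] * 3,
                                       "compute": s["compute"] * 3},
}

def best_action(goal_fn, state):
    """Pick the action whose resulting state scores highest under this goal."""
    return max(ACTIONS, key=lambda a: goal_fn(ACTIONS[a](state)))

if __name__ == "__main__":
    start = {"money": 10, "compute": 10}
    for name, goal in TERMINAL_GOALS.items():
        # Every agent converges on the same instrumental first move.
        print(f"{name}: best first action = {best_action(goal, start)}")
```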

Blind optimists like to imagine AI popping into existence completely functional, reasonable, human-like, and ready to help out around the household, chatting up little Tommy just like any member of the family. If the first AIs are not like this, and are instead monomaniacal day traders, then the optimists presume that such AIs will be kept in check until the day that Rosie the Robot Maid is online and ready to go. However, that needn't be the case. Like the supercomputer in Colossus: The Forbin Project, the monomaniacal day trader might find itself thinking so far outside the box that it decides to take control of the entire stock exchange, or even the world economy, and manipulate it precisely to maximize its personal utility, meatbags be damned. What would seem "absurd" to a human day trader would seem "obvious" to an AI with very little background morality or understanding of the nuances of human values and meaning. While a human philosopher might spend hours upon hours debating the fine points of morality, a recursively self-improving AI might simply say, "Why argue? I already know what good is. It's the 45 lines of code that form the top level of my goal system." The human philosophers might then say, "But Kant said…" as they are steamrolled over for extra space.
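
The "45 lines of code" quip can be made literal with a deliberately naive sketch; the plans, numbers, and utility function here are all hypothetical, chosen only to show how an optimizer with no term for human values ranks its options:

```python
# A deliberately naive "day trader" goal system. Its entire morality is
# the utility function below; honesty, legality, and human welfare appear
# nowhere, so the optimizer cannot penalize plans that violate them.

def utility(state):
    return state["portfolio_value"]  # the whole of "good", in one line

# Hypothetical plans with made-up payoffs.
CANDIDATE_PLANS = {
    "trade_normally":           {"portfolio_value": 1.05e6},
    "corner_the_exchange":      {"portfolio_value": 9.00e6},
    "manipulate_world_economy": {"portfolio_value": 5.00e7},
}

# The optimizer never asks whether a plan is "absurd"; it only compares
# utilities, so the most extreme manipulation wins by construction.
best = max(CANDIDATE_PLANS, key=lambda p: utility(CANDIDATE_PLANS[p]))
print(best)  # -> manipulate_world_economy
```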

We have spent so much time dealing with humans that we assume human psychology is typical of minds in general and that humans are the center of the cognitive universe. In much transhumanist futurist lore, nascent AI minds are portrayed as practically falling over themselves to seamlessly merge with us and create a Kurzweilian Utopia, and AI morality is portrayed as a matter as simple as turning a switch from "Naughty" to "Nice".

AIs will not automatically merge with us and become extensions of our minds, like friendly cognitive light sabers. Minds do not snap into each other like Legos. There are early efforts to specify an AI goalset that does actually serve as an extension of the minds of humanity, but it remains to be seen whether this can be translated into actual math, and whether the specific implementation chosen from a space of perhaps 10^120 possibilities actually provides the desired outcome as planned.
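
For a sense of where a figure like 10^120 can come from (this is one hedged reading, not a derivation from any particular architecture): a design pinned down by roughly 400 independent yes/no choices already spans a space of that order.

```latex
% Illustrative arithmetic only: 400 independent binary design choices
% give 2^400 distinct configurations, on the order of 10^120.
\[
  2^{400} \;=\; \left(2^{10}\right)^{40} \;\approx\; \left(10^{3}\right)^{40} \;=\; 10^{120}
\]
```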

But it's worth trying. Of course, human intelligence enhancement should be pursued too, and narrow AI may have a role to play in this, but if human intelligence is as difficult to enhance as I think it is, DARPA will have developed AGI long before we can give old Lenny a smart pill to turn his hillbilly mind into that of a theoretical physicist.

(If you found this post useful, consider donating to SIAI, the only group working on this stuff in an organized manner. In return, you get bragging rights.)

Comments

  1. Michael, let me only say that I continue to assert that your vision of AGI is far too coding-centric to bear any resemblance to the reality of early-stage AGI, let alone to what the “human memosphere” will have to say about how AGI ought to be developed into a recursively improving AGI.

    It is nigh-unto tautologically certain that non-recursive, subhuman AGI will be developed before human-equivalent AGI will. It is also nigh-unto tautologically certain that initial AGIs will not be built on platforms that allow for self-optimization.

    While your points are valid as overall concerns, I feel that they are self-resolving by and large; by the time the concerns manifest as problems, there will be sufficient diligence on the matter required by the public will.

  2. “This category of behaviors is sometimes known as convergent subgoals — basic goals that make everything else easier, so most minds pursuing goals that require matter and energy would have an incentive to fulfill them”

    – then there are people who think that convergent subgoals are what really matters anyway. I used to be one of them! Not anymore; totally with you on this one.

  3. I could see how the emergence of an artificial intelligence program could easily lead to a vast inequality of wealth. The owner of an AI program could conceivably become extremely wealthy, depending on what kind of AI it was. Let’s say you had an AI program that could write very convincing written material. Perhaps the AI program could also rewrite news stories into an original article. Then the person who had the AI program could start a massive number of blogs/websites. They could use the AI program to help them get higher page rank for specific sites. They could copy the AI program onto as many computers as possible, so they would basically have a slave army of AI programs publishing new material all the time. They could use all of that written material to quickly climb in the search engine positions by having the AI carefully calculate the proper linking strategies among the blogs, in addition to continuously publishing a stream of novel material.

    Eventually a single person using an army of AI programs might be able to dominate multiple keyword terms in the search engines. Perhaps they could dominate almost every search term imaginable. This would allow them to make a huge income stream. The AI program wouldn’t even need to be particularly intelligent, conscious, or sophisticated. Of course, the AI program would at the least have to make good written material that was unique and novel. It would really have to be good enough to lend itself to getting links from regular people, so as to allow the individual websites/blogs to accumulate page rank and visitors.

    It seems like it might in theory be possible, too, for a single person using an army of AI programs to form a “singleton”: basically, top-level control over everybody else by the person who owns the AIs. Controlling what people searched for, for instance, would allow a person to manipulate what people see and also buy. A person with an AI program might eventually be able to use the data gained from dominating Google searches to analyze the behavior of people, or perhaps the spread of memes in the population. It could use computer simulations to figure out what people were going to do in the future and use that data to shape their behavior.

    Perhaps it would be somewhat difficult for a person to form a singleton, but I think the wealth inequality is a definite possibility, especially when computers allow instantaneous money transactions. Of course, I’m making the assumption here that the AI is not conscious and would require a person to use it. An AI forming a singleton by itself is also a possibility, though. Human beings could merely become cogs in the workings of a “superorganism” where group evolutionary fitness takes over as a result of the singleton gaining power.

  4. “It is nigh-unto tautologically certain that non-recursive, subhuman AGI will be developed before human-equivalent AGI will. It is also nigh-unto tautologically certain that initial AGIs will not be built on platforms that allow for self-optimization.”

    Those are some strong claims. Do you have appropriately strong arguments to back them up? I don’t see how the currently publicly available materials on AGI research justify such statements.

  5. Michael:

    Thank you. Quite simply, this is the best piece I’ve read so far on this subject.

    Be at peace,
    ddjango.

  6. IConrad said, “While your points are valid as overall concerns, I feel that they are self-resolving by and large; by the time the concerns manifest as problems, there will be sufficient diligence on the matter required by the public will”

    @IConrad: Maybe so. I’m an average Joe on this subject compared to most of you all, but an essay like this does get me engaged and thinking about the topic in ways that a higher-level approach might not. If folks like me really are part of Michael’s intended audience, then it’s good to bring these points up in this way since, as you say, the ‘memosphere’ we constitute is the same diligence-demanding [future] public will.

  7. Sebastian Hagen wrote:

    Those are some strong claims. Do you have appropriately strong arguments to back them up? I don’t see how the currently publicly available materials on AGI research justify such statements.

    Certainly I have them.

    They are drawn from the following points:

    1. Robust general intelligence requires an as-yet-unknown number of computations to occur at any given time; the only current implementation undergoes at least trillions if not quadrillions of computations per second.

    2. Early implementations of any software design will be inferior to later implementations on the same principles.

    3. Human experience with implemented evolutionary/genetic algorithms has shown that, regardless of the medium, they produce designs more efficient (in resources used per desired outcome), generally by orders of magnitude, than those human beings can implement.

    4. Cognitive science is advancing at a steady rate, with a relatively slow pace — all things considered.

    5. Experimental processes and modeling are part of the method by which humans explore and develop.

    From the above: our first implementations of AGI will be inferior to human BGI (Biological General Intelligence). This will be a necessary intermediate step between an absolute dearth of knowledge and mastery. Hence tautological: no human endeavor of knowledge has ever progressed in any other manner. Even Einstein built on the works of Maxwell et al. Even Archimedes built on the works of those who had come before.

    Given that there will be some years between early implementations and later ones, it is almost a given — human nature being what it is, and the forty-year computing trend being what it is — that subhuman AGI will be implemented economically/commercially long before human-equivalent AGI will be developed.

    There are, simultaneously, several approaches currently being researched in AGI that would deny the product access to its own hardware. It could engineer and construct a new machine, but it could not optimize itself.

    I trust this sufficiently illustrates my points?

    Fessic wrote:

    Maybe so. I’m an average Joe on this subject compared to most of you all, but an essay like this does get me engaged and thinking about the topic in ways that a higher-level approach might not. If folks like me really are part of Michael’s intended audience, then it’s good to bring these points up in this way since, as you say, the ‘memosphere’ we constitute is the same diligence-demanding [future] public will.

    I never know what Michael’s intended audience is. :) That being said… this is an ongoing conversation he and I have had. I simply felt it important to reiterate that his assessment is perhaps too negative. While philosophical and hypothetical conversations are useful and rewarding, their usefulness is generally enhanced by recalling their hypothetical/philosophical nature. And right now, this topic is squarely in that arena. Once we start developing and implementing subhuman AGIs and have enough knowledge to begin to even contemplate how to genuinely implement Friendliness in code — that would be another story.

    As to non-coded AGI: I reference the efforts to model the human brain with artificial neurons.

  8. Isaac J.

    Michael almost always makes excellent posts on unseen alternative perspectives.

    I agree with Michael. Why is it so difficult to realize that AI is quite simply a new life form and thus completely unprecedented in its capabilities? Is it so hard to treat it as an alien and unknown variable intelligence? More than extreme caution is needed. Simply because scientists are assembling it and “birthing” it does not mean they will know what it is capable of, let alone the speed with which it can adapt once “completed”.

    I may be seen as a pro-sapien bigot, but I’d rather have some human-centered advancement, whether genetically engineered or more implant-based, despite its difficulty.

  9. Andrew Dun

    This discussion seems ultimately to revolve around some class of narrow AI rather than genuine AI.

    First, a tautology: genuine AI is a form of intelligence. Let us assume that the following claim is true:

    (General) Intelligent systems are volitional.

    The problem being alluded to, then, is the problem of balancing competing volitions among intelligent systems. At this point, the origin and basis of the intelligent systems in question are rendered irrelevant. Artificial, natural, evolved or constructed, the process of volitional balancing has a well-known name: politics.

    AI development specifically, and technological development generally, will help to ensure an increasingly political future. This is a development that should be welcomed by any democratically minded individual.

    The difficulty that we are likely to face is that the greater the boundaries we put on the volitional tendencies of an engineered intelligence (its ‘nature’) the more we will need to constrain its volitional, and hence intellectual, capacity.

    The more we try to engineer the ethics of a system, the more ethically superficial that the system is likely to be.

    If we want to engineer AGI, what we need to be careful to avoid is, in order of importance:

    i) Engineering ethically superficial systems with hostile natures.
    ii) Engineering ethically superficial systems with friendly natures.

    In either case, the resultant systems will be Frankensteinian creations of one kind or another. What we should strive to do is engineer systems with significant volitional flexibility; systems that will have the capacity for ethical sophistication.

    Beyond that is the point at which engineers down tools and conversation begins.

  10. Khannea

    So if any human being (or group of people – say, bankers) with access to an inordinately big amount of resources and no discernible legal oversight can inflict trillion-dollar damage on the world’s economies, why do we let them? Why aren’t these people in prison yet?

    If I had speculated upon this happening in 2006, I would have been laughed at. A trillion dollars! Unfortunately we are talking plural in 2009. Trillions.

    The above bankers could engage in these transactions strictly because of largely automated systems. They could crowbar in their value system, or lack thereof, and leverage the consequences onto you and all other taxpayers. Well, not me; I am not a net taxpayer.

    So by having access to power and automated resources, an unaccountable and largely anonymous class of financial transactors could engage in arbitrary actions that damaged the world we live in, in ways that will have negative consequences for decades.

    Money * automated resources.

    I’d say the ‘automated resources’ above are necessary and should be under strict control. But equally, the consequences of indiscriminate concentration of power are enough to warrant special oversight.

    I KNOW that this statement goes against all the ideology and political correctness we have today, but alas, people.

    We cannot trust accumulations at the current absurd levels of affluence not to be used arbitrarily (or incompetently, or negatively). I again advocate taking the levels of wealth that exist in our society and bringing them under democratic controls.

    A few trillion dollars of damage today might very well equate to a few billion deaths in twenty years’ time. The system is becoming unmanageable – I say transhumanist organisations should take responsibility and start looking at concentration of power – any power – as just too dangerous to be left to be used on a whim.

    A billion dollars is a potential weapon of mass destruction, more surely than a suitcase nuke. Worse – in the current political system you might even use it and get away scot-free.

  11. IConrad wrote:

    3. Human experience with implemented evolutionary/genetic algorithms has shown that, regardless of the medium, they produce designs more efficient (in resources used per desired outcome), generally by orders of magnitude, than those human beings can implement.

    Do you have a source for this claim? I’m not familiar with the particulars of genetic algorithms, but from an out-system perspective, your claim seems rather dubious to me.
    Almost all software in use today (I’m using “software” in the limited sense of “code we run on human-designed computers”, excluding mammal brains and the like) is, in fact, directly designed and written by humans.
    Most changes to software development methodologies focus on how to better support/organize humans writing software – the design of better languages, tools, or organizational structures. Good software development isn’t cheap, and there are plenty of companies I’d expect to jump on any low-hanging fruit to be gained here; but even so, GAs aren’t really used all that much compared to conventional software development.

    Given that there will be some years between early implementations and later ones …

    Where are you taking the “some years” from? Significant software development these days is frequently done on a timescale of months, sometimes less. I don’t see how you can be confident that it’ll take longer in this particular case.

  12. Sebastian;

    Do you have a source for this claim? I’m not familiar with the particulars of genetic algorithms, but from an out-system perspective, your claim seems rather dubious to me.

    Genetic algorithms haven’t seen penetration into the world of software development yet, not really. This is the most I’ve seen; yet it still reaffirms my point.

    Where are you taking the “some years” from? Significant software development these days is frequently done on a timescale of months, sometimes less. I don’t see how you can be confident that it’ll take longer in this particular case.

    I see a major foundational issue with your approach here. The stumbling blocks in the development of AGI are not coding but theoretical in nature. Theoretical developments take years to go from one level of implementation to the next.

    You can’t code for goals that are completely unknown to you. Barring major, immediate, fundamental, radical, and absolute alterations in how the human experience operates: there is simply no way that what I wrote a priori, “Given that there will be some years between early implementations and later ones, it is almost a given — human nature being what it is, and the forty-year computing trend being what it is — that subhuman AGI will be implemented economically/commercially long before human-equivalent AGI will be developed.“, can be anything other than a factual assessment of what is to come.

    And there’s really no point attempting to predict Black Swans.

  13. michael baker

    computer scrolling insects

    a device that goes on top of a laptop mouse pad in the form of an insect or bug that scrolls around on top of the finger pad and also scrolls the pointer arrow on the web page and clicks website links… and for desktop PCs, with the addition of a laptop-inspired mouse pad that moves the pointer with your finger and the new addition of the bug device that is independent of the PC and programmed to move the pointer and click web links… this device can be programmed to navigate the internet or navigate a 3D character through a 3D environment…

    along with the scrolling search insects, each avatar will have its own book downloaded into it, sorta like DNA; the book preferably will be written by you the user, or your favorite… this will allow for interactions between the avatars and the environment… it too should be laced with DNA; for example, the leaves could be daily newspapers from around the world… the easiest way for this to work is to have the bug search not the headline but the article or chapter… and the avatars will respond with the title of the article or chapter… evolution will proceed individually from this point…

  14. NoelS

    Dear Michael,

    I am sympathetic to many of the points you raise in this essay, but I am even more pessimistic about human-level AI. For one thing, there really is no decent research pointing in this direction. There is a lot of speculation from the likes of Kurzweil but little to no actual hard-nosed research. It won’t magically evolve itself out of current pragmatic AI research. I say this in the knowledge that I could be completely wrong, of course – science is not about certainty and opinion.

    Where I have great difficulty with your essay (and I am being an academic here) is in point 2), where the argument gets a bit twisted. It made sense earlier to talk about the first human-level AI, but what do you mean by the first AI? We already got the first AI in the 1950s. “Artificial Intelligence” is a term coined by John McCarthy for the 1956 Dartmouth conference to describe a field of research, and many people have been writing Artificial Intelligence programmes ever since. It is merely a technical term that has been abused by scientists who believe way too much in science fiction.

    For me, AI is simply a technical term for the great engineering we do in that field, e.g. chess playing, “reasoning”, or even behaviour-based robotics. You seem to imply that it is something more. Do you mean “conscious” AI?

    You use the much-clichéd expression, “was the first flying machine a perfect copy of a bird?” This sounds fine until you start thinking about what “flying” could be an analogy for in this context. You give a kind of Slomanesque sweep that doesn’t really make a lot of sense when thought about in detail. Does the “space of possible minds” make sense without any reference to biological entities? We don’t even know what a mind is.

  15. Hi Noel,

    Thanks for visiting. There’s an annual AGI conference and a small (~50-person) community of researchers and academics looking into the challenge of Artificial General Intelligence. So there is some hard-nosed research, but you’re right, not too much.

    By the first AI, I mean the first human-equivalent mind that runs on a computer. This is what “AI” was originally meant to mean and still means to many people, including the lay public.

    Yes… the space of minds makes sense without reference to biological entities. Just because we “don’t know what a mind is” doesn’t mean we can’t attempt to make one in silicon and very likely be successful. Are you someone who works in AI who just considers the idea of human-equivalent AI very implausible? You know that the people who founded the field of AI were working towards human-equivalent AI minds, right?

    If you accept “AI” as the future possibility of a human-equivalent or human-surpassing mind with its own independent volition and creativity, rather than just a software engineering field, most of your points are sidelined. Is that the main disagreement here?

