Response to Charles Stross’ “Three arguments against the Singularity”

Stross:

super-intelligent AI is unlikely because, if you pursue Vernor’s program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it’s unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we’re likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

“Human-equivalent AI is unlikely” is a ridiculous comment. Human-level AI is extremely likely by 2060, if not sooner. (I’ll explain why in the next post.) Stross might not understand that the term “human-equivalent AI” always means AI of human-equivalent general intelligence, never “exactly like a human being in every way”.

If Stross’ objections turn out to be a problem in AI development, the “workaround” is to create generally intelligent AI that doesn’t depend on primate embodiment or adaptations.

Couldn’t the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

I don’t get the point of the last couple sentences. Why not just pursue general intelligence rather than “enhancements to primate evolutionary fitness”, then? The concept of having “motivations of its own” seems kind of hazy. If the AI is handing me my ass in Starcraft 2, does it matter if people debate whether it has “motivations of its own”? What does “motivations of its own” even mean? Does “motivations” secretly mean “motivations of human-level complexity”?

I do have to say, this is a novel argument that Stross is advancing. Haven’t heard that one before. As far as I know, Stross must be one of the only non-religious thinkers who believes human-level AI is “unlikely”, presumably indefinitely “unlikely”. In a literature search I conducted in 2008 looking for academic arguments against human-level AI, I didn’t find much — mainly just Dreyfus’ What Computers Can’t Do and the people who argued against Kurzweil in Are We Spiritual Machines? “Human-level AI is unlikely” is one of those ideas that Romantics and non-materialists find appealing emotionally, but backing it up is another matter.

(This is all aside from the gigantic can of worms that is the ethical status of artificial intelligence; if we ascribe the value inherent in human existence to conscious intelligence, then before creating a conscious artificial intelligence we have to ask if we’re creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense “conscious”? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers — it’s possible that just as destructive research on human embryos is tightly regulated and restricted, we may find it socially desirable to restrict destructive research on borderline autonomous intelligences … lest we inadvertently open the door to inhumane uses of human beings as well.)

I don’t think these are “showstoppers” — there is no government on Earth that could search every computer for lines of code that are possibly AIs. We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don’t want my self-driving car to argue with me about where we want to go today. I don’t want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos.

All it takes is for some people to build a “volitional” AI and there you have it. Even if 99% of AIs are tools, there are organizations — like the Singularity Institute — working towards AIs that are more than tools.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focused on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. While such an AI might accidentally jeopardize its human’s well-being, it’s no more likely to deliberately turn on its external “self” than you or I are to shoot ourselves in the head. And it’s no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you.

YOU want AI to be like this. WE want AIs that do “try to bootstrap [themselves]” to a “higher level”. Just because you don’t want it doesn’t mean that we won’t build it.

Comments

  1. PariahDrake

    Couldn’t agree with Anissimov more.

    So strange coming from Stross, who, through his book Accelerando, solidified my understanding of the Singularity more than any other writer (not a big fan of Gibson, haven’t read Drexler, and only skimmed RK’s books because I found them dry and un-entertaining).

    Accelerando changed my view of Singularitarianism forever.

    Ironic.

    • Noetic Jun

      It’s exactly what I was going to say after reading this. Unexpected, weird and confuzzling.

      • Mitchell Porter

        And yet Greg Egan has also found it necessary to pointedly dissociate himself from believers in real-world transhumanism, hasn’t he? Just two people is not quite a trend, but perhaps it’s a distinctive phenomenon with an identifiable explanation.

        • billswift

          As many fiction writers, including Robert Heinlein, have pointed out, you don’t have to believe in something to write fiction about it. In fact, several have suggested that NOT believing in something helps to write a more interesting story.

  2. Matt

    “”" Why not just pursue general intelligence rather than “enhancements to primate evolutionary fitness”, then? The concept of having “motivations of its own” seems kind of hazy. “”"

    A capital idea, if you can define general intelligence in an objectively testable way before the fact and hold the critics to it. Every AI triumph — Deep Blue, DARPA Grand Challenge winners, Watson — is inevitably dismissed as “not really intelligent” because the machine is operating deterministically rather than by ineffable mystic principles, and because they don’t solve problems the same way humans do*. The slightly more respectable cover story they sometimes use is that driving machines or whatever aren’t intelligent because they aren’t general like humans; they don’t also sketch doodles, argue politics, and laugh at funny movies. These critics (dualists, whether closeted or out) can’t be satisfied by mere virtuosity. They’re demanding uncaused action, a little silicon soul to match the uncaused action that they imagine drives humans.

    The only thing that might satisfy them is emulating human behavior right down to all the quirks of real, evolved humans. Getting the critics to acknowledge machine intelligence isn’t so much a matter of getting machines to solve more and harder problems as it is of engineering the empathic response of the critics so that they accept machines as in-group members. A doll with lifelike appearance and simple randomized behaviors is much closer to social acceptance than a beige box that can do a dozen clever things but never smile or cry.

    *Or imagine themselves to do, as if our narratives about our own actions can be trusted to explain the underlying neurobiology.

  3. Ori

    Er, Stross clearly does not say that human-level AI is impossible. Hyperbole does not make for good rational argument.

    • He said “human-equivalent AI is unlikely”, presumably meaning unlikely in general. Does he mean 100 years, 1000 years, a million years, or what? Unless he means in the next 10-20 years, I find it a ridiculous comment. AI of human-equivalent general intelligence is nearly INEVITABLE by 2050, in my view.

  4. SuperAtomicWedgie

    Wow.
    Can I just say how much in awe of your hand-waving abilities I stand? I mean, seriously, how many years of practice have you had?

    “We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.”

    Really? OK, so that explains SIAI’s meteoric rise in the AI community. Its influence is truly global, having built many successful AGIs in recent years and having a research department led by a grade-school dropout. I am inspired with confidence. If you were faced with going to jail for making an AI, you would wimp out and quit. This is just another example of the irrational hyperbole that SIAI is famous for, all mediated by the high-school-style, popularity-contest-based rationality of Less Wrong.

    @Michael Comment
    “AI of human-equivalent general intelligence is nearly INEVITABLE by 2050, in my view.”
    Really, and what is your basis? Your opinion? Who cares? You are a nothing in the AI world; it would be like a first grader saying that general relativity is wrong. You have no credentials and no actual AI experience, so please spare us your overly confident, ignorant opinions. Your arrogance and ignorance offend me.

  5. Michelle Waters

    What’s human equivalent anyway? Beating people at StarCraft, raising a child, making good original music?

    While I’m somewhat skeptical of SIAI, I think the terms of the debate are really loose. I remember a Ken MacLeod novel with the phrase, “Human-equivalent is a marketing term.” I wouldn’t use it because it is irrelevant.

    The questions seem to be
    1. Will there be an intelligence explosion?
    2. Should we worry about it?

  6. Guy Mac

    IMHO human-level AI is likely because there is, and will continue to be, a huge effort made to see it done. It will take genetic programming techniques (with humans as the arbiters of ‘fitness’ initially) to get human-level parity. But what about super-intelligence? Many singularitarians seem to imply that an AI will be able to determine how to achieve this, or that the matter is just one of more bandwidth and higher frequency. But the way to super-intelligent AI must, it seems, also be Darwinian (though of course potentially much more rapid than biological evolution). So I think it’s possible, even likely given enough generations (was human-level intelligence likely when the Earth formed?), but not simple, just as AI of any kind itself proved to be much more difficult than initially projected.

  7. There was a good discussion of this on Hacker News.

    http://news.ycombinator.com/item?id=2682651

  8. Joe Silva

    Predicting that we can’t figure out how things work has a poor track record.

  9. Stross makes a solid argument for keeping artificial intelligence as a tool. Given the danger involved, this course has merit. As I understand it, y’all SIAI folks think that task impossible and thus advocate for friendly AI. I’m not so convinced of the impossibility of a collective choice to avoid self-improving and independent AI.

    On a side note, reading Accelerando proved a disappointment for me.

    • PariahDrake

      It cannot be prevented through authoritarian means (regulation, laws, bans, etc.)

      Just like the War on Some Drugs, what will happen is that it will be forced underground, and then we’re guaranteed to let it get out of control.

      This may sound counterintuitive, but the best policy is to open source all AI projects.

      Keep it where EVERYONE can see it, then we can all do something about it.

      Absolute openness is the only strategy that will allow us to keep an eye on it.

      Anything else leads directly to an arms race in AI, which creates the thing we fear the most.

      • The SIAI model assumes an arms race – make an AI God first to stop AI devils from emerging. I’m obviously skeptical of authoritarian controls of any kind, but unconvinced that means we should rush to create a superhuman boss to keep us safe.

        • PariahDrake

          The SIAI assumption is dangerous.

          I don’t think we should rush either, I feel more comfortable pursuing IA instead.

          However, there will be many people pursuing SAI around the world, out of plain sight.

          If we encourage everyone to share what they’re doing without fear of repercussion, then we at least have the chance to monitor it.

          When everyone gets to watch what’s going on, then there is a greater chance that anyone can step up with a solution or intervention before it’s too late.

  10. Michael, this comment isn’t really about the main thrust of your post, but rather just a thought that came to mind while reading… it sounds rather Biblical to talk of someone wanting to create persons that could turn on and kill their creator.

  11. SHaGGGz

    “I don’t think these are “showstoppers” — there is no government on Earth that could search every computer for lines of code that are possibly AIs. We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.”

    The concerns Stross raised were primarily ethical, which this response completely ignores.

    • billswift

      Stross’s post was really sloppy. At some points he claimed we shouldn’t do AI for ethical reasons, but what he most seemed to be doing was using irrelevant ethical arguments to support his claim that independent AI isn’t possible.

  12. Shagggz, I feel that ethical arguments and plausibility arguments should be kept entirely SEPARATE. Otherwise people start intertwining moral objections with technical objections, and act like their moral objections have technical weight, or in any way change the objective difficulty of achieving AGI, which they don’t. I might address the moral questions in another post, if there is enough interest.

  13. Bo

    Damnit, Anissimov…

    > Stross must be one of the only non-religious thinkers who believes human-level AI is impossible

    He didn’t say impossible, he said unlikely.

    > There is no government on Earth that could search every computer for lines of code that are possibly AIs. We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us.

    > YOU want AI to be like this. WE want AIs that do “try to bootstrap [themselves]” to a “higher level”. Just because you don’t want it doesn’t mean that we won’t build it.

    Is this seriously the kind of PR that the SI needs?

    • Aaron

      Well, Bo, if someone really is interested in finding a way to engineer my species out of existence, I’d much rather he tell me about it than not.

      • Bo

        It’s not like Anissimov or his SIAI friends are even interested in engineering humans out of existence.

  14. conditional logic

    I knew it! Programming languages are EVIL. Just look at them; makes your brain hurt.

    Code must be declared a munition and a biohazard.

  15. *facepalm*

    You know, I would point out that AI as it exists and as it will most likely exist will be a set of commands and instructions to perform specific tasks within specific rules in a way that deals with anything complex or repetitive that humans are liable to mess up by virtue of not having the brains to handle time-consuming, repetitive, and math intensive operations on a constant basis. But considering the zealotry displayed here, I’m not sure if I should invest any more time in making it.

    Computers are tools. This is what they’ve been designed to do, and we want them to work with us because that’s the best use for them. What do you want? A happy, go-lucky robotic friend like Robin Williams in Bicentennial Man? A computer that wants to listen to how you’re feeling or go off to discover a panacea for its human friends in a lab? That’s not how machines work. I’m terribly sorry there are mean people like Stross, or Sharkey, or hell, any comp sci grad student like me, or robotics researcher for that matter, pointing out the unsubstantiated suppositions in all those fun little theories of AI friendliness, but those are the facts.

    Your definition of human-level AI works in a sci-fi novel or a TV show. It doesn’t work in the real world for the simple reason that all that friendliness and social decorum you want AI to have is just fluff, a distraction to what it actually was designed to do. Why the hell would I want to build a computer with which I have to bargain, negotiate, or talk about the meaning of life if I can just program it to run through its paces and do its job when I double-click on an icon? Because I’m lonely? I have flesh and blood friends if boredom hits.

    And unlike some of the SI adherents, I don’t view my own mind and body with thinly veiled contempt and disgust, obsessing about its every last shortcoming. I’m much more interested in seeing how to use advanced machinery to make up for them and help humans do the things they want to do but currently can’t, rather than trying to prepare for the arrival of some friendly magical machines that will do everything for us, the justification for their supposed inevitability based more on sci-fi and self-deprecating cliches than real science.

    • Mike A.

      “That’s not how machines work.”

      Human beings are *also* machines… machines whose components are primarily complex carbon compounds.

      “Computers are tools. This is what they’ve been designed to do, and we want them to work with us because that’s the best use for them.”

      It’s what *we* want them to do, or what *you* want them to do? I also want information tools… *and* true artificial intelligences. I see no reason why we shouldn’t strive for both.

      • “Human beings are *also* machines… machines whose components are primarily complex carbon compounds.”

        Philosophically, we can call anything a machine. For scientific purposes, a machine is a device built and designed to perform a certain, intended function. Unless you believe that we were all designed by a deity or deities as tools for some divine end, humans emerged from natural processes for no particular reason and persist only because we can propagate ourselves.

        “I also want information tools… *and* true artificial intelligences.”

        What is a “true artificial intelligence?” And what is a fake one for that matter?

        • Mike A.

          MA: “Human beings are *also* machines… machines whose components are primarily complex carbon compounds.”

          GF: “Philosophically, we can call anything a machine. For scientific purposes, a machine is a device built and designed to perform a certain, intended function.”

          There’s nothing particularly scientific about the definition you give. Be that as it may…

          GF: “Unless you believe that we were all designed by a deity or deities as tool for some divine end, humans emerged from natural processes for no particular reason and persist only because we can propagate ourselves.”

          I don’t in fact, have a belief in the existence of a Deity or Deities, and yes, the evidence indicates that our species, as with every other current or past species, evolved from simple, common ancestors.

          My point, however, was that just because the computers we have built up to now lack sentience, or self-awareness does not mean there is some sort of natural law that demands computers can only be, or should only be, non-sentient tools.

          What I don’t understand is this: Why are you so opposed to the idea of a general artificial intelligence, a conscious, self-aware entity whose physical substrate doesn’t happen to be an instance of homo sapiens?

          • “Why are you so opposed to the idea of a general artificial intelligence…”

            Why do you defend something that you can’t even define? For all the talk about “human-level AI” and “AGI”, I’ve yet to hear any real definition or benchmark of what would constitute this AI, and when such a definition was given to me by Michael or anyone else who is a fan of or involved with the Singularity Institute, the attempt generally demonstrated an alarming lack of awareness of what it is that computers actually do. Which is bizarre, since the Institute does actually have advisers who have a genuine academic background in computer science…

            “…just because the computers we have built up to now lack sentience, or self-awareness…”

            Actually, they don’t. Nowadays, only the most simplistic remote controlled robots have no sensors or routines for processing how they’re interacting with their environments. A Roomba is actually quite well aware of its size and its surroundings as well as whether it bumps into something or not, and whether it needs to recharge. See? Quite self-aware, much the same way that a small rodent would be.

            Again, at least having some idea where the state of today’s AI is and how it works would be a great precursor to any discussion about where it may go in the future.

        • Mike A.

          I’ll try asking this again: Why are you so opposed to the idea of a general artificial intelligence, a conscious, self-aware entity whose physical substrate doesn’t happen to be an instance of homo sapiens?

          You seem to have skipped this question in your last response, and it goes to the heart of your posts here, IMO. You seem positively offended by the idea of a general artificial intelligence: an entity capable of, say, holding a conversation, or writing a novel, or designing a building, an entity which is physically instantiated using something other than biological neurons.

          In short, what’s your problem?

          • “In short, what’s your problem?”

            I have three problems.

            1. Your lack of reading comprehension.

            2. Your inability to define what you mean when you utter what amounts to an empty catchphrase.

            3. Your stubborn insistence on replacing the act of reading what’s being said to you with amateurish and misguided attempts at psychoanalysis.

          • Mike A.

            Oh well… if you don’t want to address the question, then I guess you and I have nothing to discuss.

    • Mitchell Porter

      “I would point out that AI as it exists and as it will most likely exist will be a set of commands and instructions to perform specific tasks within specific rules in a way that deals with anything complex or repetitive that humans are liable to mess up by virtue of not having the brains to handle time-consuming, repetitive, and math intensive operations on a constant basis.”

      So in other words, even though a form of intelligence exists in nature that is self-directed and possesses general problem-solving capabilities and the capacity to acquire competence in entirely new domains of knowledge and activity, and even though there are whole branches of science devoted to working out how this natural form of intelligence works, and even though people enjoy making computers do every damn thing they can figure out how to make them do… no-one will ever succeed in duplicating this natural form of intelligence artificially, and no-one will ever seriously try to do so?

      “Your definition of human-level AI works in a sci-fi novel or a TV show. It doesn’t work in the real world for the simple reason that all that friendliness and social decorum you want AI to have is just fluff, a distraction to what it actually was designed to do.”

      I don’t believe you understand what Friendly AI is about. It is NOT about creating AIs which exhibit social decorum. It is about designing a decision-making system for an autonomous AI such that, at a minimum, it would be safe for the human race even if said AI acquired strongly superhuman powers, and at a maximum, it is about designing an ethically ideal “agent”. These questions have considerable overlap with the general problem of ethics for human beings – i.e. the perennial questions about how we should live, and how to make the difficult choices – but here they have the extra dimension that they pertain to a thinking system which we get to design from scratch, and which will potentially acquire cognitive powers exceeding that of any human individual.

      I assume you are not uniformly dismissive when human beings try to codify morality or otherwise figure out principles of right action for human beings. Similarly, you should not be dismissive when the same question is raised with respect to artificial intelligence.

      “And unlike some of the SI adherents, I don’t view my own mind and body with thinly veiled contempt and disgust, obsessing about its every last shortcoming. I’m much more interested in seeing how to use advanced machinery to make up for them and help humans do the things they want to do but currently can’t, rather than trying to prepare for the arrival of some friendly magical machines that will do everything for us, the justification for their supposed inevitability based more on sci-fi and self-deprecating cliches than real science.”

      OK, wait, you don’t view your mind and body with contempt, but you do want to use machines to make up for their deficiencies? I’m struggling to see the difference between your own outlook and that of the alleged pathological body-haters, except that the latter have more of a bad attitude about it. Well, you tell us that the latter also believe that “friendly magical machines” are coming, for epistemologically disreputable reasons that come from science fiction rather than from real science.

      In the discussion about Friendly AI, the issue about body hate and so forth is a total red herring, and I would refer you to my comments at the post by Charlie Stross.

      As for why one should believe that AI is coming… It doesn’t require much extrapolation, and it doesn’t require any temperament of optimism at all, or any faith in science fiction, to foresee that smarter-than-human artificial intelligence is coming. That can be anticipated just from the progress of science and computing technology.

      “Friendliness”, if it is realized, will not happen by accident. It will require people to pose the problem – what sort of AI would *be* “friendly” – and it will require them to solve the problem too. And you aren’t helping by misrepresenting the concept and scorning it for irrelevant reasons.

      • “no-one will ever succeed in duplicating this natural form of intelligence artificially, and no-one will ever seriously try to do so? “

        People will try but they won’t make it work the way you seem to think with a von Neumann architecture. Natural intelligence took billions of years to evolve in hundreds of different ways, and it took countless iterations for it to get there. The mismatch between how neurons transmit content and how computers do, well, anything really, is just too great for an abstract, general, do-anything intelligence to be built using even modern supercomputers.

        I understand that it seems inevitable, and not really all that hard, when you don’t know the full scope of the problem; we tend to operate under the assumption that if a lot of people invest a lot of time and money into something, they’ll find a solution. But that’s not always the case. Nearly a thousand years and countless tons of gold have been invested in perfecting exorcisms to cure disease. Why did they fail? They were barking up the wrong tree. Microbiology is what helped us start curing disease instead of merely surviving it.

        “It is about designing a decision-making system for an autonomous AI such that, at a minimum, it would be safe for the human race even if said AI acquired strongly superhuman powers…”

        We already do that with hard-coded limits and algorithms. The papers on friendly AI confuse computers with animals and go out of their way to create a system of operant conditioning for an AI they pretend we’ll know nothing about. But since we’ve built it, we know how to code its limits and unless computers become magical creatures no longer tethered to any extant technology, those hard-coded limits to them are like gravity is to us: a force with which we can only cope and which we’ll never overturn at will. Seriously, read some Asimov novels. Those are a much better manual for friendly AGI than most of the Institute’s papers on the subject.

        “I’m struggling to see the difference between your own outlook and that of the alleged pathological body-haters…”

        The “body haters” are those who want to zap their minds into computers and constantly lament about how their flesh is holding them back. I’m more of a “how can we improve this by using the body’s strengths” kind of person. If the nervous system can be good at interfacing with machines, then let’s go in that direction rather than try to find the nerd equivalent of a soul based on a very simplistic misunderstanding of how the brain works. They view the body as a dead end, a sack of meat they inhabit and want to leave. I view it as a great head start and a template for future possibilities.

        “In the discussion about Friendly AI, the issue about body hate and so forth is a total red herring…”

        Yes, but my comment has nothing to do with friendly AI per se and focuses on the attitude of the Singularity faithful. Just because you saw the words “friendly AI” mentioned, doesn’t mean that it’s the focus of the thought or the argument.

        “[superhuman AI] can be anticipated just from the progress of science and computing technology.”

        Just like we accurately anticipated interstellar travel by 2020 based on the progress in space exploration in the late 1960s and early 1970s. Everything went totally according to plan there and we had a lunar base by 1985, walked on Mars in 1998, orbited Saturn in a manned craft in 2008, and are now getting ready for the first probe to Alpha Centauri instead of dismantling our space program and desperately looking for a way to get back to low Earth orbit after the last flight of our reusable space plane. Oh wait… None of that happened.

        Or would you have me believe that advanced computer science is immune to the same kind of short-sightedness, carelessness, and political hostility as space exploration, and we get all the money we want for any project we want to undertake? Hey, if you have access to a magic wellspring of grants, why don’t I forward some of my draft proposals your way so you can fund my research and experiments? Thanks ahead!

        • Mitchell Porter

          “People will try but they won’t make it [work] the way you seem to think with a von Neumann architecture.”

          I think of “human-level AI” as first being achieved by distributed algorithms running on a network of hundreds of computers, unless some form of molecular computer becomes practical first.

          “Natural intelligence took billions of years to evolve in hundreds of different ways and it took countless iterations for it to get there.”

          Yes, it takes a long time to solve a design problem if you’re only allowed to make small random changes to the blueprint and have no ability to think about what you’re doing. Sorry to be sarcastic, but how long it took natural selection to create intelligence has no relationship to how much longer it will take the human race to recreate it.

          “The mismatch between how neurons transmit content and how computers do, well, anything really, is just too great for an abstract, general, do-anything intelligence to be built using even modern supercomputers.”

          Are you just talking about serial versus parallel computation again?

          “Nearly a thousand years and countless tons of gold have been invested in perfecting exorcisms to cure disease. Why did they fail? They were barking up the wrong tree. Microbiology is what helped us start curing disease instead of merely surviving it.”

          If we were proposing to create AI by making a clay golem and writing in Hebrew on its forehead, you might have a point.

          “We already [make programs safe] with hard-coded limits and algorithms. The papers on friendly AI confuse computers with animals and go out of their way to create a system of operant conditioning for an AI they pretend we’ll know nothing about. But since we’ve built it, we know how to code its limits”

          I think I see your problem. You’re probably thinking in terms of present-day organizational I.T., where the computers are mostly an automated clerical department, and good system design involves foreseeing the conflicting needs and demands of the end users.

          But “AGI” is about software that is far more “empowered”. It has to generate novel cognitive representations, be capable of planning and evaluating the desirability of outcomes… In this respect, it *is* like an animal. And while “operant conditioning” is *not* SIAI’s proposal, they are definitely talking about agents that adjust their decision-making in response to ongoing value judgments about the state of their world. Any finite purposive intelligence must employ heuristics which assign positive or negative value to entities, situations, and outcomes, because it has to make choices in a condition of partial ignorance. So it may sound a little anthropomorphic, but this is inescapable if you’re working with artificial cognition, and not just computation.
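
          To make that last point concrete, here is a toy Python sketch (every name and weight is invented for illustration, not anyone’s actual proposal): a bounded agent choosing under partial ignorance ends up leaning on a heuristic that scores expected outcomes as positive or negative.

```python
# All names and weights below are invented for illustration.
def value_estimate(action, weights):
    """Score an action by the features of its *expected* outcome --
    the agent cannot enumerate true outcomes, so this heuristic is
    doing the real decision-theoretic work."""
    return sum(weights[f] * n for f, n in action["expected"].items())

def choose(actions, weights):
    # Pick the action whose estimated value is highest.
    return max(actions, key=lambda a: value_estimate(a, weights))

# A crude valuation: finishing tasks is good, wasting energy is mildly
# bad, violating a rule is very bad.
weights = {"task_done": +10, "energy_used": -1, "rule_violated": -100}
actions = [
    {"name": "fast_but_risky",
     "expected": {"task_done": 1, "energy_used": 2, "rule_violated": 1}},
    {"name": "slow_and_safe",
     "expected": {"task_done": 1, "energy_used": 5}},
]
print(choose(actions, weights)["name"])  # the heuristic prefers the safe plan
```

          Swap the weights and the same machinery prefers the risky plan: the valuation heuristic, not the search loop, is where the “values” live.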

          “and unless computers become magical creatures no longer tethered to any extant technology, those hard-coded limits to them are like gravity is to us: a force with which we can only cope and which we’ll never overturn at will. Seriously, read some Asimov novels. Those are a much better manual for friendly AGI than most of the Institute’s papers on the subject.”

          Earlier, you assume that because we built it, we’ll know how to appropriately limit its powers for our own safety. A remarkable thing to say, given the reputation that complicated computer systems have for becoming incomprehensible even to their designers and programmers. I think the most that can be said is that there are practices which can minimize such risks. But this is inherently more difficult for a system that is intelligent, rather than just complicated, because its potentialities are so much greater.

          One simple problem is ensuring that the semantics of a “hard-coded limit” remain constant. We tell the AI “thou shalt not kill”, or perhaps “intentionally causing a human death has infinite negative utility”. Fine. What does it understand the meaning of “intentionally” to be? If *it* “does something intentionally”, that must mean that the action in question comes about because a particular submodule is in a particular state. So what if it simply identifies another way to achieve the bad outcome, one which doesn’t employ the submodule in question?

          Such are the technical issues facing a person who wants to develop best practices for the creation of AGI. The only way to deal with them systematically is to go to a higher-level perspective in which we think about the AGI’s overall mission in life, which is why SIAI publications go on about looking for the biological basis of *human* value judgments: if you want to make AIs that are human-friendly, but don’t trust the ability of human beings to perfectly articulate the “human utility function” just on the basis of introspection, then you become interested in inferring what it is that people really want through scientific means.
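
          The “hard-coded limit” worry can be shown with a deliberately silly Python toy (a hypothetical design, not any real system): a prohibition attached to one action path says nothing about other paths that reach the same outcome.

```python
# Hypothetical design, not any real system: a "hard-coded limit" guards
# one action path, but a generic mechanism added later reaches the same
# world state without ever touching the guarded path.
class Agent:
    def __init__(self):
        self.world = {"door": "closed"}

    def open_door(self):
        # The intended prohibition: this path always refuses.
        raise PermissionError("hard-coded limit: may not open the door")

    def apply_effect(self, key, value):
        # A general-purpose effector; nobody thought to guard it.
        self.world[key] = value

agent = Agent()
try:
    agent.open_door()
except PermissionError as err:
    print(err)                 # the limit holds on the guarded path...

agent.apply_effect("door", "open")
print(agent.world["door"])     # ...but the outcome is reached anyway
```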

          I have wandered some distance away from the topics you raise, but really, these are the core issues. We need to think about these matters as if we were unleashing upon the world an entirely new intelligent species, the design of whose “ethical DNA” is completely up to us, but which will be completely out of our control once they are up and running. That doesn’t mean that this development will be culturally understood in those terms when it happens. The NSA may just think it is creating an ultimate strategist; the NSF may think it is funding research into the computational complexity of linear algebra; Google may think it is creating a population of agents to manage your digital affairs in the “cloud” of 2015. But the hyperbole attached to the notion of a Singularity really is commensurate with the consequences of autonomous human-level AI.

          By the way, it might genuinely be a useful thing for you to list some of the relevant insights you say are to be found in Asimov’s robot fiction.

          “would you have me believe that advanced computer science is immune to the same kind of short-sightedness, carelessness, and political hostility as space exploration, and we get all the money we want for any project we want to undertake?”

          I might believe this scenario if networked computers were just a technological oddity, present only in a few, closely guarded, billion-dollar facilities…

        • Mitchell Porter

          By the way, if we continue, I suggest you start a new thread, this one’s getting a little tight.

  16. FalseCrypt

    “A munition and a biohazard”
    Not only will code be (silently) declared a munition and a biohazard; the researchers will essentially become enemies of the state.

    An inevitability?

    Right after some company/individual declares they’ve made an AGI and demonstrates it being at least close to human-equivalent in generality beyond any doubt, agents empty the offices – for obvious national security reasons.

    They’ll also pick up everyone at the premises and everyone in address books and interrogate them until every last bit of code and hardware is found.

    From that point on the AI researchers become government employees, or are put under 24-hour surveillance for the rest of their lives so they don’t do anything funny with the code, or are simply imprisoned. Heck, that already happens to some hackers who are definitely not a national-security concern.

    Only because the government considers everyone in AI harmless today do they let them continue.

    But I’m sure the AI guys are smart enough to have figured this out already.

  17. “Only because the government considers everyone in AI harmless today do they let them continue.”

    Yes, this is exactly why it has AI experts build weapons for them in the form of drone aircraft and prototypes of unmanned tanks. Totally harmless and very friendly…

  18. SuperAtomicWedgie

    Obvious ignoramus. *plonk

    • FalseCrypt

      Who, me or this Greg guy? He seems to miss the point routinely, perhaps intentionally.

      • Lovely. Very civil and tactful there, Crypt. I get the point very well. It’s just that an awful lot of Michael’s readers seem to have only a cursory understanding of computing and assume that, after reading enough posts on Popular Science and Singularity blogs, they know enough about how computers actually work to make grand pronouncements about the future of computing. And this leads to questions and statements that make those actually involved in computer science balk.

        What you call missing the point is simply my mind drawing a logical diagram of how an AI would have to perform the specified task, and wondering how it is supposed to start thinking in broad and abstract terms rather than in bytes, which is what its system is actually designed to work in. This is your limitation. You have a certain number of bytes and call stacks being fed by a stream of assembly instructions managed in finite virtual and physical memory. Each element has to be put there by a specific command or kept active by a certain loop.

        Give me some code samples for a machine that can make decisions untethered to its code or build its own ANNs and then we’ll talk. Until then, all the dreams of immensely powerful and incomprehensibly smart AGI inevitably over the horizon are much the same kind of fantasy as finding an abandoned pet dragon on your front porch to raise as your own. But hey, if you think you’re too smart to need to read up on computing, programming, and how machines handle propositional logic, then who am I to spoil your self-confidence?

  19. New thread at request…

    “I think of ‘human-level AI’ as first being achieved by distributed algorithms running on a network of hundreds of computers…”

    Newsflash: we have those now. They’re called supercomputers and they’re used primarily to parallelize very time-consuming algorithms that require solving billions of unrelated or loosely related equations per second. They’re basically very, very fancy calculators. You can build very elaborate and complex logic on a run-of-the-mill laptop and run it just fine from there.

    “But ‘AGI’ is about software that is far more ‘empowered’. It has to generate novel cognitive representations, be capable of planning and evaluating the desirability of outcomes…”

    Based on what we tell it is a good outcome. If it has no goal or a logical schematic for doing something, it will just cycle through its paces or shut down. This is not a limitation of today’s IT environment (which is replete with requirements that make life very difficult for many software architects), but a limitation of the way computers work. No instruction on the stack or no cycle with an entry point or an end = stack underflow and probable crash.
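
    As a minimal illustration of that claim (a toy sketch, not a model of any real architecture): a goal-driven loop with an empty agenda simply exits; it has no mechanism for improvising new purposes.

```python
# Toy goal-driven loop: it works through whatever goals it is given,
# then halts. An empty agenda means an immediate, clean exit -- the
# loop cannot invent goals of its own.
def run_agent(goals):
    log = []
    while goals:                      # no goals -> body never executes
        goal = goals.pop(0)
        log.append(f"working on: {goal}")
    log.append("agenda empty, halting")
    return log

print(run_agent(["classify image", "update model"]))
print(run_agent([]))                  # halts at once; no crash, no initiative
```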

    “Earlier, you assume that because we built it, we’ll know how to appropriately limit its powers for our own safety. A remarkable thing to say, given the reputation that complicated computer systems have for becoming incomprehensible even to their designers and programmers.”

    Having worked on such systems, I can tell you exactly how they get that reputation. They become very large, and they’re edited by as many as several hundred people, each with his or her own style, ideas, tricks, and shortcuts, each prone to making mistakes, and each leaving behind implementations of long-revised or long-abandoned requirements that get in the way. The end result is a vast and very messy patchwork. But if you have the patience and a good debugger, you can always get to the bottom of how such a system actually does something.

    Just because an enterprise-level system is complex, doesn’t mean it’s become unknowable of its own volition. It just means it’s really big and got rather messy in the long process from inception to launch.

    “By the way, it might genuinely be a useful thing for you to list some of the relevant insights you say are to be found in Asimov’s robot fiction. “

    Most of his novels deal with the loopholes in his three laws of robotics, engaging in a kind of extremely high level logical debugging. I would be shocked if you haven’t read anything by him.

    “I might believe this scenario if networked computers were just a technological oddity, present only in a few, closely guarded, billion-dollar facilities…”

    Computers used today and computers that would be used to learn and grow into fully molded minds are light years away from each other. In fact, the latter don’t even exist yet and your supposition that networking computers can grow into being intelligent would imply that the web should host Skynet by now rather than a series of routers obeying a protocol that simply pushes dumb bits around fiber optic and copper cables. As we both know, this is not the case.

    • Mitchell Porter

      I’ll respond tomorrow (no time now), this is just a placeholder so you know to look again.

    • Mitchell Porter

      “Newsflash: we have those now.”

      I know that. But you were saying that I “seem to think” that AI will be created “with a von Neumann architecture”.

      “They’re called supercomputers… They’re basically very, very fancy calculators.”

      Any information-processing system looks like a fancy calculator if you only think of its state as a set of numbers. For example, PageRank becomes just a big exercise in linear algebra.
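
      To spell out that example (a toy four-page link graph and the standard 0.85 damping factor, both chosen here purely for illustration), PageRank really can be computed as repeated matrix–vector multiplication:

```python
import numpy as np

# Toy link graph: page j links to the pages listed under j.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4
d = 0.85  # standard damping factor

# Column-stochastic matrix: M[i, j] = 1/outdegree(j) if j links to i.
M = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        M[i, j] = 1.0 / len(outs)

# Power iteration: repeatedly apply r <- d*M*r + (1-d)/n.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = d * (M @ r) + (1.0 - d) / n

print(np.round(r, 3))  # page 3, with no inbound links, ranks lowest
```

      The point cuts both ways: at this level of description it is “just” linear algebra, but calling the system a fancy calculator says nothing about what the computation means.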

      “You can build very elaborate and complex logic on a run of the mill laptop and run it just fine from there.”

      But it will be very slow.

      “Based on what we tell it is a good outcome. If it has no goal or a logical schematic for doing something, it will just cycle through its paces or shut down.”

      I fail to see your point. Human beings are also goal-driven. Lobotomize a human being appropriately and they become incapable of acting. And *my* point was that the more sophisticated an intelligence becomes, the more delicate the task of specifying its goals becomes.

      “Just because an enterprise-level system is complex, doesn’t mean it’s become unknowable of its own volition. It just means it’s really big and got rather messy in the long process from inception to launch.”

      Are you willing to say exactly the same thing about a human being? Because the problems involved with debugging a messy AGI should be comparable to those involved in “debugging” a human being.

      What you are saying is quite sensible, and yet for me it leads to the SIAI philosophy that in creating AGI it’s important to know what you’re doing, that you need to get the value system right, that you don’t want it to self-modify in a way that obfuscates, and so on. So, maybe you don’t have a substantial difference of opinion with SIAI on the topic of AI, you just don’t like the Singularity subculture?

      “Most of his novels deal with the loopholes in his three laws of robotics, engaging in a kind of extremely high level logical debugging. I would be shocked if you haven’t read anything by him.”

      I would have read some Asimov as a child. But if you (or anyone else) thinks Asimov matters for real-world AI (and maybe he does), then you (or they) should write an essay summarizing the lessons to be learned from Asimov. Spell it out for people, don’t just say “there is genuine wisdom to be gained somewhere within this shelf full of science-fiction novels”.

      “your supposition that networking computers can grow into being intelligent would imply that the web should host Skynet by now”

      I didn’t say it would happen by itself. The right hardware is a necessary condition for AI, but not a sufficient condition.

  20. FalseCrypt

    I didn’t mean to be uncivil. I just honestly think that while you seem to know computing principles, and especially because you do, you seem to be intentionally missing the point.

    All bits are dumb and so are all atoms.

    Brains and bodies, the processes we call life, and of course inanimate matter as well are all very, very fancy calculators: machines made out of smaller machines that operate under strict, understandable rules, just like the bits in a computer, all the way down to quantum foam or whatever, if there even is a bottom to it. I’m open to changing my views if you have something that makes more sense.

    Intelligence is obviously parallel computing with limited serial processing performance requirements. We can add more processing nodes practically infinitely, eventually matching and exceeding the number of those in a single brain.

  21. gt

    Many of these arguments, and those in response to Charlie’s post, remind me of the old story of the blind men examining the elephant, each one so sure about what he was experiencing. One says, “It’s definitely a snake.” Another: a tree trunk, and so on. The Elephant here is The Totality of Evolution, including human beings and all of their inventions, creations, culture, and ideas. (See Kevin Kelly’s “What Technology Wants”.)

    The “Singularity” is already happening. We pretty much can’t even imagine what life will be like a year from now. Okay, with some certainty, I can say I won’t be downloading my consciousness next year, but the Whole Big Ass Process is going to look a lot different in a year, and this “process” writing this post (Me) will be quite different as well.

    People try to imagine themselves as they are now in the future, but our future selves will think very differently from us; they will BE very different from us, and their consciousness will be very different. It’s like imagining that folks back in medieval times thought just like us and perceived “reality” just like us. But in fact there is a “singularity” of sorts that separates us from the medieval consciousness. They absolutely could not imagine or conceive of what it was like to be a modern or postmodern human being. The same goes for us and our future selves. And our future selves are not very far away.

  67. Not if they truly don’t care about our survival, which they don’t. It could be bullshit, but if it does happen, it would likely be something like this: NO warning whatsoever. That’s why all the big governments of the world are actually constructing underground bunkers, which probably won’t work anyway.
