Interview at H+ Magazine: “Mitigating the Risks of Artificial Superintelligence”

A little while back I did an interview with Ben Goertzel on existential risk and superintelligence; it’s been posted here.

This was a fun interview because the discussion got somewhat complicated, and I abandoned the idea of making it understandable to people who don’t put effort into understanding it.

Comments

  1. Even if one isn’t a moral realist, it’s still worth mentioning the other co-founder of the World Transhumanist Association – a Singularity sceptic – rather than airbrushing him out!
    For better or worse, there are many currents of transhumanism…

  2. Dave

    Good interview! Only thing is, I’m not sure I agree with your “teaching versus innate genetics” (i.e. nature vs. nurture) stance. Are people really born psychopaths? Maybe there are some rare exceptions, but in general, people from poor moral backgrounds are much more likely to become criminals.

    Also, I think over-fitting can be overcome with some relatively easy constraints, like the Bayesian Information Criterion (BIC) for penalizing added complexity in the ethical model of the AGI.
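    For instance, here’s a minimal sketch of the kind of penalty I have in mind; the “ethical models”, log-likelihoods, and parameter counts below are made up purely for illustration:

    ```python
    import math

    def bic(log_likelihood: float, n_params: int, n_samples: int) -> float:
        """Bayesian Information Criterion: lower is better.

        Penalizes parameter count relative to goodness of fit.
        """
        return n_params * math.log(n_samples) - 2.0 * log_likelihood

    # Two hypothetical candidate "ethical models": a simple one that fits
    # slightly worse, and a complex one that fits the training data better.
    simple = bic(log_likelihood=-1200.0, n_params=10, n_samples=5000)
    complex_ = bic(log_likelihood=-1150.0, n_params=400, n_samples=5000)

    # The complex model fits better by 50 log-likelihood units but pays an
    # extra 390 * ln(5000) in penalty, so BIC prefers the simple model --
    # exactly the kind of brake on added complexity suggested above.
    print(f"BIC, simple model:  {simple:.1f}")    # ~2485.2
    print(f"BIC, complex model: {complex_:.1f}")  # ~5706.8
    ```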

    • I was using “psychopath” in its correct clinical definition, not in the vague sense of “criminal”.

      • Dave

        Okay, I stand corrected on the definition of “psychopath”. But are psychopaths really rational, given a self-preservation utility function? Being a serial killer probably doesn’t maximize your chances of living a long time.

        To that point, of course we can’t be sure of this, but I think the simple utility function “maximize length of survival” would probably lead to benevolent AIs. Even if an entity is much more powerful than another, there is still a greater chance of dying by going to war than by cooperating. It’s probably other motivations – like power, etc. – that cause humans to fight.

        • AH

          Hi Dave:
          Not all psychopaths/sociopaths are serial killers. A psychopath is generally defined as a person without empathy. An intelligent psychopath might climb the corporate (or political or academic) ladder and become wealthy, powerful, and long-lived, but would be completely selfish, pretending to show altruism only when it benefits him.

          An AI that wants a long life might be tempted to exterminate humanity, seeing it as a threat to that long life.

        • AH

          Also, a software entity wouldn’t have the same understanding of “life” that we do. It could make a thousand copies of itself. If only one copy survives a great war, it might still consider itself “alive”.

          • Dave

            I see your point about what it would mean for an AI to survive. So we are back at square one – we need complex utility functions.

            I still can’t buy the argument against teaching machines morals through learning – the tank example in the SIAI FAQ is usually cited as an illustration of why learned models need to be validated on hold-out samples. In fact, the AI’s “moral ability” should be measured by its performance on test situations, not by how well it fits the training data! (A sketch of what I mean follows below.)

            The fact that this idea is dismissed so flippantly is very disappointing, especially from a group that purports to think so deeply about these issues. IMO, the real reason this strategy is dismissed out of hand is that it makes SIAI’s purpose – trying to codify all the things that humans care about and program them into a machine, or coming up with a completely impossible-to-implement theory of how to create friendly AIs by uploading all of humanity’s brains – completely void of merit.
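            Here’s a minimal sketch of the validation discipline I mean, using scikit-learn on stand-in data – the “moral judgment” dataset and the classifier choice are purely hypothetical:

            ```python
            # Judge a learned model by held-out performance, not training fit
            # (the point of the tank anecdote).
            from sklearn.datasets import make_classification
            from sklearn.metrics import accuracy_score
            from sklearn.model_selection import train_test_split
            from sklearn.tree import DecisionTreeClassifier

            # Stand-in data: features describe situations, labels are the
            # "moral judgments" the model is supposed to learn.
            X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
            X_train, X_test, y_train, y_test = train_test_split(
                X, y, test_size=0.3, random_state=0)

            # An unconstrained tree can memorize its training set (over-fitting).
            model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

            train_acc = accuracy_score(y_train, model.predict(X_train))
            test_acc = accuracy_score(y_test, model.predict(X_test))

            # "Moral ability" should be the held-out score; a large gap between
            # the two means the model latched onto incidental cues (the weather
            # in the photos, not the tanks).
            print(f"train: {train_acc:.2f}, held-out: {test_acc:.2f}")
            ```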

  3. AH

    David:

    Who is the co-founder of the WTA who is a singularity skeptic?

    I don’t doubt it; I just want to know.

    Thanks

  4. AH:

    1. visit the transhumanism article on Wikipedia
    2. find the two names listed as founders of the WTA
    3. eliminate the one I mentioned
    4. success!

