Wendell Wallach to Give Keynote on AI Morality at WFS Meeting

Wendell Wallach will give the keynote talk at the plenary session of the World Future Society Conference in Boston on July 8th. The title of the talk is “Navigating the Future: Moral Machines, Techno Humans, and the Singularity.” Other speakers at WorldFuture 2010: Sustainable Futures, Strategies, and Technologies will include Ray Kurzweil, Dennis Bushnell, and Harvey Cox.

Wallach will also be making a splash in an upcoming issue of Ethics and Information Technology dedicated to “Robot Ethics and Human Ethics.” On the Moral Machines blog, Wallach offers the first two paragraphs of his editorial, along with some additional information about the issue:

It has already become something of a mantra among machine ethicists that one benefit of their research is that it can help us better understand ethics in the case of human beings. Sometimes this expression appears as an afterthought, looking as if authors say it merely to justify the field, but this is not the case. At bottom is what we must know about ethics in general to build machines that operate within normative parameters. Fuzzy intuitions will not do where the specifics of engineering and computational clarity are required. So, machine ethicists are forced head on to engage in moral philosophy. Their effort, of course, hangs on a careful analysis of ethical theories, the role of affect in making moral decisions, relationships between agents and patients, and so forth, including the specifics of any concrete case. But there is more here to the human story.

Successfully building a moral machine, however we might do so, is no proof of how human beings behave ethically. At best, a working machine could stand as an existence proof of one way humans could go about things. But in a very real and salient sense, research in machine morality provides a test bed for theories and assumptions that human beings (including ethicists) often make about moral behavior. If these cannot be translated into specifications and implemented over time in a working machine, then we have strong reason to believe that they are false or, in more pragmatic terms, unworkable. In other words, robot ethics forces us to consider human moral behavior on the basis of what is actually implementable in practice. It is a perspective that has been absent from moral philosophy since its inception.

“Robot Minds and Human Ethics: The Need for a Comprehensive Model of Moral Decision Making”
Wendell Wallach

“Moral Appearances: Emotions, Robots and Human Morality”
Mark Coeckelbergh

“Robot Rights? Toward a Social-Relational Justification of Moral Consideration”
Mark Coeckelbergh

“RoboWarfare: Can Robots Be More Ethical than Humans on the Battlefield?”
John Sullins

“The Cubical Warrior: The Marionette of Digitized Warfare”
Lambèr Royakkers

“Robot Caregivers: Harbingers of Expanded Freedom for All”
Yvette Pearson and Jason Borenstein

“Implications and Consequences of Robots with Biological Brains”
Kevin Warwick

“Designing a Machine for Learning and the Ethics of Robotics: the N-Reasons Platform”
Peter Danielson

Book Reviews of Wallach and Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009.
Anthony F. Beavers
Vincent Wiegel
Jeff Buechner

Bravo! Wallach goes provocatively after the heart of the issue. Moral philosophy needs machine ethics to test its descriptive theories of human morality, and of morality in general. Philosophy without engineering and the scientific method is fatally limited.
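As a purely hypothetical sketch of what “translating a theory into specifications” demands (nothing here is Wallach’s method; the actions, numbers, and the violates_duty flag are invented for illustration), consider how even a toy decision procedure forces fuzzy terms like “harm” and “duty” to become explicit values and rules:

```python
# Hypothetical toy example: two ethical theories as decision procedures.
# The point is not the answers but what implementation forces us to decide:
# "harm" and "benefit" must become numbers, "duty" a concrete predicate.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float      # a fuzzy intuition, now a number we must defend
    expected_benefit: float
    violates_duty: bool       # e.g., lying or breaking a promise

def choose_act_utilitarian(actions: list[Action]) -> Action:
    # Pure consequentialism: maximize net expected benefit.
    return max(actions, key=lambda a: a.expected_benefit - a.expected_harm)

def choose_deontological(actions: list[Action]) -> Action:
    # Rule-first: exclude duty violations, then decide by consequences
    # (falling back to the full set if every option violates a duty).
    permitted = [a for a in actions if not a.violates_duty] or actions
    return choose_act_utilitarian(permitted)

options = [
    Action("tell a comforting lie", expected_harm=0.1,
           expected_benefit=0.8, violates_duty=True),
    Action("tell the hard truth", expected_harm=0.5,
           expected_benefit=0.6, violates_duty=False),
]

print(choose_act_utilitarian(options).name)  # the lie wins on net benefit
print(choose_deontological(options).name)    # the truth wins once duties bind
```

Even at this toy scale the two procedures disagree on identical inputs, which is exactly the assumption-surfacing role the editorial claims for machine ethics.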

This has an impact on morality that concerns all human beings. For millennia we have understood morality and ethics through introspection, contemplation, and meditation, but all of these avenues are ultimately limited without cognitive experiments to back them up, and such experiments require AI. Because we lacked the technology to conduct these experiments throughout history, a demand arose for objective moral codes, often backed by a claimed divine authority. The problem is that all of these “objective moral codes” are based on language, which is fuzzy and open to many interpretations. The morals and laws of the future will be based on finer-grained physical descriptions and game theory, not abstract words. We cannot yet perfectly articulate our own moralities because neuroscience has not progressed to the point where we can describe our moral behavior deterministically, in terms of neural activation patterns or perhaps something even more fundamental.
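To make the game-theory claim concrete, here is a hedged sketch (the payoffs are the standard textbook prisoner’s-dilemma values, not anything from Wallach’s work): a norm like reciprocity can be written as an exact strategy in an iterated game, with none of the interpretive slack of a verbal commandment.

```python
# Illustrative iterated prisoner's dilemma: "reciprocity" as an exact strategy.
# C = cooperate, D = defect; payoffs are the standard textbook values.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # The norm, stated precisely: cooperate first, then mirror their last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []          # each strategy sees the other's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

Whether future law actually takes this form is speculation, but the contrast with a sentence like “be kind to those who are kind to you” shows what “finer-grained” would mean.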

Critics might say, “It is your need to formalize ethics as a code that makes you all so uncool.” Well, too bad. The foremost motivation here is knowledge; secondary is the concern that if we don’t formalize an ethics, someone else will formalize it for us and put it into a powerful artificial intelligence that we can’t control. We cannot avoid formalizing ethics for machines, and thereby making provocative and potentially controversial statements about human morality in general, because the long-term growth of artificial intelligence is unstoppable, barring some civilization-wide catastrophe. Humanity needs to come to terms with the fact that we will not be the most powerful beings on the planet forever, and we need to engineer a responsible transition instead of remaining in denial about it.

Promoting machine ethics as a field is challenging because much of the bedrock of shared cultural intuition regarding morality says that morality is something to be felt, not analyzed. But cognitive psychologists demonstrate every day that morality can indeed be analyzed and experimented with, often with surprising results. When will the rest of humanity catch up with them and adopt a scientific view of morality, rather than clinging to an obsolete mystical view?

Comments

  1. Max

Excellent post. But humanity will never “catch up”, not the present one at least; evolutionary psychology is already regarded with disdain in all sorts of “progressive and liberal” circles.

One need not wait for the rest of humanity, though. In fact, it would be best if progress in the field is fast enough that the rest of society does not have time to react, so that there won’t be time for regulatory and bureaucratic barriers to appear.

There are already disturbing voices calling for everything (from stem cells and biotech to AI) to be regulated to death. The best way for the field to progress is to keep everything under the radar of the general public.

  2. Mike

Max is right with regard to regulation in the US. Sports and celebrity gossip are useful in keeping the electorate preoccupied and unorganized. Who was it that said, “We will be gods or we will have war”?

  3. G-man

We may be witnessing a form of amoral AI right now with the BP disaster: a huge, unaccountable entity that controls access to the sea, sky, and ocean bed near the rig, plus all the beaches where the oil is making landfall, going so far as to prevent photography of, and even access to, those areas by citizens and the government.

We all naturally feel powerless about that. Now take it about a million times farther, and much more quickly, and that is roughly what we could encounter with an unfriendly AI in the not-so-distant future…

