Wendell Wallach will be giving the keynote talk at the plenary session of the World Future Society Conference in Boston on July 8th. The talk is titled “Navigating the Future: Moral Machines, Techno Humans, and the Singularity.” Other speakers at WorldFuture 2010: Sustainable Futures, Strategies, and Technologies will be Ray Kurzweil, Dennis Bushnell, and Harvey Cox.
Wallach will also be making a splash in an upcoming issue of Ethics and Information Technology dedicated to “Robot Ethics and Human Ethics.” At the Moral Machines blog, Wendell offers the first two paragraphs of the editorial, along with some additional information about the issue:
It has already become something of a mantra among machine ethicists that one benefit of their research is that it can help us better understand ethics in the case of human beings. Sometimes this expression appears as an afterthought, looking as if authors say it merely to justify the field, but this is not the case. At bottom is the question of what we must know about ethics in general to build machines that operate within normative parameters. Fuzzy intuitions will not do where the specifics of engineering and computational clarity are required. So, machine ethicists are forced to engage head-on in moral philosophy. Their effort, of course, hangs on a careful analysis of ethical theories, the role of affect in making moral decisions, relationships between agents and patients, and so forth, including the specifics of any concrete case. But there is more here to the human story.
Successfully building a moral machine, however we might do so, is no proof of how human beings behave ethically. At best, a working machine could stand as an existence proof of one way humans could go about things. But in a very real and salient sense, research in machine morality provides a test bed for theories and assumptions that human beings (including ethicists) often make about moral behavior. If these cannot be translated into specifications and implemented over time in a working machine, then we have strong reason to believe that they are false or, in more pragmatic terms, unworkable. In other words, robot ethics forces us to consider human moral behavior on the basis of what is actually implementable in practice. It is a perspective that has been absent from moral philosophy since its inception.
“Robot Minds and Human Ethics: The Need for a Comprehensive Model of Moral Decision Making”
“Moral Appearances: Emotions, Robots and Human Morality”
“Robot Rights? Toward a Social-Relational Justification of Moral Consideration”
“RoboWarfare: Can Robots Be More Ethical than Humans on the Battlefield?”
“The Cubical Warrior: The Marionette of Digitized Warfare”
“Robot Caregivers: Harbingers of Expanded Freedom for All”
Yvette Pearson and Jason Borenstein
“Implications and Consequences of Robots with Biological Brains”
“Designing a Machine for Learning and the Ethics of Robotics: the N-Reasons Platform”
Book Reviews of Wallach and Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford, 2009.
Anthony F. Beavers
Bravo! Wallach provocatively goes after the heart of the moral issue. Moral philosophy needs machine ethics to test its descriptive theories of human morality and morality in general. Philosophy without engineering and the scientific method is fatally limited.
This has an impact on morality that concerns all human beings. For millennia we have approached morality and ethics through introspection, contemplation, and meditation, but all of these avenues are ultimately limited without cognitive experiments to back them up, and such experiments require AI. Because we lacked the technology to conduct these experiments throughout history, a demand arose for objective moral codes, often backed by a claimed divine authority. The problem is that all of these “objective moral codes” are expressed in language, which is fuzzy and open to many conflicting interpretations. The morals and laws of the future will be based on finer-grained physical descriptions and game theory, not abstract words. We cannot yet perfectly articulate our own moralities, because neuroscience needs to progress to the point where we can describe our moral behavior more deterministically, in terms of neural activation patterns or perhaps something even more fundamental.
Critics might say, “it is your need to formalize ethics as a code that makes you all so uncool.” Well, too bad. The foremost motivation here is knowledge; secondary is the concern that if we don’t formalize an ethics, someone else will formalize it for us and put it into a powerful artificial intelligence that we can’t control. We cannot avoid formalizing ethics for machines, and thereby making provocative and potentially controversial statements about human morality in general, because artificial intelligence’s long-term growth is unstoppable, barring some civilization-wide catastrophe. Humanity needs to come to terms with the fact that we will not be the most powerful beings on the planet forever, and we need to engineer a responsible transition instead of remaining in denial about it.
Promoting machine ethics as a field is challenging because much of the bedrock of shared cultural intuition about morality holds that morality is something to be felt, not analyzed. But cognitive psychologists demonstrate every day that morality can indeed be analyzed and experimented with, often with surprising results. When will the rest of humanity catch up with them and adopt a scientific view of morality, rather than clinging to an obsolete mystical view?