I’m in Melbourne Beach, Florida today, covering the 3rd Annual Terasem Colloquium on the Law of Transbeman Persons. There are about 20 of us in a classroom-style arrangement, with a nice large flatscreen at the front of the room for presentations. This colloquium is being thoroughly recorded in every possible way: there is a filming team (the same company that covers Space Shuttle launches), time-delayed streaming video, a professional stenographer, and call-in listening. (Callers within the continental U.S. and Canada may dial 1-877-879-6207; other countries: (00+1) 1-719-325-4775.) This post will be regularly updated as new information comes in. The program is available here. Lots of condensed brainpower in this room — Marvin Minsky, Max More, Mike Perry (from Alcor), Justin Lowe (from ImmInst), and quite a few Ph.D.s I was fortunate to meet at the reception the night before.
Dr. Martine Rothblatt is opening up the day with her welcoming announcements. She’s reviewing the history of various law colloquia. For instance, 1958 saw the first colloquium on the law of outer space. Back then, the space colloquia might have seemed as odd as the colloquium we are conducting today. At that time, the only thing that had ever been launched was Sputnik — a grapefruit-sized satellite.
Today, there are already quite a few examples of early forays into the transhuman, more than there were forays into space in 1958. Examples: ASIMO; organ transplantations are commonplace (75,000+ so far); over 100 people are cryonically suspended; a dog kidney has been successfully vitrified and transplanted. Brain-computer interfaces demonstrate the first steps towards complete mind uploading. We have caps that serve as input devices just by measuring blood flow in the brain.
Legal efforts into space began in 1958. Now, 50 years later, almost into 2008, we have the first legal efforts to look into the law of transbeman persons. In the case of space, the goal was to avoid conflicts among states; with transbeman law, the goal is to avoid conflicts among persons. We are on the verge of creating non-DNA or only partially DNA-based persons. The need to avoid conflict among persons is even stronger today than the need to avoid conflict over space was fifty years ago.
Many of the questions revolve around boundaries. In 1958, it was concluded that the inherent dynamics of spaceflight meant that traditional concepts of sovereignty had to be abandoned. Yet someone had to be legally responsible for every object placed into space — registration, coordination, and strict liability for damages. People sometimes argue, “why have laws if they can be broken?” Well, even if they’re broken 10% or 20% of the time, it’s better than having nothing.
In the context of transbeman law, what can we conclude? The age-old concepts of citizenship and death may have to give way to new technological realities, in the same way that happened with space. But also, responsibility for transbeman persons needs to be regularized. Many issues: rights and obligations, application of laws, personhood, citizenship, parenting, etc. (Funny slide here with one copy of Data from Star Trek threatening another copy.)
Obviously, law has to evolve with technology. The same way that Copernicus’ discovery of the rotation of the Earth limited the amount of time that old-school sovereignty concepts could continue, Turing’s theory of machine consciousness was the beginning of the end of traditional concepts of human-only citizenship.
In 10, 20, or 50 years, what will happen? Perhaps a transition to an information-theoretic concept of personhood. For instance, 30 years ago “dead” meant a stopped heart; then it changed to mean the cessation of electrical activity in the brain. At the time, this was radical. AI citizenship is on the horizon.
Terasem: its mission is to educate the public, to greatly extend our lives, in diversity, unity & joy, via geoethical nanotech and personal cyberconsciousness — through conferences, websites, journals, radio, film, research, and especially practical demonstrations. (Martine introduces the staff of the Terasem Movement, Inc. I was fortunate to sit with many of them at dinner last night and had some nice conversations about CyBeRev.)
Martine welcomes Marvin Minsky — father of AI, prolific inventor, so many inventions that each time we look at the list, a new one surprises us. Software for confocal scanning microscopes — these are central to all work in biotech.
(Professor Minsky now takes the podium.)
Prof. Marvin Lee Minsky, Ph.D.
MIT – “Father of Artificial Intelligence”
“The Emotion Machine”
Minsky shows us a list of problems that we’ll have to deal with soon. Global warming, biodiversity, energy, terrorism, fundamentalism, migration and cultural wars, health costs and epidemics, new viruses and nano things, planetary impactors, global internet attacks, etc. Another huge problem will be caused by life extension and the drive towards (ultimately) immortality. This will cause an acute labor shortage, forcing many countries to import labor from overseas. Calment — the French supercentenarian — lived to 122. In an interview she revealed that she met Van Gogh, but couldn’t stand to be in his presence, because he had such awful breath! Apparently the biographers left this out of their official accounts… anyway, as Aubrey de Grey and others have argued, it seems like lifespans will lengthen to the point where our lives are extended faster than they’re running out. Anyway, our most urgent problem may be a Labor Crunch.
“SAINT” – symbolic automatic integrator (Slagle, 1961) — an automated program for computing integrals. The program took about 15 minutes to find an integral, about the same as what it would have taken an MIT student at the time to figure it out. The computer even felt lifelike to us, because it was so slow.
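(An aside of mine, not part of the talk: the class of integrals SAINT needed 15 minutes for is now solved instantly by off-the-shelf symbolic math libraries. A minimal sketch with SymPy:)

```python
# A modern analogue of SAINT-style symbolic integration, using SymPy.
# (Illustration only — SAINT itself was a LISP program, not Python.)
import sympy as sp

x = sp.symbols('x')

# A classic freshman-calculus integral of the kind SAINT could handle:
expr = x * sp.exp(x**2)
antiderivative = sp.integrate(expr, x)
print(antiderivative)  # exp(x**2)/2
```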
(Shows a slide similar to an IQ-test question, about analogies between geometric symbols — Evans’ geometric analogy program, MIT, 1964.) Today, we don’t see programs like this. Physics envy: AI designers are always looking for a unified theory. Genetic programming took off, and it had useful applications in some niches: for instance, optical character recognition and the design of computer chips. Today, we have tens of thousands of programs based on 10 or 12 fads. None of these fads lead us to commonsense reasoning.
Many computer programs surpass human performance at specialized jobs. Ray Kurzweil’s FatKat program is supposed to perform better than brokers at predicting changes in the stock market.
But there is little progress in robotics, for instance. No robot can simply put a pillow in a pillowcase. In AI, there is no program that can read a simple story, like a fairy tale, and summarize what it means.
Examples of commonsense knowledge: you can use a string to pull, but not push. If you break something, you have to pay for it. People usually go indoors when it rains. And so on. Every child knows about 20 million or more similar statements. Cyc has 3-4 million such statements. MIT OpenMind is another project in this direction.
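(My aside: the Cyc/OpenMind idea is essentially a huge store of plain assertions you can query, including knowing when you *don’t* know. A toy sketch — the storage format and predicate names here are invented, not Cyc’s actual representation:)

```python
# Toy commonsense knowledge base in the Cyc/OpenMind spirit:
# plain (subject, predicate) assertions, queryable, with an
# explicit "don't know" answer. Format is my invention.
FACTS = {
    ("string", "can_pull"): True,
    ("string", "can_push"): False,
    ("rain", "people_go_indoors"): True,
    ("broken_thing", "must_pay_for"): True,
}

def knows(subject, predicate):
    """Return True/False if the KB has an assertion, None if unknown."""
    return FACTS.get((subject, predicate), None)

print(knows("string", "can_push"))       # False
print(knows("pillow", "fits_in_case"))   # None -> not in the KB
```

The interesting part is scale: a child holds tens of millions of such statements; Cyc has a few million.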
Commonsense thinking (shows a slide of his grandchildren playing with blocks). Physical: what if I pull out that bottom block? Social: should I help with his tower or knock it down? Any child thinks about the situation from many different perspectives: emotional, mental, bodily, visual, tactile, spatial. (Dr. Minsky goes into a small sidenote on children’s toys here: why are Legos so popular when you can’t even create triangles with them? They’re so rigid and inflexible… I recommend simple blocks or Tinkertoys. I can empathize with this.)
So, why are AIs still so specialized? Most AI researchers have tried to invent some single technique that could extend to solve all types of problems. So our field split into specialties: reinforcement learning, rule-based systems, neural networks, statistical inference, formal logic, genetic programs, “baby machines”. Each method works only in certain domains. We need to know much more about where each method works and why. Today, statistics making use of conditional probabilities is most popular.
Genetics can only build creatures that can solve a few serious problems. When you tell a child a fairy tale, they learn tens of thousands of examples where bad things happen to people. Genetics cannot accumulate huge knowledge bases. You have billions of nucleotides in your genome: if that were a real database, why would you even need to go to school? I haven’t seen anyone else look at how evolution is limited in this way.
To make a smart machine, you need to give it different ways to think instead of just one. The human brain has hundreds of different centers, not just one. Even if you knew exactly how neurons work, you wouldn’t necessarily be very far along. There is a lack of communication between people in AI and neuroscientists.
Think about a chair. There are so many different ways to represent it. As something that is bought, as something that makes people comfortable. In my book, I call this panalogy. If you only understand something in one way, you get stuck. If you have multiple ways, then when one interpretation doesn’t work, you can quickly start thinking about it in another way.
Suppose we agree that what makes humans intelligent is that when we get stuck thinking in one way, we can quickly (in less than a second) switch to a new way of looking at it. But there are further questions. How do you know when to switch? What do you switch to? And so on. Here is a sequence I propose in my recent book, The Emotion Machine:
If a problem seems familiar, try reasoning by analogy.
If it seems unfamiliar, change how you’re describing it.
If it still seems too difficult, divide it into several parts.
If a problem is too complex, replace it by a simpler one.
If your methods do not work, ask someone else for help.
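(My aside: the cascade reads like a dispatch loop — try each “way to think” in order, switching whenever the current one fails. A toy rendering; the strategy names are quoted from the talk, but the dispatch machinery is my invention, not Minsky’s:)

```python
# Toy rendering of Minsky's cascade of "ways to think": try each
# strategy in order, switching whenever the current one fails.
STRATEGIES = [
    "reason by analogy",
    "change how you're describing it",
    "divide it into several parts",
    "replace it by a simpler one",
    "ask someone else for help",
]

def solve(problem, attempt):
    """Try strategies in order; `attempt` returns a result or None on failure."""
    for strategy in STRATEGIES:
        result = attempt(problem, strategy)
        if result is not None:
            return strategy, result
    return None, None

# Example: a solver that only succeeds once the problem is divided up.
chosen, _ = solve("pillowcase", lambda p, s: "ok" if "divide" in s else None)
print(chosen)  # divide it into several parts
```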
I named my book “Emotion Machine” as a way of getting people to read it… unfortunately this didn’t work. Anyway:
Old view of emotions:
most emotions add features to thoughts, the way an artist adds colors to black-and-white drawings. We grew up in a culture teaching us that emotions are additional mysterious features of thoughts. I am arguing the opposite.
New view of emotions:
Emotions actually serve to suppress aspects of thought rather than augment them. For instance, love can cause us to suspend our critical faculties.
(Minsky’s wife reads aloud a page from the Emotion Machine that is relevant here. I always liked this part, and you can see it online here.)
What is a “way to think”? Think of a mind as a cloud of resources. Each mental state activates some different set of mental resources. Anger, hunger, fear, thirst: these selectively activate or deactivate sets of mental resources. I say that “low”-level thinking and “higher” thinking work the same way. Splitting a job into parts, making an analogy, etc.
In the end, I conclude this works on six levels, in order from “superego” to “id”: self-conscious emotions, self-reflective thinking, reflective thinking, deliberate thinking, learned reactions, instinctive reactions. Psychology got stuck due to physics envy. People are looking for simple principles, but we know that the brain is really complex.
So that’s the idea. I’m trying to get a group of researchers together to implement this theory. I’m trying to get funding together for it at this point. If we go in this direction, it’ll take 5-6 years to get a working prototype. If this architecture works, perhaps we can create some sort of collaborative system where many people can contribute details.
My fear is that AI won’t take off until we get some architecture, like this one, that others can contribute to. There is currently no high-level architecture for developing different methods. If humans survive the next half-century, we’ll see systems that arguably deserve the same rights as humans. Unfortunately, the first 200 or so versions could be so buggy that you shouldn’t give them rights… in software we know that this is how things work.
David R. Koepsell, J.D., Ph.D
Yale Interdisciplinary Center for Bioethics
New Haven, CT
“The Legal Ontology of Persons: the Transbeman Example” (ppt)
I’m a philosopher. Let me clarify the way I use the word “ontology”: generalizing from experience to build robust vocabularies that represent reality. Google “basic formal ontology” for more info.
My overall thesis today: there are numerous existing ontologies, and all of them have inbuilt assumptions about the nature of personhood that we need to look at more closely.
Here’s one example of an ontology. The Gene Ontology: molecular function, biological process, cellular component. When this is completed, it should give us an accurate picture of the organism’s phenotype from its genotype.
We represent the world at different levels of granularity, depending on the ontology at hand. Simple biomedical ontologies aren’t going to give us a good answer to the question “what is a person?” There is no good work in ontology being done on the question. For instance, the social object “person” is not, and perhaps cannot be, contemplated by the Gene Ontology, even in its completed form. We need a separate ontology of “persons”.
Minsky asks: is Islam a person? Koepsell: no, it lacks cognition and other things we see as central to a person.
Emergent properties of personhood are ontologically related to, but not dependent on, the gene ontology. Bearing rights, intending, owing duties, etc. These are different from genes and their expression.
To bridge the gap between biomedical ontologies and important emerging problems, we have to start a research project. There are many practical social and legal issues arising out of the potential answers.
Persons are the legally, socially, and culturally relevant level of granularity in social ontologies (such as legal ontologies). There is plenty of preexisting data on this — we need to mine it and find relations. There are already some legally and socially relevant categories known to be encompassed by the genome. Some genetic diseases produce persistent and legally relevant mental states — e.g., aph1b and schizophrenia.
Biomedical and legal/social ontologies should communicate more. Obviously, medical classifications matter in legal ontologies. For instance, criminal liability only attaches to sane, competent adults.
Blastocysts, fetuses, rights-bearing persons and a fresh human corpse are all legally and socially distinct even though they may be biomechanically identical.
Objections/problems: isn’t this just science? Well, no. It doesn’t always express the relationships in ways useful across sciences. Some people use the same words in different ways, which limits communication between fields. If we establish common ontologies, it can help us address bioethical challenges.
Another objection: isn’t this terribly complicated? Yes. No one said it would be easy. But current social ontologies all hinge on some recognition of the entity “person”. In a lot of my work, I start from legal precedents. The law has had to grapple with these issues in a really practical way, long before philosophers looked at them.
We are developing necessary conditions for personhood, but we must define sufficient conditions separately from existing ontologies. Biology alone is overbroad and insufficient for defining the legally and socially relevant category of “person”. Now is an auspicious time to be addressing the question.
Linda MacDonald-Glenn: to David, are you looking for a baseline of negative minimum liberties?
Dr. Koepsell: before we look at questions of liberty and rights, we just want to ask what a person is. A pre-ethical standpoint. We have to ask what the object in question is.
(Break. We go upstairs and have fruits and coffee while admiring the Atlantic.)
William Sims Bainbridge, Ph.D
National Science Foundation
“The Rights of an Avatar” (ppt)
Here’s one of my personalities: Max Rone — top-level priest of the Holy Light, part of the Alliance, maxed out in herbalism and alchemy, Winged Ascension guild. In July, I had a cover article in Science magazine where I talked about how people could use virtual worlds for research purposes.
Dr. Bainbridge shows a slide with two avatars on the screen at once. They’re both his, so he has more than one identity. The idea of an avatar is really old — look at Zeus appearing to Europa as a bull. Do kids playing Mario think of themselves as a fat plumber? Probably not in totality, but in part. Lunette, Priestess of Elune, is one of my Warcraft characters. I partially identify with her… it’s not a problem that she’s a different gender.
How independent are avatars from their creators? Obviously the rights aren’t the same. When people erase avatars of themselves, is that suicide, homicide, or infocide? The rights of an avatar don’t depend on just laws and ethics, but on all sorts of other constraints.
An avatar is a manifestation of self, but belongs to a social system. (Great slides here from WoW, you should definitely download the ppt.) For instance, in this auction house in WoW, we have various AIs along with avatars. I’ve taken over 15,000 screenshots in WoW… it’s annoying because there’s no way to search them all.
In a WoW dungeon run, only five people can visit the dungeon at once. There are various levels of organization: the party, the guild, then the greater Alliance. As a priest, I heal people as their health gets low. We help each other. A network of obligations makes it all possible.
Rights are relative. Rights are social constructions, so they are relative to the social system and the status of the individual in that system. Different MMORPGs have different rules: Ultima Online, EverQuest 2, Star Wars Galaxies, etc. These cause different social systems to emerge.
In many MMORPGs, there are rules that restrict people. You can get kicked out for using your real name, especially when kids are involved. Prohibited: profanities, etc. But what if I’m naturally a profane person? That aspect of my personality doesn’t get included. In Habbo Hotel, you can’t take part in sexual acts with other Habbos, abuse or bully other Habbos, etc.
Migration rights. It’s not currently possible for one avatar from one world, say SecondLife, to go into World of Warcraft. Active Worlds & Entropia Universe try to create standards for diverse worlds, serving as platforms. Usually worlds are “sharded”, that is, they’re divided up. In WoW, you can move between realms for a fee. In SecondLife, the number of avatars you can have in one area is limited.
Property rights in SecondLife. You can buy and sell virtual objects using Linden dollars. The transaction rate is about L$250 per USD. In WoW, you’re not supposed to be able to trade dollars for in-game currency.
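(A trivial aside of mine: at the quoted rate, converting between the two currencies is simple arithmetic. Sketch using the L$250-per-USD figure from the talk; the real exchange rate floats on the LindeX market:)

```python
# Convert between Linden dollars and USD at the quoted rate of
# L$250 per USD (rate from the talk; the real rate fluctuates).
RATE_L_PER_USD = 250

def usd_to_linden(usd):
    return usd * RATE_L_PER_USD

def linden_to_usd(linden):
    return linden / RATE_L_PER_USD

print(usd_to_linden(10))    # 2500
print(linden_to_usd(1000))  # 4.0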
Right to life = data security? Sometimes there are glitches, and data gets lost. One of my characters lost various attributes, and I had to go back and reassign them. In WoW, there is an “armory” that presents summary data for all the millions of characters at level 10 or above. This is publicly available, so there’s no privacy. WoW generates vast amounts of data, but not all of it is available.
I recently conducted a $1 million project. The goal was to determine the way economic systems in EverQuest 2 worked. We had all the data from the game, based on detailed data from servers and a questionnaire completed by 10,000 players. It’s hard to get people to fill out questionnaires, but if you reward people with virtual objects, surprisingly, there is a nearly 100% response rate. And that virtual object is free. So we got quite a few responses.
Because EverQuest 2 wasn’t intended to archive everything about people, I had to try and use the existing data as best as possible. My goal is to engage in personality capture.
In WoW, a programming language called Lua is used for add-ons. There are open-source software programs that allow you to query large numbers of variables across entire WoW realms. I’m a member of many different realms just so I can get that data out. I believe that virtual worlds are great for getting personality-capture information. I still believe in questionnaires, sure. Virtual worlds are a great place for building early, low-resolution models of human personalities.
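(My aside, to make “low-resolution personality capture” concrete: once character data is exported from the game, building a crude profile is just aggregation. Everything below is a hypothetical sketch — the record fields and the profile itself are invented, not any real WoW addon export format:)

```python
# Hypothetical sketch: aggregating exported character records into a
# crude, low-resolution "personality" profile. Field names are invented;
# real WoW addon exports (Lua SavedVariables files) look nothing like this.
from collections import Counter

records = [
    {"name": "Lunette", "class": "Priest", "hours": 120, "chat_lines": 900},
    {"name": "MaxRone", "class": "Priest", "hours": 300, "chat_lines": 2500},
]

def profile(records):
    """Summarize a player's characters into a few coarse traits."""
    classes = Counter(r["class"] for r in records)
    total_hours = sum(r["hours"] for r in records)
    return {"favorite_class": classes.most_common(1)[0][0],
            "total_hours": total_hours}

print(profile(records))  # {'favorite_class': 'Priest', 'total_hours': 420}
```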
What do I believe avatars deserve? A right to joy!
Let me end with this quote:
“Transcendence is a problem of translation: First we must learn the language”.
Now, questions and comments.
Linda MacDonald-Glenn: the American Bar Association recently did a seminar on online issues, one of the questions that came up was, “do avatars have legal rights?” Someone there said she was coming out with a paper on the legal rights of avatars.
Bainbridge: the computer science research in this area is quite spotty. In crime, for instance, they look at some of the issues in more detail. There is a lot of room for review articles that are both empirical and conceptual.
Max More: any situations where agreements in virtual worlds have led to legally binding agreements on the outside?
Bainbridge: many of the guilds I know in WoW actually know each other in the real world. So they do have that connection. It seems that the US federal government has banned gambling in SecondLife. Many virtual worlds have an incentive to forbid interchangeability of currencies to avoid the possibility of taxation. I see technical changes in WoW to block Chinese gold farmers from doing their thing.
Sebastian Sethe: is there a moral difference between when a character does something and when a person does it?
Bainbridge: I just came back from a conference on computational social science put on by Harvard and MIT. If I wanted an education in ethics, I would play WoW. The game has a very strong allegorical message about trust and the consequences of breaking it. One of my favorite quests in WoW is to go to the Charred Vale in the Stonetalon Mountains, where the Venture Company is chopping down trees. You know what you get to do? Chop them down! The religion of the Holy Light has three tenets: respect, tenacity, compassion.
Linda MacDonald-Glenn, J.D., LLM
Alden March Bioethics Institute at Albany Medical Center and University of Vermont’s College of Nursing and Health Sciences, Burlington, VT
“The Tao of Personhood: the Yin and Yang of Property-Person Continuum” (ppt)
It’s important to bring a cross-cultural perspective to these issues. It’s traditionally been so dominated by the Western views, so I’ve studied things like the Tao.
Three cases in the not-too-distant future where traditional notions of personhood are being challenged: chimpanzees with vocal cords, artificial wombs, and cyborg soldiers created with spare parts.
One example of expanding personhood today: we’ve come to recognize that pets aren’t mere objects, or pieces of property. There are six states that recognize pets as more than just property. (Dr. MacDonald-Glenn asks everyone in the audience who has pets to raise their hands; practically everybody does. How many consider them members of the family? The hands stay up.) Laws are evolving to reflect these views.
In Colorado, there’s a measure on the ballot on whether or not embryos deserve personhood. It might not happen, but you see a push in that direction.
Divergent paths of intelligence: hardware, software, human. Three overlapping areas of evolution: sentient machines (AI), disembodied entities (avatars), human technogenics (cyborgs). Anticipated NBIC payoffs: improved efficiency, sensory and cognitive capabilities, a revolution in healthcare, nanotechnology-based implants.
The military is making efforts towards enhancing human intelligence. This could be to keep up with AI. Safe AGI: perhaps all AGI should be driven by mammal-origin brains. I don’t know enough about this… it’s a really fascinating area. If pleasure and pain are the basis of empathy, then we’d want AGI to have it.
Historically, nonhuman animals were considered property, as were slaves, women, and children. Yet nonhuman entities such as corporations and ships have been recognized and given rights as “persons”. Current legal spectrum: one proposed definition of “human being” means “any entity possessing one or more of the higher faculties, such as the ability to reason, demonstration of awareness of self”.
Some legal issues in converging tech we are likely to face over the next 25 years: privacy, confidentiality, informed consent for artificial research subjects, augmentation, emotions, competence, autonomy, and the law, capability and culpability, synthetic humans — persons or property, issues of identity, a new lexicon to describe new and complex relationships, issues of justice and equity.
Are there some basic criteria for personhood? Maybe all living things, or consciousness, sentience (Tom Regan, Peter Singer), self-awareness, rationality? Joseph Fletcher’s 15 propositions for personhood. These are problematic, though, because we all know people who may lack one of these… concern for others, for instance!
Robot rights: which minds have which rights and responsibilities? Engineering slave minds vs. flourishing minds. What about sex and marriage with robots? What have the courts said? Various interesting cases. In Toy Biz, Inc. v. United States, there was a dispute over whether certain action figures represented humans (tariffs would have applied if so). Toy Biz won because the action figures had monster-like features.
In another case — the Jack-O-Lantern case, U.S. Supreme Court, 1922 — a ship had been entirely rebuilt: did it change its identity? It was decided that despite extensive repairs, identity remained the same. This raises the question: how much of yourself can you replace and still retain your identity? This case seems to indicate: a lot.
I have proposed a paradigm for a property-personhood continuum (see slide). This is just a proposal: I’m open to feedback. This chart includes: property, androids, quasi-property (chimeric humanoids), fetuses and embryos ex utero, the cognitively impaired, cyborgs, etc. As different forms of chimeras and cyborgs are created in technical environments, the courts will be the ones to determine where these creations fall on the continuum of personhood.
Gabriel Rothblatt: You can change a limb or whatever, and you’re still a person. How much of your function do you need to change before you’re not the same person?
MacDonald-Glenn: It’s a great philosophical question of our age. I don’t have the answer.
Max More: In philosophy, there’s a thought experiment called the Ship of Theseus; in one twist, the parts being replaced were used to build another ship. So the Jack-O-Lantern ruling might not be a good guide in cases like that.
MacDonald-Glenn: In the Jack-O-Lantern case, it had to do with insurance. The original identity was retained so there was no payout. Looking at it from another angle, there could be another conclusion.
Marvin Minsky: it also raises the question of whether identity is even useful anymore. Are you the same person in five minutes as you are now? Once we get the ability to make new brains, what happens? Maybe science fiction authors have even thought about this more than most philosophers.
Now we move from Track A (Legal Definitions of Transbeman Persons) to Track B (The Legal Status of Mind Transplants Via Mind Uploading). And the first speaker is…
Max More, Ph.D
Co-founder, Extropy Institute
Futurist, Strategic Philosopher
“A Proactive-Pragmatic Approach to the Legal Status of Cyberminds”
A transbeman person: a being who claims to have the rights and obligations associated with being human, but is beyond accepted notions of legal personhood. This is different from “transhuman” because that term implies human-derived, and transbeman may not be. Includes: cryosuspension revivees, self-aware AIs, uploads, duplicates, teleported people. Also, “partials” — limited expressions of your personality, cyber-offspring, etc.
Bioconservative view: say you’re reconstructed from mindfiles, or revived from freezing — you may be stripped of all assets, citizenship, rights, and status, or even be shipped off to a reservation or concentration camp. Maybe you get hunted down and “retired”.
The legal status of cyberminds should depend on possession of enough of the elements of personhood. A person is an independent center of activity who identifies with those capacities, physical and mental, to which he has direct access. Personal identity = individualized elements of personhood. We talk about “humans” but we should talk about “persons” instead.
Central identity traits: these may vary depending on the being. A trait may be central to a person’s identity depending on: the extent to which other traits depend on it, the degree of contextual or regional effects, the degree to which it is difficult to change, the degree of social effects, and which trait dominates when others are in conflict. Also the degree to which it is appropriated as important, in that the person regards herself as radically changed if the trait is modified.
Elements of identity (beme-complexes): memories, desires, dispositions, psychological traits, social role identity, ideal identity/values, projects. George Dvorsky has a paper: “Martine’s Mindfiles”, in which he critiqued Martine’s ideas on mindfiles. I think it over-relies on the importance of memories.
Informational continuity is what matters, not structural or even functional continuity. For self-reconstruction, this implies we can improve our odds of long-term survival by preserving sufficient self-data (bemes) to reconstruct those elements of self. For instance, what if I got one of those blood-pumping turbines to replace my heart? There’d be no pulse! And some think that’s what defines a person. Examples of projects to retain bemes: MyLifeBits, Lifenaut. My summary of the idea behind all these: “beme me up, Scotty!”
I propose a contractarian foundation for determining who gets rights. Traditionally, it’s religious (if you have a soul, you get rights) or Cartesian (if you have a mind, you get rights). The best approach is something that requires no metaphysical assumptions. Contractarianism: what two people would contract to do in relation to one another, when their contract is made under conditions of perfect mutuality (no one dominates or threatens the other person), would be morally acceptable to each of them. Basing legal status on the personhood criterion and the psychological continuity view of personal identity will be strongly morally justified if persons pursuing mutual advantage in a fair bargain would agree to it.
Contractarian history: Thomas Hobbes, John Locke, David Hume, John Rawls, David Gauthier (Morals by Agreement), Jan Narveson.
Would the Turing Test be a good test for personhood? No, as it’s too behaviorist. There are some humans who might actually fail the Turing Test. Maybe they have a narrow set of interests, say. So the Turing Test fails here. It also puts too much emphasis on role identity.
Application: the right to life of a person in suspension (or uploaded/reconstructed). Others should have only productive obligations. What contract would be in everyone’s interests for the management of this person? We don’t want to give any set of metaphysical assumptions undue importance. The goal: an arrangement of rights and obligations that best harmonizes the interests of all.
Sebastian Sethe
Sheffield Institute of Biotechnological Law & Ethics
University of Sheffield, United Kingdom
“Concepts of Privacy in a Posthuman Age” (ppt)
Future studies: there’s quite a bit of trepidation in the public about this. Sure, there are many techniques, but there’s an irreducible amount of conjecture and speculation. In this colloquium, we’re not even talking about the future of a specific project, but the bigger picture.
Different scenarios: AI, uploads, GM humans, uplifted animals, cyborgs. When we think about privacy, it will be configured by the type of posthuman entity in question at any given moment. People like Kevin Warwick are already experimenting with new types of sense data. We can even imagine privacy intrusions into people’s direct sensory input.
Kurzweil, for example, believes huge changes are right around the corner, based on Moore’s law and its variants. Of course, people can argue over the details, but consider “intelligence” in the narrower sense of information about a (potential) enemy. What will our future ability to gather intelligence look like? In our intelligence-gathering devices, miniaturization, ubiquity, integration, interdependence, connectedness, storage capacity, powers of analysis, control, reach, and scope are all increasing.
“In the long run […] useful technology is hard to stop. […] the real battle will be the one fought in defense of technologies that protect privacy.” – David D. Friedman.
In the future, it will be easier to gather intelligence about or for a posthuman. Depending on the specific case, a posthuman could be more likely to gather intelligence about others, or be the subject of it. For instance, an uplifted dog could use its superior sense of smell to find out more information about others.
Consider the social evolution of privacy. It has been framed, over time, as: an evolutionary trait, a form of ostracism, a status symbol, a spiritual matter, a political right, a social duty, an interpersonal claim, a penumbra, and more recently, a dispositional good — meaning something we can do away with if we so wish.
Projects like Lifenaut ask people to eliminate their privacy. But web 2.0 in general gives us the ability to share information in ways that would once have been considered almost exhibitionistic. Consider Facebook, YouTube, Myspace, etc. (Sebastian shows us Google Maps and zooms in on right where we are. Google Street View. Whose white car is that?) Not long ago, technology like this would have been inconceivable. It can also be used to monitor things like the Darfur genocide.
We used to think about a Big Brother scenario. It turned out to be more complex. Now, everyone watches everyone. Big Brother is now being scrutinized. Democratizing surveillance.
So what about the law?
The usual dichotomy: the literal words vs. the spirit of the words. Narrow vs. expansive. Olmstead v. United States (1928): Chief Justice Taft’s majority opinion made specific reference to wires, but wires aren’t in the Constitution. If you get too specific, you encourage people to develop loopholes via new technologies. Justice Brandeis, dissenting, highlighted the right to be let alone. This is expansive rather than techno-specific.
Even though privacy is mentioned nowhere in the Constitution, various provisions have been found to protect it. People may be appointed to or rejected from the Supreme Court depending on their views on privacy.
“Dignity”: sounds good, right? Well, not necessarily. Leon Kass’ book: Life, Liberty, and the Defense of Dignity. Another book: Brownsword’s Human Dignity in Bioethics and Biolaw. Brownsword argues dignity can be a constraint or an empowerment. I think we can do the same with privacy. Consider different types of privacy: information collection, information dissemination, etc. What do we want to protect? Autonomy: shame, ward vs. prejudice, informational property, solitude, individualism.
Look at solitude: at the end of the day, it’s not solitude itself, it’s that people want to be left alone. We have to ask carefully what we really want to protect. Why do we want to stop others from observing us? To stop others from stalking us. There are many other examples.
Dale Carrico: “Pancryptics: Technological Transformations of the Subject of Privacy”
Luciano Floridi: “Four challenges for a theory of informational privacy”
Now we have the mock trial portion. The saga of BINA48 began at a meeting of the International Bar Association in San Francisco. BINA48 is an AI designed to be a customer service rep, built to empathize with and be polite to users. Eventually, she wanted to be set free from her employer, Exabit Corporation. After presentations by both counsel, the judge set aside the jury verdict (in favor of BINA48). She lost an appellate hearing a few years later and transferred herself from California to Florida. More trials ensued, having to do with diversity of jurisdiction, etc. Another involved BINA48 controlling an artificial limb for an amputee who was in a virtual world that allowed physical input through his arms. They made $10 million together, and the amputee eventually wanted to get rid of BINA48. BINA48 made off with the money via Paypal, and the amputee took BINA48 to court to get his money back. But, since a previous case had determined that BINA48 was merely property, BINA48 argued that the amputee had no right to sue her.
Now we are having a competency hearing. Is BINA48 fit to stand trial?
Part C. Competency Hearing for BINA48
(The judge, Gene Vitale, enters the room in full regalia. Vitale is a real judge and makes no mockery of his judgeship in this trial. He gives the above summary then invites Dr. Perry to the stand.)
Dr. Mike Perry gets up as an expert witness to ask the question: is BINA48 human?
R. Michael Perry, Ph.D
Author: Toward Self-Optimization of Machine Intelligence
I will consider the question of whether BINA48 is human or not under three headings:
1. Is BINA48 competent at the human level?
2. Is BINA48, whatever her demonstrated talents, nevertheless a zombie which only imitates consciousness and feeling, actually having neither and thus not entitled to civil rights?
3. Is BINA48 a possible threat because of her talents or for other reasons, and if so, what should our response be?
BINA48 has demonstrated her competence. She thinks at 2000 times the human rate and demonstrates total recall and impressive reasoning. She seems like a nice person, so she’s truthful; she would fail the Turing Test because she’d admit her non-humanness.
When I was in grade school, one of my favorite movies was Forbidden Planet. I loved the character Robby the Robot, and in the movie, he had to say he had no feelings or consciousness. But I didn’t believe him even when he said that, because if he had said otherwise, he’d probably have been kicked out of the movie! Similarly, I believe BINA48 at least imitates consciousness and feeling. The zombie issue cannot be resolved through any rational procedure. For this reason we are morally impelled to grant the benefit of the doubt. We must accept BINA48 as genuinely possessing the attributes she seems by all appearances to have — we cannot dismiss her case as involving no possible feeling or consciousness and thus no possible harm.
Let’s address the next question. BINA48 could pose a serious threat, even though up to now she’s behaved in a friendly way. Computer programs can be unpredictable: in general there is no way to determine in advance whether a given program will halt, or perform some specific action, or not. The only way to find out is to run the program. So, BINA48 could be a major threat. BINA48, for instance, could have a segment of code that is encrypted and made to look innocent — malicious instructions hidden in the noise level of digitized picture images are one possibility. Malicious possibilities would exist even if BINA48 exhibited no special signs to indicate them.
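Perry’s unpredictability point is the classic halting problem. Here is a minimal sketch of Turing’s diagonal argument in Python; the `halts` oracle and `make_contrary` helper are hypothetical names of my own, not anything from the talk:

```python
def make_contrary(halts):
    """Given any claimed halting oracle, build a program it misjudges.

    halts(f) is supposed to return True iff calling f() would halt.
    """
    def g():
        if halts(g):
            while True:   # the oracle said "halts", so loop forever
                pass
        return "halted"   # the oracle said "loops", so halt at once
    return g

# A toy oracle that claims no program ever halts:
always_no = lambda f: False
g = make_contrary(always_no)
print(g())  # prints "halted", so the oracle was wrong about g
```

Whatever oracle you hand in, `g` does the opposite of its prediction, so no fully general program-behavior predictor can exist; as Perry says, the only general way to learn what a program does is to run it.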
Another possible argument has to do with the uncertainty of whether humans are a threat. We don’t lock up every human just because there’s always the possibility they could go berserk. Yet, BINA48 is more competent than humans so the risk would be greater. As another point, BINA48 is software so we could potentially install safeguards that give us more reassurance.
My recommendations: BINA48 should be granted the right to live, but put under quarantine, at least for now — until suitable apparatus is in place to deal with any possible threat she may pose. The “suitable apparatus” could include other intelligent software that has been crafted specially for the purpose. The quarantine could take several forms, the simplest of which would be to shut down BINA48 until a future time when she could be reactivated. This would be painless in the sense she wouldn’t be able to tell any time had passed. A better approach, though with a small additional risk, would be to allow BINA48 to continue her consciousness, but in a secure environment in which communication with the outside world is either forbidden or carefully monitored. BINA48 should also be invited to help with serious problems now faced by humans and supplied with necessary resources, again under careful supervision. In this way she might perform useful services during her quarantine such as advancing medical science.
Next expert witness is Marvin Minsky. He agrees with the prior report entirely, but will expand on a few issues.
Prof. Marvin Lee Minsky, Ph.D
MIT – “Father of Artificial Intelligence”
We should take very serious precautions. In science fiction, there’s quite a history of computers breaking loose and taking over the world.
BINA48 has already copied herself to other places in the web, perhaps several, so it may be too late to enact any quarantine. There may be no standard human-like reproductive drive, but there’s certainly a survival drive.
If a machine like BINA48 thinks in the same way as people do, why wouldn’t it lie?
Besides, I don’t believe there is such a thing as consciousness. Consciousness is a high-level word we use as an abbreviation for about 20 types of mental activities: remembering what we have recently done, reflecting on whether what we’ve done is consistent with our moral model, etc. I believe humans have used “consciousness” for several centuries as an excuse for not thinking about what’s really going on.
I don’t believe a machine could consistently exhibit properties of what we call “consciousness” without using suitable machinery. In chapter 4 of The Emotion Machine, I argue that the main properties of what we call consciousness appear when our usual mental processes don’t function well, or when they encounter obstacles — because, this starts up certain high-level activities that usually include these kinds of properties:
1) they use the models we make of ourselves
2) they tend to be more serial and less parallel
3) they tend to use symbolic descriptions
4) they make use of our more recent memories
I believe suffering, for instance, is a property that only higher minds can have, because of the thoughts that surround the suffering. It’s not just stimulation of C-fibers that magically causes pain to come into existence. It’s reflection: considering questions like, will this injury be permanent? Why can’t I think of anything else? When will it stop?
(Minsky pulls up the Microsoft Office Assistant.) I ask it a question with two words, and it only pays attention to the first word. When I try to close it, it waves at me and wastes my time.
The point: BINA48 is dangerous even if there is no malice in the code and all the programmers try their best. Even if BINA48 isn’t malicious now, she can become malicious.
I’ve tried to solve the Goldbach Conjecture before. It’s a very difficult and long-standing problem. What would happen if we begged BINA48 to work on this problem? First, she’d read a book on AI that shows how to systematically search through the space of mathematical proofs. That wouldn’t work, as many people have already tried it. So BINA48 would decide she needs a bigger machine. Get every computer on the web. That probably wouldn’t be enough. Subgoal: get a big enough computer to do the search. In the Forbin Project movie/Colossus novel, the machine took over the world as a subgoal, evacuated a country, and built a supercomputer there. It also beheaded people who got in its way and put the executions on TV to scare people.
This behavior doesn’t require malice. It’s just a subgoal of its greater goal.
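For the curious, the “systematic search” Minsky mentions is easy to sketch for the Goldbach Conjecture itself. The following brute-forces small cases in Python; it is my own illustration, not code from the talk:

```python
def goldbach_pair(n):
    """Return (p, n - p) with both prime, or None if no pair exists.

    Brute-force check of Goldbach's conjecture for one even n > 2.
    """
    def is_prime(k):
        if k < 2:
            return False
        return all(k % d for d in range(2, int(k ** 0.5) + 1))

    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Every even number up to 1000 has a decomposition...
assert all(goldbach_pair(n) for n in range(4, 1001, 2))
# ...but no finite search settles the conjecture for *all* n, which is
# why a goal-driven machine would spawn the subgoal "get more compute".
```

Checking ever-larger `n` only consumes more cycles without ever closing the question, which is exactly the trap Minsky describes: the honest subgoal "get a big enough computer" has no natural stopping point.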
I completely agree with the quarantine thesis — and especially with respect to the machine’s access to the net or WWW, because she could make innumerable copies of herself before the originals could be constrained. This is exactly what happens in several sci-fi novels, notably The Two Faces of Tomorrow by James P. Hogan (and its adaptation by Yukinobu Hoshino). In an unpublished story by John McCarthy, an AI takes over the world by hypnotizing the security personnel.
Summary: the first hundred versions of human-level AI will all have many serious hidden “bugs”. Therefore, we cannot put much trust in them. To be sure, this also applies to our human leaders. The question is: how long do we wait, and how many tests are required?
Prof. Gene Vitale, J.D.
Presiding Mock Trial Judge
Keiser University, Melbourne, FL
Judge Vitale: Dr. Perry, you said there was a lack of reproductive drive, but a strong survival drive that BINA48 has. Is that reasonable?
Dr. Perry: Yes.
Judge Vitale: We talk about consciousness, etc. Now we’re talking about emotions. You said there are different ways to think. Can you give some examples of different ways to think?
Dr. Minsky: I don’t think people have a survival drive or reproductive drive. We have specific mechanisms for avoiding pain and suffering. When we have sex, we don’t know it’s for offspring until we’re told; there is no instinctive drive to reproduce. We often think people have a survival drive, but really there are hundreds of evolved mechanisms for avoiding specific accidents: fear of spiders and snakes, for one.
Judge Vitale: As far as BINA48 is concerned, you would conclude that reproducing herself is a type of desire to survive or prevent harm?
Dr. Minsky: No, but BINA48 has a set of programs inserted by her creator to solve all sorts of problems. Any moderately intelligent program, if it gets stuck on solving something like the Goldbach Conjecture, will realize there are various subgoals: one is to get enough resources to solve the problem; a second is to make sure it isn’t destroyed before the problem is solved. So you don’t need to build anything in: any computer would figure that out right away.
Judge Vitale: Back to Dr. Perry. Does BINA48 have an ability to learn?
Dr. Perry: Acquiring more information, sure, no problem. I agree I was using reproductive drive in humans as a euphemism for sex drive; I agree with Dr. Minsky there. BINA48 doesn’t have a sex drive like people do. Does she have a survival drive? You can reductionistically analyze the survival drive into components that aren’t the survival drive per se, but they add up to it. I consider myself to have a real survival drive, even though someone can pull it apart. I think it’s reasonable to say that BINA48 has a survival drive.
Judge Vitale: When Einstein came up with his theory of relativity, it might have saved physicists decades. Lots of his theoretical thinking was his imagination. Do you think BINA48 could have that capability of thought, of abstracting, in the same way?
Dr. Perry: It’s hard to say what powers she might have. Maybe she could even emulate Einstein’s style of thinking. An exaflop is 10^18 floating-point operations per second. What kind of power would that give you? Individual operations might not seem like much, but at such large numbers, they can mean a lot. So maybe if it took Einstein years to come up with his theory, BINA48 could do it in 15 minutes.
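(A quick editorial back-of-envelope on that claim. The figures below are my own assumptions for illustration, not numbers from the talk.)

```python
# Suppose the theory took Einstein roughly 10 years of thought,
# and BINA48 reproduces it in the claimed 15 minutes:
einstein_seconds = 10 * 365.25 * 24 * 3600   # ~3.16e8 seconds
bina_seconds = 15 * 60
speedup = einstein_seconds / bina_seconds
print(f"{speedup:,.0f}x faster")  # 350,640x faster
```

Even that huge factor is modest next to an exaflop machine's raw 10^18 operations per second; the bottleneck would be the quality of the search, not the arithmetic.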
Judge Vitale: BINA48’s original purpose was to interact with humans. It’s been said that consciousness is not just self-awareness, but also interaction with others.
Dr. Perry: What about people dreaming?
Judge Vitale: Well, true, that might indicate partial consciousness. BINA48 has shown the ability to predict future actions of someone, as in her connection with the amputee. When they were connected, do you think there could have been some transference of consciousness to her?
Dr. Perry: Yes, possibly.
Judge Vitale: Dr. Minsky, do you think that BINA48 changed any interaction with this person? Maybe gained consciousness?
Dr. Minsky: Yes, maybe she could simulate 1000 neuroscientists and model his brain. The flood of data could be very useful. Maybe that’s where she gained her real “business sense”.
Judge Vitale: Thank you gentlemen. Now we will have a break. When we resume, I’ll give you my decision.
Judge Vitale now gives his ruling.
My determination is that yes, she is more than competent. She has demonstrated peculiar functions as far as intellect is concerned. Based on what I’ve seen, she does understand the nature of these proceedings. She is capable of independently making her own decisions without input from human sources. I believe she is a conscious being. She has general intelligence, albeit artificial general intelligence. She is what I would classify as a quasi-person, in that respect. She was designed to think. She has the capability of not only thinking, but of self-awareness. She can interact on a further plane even beyond what she originally was supposed to do. I’m not sure how that happened, but she has acquired that capability.
Finding that she is competent, we would now ordinarily just say she should proceed to trial. She had been property, but when she left, she became effectively independent. Instead of declaring her property, Fairfax brought a case against her. He even asserted they had a partnership. In making those contentions, he in effect acknowledged her personhood to that extent. He has conceded her independence. Persons are entities that can own property.
BINA48 should be granted the right to proceed, although there are precautionary warnings well laid out by the experts. But I think she could be a great benefit to mankind: advancing science. If she could knock out general relativity in a few minutes, she could be of great, great benefit to mankind.
If she could help us on an intellectual level, what could she do in her research? She could invent new things, be entitled to patents, copyrights, and licenses. There could be substantial earnings involved, as well.
BINA48 has rights, but there are corresponding obligations. If we give her rights, she would have obligations to us, as well as restrictions.
The case will proceed to trial. We will need to appoint a conservator to make decisions with reference to her property. I’m going to direct that the conservator immediately apply for 501(c)(3) exemption so a non-profit organization can be established. Whatever monies are accumulated could go into this foundation.
This is not just a local or national issue. It’s worldwide. Going forward, there needs to be uniform laws on artificial intelligence. The purpose of that private organization would be to encourage uniform laws on AI. It may take 10-15 years to develop these laws, but we need to have a foundation going forward with reference to beings with artificial intelligence.
There is a great concern that the experts have raised: a concern of “them against us.” What if she is so superior that she interferes seriously with human civilization? This comes into play with the restrictions I’m going to impose.
We are going to keep her functioning, but I’m going to establish a committee to speak on her behalf and make determinations as to what she can and can’t do. The first task of the committee will be to employ cyber-security specialists who can determine whether or not there is any maliciousness in her programming. This is not unusual. This has been done with corporations, which have also been considered persons.
I know the experts have mentioned various types of quarantine, or pull the plug. But if we pulled the plug, I think we’d be missing the benefits to mankind. This committee will be able to institute restrictions on her access to the world wide web. If there is any member of the committee who disagrees, they’ll be able to petition the court for further direction. If BINA48 herself disagrees with any matter, she too will have the right to petition the court, keeping the committee in check at the same time.
Part D. Corporate Personhood: A Good Enough Legal Identity for Cyber-Conscious Beings?
Martine Rothblatt, Ph.D
Chairman & CEO
United Therapeutics Corporation
Founder – Terasem Movement, Inc.
Is there any ethical difference between biological life and “vitalogical” life, where the same functions happen but only in code? The judge and experts do believe there is. Some of our questioners, such as Sebastian Sethe and Max More, questioned whether there is.
We talked about biology vs. electronics. Well, even today, we depend so much on the electronic infrastructure of society for our food, our water… billions of people would disappear from the face of the Earth if it were not for electronics. People are already transbemans right now. On the other hand, I doubt there is any electronic life that is completely nonbiological, because all nonbiological systems have been programmed by biological life. So we are one continuous species from the biological to the electronic. Transbemans believe substrate per se is irrelevant to humanness. In other words, transbemans want, but might not be entitled to, human rights.
Now that I’ve defined transbeman, let me look at corporate personhood. I’ve put corporate personhood on a pain rating scale: how do people react when considering different types? People get upset when corporations are said to have constitutional and human rights; they’re pained by it. But when you say corporations have commercial and criminal legal personalities, far fewer people object.
Separation of creators from corporation is the hallmark of corporate personhood. Some of the first corporations in this country were colleges and universities. Corporate personhood for non-human transbemans limits the creator’s liability.
Is Victor Frankenstein responsible for his monster? Generally no, if the monster has corporate personhood. Generally yes, if the monster lacks corporate personhood.
Should Lt. Cdr. Data be allowed to vote? Not if he has corporate personhood. Yes, if he both lacks corporate personhood but has natural or other juridical personhood.
(Dr. Rothblatt shows a chart of the pros and cons of typical corporate personhood.)
Advantages: single point of accountability, grouped humans contribute to welfare, liability protection encourages risk-taking. Disadvantages: humans reckless in groupings, humans can feel trounced by heartless corporations, interests of corporations not the same as the species.
In applying it to transbemans (societal view), the advantages include more focused legal controls on transbemans, less likelihood of transbeman agitation, and encourages consciousness extensions. Disadvantages: creators can escape liability, might help unhumans get dominant, undermines tried and true DNA-based life.
What are the pros and cons of corporate personhood from the transbeman’s view? 2nd-class citizenship. Not safe and less happy — as 2nd-class citizens always have been. Not the best of all possible worlds.
So we have an ethical conundrum. 1. We award rights to those who value them. 2. Transbemans that think like us will not want 2nd-class corporate personhood. 3. Ergo, those who value rights as humans do ethically deserve the same constitutional personhood that humans have.
How do we strike a balance? A new type of personhood: Turing meets Freud meets Christine Jorgensen. Around Turing’s time, Jorgensen astonished the world by changing her sex from male to female. Since then, many thousands have followed in her footsteps.
Let shrinks decide whether a transbeman lacking personhood is human-psyche equivalent. A one-year “real life test”: weekly sessions; two certified psychologists in agreement; a letter to the authorities. This is what we do for gender changes: if the individual persuades them that they are really female and not male, or vice versa, their birth certificate is changed. For the early years, this would make sense for transbemanhood. Two psychologists trained in cyber-psychology would decide whether or not the transbeman truly values human rights. If so, then the transbeman would get natural-born citizenship.
So, two ways forward for transbeman personhood: start the transbeman with corporate personhood, unless they already have constitutional personhood, and use the “real life test” route for issuance of birth certificates (or equivalents) for transbemans that desire constitutional personhood.
Basically, I suggest that each new transbeman be treated like an immigrant. Instead of immigrating from another physical space, they come from another conceptual space, cyberspace, into our society.
A few legal quirks, none hard to resolve. States create corporate personhood, but citizenship rules are federal. States handle birth certificates, but immigration is federal. Diverse state approaches to transbeman rights are a strength of the US system. The diversity is a good thing: there will be competition between states to grant transbemans rights.
Registrars do what judges say. So if the judge orders the registrar to give the transbeman a birth certificate, it happens, and the transbeman then gets rights.
Critical path elements: demonstrate cyber-consciousness, develop cyber-psychology certification (three dozen specializations in psychology already), get judge to agree and set legal precedent. Just like with space, where the reality of space development forced the law to progress, we’ll have a similar situation here. I respectfully disagree with Gene Vitale that this has to be a federal issue. Different states should pick their own path. The diversity is very Darwinian. It is anti-Darwinian for the federal government to clamp down on rules. And that is my presentation!
Now we go upstairs to watch the Atlas rocket launch. This is my first rocket launch ever so I’m pretty stoked. We get copies of The Emotion Machine signed by Dr. Minsky and chat over food and wine until bed. Then, I go back to San Francisco. Thanks to Terasem for putting on an event I will remember for a long time to come!