I’m in Melbourne Beach, Florida, for the 4th Colloquium on the Law of Futuristic Persons. As one might assume from the title, this is a legally focused gathering, and addresses legal issues related to cryonics patients, cyborgs, artificial biological intelligent beings, enhanced human beings, and artificial intelligences. I’m attending to record the whole thing and summarize it for those that aren’t here. Our gracious hostesses are Martine Rothblatt and Bina Aspen Rothblatt, who covered all our expenses and put us up in a nice hotel on the beach, with fantastic views of the Atlantic Ocean and the palm tree-covered residential areas of Melbourne Beach.
Martine Rothblatt, our hostess, opens us up with an intro to the conference. She’s wearing video eyeglasses invented by the famous Steve Mann. They’re continuously taking video and beaming it to the Internet, an example of sousveillance. She’s also showing us how it can be beamed directly to the big LCD screen at the front of the room. Pretty cool… I think I’d want these if I were a political protester in danger of being attacked by overly enthusiastic riot cops.
As Martine explains, this colloquium was inspired by the long-running colloquium on the Law of Outer Space, which began in 1958. She sees a connection between space law then and the human rights of futuristic persons now: both were incredibly cutting-edge for their time. In 1958, the experts decided that some assumptions that were taken for granted, like national borders, had to be tossed out in the face of the new technology. For instance, a space probe orbiting the Earth will violate the “airspace” of many countries whether they like it or not. We may have to discard similar assumptions to come up with a serious legal framework for futuristic persons. The point of this colloquium is to move the law forward in these new areas, as the law must evolve together with improving knowledge. One crucial point is that personhood should be assessed based on intelligence and values, rather than substrate or superficial appearance.
This colloquium could go on for a long time — 10, 20, 30 years. It won’t be done overnight, but the point is to move forward the law and ensure that the rights of futuristic persons are duly protected by the legal system as they are created.
John P. Didon, Esq. – Arguments Supporting the Legal Rights of People in and Revived from Biostasis
Now, Martine invites the first speaker, John Didon, to the podium. He has contributed more than any other attorney to the rights of people in biostasis. As many of you may know, people in biostasis are sometimes at risk of having their rights denied and being treated as objects rather than as the people they are.
John points out that there is no legal status for suspended persons, so any such discussion is, unfortunately, merely a discussion of ideas. Historically, people have tried to do things like will money to themselves for use after revival from biostasis, and these efforts have run into trouble with family members.
John quotes William Goldman: “Nobody knows anything”. Law is based on precedent, which is both a blessing and a curse. Because cryonic suspension is so new, there is little precedent, but we have to try to shape the law by being there first. He will present four arguments to support shaping the law in favor of persons upon their suspension.
First, the suspended should have the same rights as the other dead — to pass their assets to whom they want, burial rights, and protection under wrongful death statutes. However, it’s not so simple that these rights will be given. Oftentimes, family wishes and public norms can override the wishes of the decedent. So we have to be out there trying to shape the law in favor of the suspended.
Second argument: individuals in biostasis should have concrete rights that are not mere “abstract moralisms”. These questions should not be left up to state legislatures — they are so central and personal that they fall under the liberty protected by the 14th Amendment. He references Planned Parenthood v. Casey in this connection. Not just applicable to abortion rights, the liberty described in the 14th Amendment can be connected to the legal rights of the cryonically suspended.
Third argument: if a child is born from the frozen DNA of a decedent, carried by the surviving spouse, that child can inherit. There are various legal arguments for why this should be: 1) the genetic relationship, 2) consent by the decedent to posthumous conception, and 3) the decedent’s agreement to support the child. What does this mean for cryonically preserved people?
Genetic relationship between the revived person and the preserved? Yes. Preserved people (obviously) consent to be revived. The cryonically preserved person has also arranged for his support. So, there is an analogy between the two that can be used.
4th argument: changing definitions of death. Not the heart stopping, but brain death. There are court cases that support this. Under current medicine, true death only occurs when the brain dies. Cryonically preserved individuals still maintain a level of brain activity, so they may qualify as persons. Dr. Rothblatt has discussed the concept of bemes — digital copies of important brain functions (mindware). If the judiciary is willing to alter the standard of death to whole-brain death, then this can be an evolving standard. As technology improves, our conception of “death” may change yet again. In 1981, a presidential commission admitted that “people’s attitudes towards death evolve, and changes in medical capabilities certainly come to be reflected in public as well as professional circles.”
It may be possible to use dynasty-like trusts to protect the rights of the preserved. By designing a wealth preservation trust for cryonically preserved persons, we may be able to protect the property and rights of those suspended. We can design this in a way that accords with the mainstream body of law on the topic as much as possible. A few states are especially accepting of dynasty trusts: South Dakota and Delaware, for instance.
There are some ways to minimize the legal challenges that might destroy the trust: in terrorem clauses, which disinherit beneficiaries who challenge the trust; provisions dealing with dissolution of the trust and distribution of its assets if it becomes evident that revival is not possible; and descendants or charitable beneficiaries who may receive limited use of the trust property.
Lori Rhodes — “Cryo-Documentation: Vital Statistics”
Next, Lori Rhodes, who also organized this whole event, will present on vital-statistics documentation for cryonically suspended individuals.
She begins by thinking about the classification of cryopreserved people. There is an African fly that can go into a state of preservation — cryptobiosis — in which it can survive for 17 years during conditions of drought. With regard to us, it’s something we would be doing consciously that already exists in nature. So, it’s not as crazy as much of the general population thinks. In the classic “What is Life?”, Erwin Schrödinger defined living matter as that which “evades the decay into equilibrium”. In The First Immortal, James Halperin states that cryonic suspension halts all decay; therefore, under Schrödinger’s definition, it can be regarded as living matter.
What about frozen embryos? If they are deserving of rights, then cryonically preserved people should have the same rights too. They should not be considered inanimate objects with no rights whatsoever when an embryo would have rights.
Next, she researched vital statistics: birth and death certificates, which were last revised in 2003. There’s always a bit of leeway in the information you can put into those forms. She believes that you should be able to record in your death paperwork which cryonics company you give your body to. There needs to be a way to properly document cryonic preservation. She proposes two distinct but standardized death certificates — an Irreversible Cessation of Life Certificate and a Reversible Cessation of Life Certificate, applying to non-suspended and suspended persons respectively.
To move forward, we could petition the panel that meets to evaluate the US Standard certificates. Lori contacted a member of the previous panel and asked how certification of revival from biostasis could work under the current system. They took it seriously and are passing it on to the entity that will convene the next panel. Lori shows us an example of how standard death certificates could be changed to accommodate suspended persons.
What about birth certificates? What means could we use to record people revived from biostasis? A re-birth or revival certificate? Lori shows us her example of a US Standard Certificate of Revival — name of the facility of revival, county of revival, etc. She created it as an informal example by modifying her daughter’s birth certificate.
Lori reviews the latest revisions to the Uniform Anatomical Gift Act (the legal framework for current suspensions). These include empowering a minor to be a donor; establishing criminal sanctions for falsifying the making, amending, or revocation of an anatomical gift; allowing electronic records and signatures; and permitting an individual to sign a refusal barring anyone from making an anatomical gift of the individual’s body or parts, a refusal that survives the revocation of a will.
She began thinking in more detail about this, how there are do-not-resuscitate orders for doctors, medical staff, etc., for people not to have CPR performed on them. Could the same framework be extended to protecting cryonics patients? It seems probable.
There should be a way to suspend Social Security numbers for cryonics patients instead of allowing them to go back into the pool for recirculation. At suspension, the SS# could be given a -B (dash B) designation to put it on hold. Then, upon revival, the number could be restored to the patient with a -R (dash R) designation. This is not so far-reaching, as Social Security numbers have been restored before. As an incentive to the Social Security Administration, this would allow them to keep the $255 standard spousal death benefit that they would otherwise have to pay out. (It’s small anyway, but they still make it difficult for people to collect on it.)
The same thing could be done for health insurance under the Social Security Act. What about driver’s licenses? Maybe it would not be a great idea to immediately restore that, but they should be allowed to apply for a non-motorist ID.
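Lori’s -B/-R proposal is essentially a small state machine on the identifier. A toy sketch of the idea (the suffixes, state names, and transitions below are just her proposal as I understood it, not any real Social Security Administration convention):

```python
# Toy sketch of the proposed SSN suspension scheme: active -> biostasis
# (suffix -B) -> revived (suffix -R). Purely illustrative; the states
# and suffixes are the speaker's proposal, not a real SSA convention.

VALID_TRANSITIONS = {
    "active": {"suspend": "biostasis"},
    "biostasis": {"revive": "revived"},
    "revived": {},
}

SUFFIX = {"active": "", "biostasis": "-B", "revived": "-R"}

class SocialSecurityRecord:
    def __init__(self, ssn: str):
        self.ssn = ssn
        self.state = "active"

    def apply(self, event: str) -> None:
        # Only the transitions above are legal; anything else is an error.
        try:
            self.state = VALID_TRANSITIONS[self.state][event]
        except KeyError:
            raise ValueError(f"cannot {event!r} from state {self.state!r}")

    def display(self) -> str:
        return self.ssn + SUFFIX[self.state]

rec = SocialSecurityRecord("123-45-6789")
rec.apply("suspend")
print(rec.display())  # 123-45-6789-B
rec.apply("revive")
print(rec.display())  # 123-45-6789-R
```

One point the state machine makes concrete: a number can never be "revived" without first having been marked suspended, which mirrors her argument that the designation should be a hold rather than a release back into the pool.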
What about the suicide clauses life insurers use to guard against adverse selection? These prevent people from committing suicide to get life insurance money to their family. It would make sense to institute a similar clause for “cryonicide”, where a cryonics patient is killed or arranges their own death for tax benefits at a specific time. For instance, those who die in 2010 will make their heirs eligible for the least taxes. Instead of the date of “cryonicide” being the documented death date, it should revert back to the time of suspension. Furthermore, there should also be penalties for those who purposefully damage cryonics patients — it should be considered a criminal act, since it removes the possibility of revival for the patients, denying them their rights.
Finally, she sums up the presentation with a couple quotes:
“The man who has the time, the discrimination, and the sagacity to collect and comprehend the principal facts and the man who must act upon them must draw near to one another and feel that they are engaged in a common enterprise” — Woodrow Wilson, 1910
“Tell me and I’ll forget
Show me and I may not remember
Involve me and I’ll understand”
– Native American Proverb
Michael Perry, Ph.D — Legal Aspects of Forever for All
Next, Mike Perry will talk about the legal aspects touched on in his book, Forever for All. Martine introduces him by praising the book and the moral framework it provides for cryonicists. He also wrote the book “Self-Optimization of Machine Intelligence”. Mike Perry is a pioneer and legend in the cryonics community, a patient caretaker at Alcor, and a member since 1984.
Mike admits that he doesn’t have a legal background, but will discuss elements of his wide-ranging book that touch on legal issues. The book’s coverage ranges from the present day to the distant future. He will talk about legal issues connected to:
1. Current cryonics practices
2. Uploading and other anticipated technologies that might accompany reanimation
3. “Compassionate” handling of sentient creatures in a more advanced future.
Regarding #1, there is the challenge of getting a good cryopreservation. For instance, people might want to cryopreserve themselves prior to decline into dementia or the like. However, today, this isn’t possible. Unfortunately, during any case of “suicide”, there is a mandatory autopsy, which obviously would ruin the whole effort.
Mike presents the example of Arlene Fried, cryopreserved in 1990, who had a brain tumor (but was in a good state of mind) and self-dehydrated to deanimation, a process requiring about 12 days. It was considered a very good preservation for its time. But this is very extreme. Mike shows us a harrowing image of Mrs. Fried immediately after her deanimation by dehydration, with the caption “Very lucky lady, maybe”. She was able to go into cryonic suspension before the brain tumor destroyed her brain.
Today, a major challenge in cryonics is that cryonics patients are considered completely dead, which deprives them of any rights. Hopefully, with evolving definitions of death, this will change. An audience member asks about Oregon’s assisted suicide laws, whether they might be used to deliberately enter into biostasis. Mike says that there would be a lot of complications with this — particularly, ensuring that they are not autopsied. But, it’s never happened yet, so it will need to occur before anything can be said on this.
Next section of the talk is titled, “Uploading and So Forth”. He presents a definition of the Singularity, “where machine intelligence exceeds present human levels and technological advances would be very rapid, is considered realistic by many thoughtful people and not too distant on the scale of history”. Among the possibilities would be to express personalities at the human level in artificial computational devices, and uploading. These possibilities present numerous legal issues.
Legal questions: are cyber-persons “persons”? What if duplicate persons are created by accident? What about “asking them to accept deletion”? If “no”, then what about property/ownership rights, etc.? Should “reproductive rights” be highly restricted, particularly since it may become very easy to create “designer persons” cybernetically? In most jurisdictions, people can have as many children as they want. But what about the future? What about persons created in such a way as to have a “past-identity” claim — replicas of previously existing individuals in one or another parallel timestream, say? (Mike writes, “Not as hard to do, in principle, as one might think!”) What legal issues would such resurrectees face? If most people in the future are extremely enhanced and intelligent through technology, would revived individuals be considered like infants or children?
The third issue: ending animal suffering. Animals in the wild seem “born to suffer”, but it needn’t be that way forever. They struggle through a brief existence and are killed by predators, aging, disease, or numerous other causes. Predators in particular regularly destroy other sentient creatures, who fight back and show expected signs of pain and terror as they die. Mike thinks that people in the future will not be indifferent to these issues, nor should they be. Maybe in the future, there just won’t be any more predatory animals. On the one hand, many of us have a reverence for “nature”, but still, watching a lion dispatch a zebra is not particularly heartwarming. Should we, in the future, use advanced technology with accompanying legislative enforcement to end animal predation and other causes of suffering, even if it meant radically impacting the environment or even relegating all sentient, unintelligent life to a “safe haven” such as a cyberworld?
Wendell Wallach – Moral Machines: Teaching Robots Right from Wrong
Wendell is a leading bioethicist with the Yale Interdisciplinary Center for Bioethics. He will read from his new book, Moral Machines: Teaching Robots Right from Wrong, published just a couple weeks ago. The cover of the book is an image of a human hand shaking a CG rendered hand, reminiscent of the old SIAI logo. He begins by reading various fictional headlines like “Robots March on Washington for Civil Rights”, then brings up how Ray Kurzweil and Hans Moravec are predicting uploading and human-equivalent AI, on timelines of 2020 to 2050, based on a computational theory of mind and the continuation of Moore’s law. Legal scholars are already debating whether AIs will ever be given rights. Policy planners are thinking about whether we should regulate technologies that change what it is to be human. More and more papers are being written about programming moral decision-making faculties into artificially intelligent systems. Wendell remarks that this (what he read) is the introduction to the last chapter of the book — the more futuristic stuff.
Do we need to think about ethics for robots (an inclusive term for AIs and virtual or physical bots)? Yes, beginning now. Robots are already making decisions that affect humans for good or ill. Initially confined to very limited areas, these decisions will quickly expand in scope. Several ethical questions arise. Does society want computers and robots making important decisions? This gets into issues of society’s comfort with technology. Are robots the kind of entities capable of making moral decisions? The bulk of the book looks at how we can make ethics computationally tractable — something that can be programmed with today’s technology or that of the very near future, not just predictions that we will have human-level computers.
We break the area into three subjects. Top-down approaches: Asimov’s Laws, the Ten Commandments, utilitarianism, etc. Bottom-up approaches: inspired by evolution and developmental psychology — not an explicit notion of what is right and good, but developmental. Third area: supra-rational faculties. Is reason enough to get robots to make moral decisions, or is something more needed? Are embodiment, emotions, consciousness, or theories of mind necessary? This covers such an inclusive area of ethics that it is fascinating in its relevance to human ethics as well. Once we’ve granted personhood to corporations, it isn’t a huge leap to extend personhood to machines, so that will also be relevant.
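To make the "top-down" category concrete: it means explicit rules filter candidate actions, and some explicit principle ranks the survivors. The toy domain, rules, and scoring below are invented for illustration — they are not from Wallach's book — but they show the shape of the approach he describes:

```python
# Illustrative "top-down" moral filter: deontological rules eliminate
# impermissible actions, then a utilitarian score ranks what remains.
# The domain, rules, and scores are invented for illustration only.

def permissible(action, rules):
    """An action survives only if it violates no explicit rule."""
    return all(rule(action) for rule in rules)

def choose(actions, rules, utility):
    allowed = [a for a in actions if permissible(a, rules)]
    if not allowed:
        return None  # no morally permissible option exists
    return max(allowed, key=utility)

# Hypothetical toy domain: each action lists who it harms and helps.
actions = [
    {"name": "warn", "harms": 0, "helps": 2},
    {"name": "push", "harms": 1, "helps": 5},
    {"name": "wait", "harms": 0, "helps": 0},
]

rules = [lambda a: a["harms"] == 0]          # "never harm" constraint
utility = lambda a: a["helps"] - a["harms"]  # greatest net good

best = choose(actions, rules, utility)
print(best["name"])  # warn
```

Note how the two camps interact even in this caricature: the pure utilitarian pick would be "push" (net +4), but the deontological rule vetoes it first — the same tension the trolley-problem discussion later in the talk turns on.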
The section he will read from: “embrace, reject, or regulate”. Political factors will play the largest role in accountability and rights for robots and in whether some forms of robot research will be regulated or outlawed. Companies developing AI may be concerned that they will be susceptible to lawsuits even when they make life safer. Peter Norvig brought up the issue of whether the companies making AI-driven cars will be susceptible to lawsuits even if they reduce the incidence of crashes. Some cases will have merit, and some won’t. Obviously, the legal situation will evolve. Two questions will arise: can the robots themselves be held directly liable, and do sophisticated robots deserve recognition of their own rights? Though these are considered futuristic questions in the context of the book, they are still worth a brief look.
Humans have historically been punished in various ways: infliction of pain, deprivation of freedom, confiscation of property, etc. Debates about whether robots should be held accountable often focus on whether the usual human punishment methods would even work on robots. If you think that artificial agents will never satisfy the conditions for being punished, then you might consider the idea of punishing them a non-starter. Part of the book addresses whether or not robots could feel pain or distress — would they have a somatic architecture that allows them this? In the book, we look at the connection between animal rights and robot rights. If we move in the direction of somatic architectures for robots, we may also get review boards for robotic experimentation.
Regulating the treatment of robots and robot research is not the same as assigning them rights, but it gives them a toehold. Robots may be programmed to demand energy, but how is one to evaluate whether they truly desire goods and services? If a robot begs you not to turn it off, what criteria should you use to evaluate whether the plea should be acknowledged?
What about increasingly sophisticated robotic sex toys? Most governments don’t regulate them. What about the right of humans to marry robots? This has been addressed in fiction, and a book, Love and Sex with Robots, looks at it. Shifting social attitudes towards marriage could lead to humans marrying sophisticated robots. Unlike other issues involving robots, humans would have a direct interest in marriage, so it could be the first path through which legal rights are assigned to robots. This could lead to robots being granted the usual rights that spouses have. However, before this even happens, sophisticated robots could be banned.
Ultimately, a lot of people who read the book realize that it’s just as much about humans as robots. To what extent is human moral decision-making robotic? As Rodney Brooks said, “we may over-anthropomorphize humans”. That is, we may think that we have more god-like moral decision-making procedures than we actually do. Maybe our morality is slightly more “robotic” than we think. Robotics will be a test bed for learning about human morality.
Next there is a slight segue into the phenomenology of consciousness — we don’t have a science of it. We will learn about this in science of the future. Emotions and feelings are a fundamental part of this as well. To what extent could they be reproduced in robotics? This should also be an important question for anyone interested in uploading.
Wendell mentions that AI decision makers might be even better than humans at moral decision making, for two reasons: 1) absence of disturbing emotions, 2) ability to look at more options. As an advocate for the creation of Friendly AI, I totally agree — but doing such a thing will obviously take an enormous amount of work by programmers and theorists.
Is utilitarianism a good idea? What about the trolley problem? Even for people who say they’re strong utilitarians, you can always propose a dilemma that makes them uncomfortable. Say there’s a train accident where five people are in the emergency room, each needing an organ transplant, and there’s a healthy person in the waiting room. Should that person’s organs be removed to save five lives? (Lori Rhodes: “The moral of the story is, if you’re healthy, don’t go to the hospital.”) Most utilitarians would say no, showing that strict utilitarianism can violate even its adherents’ basic intuitions.
There are two main camps of morality: utilitarianism and deontology. The theory of the latter is that some of our moral intuitions are reflective of moral truths — there are “categorical imperatives” that demand we honor them. Others say that these intuitions are products of evolution or culture, not actually moral truths, and therefore that what’s right or good is really predicated on some other principle, like the greatest good for the greatest number, with a proviso for the individual rights of everyone. This is where we stand on morality at this moment in history. It’s a fascinating debate, with contributions from neuroscience, evolutionary psychology, and many other fields.
Professor Steven Mann — Keynote Address: “Case Study: the Human Rights Travails of Professor Steven Mann as a Partial Cyborg in 2008”
Steve Mann takes the podium. He is projecting the view from his eyeglass cam (specifically, an EyeTap) onto the huge screen at the front of the room — it’s pretty surreal. For his “slides”, he is writing on a notepad that we can see only because he is looking at it and it is projected on the screen. He starts by saying his talk will be about the idea of a “reverse transbeman” — a human trying to become more like a computer rather than vice versa. He himself is the best recognized such person, famously called “the first cyborg”.
His EyeTap uses a mirror and a beam splitter to project images taken from his glasses-cam to his own eye, thereby actually letting him experience the world computationally. It records sound as well as video, and he refers to it as a third hemisphere of his brain. He draws a diagram showing his interaction with his wearable computer as a feedback loop, also mentioning the possibility of overlaying, say, Google Maps onto the video. Some of his students have done their Ph.D. theses on such projects, building functional prototypes. Many of the fundamental challenges have been solved.
Augmented reality with overlays is one possibility, but another is using a computer as an encapsulating intermediary between us and reality. That second possibility is the most powerful.
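The mediation idea can be caricatured in a few lines of code. This sketch is purely illustrative — none of the function names come from Mann's actual systems — but it shows the structural point: with an encapsulating intermediary, every frame passes through the computer before it reaches the eye, so anything can be overlaid or transformed along the way.

```python
# Illustrative caricature of a mediated-reality loop: camera -> computer
# (which may overlay or transform) -> eye. Function names and the frame
# format are invented for this sketch, not taken from the EyeTap design.

def capture_frame():
    """Stand-in for the glasses-cam: grab one frame of the scene."""
    return {"pixels": "scene", "overlays": []}

def mediate(frame, annotations):
    """The encapsulating intermediary: annotate before the eye sees it."""
    frame["overlays"].extend(annotations)
    return frame

def display(frame):
    """Stand-in for projecting the mediated frame back onto the retina."""
    return f"{frame['pixels']} + {len(frame['overlays'])} overlay(s)"

frame = capture_frame()
frame = mediate(frame, ["map tile", "caption"])
print(display(frame))  # scene + 2 overlay(s)
```

The difference from simple augmented-reality overlays is that nothing reaches the eye except what `mediate` emits — which is why Mann calls this second arrangement the more powerful one.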
What happens when a human is endowed with computer capabilities? Professor Mann began using his eyeglasses as a seeing and memory aid. Most of the technical problems have been solved after 30 years of work — now the problems are social, legal, and ethical instead. He says that his eyeglasses are not a camera, but cause an eye to become a camera. He’s had challenges getting into department stores that forbid photography — but should it necessarily be called a recording if it has become a part of his very essence?
Professor Mann is part of a condominium association, and was asked to remove his glasses to attend one of its meetings. Since his eyes are so adapted to the eyeglasses, he might trip and fall without them, so he asked those who requested their removal whether they would accept liability.
Electronic devices are not allowed at the US Embassy. Therefore, people with pacemakers or other devices become contraband — existential contraband. He ran into this problem when trying to pick up his daughter’s passport from the US Embassy, finding himself in the odd situation of being both required to enter the Embassy and forbidden from doing so. This was finally resolved by a staff member coming out onto the sidewalk and handing him the passport there.
Professor Mann draws an image of a camera on a telephone pole and a person below watching it, labeling them “surveillance” and “sousveillance”. Surveillance means “to watch from above” in French. Its reciprocal is sousveillance, meaning “to watch from below”.
Heard of a Panopticon? What is the inverse of that concept? We sewed a bunch of security-camera-style black domes onto conference bags for CFP2005 in Seattle. This is a reverse Panopticon. Some of them had cameras, some of them didn’t, and others just had flashing red lights but no cameras. In a conventional Panopticon, prisoners have to be on their best behavior because they may always be watched. In an inverse Panopticon, the guards have to be on their best behavior because they are continuously being watched by the prisoners.
Originally, he just tried to create vision aids for the blind. Eventually, he became a “cyborg performance artist”. He’s not an activist, but he got drawn into such things. He shows us a hilarious bra with black security camera domes on it, showing us the inversion of the “male gaze” of security guards. A security guard couldn’t really ask a woman to remove such a bra.
Sousveillance is not just people photographing police. It’s also the recording of an activity by a participant in that activity. Third-party recording of a phone conversation is surveillance, but recording your own is sousveillance. That is the legal distinction. In many states, the former is illegal, but the latter isn’t. This definition removes the notion of it always being an “Us vs. Them” framework of watchers and watched competing against each other.
He calls his device an electrovisualgram because it also has EEG and EVG on it. He considers the data they record as his property, a closed-loop mindfile, glog, or cyborg log. There is a community of 30,000 open member gloggers on the glogger.mobi website. This is also a new type of social networking, a web 2.0 phenomenon.
He is working with a visually impaired person to implant the entire system into his eye socket in a self-contained fashion. If it’s entirely inside the body, is it still an external device?
He had a bad accident in which his old eyeglasses were destroyed during a movie production scene. A guard directed him to walk over live electric wires; they were greasy, he slipped, and the fall fried the system. A second guard said he shouldn’t have walked that way. When he returned to the site of the accident to record it, a third security guard physically assaulted him for filming the incident from a public sidewalk. Thanks to the glog, this was all completely recorded. Imagine if all the world’s leaders were wearing these systems and could see the lives of the others. This could resolve a lot of tensions.
An audience member, Rudi Hoffman, asks him if he dreams in the same way he sees things with his eyeglasses. He came up with something called Dremes – ideas that come from dreams. Most of his inventions he has while dreaming. He had a dream that he was playing in a fountain that was a musical instrument, and ended up building it: a hydrophone. He shows us a magazine with him on the cover playing the instrument. If you wake up with a dream and don’t move, you can go back to the dream again when you go to sleep. His recording devices allow him to take verbal notes of the dream and then reenter it. As such, he sometimes calls it the Dream Machine.
During the question session, he remarks that he was the first person to put his life on the web; many people have since followed in his footsteps. He allowed people to scribble on his retina and modify his impression of reality, as a way of communicating. It got congested, so he came up with the idea of a sight license, including different protocols for partitioning space in his field of vision and interacting in groups.
An audience member poses the idea of him accidentally recording a person with a child who wants their location to be unknown to a former boyfriend or spouse. Professor Mann makes a distinction between acquisitional and disseminational freedom of data sharing — they can be distinct. The implication is that he should be allowed to acquire data as much as possible, but not necessarily disseminate it.
An audience member, Gene Natale, asks if one could ever record dreams using such a device. (A topic I’ve written on before.) Professor Mann doesn’t know, but he considers it quite hypothetical, whereas he usually focuses on real technology. Gene also asks if he’s given up any of his rights by wearing the device, such as his right to enter the US Embassy. He said you could argue that he has a right to it as he would a hearing aid, and that as more people wear these things, acceptance will rise. People will agree that there is something wrong with an organization that is afraid of being recorded. People who use surveillance argue that if people have nothing to hide, they should accept it. Why not turn that around and ask: if the organization has nothing to hide, then why can’t we film them?
Professor Mann remarks that he’s witnessed a lot of accidents, and has come to the aid of the parties involved by helping them remember what happened. He mentions that he and his students developed a facial recognition system that pops up a remark about previous times you saw a person. Like a search engine, it gives a number of “hits” on past sightings of that person. Rudi Hoffman mentions that cryonicists like the idea of monitoring their vital signs, and asks if anything he is working on is relevant to that. Professor Mann is excited by the connection, and remarks that another reason he wears his eyeglasses is so people will at least know what happened if he ever gets into a terrible accident.
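The "search engine for faces" behavior Mann describes can be sketched as a log keyed by recognized identity, where each new sighting returns the prior "hits". This is my own illustrative reconstruction, not Mann's code; the actual matching would compare face embeddings rather than exact string IDs, which is stubbed out here.

```python
# Sketch of a sighting log behind a face-recognition memory aid: each
# recognized identity keys a list of past sightings, and recording a
# new sighting first returns the prior "hits", like a search engine.
# Illustrative only; real matching would use face embeddings, not IDs.

from collections import defaultdict
from datetime import datetime

class SightingLog:
    def __init__(self):
        self._log = defaultdict(list)

    def record(self, face_id: str, note: str, when: datetime) -> list:
        """Return previous sightings of this face, then log the new one."""
        hits = list(self._log[face_id])  # copy: the "search results"
        self._log[face_id].append((when, note))
        return hits

log = SightingLog()
log.record("alice", "met at conference", datetime(2008, 12, 10))
hits = log.record("alice", "saw in hotel lobby", datetime(2008, 12, 11))
print(len(hits))  # 1
```

Returning the hits *before* appending the new sighting matches the use case Mann describes: at the moment you see someone, the system reminds you of the encounters you had with them previously.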
Should cyborgs have lower insurance premiums because they’re a lower risk? The same applies to, say, a donut shop with cameras. Thirty lawyers in the law department at his university are working on the problem right now with a $4 million grant.
Audience member Cairn Idun asks Professor Mann to invent a comfortable device that can constantly monitor the heart and notify others if it stops. Mann says that he would be interested in helping out with that.
Dr. Rothblatt calls Professor Mann “the Rosa Parks of transhumanism” because he puts himself at social and legal risk every day by wearing his device. She believes he will increasingly be recognized as one of the greatest technological pioneers of the last century for his work. I get the feeling that he already receives that recognition from many of the attendees here.
Professor Gene Natale — The Legal Obligations of Creators of People via Ectogenesis or Chimeric Non-Human DNA – the “Model Code”
Next speaker is Gene Natale, J.D. He was the judge in the BINA48 trials that took place over the last few years at this very gathering. His first slide shows his laws for robotics: 1) robots must be designed so that the laws can effectively control a robot’s behavior; 2) a robot that knows it is capable of preventing harm has achieved self-awareness – clearly a human characteristic – and thus would no longer be a true robot; 3) an android robot designed to look and act human gives rise to a need for android laws.
Legal rights of robots — if we give rights to intelligent machines, either robots or computers, we’ll also have to hold them responsible for their own errors. (Quote from Rob Freitas’ work on the ethics of intelligent machines). Wendell Wallach: Our present laws are based on a clear distinction between persons and machinery that will be increasingly challenged by more complex and intelligent agents.
Rules for the modern robot — European Robotics Network (EURON). Is there a way to ensure robotic fighter planes do not kill the innocent by accident? Is “technical error” an appropriate excuse for violating the Geneva Convention? Should sex dolls resembling children be allowed? This group (EURON) is supposed to be releasing a detailed roadmap on these questions. The Japanese Ministry of Economy, Trade, and Industry is doing the same thing. Of course, many robots will be manufactured by Japanese companies.
Moving on to laws for cyborgs — short for “cybernetic organism”, the “melding of the organic and the mechanic”. There should be a Cyborg Bill of Rights. For robots, this includes a right to privacy on the Internet; to reproduce and create new robots; to seek education, employment, and other actualizations of self in cyberspace or any reality; to peaceably assemble; to accumulate and dispose of capital; to be presumed innocent until proven guilty; to marry a human; and to inherit property.
Gordon is the world’s first robot controlled exclusively by living brain tissue — cultured rat neurons. The MEA (multi-electrode array) serves as the interface between living tissue and machine, with the brain sending electrical impulses to drive the wheels of the robot, and receiving impulses from sensors that monitor the environment. The machine can learn, preventing itself from running into walls, for instance.
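The sense-act loop just described can be sketched in a few lines. This is a highly simplified illustration: the real Gordon stimulates cultured neurons through the MEA, whereas here a stub function stands in for the living tissue, so every name and threshold below is a hypothetical placeholder, not the actual protocol.

```python
def brain_response(stimulated):
    """Stub standing in for the cultured-neuron 'brain': stimulation in, motor command out."""
    return "turn" if stimulated else "forward"

def step(sonar_cm):
    """One loop iteration: sensor reading -> stimulation -> wheel command."""
    obstacle_near = sonar_cm < 10  # sensor impulse sent toward the 'brain'
    return brain_response(obstacle_near)

print(step(5))   # obstacle close -> "turn"
print(step(80))  # clear path -> "forward"
```

The point of the loop is that learning happens inside `brain_response`: in the real system, repeated stimulation changes how the tissue responds, which is how the robot stops running into walls.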
Part III: artificial brains. An NSF/DOC report acknowledges that large issues are coming with the enhancement and understanding of the brain. Reverse engineering of the human brain may be accomplished in the next two decades, which would allow for better understanding of its functions. An artificial brain could be a tool for discovery, especially if computers could closely simulate the brain.
Part IV: artificial persons. Human developmental manipulation, such as a chimeric gene (an artificial, human-made gene) created by linking together separate segments of natural or synthetic DNA from different sources. NSF report on Converging Technologies for Improving Human Performance: “If the cognitive scientists can think it, nano people can build it, bio people can implement it, and IT people can monitor and control it.” The projections in the report suggest that in the absence of binding restrictions, the public could be induced to accommodate itself to fabricated humans and near-humans, organisms that previously existed only in the realm of speculative fiction. One paper by the Council for Responsible Genetics argues that “drawing a sharp line is the only way to prevent the eventual production of experimentally damaged humans and quasi-humans”.
Another type of artificial person: artificial brains with natural intelligence. With the advent of nano-neuro techniques, neuroscience is about to gain insight into the mechanics of all brain functions. This could allow artificial brains with natural intelligence, partners that could join us to improve the world, as well as enhanced human brains. 1) Artificial people will be very human-like, given their natural intelligence, and will develop within the human environment over a long course of close relationships with humans; 2) artificial people will be no more alike one another than humans are; 3) artificial people will need social systems to develop their ethics and aesthetics. Paper: Artificial Brains and Natural Intelligence, part of the Converging Technologies for Improving Human Performance report.
Part V: Model Code for Artificial Persons. An artificial person includes any mechanical or bio-mechanical unit or being, equipped with an “artificial brain”, capable of attaining “natural human intelligence”. The next section formally defines an “artificial brain”, then “natural human intelligence”. Possible provisions for the Model Code: there shall be no fusion of artificial brains into larger brains, unless specifically permitted by law. There shall be no fusion or interconnection of artificial brains into or with human brains, unless specifically permitted by law. The brains of artificial persons shall not be further programmable after initial development, except to repair or preserve such a person.
Continuing with the provisions, artificial persons shall be provided with education and social systems to develop their ethics and aesthetics. Artificial persons shall be held liable and responsible for their own acts, and subject to all laws applicable to natural persons, in accordance with such artificial persons’ human intelligence and capabilities. Artificial persons may be employed to assist natural persons in all lawful endeavors, but shall not be allowed to engage in law enforcement or combat duties, except in an advisory capacity. Artificial persons shall have the same rights as natural persons, except the right to bear arms, vote, hold public office, or reproduce. Professor Natale doesn’t think that mankind would accept artificial persons with the latter rights. This, of course, was followed by objections from the audience. The debate had to be cut short, though, for the next speaker.
Wesley M. Du Charme, Ph.D — Palling Around: The Personhood Analysis List (PAL)
Wesley, a Ph.D. psychologist, admits that he likes to measure things, not define them. He thinks we ought to create a personhood analysis list (PAL) to use as a decision aid, and make it open source. Example PAL list — 1) 50% or more human DNA, 2) Turing Test passing, 3) looks like a human, 4) exhibits emotion (yes, this is a human-centric list, but what else would you expect from a human making it up?), 5) claims of personhood, 6) ability to speak, 7) moves through environment, 8) meets the ontological description of a person, 9) granting personhood is in everybody’s best interest, 10) suffers because of lack of personhood, and 11) an “other” category.
How would this work? First, you get the list together, then score each element from 1 to 10 depending on the degree to which the entity exhibits it. There would be a cutoff score for personhood. A score of 1? 5? 10? Cross-checking: would this make a person in a coma a person? What about someone with Down syndrome, or someone who is brain-damaged? Maybe someone could become so brain-damaged that they aren’t considered a person. The PAL should match our common sense. Weak AI or expert systems should not be considered persons, so the PAL list should cross-check this.
Advantages of PAL: no single element is determinant; weighted scoring of different list elements can serve as a method of compromise; and its open-source nature allows utilization of the best ideas. All elements of the decision are open to inspection and negotiation, visibility into the process makes communication with a lay audience easier, and the information richness of the decision can be controlled through the details of each list element. For instance, breaking down “can speak” into computer-generated text messages, audible speech, what percentage of the audience can understand it, etc.
Decision making elements: 1) decision makers (judge, panel of judges, expert, panel of experts, etc.), 2) list elements, 3) element scoring, 4) cutoff score, 5) application process. Should any entity be allowed to apply? Legal process: do it at the state level, write model legislation, find a sponsor, present it to committee, a state legislature vote, then pick the next state.
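The weighted-scoring mechanics Wesley describes could be sketched as follows. The element names, weights, and cutoff below are hypothetical illustrations I've chosen for the example, not values he proposed in the talk.

```python
def pal_score(scores, weights):
    """Weighted average of per-element PAL scores (each scored 1-10)."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Hypothetical weights and candidate scores (not from the talk).
weights = {"turing_test": 3.0, "claims_personhood": 2.0,
           "exhibits_emotion": 1.0, "ability_to_speak": 1.0}
candidate = {"turing_test": 9, "claims_personhood": 10,
             "exhibits_emotion": 6, "ability_to_speak": 8}

CUTOFF = 7.0  # the cutoff itself is one of the open decision elements
print(pal_score(candidate, weights))            # about 8.71
print(pal_score(candidate, weights) >= CUTOFF)  # True: passes this cutoff
```

Because both the weights and the cutoff are explicit inputs rather than buried in the procedure, every element of the decision stays open to inspection and negotiation, which is the advantage Wesley emphasizes.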
Rudi Hoffman, CFP — The Ethics of Cryonics: Why the Future Probably Does Need You
Outline: 1) definition of ethics, 2) some surprisingly bad examples of ethics from historically venerated sources, 3) some recent ethics thought leaders, 4) the ethics of cryonics, and why you are worth saving. Rudi admits that he wants to massage our emotional brains after a day of academic talks. Even though some people think they’re highly rational, they often make decisions on an emotional basis. (I somewhat disagree — I actually think it’s possible for there to be people so devoted to rationality that their decisions and emotions generally flow from rational appraisal rather than the other way around. Some devotees of Eliezer Yudkowsky’s writings on Overcoming Bias come to mind.)
Rudi gives us a definition of ethics from the dictionary, and admits that it’s rather uninformative: a code of behavior, esp. of a particular group, profession, or individual. Next he shows us an image of a statue of Moses, who remains a source of much ethics and morality, even today. He quotes Numbers 31:13-18, where Moses is angry that, during a particular war in which they were triumphant, the Israelites didn’t kill every male child and non-virgin woman. Next, he shows us an image of Jesus, whom he says millions claim to talk to every day. Jesus is unhelpful with cryonics because he doesn’t have anything to say on it. (He doesn’t show us a Jesus quote, sadly.) The idea of people burning in hell forever is in the New Testament, by the way, so it’s not all better than the Old Testament stuff.
He quotes the Koran, about cutting off people’s heads and fingers, and about killing people who change their religion. What about the Bible? It says that we should kill people who don’t listen to priests (Deuteronomy 17:12): “Such evil must be purged from Israel”. Death for fortune telling, death for homosexuality, death for sorceresses and witches, death for striking one’s father or mother. Also, kill the entire town, as well as all the livestock, if one person worships another god (Deuteronomy 13:13-19); the entire town must remain a ruin forever. Also, kill all women who aren’t virgins on their wedding night. Death for blasphemy — stoned to death by the whole community of Israel. Death for people who work on the Sabbath. Many of the people in the audience are laughing loudly, and Rudi says, “these statements deserve to be laughed at in the public square”. I again am baffled — how can people be Christian? How can they say they follow the Bible? They’re either hypocrites (by not following the Bible) or downright evil. If the Bible were published today instead of thousands of years ago, it would be considered a work of madness.
Modern ethicists: first, Leon Kass, chair of the President’s Council on Bioethics. “The human taste for immortality, for the imperishable and the eternal, is not a taste that the biomedical conquest of death could satisfy. We would still be incomplete; we would still lack wisdom; we would still lack God’s presence and redemption.” Another quote: “One could look over the past century and ask oneself, has the increased longevity been good, bad or indifferent?” Since Leon is already past the natural human lifespan, perhaps he should terminate himself? Actually, I’d personally prefer him to freeze himself and join us in the future (if the species survives).
Another ethicist: Peter Singer. “My work is based on the assumption that clarity and consistency in our moral thinking is likely, in the long run, to lead us to hold better views on ethical issues.” Peter Singer supports the Great Ape Personhood Project and encourages people to be vegetarians. Yeah, Peter Singer is cool (and one of the only reasonable ethicists that Rudi mentions in his presentation). I would’ve also mentioned David Pearce and Max More, though of course they are not as famous as the others mentioned.
How about Pope Benedict? “You offend God not only by stealing, blaspheming or coveting your neighbor’s wife, but also by ruining the environment, carrying out morally debatable scientific experiments, or allowing genetic manipulations which alter DNA or compromise embryos.” Rudi points out that a blastocyst has zero neurons, but a housefly has 100,000, so maybe we should feel worse for killing a housefly than a blastocyst.
Next Rudi references Ben Franklin, quoting a 1773 letter of his to Jacques Dubourg in which he mentions the idea of cryonics. Next, he shows us an image of Francis Fukuyama saying “Yes, absolutely”. That’s the answer; what is the question? “Does the government have the right to determine that citizens must die?” Next, Bill McKibben: “These are the most anti-choice technologies anyone has ever thought of.” (Referring to biotechnology, nanotech, and robotics.) How about Aubrey de Grey? Probably the only person on the planet doing enough to extend human lifespan. Rudi references de Grey’s talk, “Is it safe for a biologist to support cryonics publicly?”
Is it moral to want to live? What about overpopulation? Do you add more value to the world than you take? What if Einstein, Robert Ingersoll, or Thomas Edison were still alive and productive? If you are a human being that adds more value to the world than you take away, then you owe it to us to stay alive.
Is choosing cryonics ethical? What about alternate uses of the funds? Through life insurance, the money to fund cryonics is created at the time of need and does not reduce available assets. The cost is similar to a cup of Starbucks coffee a day.
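To make the coffee comparison concrete, here is the back-of-envelope arithmetic. The $4 daily figure is my own assumption for illustration, not a number from Rudi's talk.

```python
coffee_per_day = 4.00               # assumed price of one daily coffee
annual_cost = coffee_per_day * 365  # what "a coffee a day" adds up to
print(annual_cost)                  # 1460.0 dollars per year
```

The point of the comparison is scale: an annual outlay of this order, paid as a life insurance premium, is how a cryonics arrangement is typically funded without reducing one's available assets.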
Conclusion: 1) you cannot base your ethical decisions on what people allegedly said when technology was non-existent, 2) for authoritarian religionists to claim the moral high ground, they must acknowledge that our actual ethics come from experience, not scripture, 3) life is good, and a prerequisite for doing good. It follows that your life is probably worth the affordable gamble of cryonics.
Questions and comments section. Martine worries that there will be robots with the Bible programmed in as their ethical guideline. Rudi expresses worry that so many people think that ethics is already solved by the Bible. Modern ethics needs to be far more nuanced. Cairn says she agrees with 99% but disagrees on one thing: she believes anyone should be allowed to end their own life if they want, no matter how much benefit they provide to others. Mike Perry concurs on that.
Martine Rothblatt — Make Up Your Mind: the Legal Identity of Transbemans with More Than One Presence
What’s a Transbeman? A human without DNA, any being that wants human rights, or a beman who transitioned from human. Next Martine shows us a funny comic with a car with a human head and hands: “So I thought, hey what the heck, it’s my life, who cares what other people think. I’ve always known I was a car trapped in a man’s body, so I got the operation, and I’ve never been happier”. More seriously, something like ASIMO with a conscience.
Human is to transbeman as sex is to transgender. It’s about being a being and being proud of it. Not getting hung up on what your substrate is. (Image of a drop of water hitting a pool.) So, who are we when we’re a spreading pattern and not a fixed body? A modest legal proposal: copies of a mind are the same mind until proven differently. Therefore, one mind, one vote, one continuous being across time and space. How to prove differently? The transgender “Real Life Test”: you spend time once or twice a week with two psychologists and persuade them that your desire is sincere. This prevents doctors or surgeons from being sued after a sex-change operation, even if the patient later changes their mind. So, you might need to persuade two cyber-psychologists. Then you could obtain a judicial order of cyberbirth. Or, a multi-year immigration process from Res Nullius (an object or person with no legal status) to Jus Civile (a legal citizen).
Critiques? 1) Isn’t this process slow? Yes, but so is growing up, and so is naturalization. Furthermore, cultivation improves appreciation. 2) Won’t some unqualified “mindclones” pass the “Real Life Test”? Yes, but it doesn’t really matter. Many incompetent people, for instance those who just vote the way their preacher says, have citizenship. Cyberpsychology certification standards will diminish the problem (eliminating false positives), and naturalization-from-cyberspace procedures will diminish it further.
“Each of us is a bundle of fragments of other people’s souls, simply put together in a new way.” — Doug Hofstadter, I Am a Strange Loop
Since we let humans birth new flesh minds, then why not also let them birth non-flesh minds? A mind is a being, a soul, a person, a transbeman. Every mind is a mixture of other minds, not wholly independent, not of purely free will, and in constant synchronization with friends and family.
So on this International Human Rights Day: instead of saying, “I dream of a day when my four children will be judged not by the color of their skin but by the content of their character.” Martine instead says, “I dream of a day when my four children will be judged not by the absence of their skin but by the content of their character.”
Take home message: human rights, such as citizenship, should flow to those who value them, regardless of their physiology, or lack thereof.