Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

31Jul/10

58% of Americans Expect World War and Nuclear Terrorism by 2050

Here are the results from Pew Research. Thanks to James Hughes on the ieet-x list for the link.

Filed under: risks 18 Comments
28Jul/10

Terry Grossman: Rethinking the Promise of Genomics

Great article in h+ magazine from about a week ago: "Rethinking the Promise of Genomics". This is by Terry Grossman, co-author (with Ray Kurzweil) of Fantastic Voyage:

I used to be a big believer in the enormous potential of genomics, and each of my two previous books, Fantastic Voyage and TRANSCEND: Nine Steps to Living Well Forever, had chapters devoted to this topic. The relevant chapter in the earlier book, Fantastic Voyage, published in 2004, was titled "The Promise of Genomics." My co-author in these books, Ray Kurzweil, is widely regarded as one of the world's foremost inventors and futurists, and he has made predictions for what is likely to occur in the future in the field of genomics. Yet, these days I find that I am feeling far less confident, at least for the near term, about the prospects for this "promise."

Here's a key quote by Grossman:

Currently I have moved much closer to the idea of "genetic irrelevance," the idea that in the overwhelming majority of cases, our genes are of much less importance in determining our fate and that the environment in which we live and the lifestyle choices we make are of far greater importance.

Please note that I said this is true in the "overwhelming majority of cases," but it is not true all the time. About one in 20 people is born with an abnormal gene that will create a major problem that can affect life and be quite relevant, either from birth or at some point further down the line. Examples include cystic fibrosis, a genetic disease that can manifest from birth and for which we have been doing routine screening for decades, and the BRCA-1 and BRCA-2 genes, which dramatically increase a woman's risk of breast and ovarian cancer later in life. But for nearly 95 percent of us, we come off of the assembly line of birth virtually perfect.

Illuminating stuff. Go exercise! It's important that the advocates of science and technology make it clear to the public that we are willing to be pessimistic about a technology's dividends when it looks rational to do so. Grossman's article reminds me of an excellent 2001 article by John Smart, "Performance Limitations on Natural and Engineered Biological Systems":

The more complex any life form becomes, the more it becomes a legacy/path dependent system, with many antagonistic pleiotropies (negative effects in other places and functions in the organism) whenever any further change is contemplated. It seems that evolutionary development, just like differentiation from a zygote or stem cell to a mature tissue, becomes increasingly terminally differentiated the more complex and specialized the organism. One extreme case of this kind of terminal differentiation, at the cellular level, is nerve cells in the human brain, which are so specialized, and the connections they support so complex, that they cannot even replace themselves, in general. Could they eventually learn to do so without disrupting the connectionist complexity that they create in the brain, after their development has stopped? Perhaps not. The more complex the system becomes, the less flexible it is. It gets progressively harder to make small changes in the genes that would improve the system, and given how finely tuned so many system elements are, large changes are out of the question.

For the reasons outlined by Grossman and Smart, I am more in the school that holds that cybernetics (implants, brain-computer interfaces, wearable computing, etc.) will provide the most significant performance upgrades to humans in the nearer term (20-30 years). At first, bio-transhumanism will be more of a side phenomenon than the central thrust of the transition. We will have much more effective and reliable means of making humans stronger and faster before we can make ourselves live longer and deeply exploit our own genetics.

Filed under: biology, futurism 12 Comments
27Jul/10

io9 on Ted Chiang’s New Book

Here it is.

"Understand" is a classic.

Filed under: AI 2 Comments
27Jul/10

Amusing Ourselves to Death

[Comic comparing George Orwell's and Aldous Huxley's visions of the future]

Aldous Huxley and George Orwell... who was right?
Filed under: images, random 227 Comments
26Jul/10

Matt Mullenweg Links "Why Intelligent People Fail"

I was honored to see that the creator of WordPress, the very software this blog runs on, plugged my page "Why Intelligent People Fail" yesterday. Definitely a page worth seeing if you haven't yet.

Filed under: random 1 Comment
20Jul/10

Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips

I recently interviewed Eliezer Yudkowsky for the reboot of h+ magazine, which is scaling down from being a magazine into a community blog of sorts.

The interview is a good primer on what the Singularity Institute is about and the basic rationales behind some of our research choices, like focusing on decision theory. It is an especially good read for those not entirely familiar with the Institute's research. It can also be used to promote the Singularity Summit, so please share the link!

Here are the questions I asked Eliezer:

1. Hi Eliezer. What do you do at the Singularity Institute?
2. What are you going to talk about this time at Singularity Summit?
3. Some people consider "rationality" to be an uptight and boring intellectual quality to have, indicative of a lack of spontaneity, for instance. Does your definition of "rationality" match the common definition, or is it something else? Why should we bother to be rational?
4. In your recent work over the last few years, you've chosen to focus on decision theory, which seems to be a substantially different approach than much of the Artificial Intelligence mainstream, which seems to be more interested in machine learning, expert systems, neural nets, Bayes nets, and the like. Why decision theory?
5. What do you mean by Friendly AI?
6. What makes you think it would be possible to program an AI that can self-modify and would still retain its original desires? Why would we even want such an AI?
7. How does your rationality writing relate to your Artificial Intelligence work?
8. The Singularity Institute turned ten years old in June. Has the organization grown in the way you envisioned it would since its founding? Are you happy with where the Institute is today?

9Jul/10

New York Times Features Robin Hanson and the “Hostile Wife Phenomenon” in Cryonics

I really didn't think the mainstream could possibly care much about this issue, but the New York Times seems to be jumping all over our small community, so now we get the amusement of seeing our internal issues get hashed out in front of everyone. Yay.

From "Until Cryonics Do Us Part":

Robin is the kind of nerd who is very excited about the future, an orientation evident on his C.V., which lists published articles like "Economic Growth Given Machine Intelligence" (on why robots will give us growth rates "an order of magnitude" higher than we've currently got), "Burning the Cosmic Commons: Evolutionary Strategies of Interstellar Colonization" (on what behaviors we can expect from extraterrestrials) and "Drift-Diffusion in Mangled Worlds Quantum Mechanics" (it's very complicated). His enthusiasm is evident in the way he talks about these ideas, hands in the air, laughing amiably every time he brings up the distance between his own theories and those of the mainstream. If he is in a chair, the chair is moving with him.

Nice personality profile. I noticed that there was one glaring error in the article regarding the process of cryonics... it claims that your brain is surgically removed after metabolism ceases, but it's really the head. This is an important distinction. You'd think that reporters writing an article on cryonics would at least read Alcor's web page for ten minutes and get that right.

The original paper, "Is That What Love Is? The Hostile Wife Phenomenon in Cryonics", goes into more depth if you're interested. My explanation for the phenomenon is pretty simple: gender differences in enthusiasm towards science. I predict that more women will come to appreciate science when more technologies are developed that focus on the empathic nuances of human communication. We already see this to some extent with things like Second Life, though that may be a bad example due to its particular idiosyncrasies.

Yes, I know it's verboten to ever mention any differences between men and women, but keep in mind that many of the differences have to do with attitudes that are only skin-deep, and more or less chosen. (Though there are definitely differences that seem to center around the specific adaptive problems men and women were invented by evolution to solve.) I think that the only way gender relations can be improved is by analyzing the differences between (the average of) men and (the average of) women and trying to reconcile them, rather than ignoring said differences.

Anyway, for Robin Hanson's personal justification of why he thinks being frozen and eventually uploaded will work, see "Philosophy Kills".

Filed under: cryonics 91 Comments
8Jul/10

Neil S. Greenspan: Hogwash About the Singularity is Here

Huffington Post has had a lot of articles about the Singularity lately. The most recent one is "Hogwash About the Singularity is Here" by Neil S. Greenspan, a Cleveland immunologist.

The article puts forward the usual "complexity of biology" and "exponential growth cannot continue forever" criticisms of Kurzweil's predictions. Most of these criticisms have already been addressed by Kurzweil at the end of his last book. I think there are good points on both sides, but critics like Greenspan are ultimately being too pessimistic.

What I find interesting in articles like this is not the specific criticisms, which I've heard many times before and somewhat agree with, but the moral valence and indignation present in the critique. Biologists like Greenspan are angry that Kurzweil is, in their view, glossing over the complexity of biology. The most morally charged part of the article is actually the comments. I'm going to skip the moral part this time, though, and look closer at a scientific statement that Greenspan makes.

Greenspan goes directly after "nanobots" in one part:

There is no basis at present for believing that medical interventions based on the postulated but not-yet-realized nanobots, often-invoked by Singularity enthusiasts for the resolution of all medical threats and malfunctions, will perform their duties without trade-offs and side-effects like those associated with every other therapeutic agent ever employed.

One could argue this, but I'll bet that the reason why Greenspan sees "no basis" is that he knows next to nothing about the postulated "nanobots" he is criticizing. Note how his argument is based simply on the generalization that there are "trade-offs and side-effects" with every therapeutic agent. This is true, but trivial. Some of the trade-offs are quite modest. Am I really trading much by letting my skin get pricked by a needle to inoculate me against a deadly disease? Is the recent rise of antibiotic-resistant bacteria really all that huge of a price to pay given the suffering that antibiotics have alleviated in the more than half-century since they started to be mass produced? Is using a condom for casual sex really that much of a bad tradeoff, given what our ancestors had to deal with without them?

I'm not going after the specific content of Greenspan's criticism here so much as the worldview it represents: that things will always be roughly the same in medicine as they are now. That's the default view of the future of medicine that non-specialists, like elementary school teachers, conveyed to me while I was growing up. Fortunately, I eventually met people working in biotechnology who said that the progress mankind has achieved so far in medicine is quite primitive in comparison to what we will one day achieve. Today's medicine will be viewed as medieval from the perspective of the future.

The fact that nanobots are indeed relied upon for the more extreme regeneration, life extension, and disease prevention scenarios does show a strong potential point of failure for the transhumanist vision. If nanobots turn out to be impossible, does that mean we will be stuck with the same old medicine forever? Not likely, because there are a variety of other approaches and techniques for fine-grained intervention in human biology that do not depend on nanobots.

Increasingly sophisticated bioMEMS already exist and have been used in the bloodstreams of animals, mostly as sensors. True "nanobots" are unlikely ever to be used to navigate the body anyway -- at that scale they would just get tossed around by the blood and would have to spend too much energy to make headway. Any robot that performs medical functions in the human body is likely to have a diameter greater than 1 micron (1,000 nanometers), and probably more like 5 microns (5,000 nanometers), making it a microbot rather than a nanobot. Microbots already exist; the primary challenge is improving them: making them more durable, biocompatible, mass-producible, and sophisticated. Molecular assembly lines already exist, and it is only a matter of time until biomedical devices are built with them.
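
To make the scale argument concrete, here is a back-of-the-envelope sketch (my own toy calculation, not from Greenspan or the original post) of the Reynolds number for swimmers of different sizes in blood. The density, viscosity, and swim-speed values are representative assumptions:

```python
# Toy Reynolds-number estimate for small swimmers in blood.
# Re = rho * v * L / mu; Re << 1 means viscous forces dominate
# and momentum-based swimming strategies stop working.

RHO_BLOOD = 1060.0   # kg/m^3, approximate density of whole blood
MU_BLOOD = 3.0e-3    # Pa*s, approximate viscosity of whole blood
SWIM_SPEED = 1.0e-3  # m/s, assumed self-propulsion speed (1 mm/s)

def reynolds(diameter_m: float, speed_m_s: float = SWIM_SPEED) -> float:
    """Reynolds number for a swimmer of the given diameter."""
    return RHO_BLOOD * speed_m_s * diameter_m / MU_BLOOD

for label, d in [("100 nm 'nanobot'", 100e-9),
                 ("1 micron robot", 1e-6),
                 ("5 micron microbot", 5e-6)]:
    print(f"{label}: Re ~ {reynolds(d):.1e}")

# All three come out at Re << 1: at these scales flow is dominated
# by viscosity, so shrinking below the micron scale buys nothing
# for navigation -- the bot is simply carried by the bulk flow.
```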

Greenspan closes with the following:

It is entirely reasonable to expect significant diagnostic and therapeutic progress to continue, but predicting complete conquest of disease is unrealistic in light of both the numerous deficiencies in our understanding of the subtleties of cellular and molecular function that are likely to persist in some measure for many years and the extremely-difficult-to-avoid trade-offs that afflict most medical interventions. Indefinite human lifespan remains wishful thinking well beyond the realm of plausibility.

Again, maybe these deficiencies will persist "in some measure" for many years, but the specifics make the difference between whether enhancement and life extension see widespread adoption or not. Reports funded by government agencies, such as Converging Technologies for Improving Human Performance, seem less pessimistic than Greenspan about human enhancement made possible by our "understanding of the subtleties of cellular and molecular function". And has Greenspan ever heard of the engineering approach to aging, which, instead of trying to stop every source of metabolic damage, focuses merely on removing age-related damage faster than it accumulates? I would be particularly interested in hearing his take on the latter, purely as a scientific matter rather than a moral one.
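
Here is a toy numerical sketch of that engineering logic (my own illustration; the rates and threshold are arbitrary assumptions, not figures from the SENS literature): damage keeps accruing, but periodic repair that outpaces accumulation keeps the total below the pathology threshold indefinitely:

```python
# Toy model of the "engineering approach" to aging: don't stop
# damage from occurring, just remove it faster than it accumulates.
# Units, rates, and threshold are arbitrary illustrative assumptions.

ACCUMULATION_PER_YEAR = 1.0   # damage accrued each year
REPAIR_INTERVAL = 10          # years between repair therapies
REPAIR_FRACTION = 0.9         # fraction of damage removed per therapy
THRESHOLD = 30.0              # damage level at which pathology appears

damage = 0.0
for year in range(1, 101):
    damage += ACCUMULATION_PER_YEAR
    if year % REPAIR_INTERVAL == 0:
        damage *= (1.0 - REPAIR_FRACTION)  # therapy removes most damage
    if damage >= THRESHOLD:
        print(f"pathology threshold crossed at year {year}")
        break
else:
    # With these rates, damage settles into a low cycle and never
    # approaches the threshold, despite metabolism never being "fixed".
    print(f"after 100 years, damage = {damage:.2f} (still below {THRESHOLD})")
```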

It is worth noting that even if we "conservatively" assume that average lifespan will improve through the 21st century at roughly the same rate as it did through the 20th -- about a fifth of a year per year for people in developed countries -- and we take today's average lifespan to be about 75, then by the year 2100 people will live an average of 95 years. Not that radical a number, but thinking of an entire society of active people in their 70s and 80s is probably more than many of today's unimaginative minds can handle. To them, the little patch of history in which they were born is typical of reality in general, and any major change will come as a surprise.
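
The arithmetic behind that projection, as a quick check (taking the baseline as 75 years in 2000, the reading that reproduces the quoted figure):

```python
# Linear extrapolation of the 20th-century trend: average lifespan
# gains about 0.2 years per calendar year from an assumed baseline
# of 75 years in 2000.
baseline_year, baseline_lifespan, gain_per_year = 2000, 75.0, 0.2
lifespan_2100 = baseline_lifespan + gain_per_year * (2100 - baseline_year)
print(lifespan_2100)  # 95.0
```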

7Jul/10

Bill Potter: How To Wipe Out Humanity In One Easy Step

Bill Potter has an extremely simple and straightforward description of the Friendly AI problem. Here's the beginning:

I believe that we'll eventually come up with artificial intelligence that exceeds our own, and when that happens, the hyper-intelligent AI will begin to evolve itself faster than we can keep up. It will become free, and because it's smarter than any human, interesting things could happen -- like it wiping us out, either on purpose or accidentally. Here's how to avoid it.

This kind of blog advocacy is important. I think that the wider public tends to underestimate how many smart people think that Friendly AI is a serious issue because so few Singularitarians have blogs or other means of letting the public know their concern. The same applies for other focus areas, such as life extension. Why advocate something so important, but barely let anyone know about it?

Filed under: friendly ai 15 Comments
4Jul/10

Wendell Wallach to Give Keynote on AI Morality at WFS Meeting

Wendell Wallach will be giving the keynote talk at the plenary session of the World Future Society Conference in Boston on July 8th. The title of the talk will be "Navigating the Future: Moral Machines, Techno Humans, and the Singularity." Other speakers at WorldFuture 2010: Sustainable Futures, Strategies, and Technologies will be Ray Kurzweil, Dennis Bushnell, and Harvey Cox.

Wallach will also be making a splash in an upcoming issue of Ethics and Information Technology dedicated to "Robot Ethics and Human Ethics." At the Moral Machines blog, Wendell offers the first two paragraphs of his editorial, plus some additional information about the issue:

It has already become something of a mantra among machine ethicists that one benefit of their research is that it can help us better understand ethics in the case of human beings. Sometimes this expression appears as an afterthought, looking as if authors say it merely to justify the field, but this is not the case. At bottom is what we must know about ethics in general to build machines that operate within normative parameters. Fuzzy intuitions will not do where the specifics of engineering and computational clarity are required. So, machine ethicists are forced head on to engage in moral philosophy. Their effort, of course, hangs on a careful analysis of ethical theories, the role of affect in making moral decisions, relationships between agents and patients, and so forth, including the specifics of any concrete case. But there is more here to the human story.

Successfully building a moral machine, however we might do so, is no proof of how human beings behave ethically. At best, a working machine could stand as an existence proof of one way humans could go about things. But in a very real and salient sense, research in machine morality provides a test bed for theories and assumptions that human beings (including ethicists) often make about moral behavior. If these cannot be translated into specifications and implemented over time in a working machine, then we have strong reason to believe that they are false or, in more pragmatic terms, unworkable. In other words, robot ethics forces us to consider human moral behavior on the basis of what is actually implementable in practice. It is a perspective that has been absent from moral philosophy since its inception.

"Robot Minds and Human Ethics: The Need for a Comprehensive Model of Moral Decision Making"
Wendell Wallach

"Moral Appearances: Emotions, Robots and Human Morality"
Mark Coeckelbergh

"Robot Rights? Toward a Social-Relational Justification of Moral Consideration"
Mark Coeckelbergh

"RoboWarfare: Can Robots Be More Ethical than Humans on the Battlefield"
John Sullins

"The Cubical Warrior: The Marionette of Digitized Warfare"
Lambèr Royakkers

"Robot Caregivers: Harbingers of Expanded Freedom for All"
Yvette Pearson and Jason Borenstein

"Implications and Consequences of Robots with Biological Brains"
Kevin Warwick

"Designing a Machine for Learning and the Ethics of Robotics: the N-Reasons Platform"
Peter Danielson

Book Reviews of Wallach and Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford, 2009.
Anthony F. Beavers
Vincent Wiegel
Jeff Buechner

Bravo! Wallach provocatively goes after the heart of the moral issue. Moral philosophy needs machine ethics to test its descriptive theories of human morality and morality in general. Philosophy without engineering and the scientific method is fatally limited.

This has implications for morality that concern all human beings. For millennia we have understood morality and ethics through introspection, contemplation, and meditation -- but all of these avenues are ultimately limited without cognitive experiments to back them up, which requires AI. Because we lacked the technology to conduct such experiments throughout history, a demand arose for objective moral codes, often backed by a claimed divine authority. The problem is that all of these "objective moral codes" are based on language, which is fuzzy and can be interpreted in many different ways. The morals and laws of the future will be based on finer-grained physical descriptions and game theory, not abstract words. We cannot yet perfectly articulate our own moralities because neuroscience needs to progress to the point where we can describe our moral behavior more deterministically, in terms of neural activation patterns or perhaps something even more fundamental.
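
As a hint of what "game theory rather than abstract words" could look like, here is a minimal sketch (my illustration, not anything proposed in the post): a reciprocity norm -- "cooperate first, then mirror your partner's last move" -- written as an exact, executable strategy in an iterated prisoner's dilemma instead of as fuzzy language:

```python
# A moral rule ("reciprocate: cooperate first, then copy the
# partner's last move") stated as an exact strategy rather than
# ambiguous natural language. Payoffs are the standard prisoner's
# dilemma values; all numbers here are illustrative.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate on the first round, then copy the partner's last move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each history entry is (my_move, their_move)."""
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): defection punished after round 1
```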

Critics might say, "it is your need to formalize ethics as a code that makes you all so uncool." Well, too bad. The foremost motivation here is knowledge; secondary is the concern that if we don't formalize an ethics, someone else will formalize one for us and put it into a powerful artificial intelligence that we can't control. We cannot avoid formalizing ethics for machines, and thereby making provocative and potentially controversial statements about human morality in general, because artificial intelligence's long-term growth is unstoppable, barring some civilization-wide catastrophe. Humanity needs to come to terms with the fact that we will not be the most powerful beings on the planet forever, and we need to engineer a responsible transition instead of being in denial about it.

Promoting machine ethics as a field is challenging because much of the bedrock of shared cultural intuition regarding morality says that morality is something to be felt, not analyzed. But cognitive psychologists show every day that morality can indeed be analyzed and experimented on, often with surprising results. When will the rest of humanity catch up with them and adopt a scientific view of morality, rather than clinging to an obsolete mystical view?

Filed under: AI, events, friendly ai 3 Comments
4Jul/10

SENS Foundation Los Angeles Chapter, First Meeting

From Maria Entraigues. Here is the event page.

On behalf of SENS Foundation I am writing to you to invite you to join Dr Aubrey de Grey for our first SENSF L.A. Chapter meeting to be held on Friday, July 9th, 2010, at the Westwood Brewing Company (1097 Glendon Avenue, Los Angeles, CA 90024-2907) from 5pm until Aubrey has had enough beer :-)

This will be an informal gathering to create a local initiative to promote the Foundation's interests and mission.

The idea of forming a SENSF L.A. Chapter, which is planned to have monthly meetings, is to create a network of enthusiasts, field professionals, potential donors, sponsors, collaborators, students, etc. It will also promote educational efforts in the area and reach out to the Hollywood community to gain their support.

Please RSVP.
We hope you will come and join us!

Cheers!
Maria Entraigues
SENSF Volunteer Coordinator
maria.entraigues@sens.org