Great article from h+ magazine from about a week ago: “Rethinking the Promise of Genomics”. This is by Terry Grossman, co-author (with Ray Kurzweil) of Fantastic Voyage:
I used to be a big believer in the enormous potential of genomics, and each of my two previous books, Fantastic Voyage and TRANSCEND: Nine Steps to Living Well Forever, had chapters devoted to this topic. The relevant chapter in the earlier book, Fantastic Voyage, published in 2004, was titled “The Promise of Genomics.” My co-author in these books, Ray Kurzweil, is widely regarded as one of the world’s foremost inventors and futurists, and he has made predictions for what is likely to occur in the field of genomics. Yet, these days I find that I am feeling far less confident, at least for the near term, about the prospects for this “promise.”
Here’s a key quote by Grossman:
Currently I have moved much closer to the idea of “genetic irrelevance,” the idea that in the overwhelming …
I was honored to see that the creator of WordPress, the very software this blog runs on, plugged my page “Why Intelligent People Fail” yesterday. Definitely a page worth seeing if you haven’t yet.
Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips
The interview is a good primer for what the Singularity Institute is about and the basic rationales behind some of our research choices, like focusing on decision theory. This is a good interview to read especially for those not entirely familiar with the research of the Singularity Institute. It can also be used to promote the Singularity Summit, so please share the link!
Here are the questions I asked Eliezer:
1. Hi Eliezer. What do you do at the Singularity Institute?

2. What are you going to talk about this time at Singularity Summit?

3. Some people consider “rationality” to be an uptight and boring intellectual quality to have, indicative of a lack of spontaneity, for instance. Does your definition of “rationality” match the common definition, or is it something else? Why should we bother to be rational?

4. In your recent …
I really didn’t think the mainstream could possibly care much about this issue, but the New York Times seems to be jumping all over our small community, so now we get the amusement of seeing our internal issues get hashed out in front of everyone. Yay.
Robin is the kind of nerd who is very excited about the future, an orientation evident on his C.V., which lists published articles like “Economic Growth Given Machine Intelligence” (on why robots will give us growth rates “an order of magnitude” higher than we’ve currently got), “Burning the Cosmic Commons: Evolutionary Strategies of Interstellar Colonization” (on what behaviors we can expect from extraterrestrials) and “Drift-Diffusion in Mangled Worlds Quantum Mechanics” (it’s very complicated). His enthusiasm is evident in the way he talks about these ideas, hands in the air, laughing amiably every time he brings up the distance between his own theories and those of the mainstream. If he is in a chair, the chair is moving with …
Huffington Post has had a lot of articles about the Singularity lately. The most recent one is “Hogwash About the Singularity is Here” by Neil S. Greenspan, a Cleveland immunologist.
The article puts forward the usual “complexity of biology” and “exponential growth cannot continue forever” criticisms of Kurzweil’s predictions. Most of these criticisms have already been addressed by Kurzweil at the end of his last book. I think there are good points on both sides, but critics like Greenspan are ultimately being too pessimistic.
What I find interesting in articles like this is not the specific criticisms, which I’ve heard many times before and somewhat agree with, but the moral valence and indignation present in the critique. Biologists like Greenspan are angry that Kurzweil is, in their view, glossing over the complexity of biology. The most morally valent part of the article is actually the comments. I’m going to skip looking at the moral part this time, and look closer at a scientific statement that Greenspan …
Bill Potter has an extremely simple and straightforward description of the Friendly AI problem. Here’s the beginning:
I believe that we’ll eventually come up with artificial intelligence that exceeds our own, and when that happens, the hyper-intelligent AI will begin to evolve itself faster than we can keep up. It will become free, and because it’s smarter than any human, interesting things could happen — like it wiping us out, either on purpose or accidentally. Here’s how to avoid it.
This kind of blog advocacy is important. I think that the wider public tends to underestimate how many smart people consider Friendly AI a serious issue, because so few Singularitarians have blogs or other means of letting the public know their concern. The same applies to other focus areas, such as life extension. Why advocate something so important, yet barely let anyone know about it?
Wendell Wallach will be giving the keynote talk at the plenary session of the World Future Society Conference in Boston on July 8th. The title of the talk will be “Navigating the Future: Moral Machines, Techno Humans, and the Singularity.” Other speakers at WorldFuture 2010: Sustainable Futures, Strategies, and Technologies will be Ray Kurzweil, Dennis Bushnell, and Harvey Cox.
Wallach will also be making a splash in an upcoming issue of Ethics and Information Technology dedicated to “Robot Ethics and Human Ethics.” At the Moral Machines blog, Wendell offers the first two paragraphs of his editorial, along with some additional information about the issue:
It has already become something of a mantra among machine ethicists that one benefit of their research is that it can help us better understand ethics in the case of human beings. Sometimes this expression appears as an afterthought, looking as if authors say it merely to justify the field, but this is not the case. At …
From Maria Entraigues. Here is the event page.
On behalf of SENS Foundation, I am writing to invite you to join Dr Aubrey de Grey for our first SENSF L.A. Chapter meeting, to be held on Friday, July 9th, 2010, at the Westwood Brewing Company (1097 Glendon Avenue, Los Angeles, CA 90024-2907) from 5pm until Aubrey has had enough beer :-)
This will be an informal gathering to create a local initiative to promote the Foundation’s interests and mission.
The idea of forming a SENSF L.A. Chapter, which is planned to meet monthly, is to create a network of enthusiasts, field professionals, potential donors, sponsors, collaborators, students, and others; to promote educational efforts in the area; and to reach out to the Hollywood community and gain their support.
Please RSVP. We hope you will come and join us!
Cheers!
Maria Entraigues
SENSF Volunteer Coordinator
email@example.com