Why We Need Friendly AI

An article I often point people to is “Why We Need Friendly AI”, a 2004 piece by Eliezer Yudkowsky on the challenge of Friendly AI:

There are certain important things that evolution created. We don’t know that evolution reliably creates these things, but we know that it happened at least once. A sense of fun, the love of beauty, taking joy in helping others, the ability to be swayed by moral argument, the wish to be better people. Call these things humaneness, the parts of ourselves that we treasure – our ideals, our inclinations to alleviate suffering. If human is what we are, then humane is what we wish we were. Tribalism and hatred, prejudice and revenge, these things are also part of human nature. They are not humane, but they are human. They are a part of me; not by my choice, but by evolution’s design, and the heritage of three and a half billion years of lethal combat. Nature, bloody in tooth and claw, inscribed each base of my …

Read More

Inevitability of Plate Tectonics on Super-Earths

I thought this was interesting.

The recent discovery of super-Earths (masses less than or equal to 10 Earth masses) has initiated a discussion about conditions for habitable worlds. Among these is the mode of convection, which influences a planet’s thermal evolution and surface conditions. On Earth, plate tectonics has been proposed as a necessary condition for life. Here we show that super-Earths will also have plate tectonics. We demonstrate that as planetary mass increases, the shear stress available to overcome resistance to plate motion increases while the plate thickness decreases, thereby enhancing plate weakness. These effects contribute favorably to the subduction of the lithosphere, an essential component of plate tectonics. Moreover, uncertainties in achieving plate tectonics in the one Earth-mass regime disappear as mass increases: super-Earths, even if dry, will exhibit plate tectonic behaviour.
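The qualitative mechanism the abstract describes can be sketched as a toy model. Note that the power-law exponents below are purely illustrative assumptions, not values from the paper; only the signs matter (stress rises with mass, thickness falls):

```python
# Toy sketch of the abstract's qualitative claim. The exponents are
# hypothetical placeholders chosen only for their signs; the real
# scalings come from the paper's convection modeling.

def shear_stress(mass_earths, exponent=0.5):
    """Hypothetical: driving shear stress grows with planetary mass."""
    return mass_earths ** exponent

def plate_thickness(mass_earths, exponent=-0.5):
    """Hypothetical: plate (lithosphere) thickness shrinks with mass."""
    return mass_earths ** exponent

# Larger planets: more stress available to drive plates, thinner
# (weaker) plates to resist -- both trends favor subduction.
for m in (1, 2, 5, 10):  # masses in Earth masses
    print(m, shear_stress(m), plate_thickness(m))
```

With any positive exponent for stress and negative exponent for thickness, a 10-Earth-mass planet sits further into the plate-tectonic regime than Earth, which is the paper's qualitative point.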

Can’t wait until we build a hypertelescope to see whether the super-Earths out there are rocky or gaseous.

Read More

Complex Value Systems are Required to Realize Valuable Futures

A new paper by Eliezer Yudkowsky is online on the SIAI publications page, “Complex Value Systems are Required to Realize Valuable Futures”. This paper was presented at the recent Fourth Conference on Artificial General Intelligence, held at Google HQ in Mountain View.

Abstract: A common reaction to first encountering the problem statement of Friendly AI (“Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”) is to propose a single moral value which allegedly suffices; or to reject the problem by replying that “constraining” our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a “positive outcome”, despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least …

Read More

Global Catastrophic Risk Research Page

From Seth Baum:

Global catastrophic risks (GCR) are risks of events that could significantly harm or even destroy human civilization at the global scale. GCR is related to the concept of existential risk, which is risk of events that would cause humanity to no longer exist. (Note that Nick Bostrom, who coined the term existential risk, defines it in a slightly different way.) Prominent GCRs include climate change, nuclear warfare, pandemics, and artificial general intelligence. Due to the breadth of the GCRs themselves and the issues that GCRs raise, the study of GCR is quite interdisciplinary.

According to a range of ethical views, including my views, reducing GCR should be our top priority as individuals and as a society. In short, if a global catastrophe occurs, then not much else matters, since so much of what we might care about (such as human wellbeing, the wellbeing of non-human animals, or the flourishing of ecosystems) would be largely or entirely wiped out by the catastrophe. The details about prioritizing GCR are a bit more …

Read More

Robert Ettinger has been Cryopreserved

Robert Ettinger, a hero among many transhumanists for fathering the cryonics movement, has been cryopreserved at age 92 in Clinton Township, Michigan. He died on Saturday, July 23.

The Cryonics Institute press release is here. There are a few obituaries online, including one from the Telegraph. Chronopause, a cryonics blog, reviews the history of Ettinger and cryonics.

Ettinger’s 1962 book The Prospect of Immortality and 1972 book Man into Superman inspired many transhumanists to think beyond the “inevitability” of death.

Ben Best was quoted by KurzweilAI.net on the suspension:

“Robert Ettinger deanimated [Saturday] at around 4 p.m. Eastern Time,” said Ben Best, president of the Cryonics Institute. “He was under hospice care and had an ice bath sitting by his bedside. His pronouncement and initiation of cooling was very rapid. The perfusion went well and he is now in the cooling box. Much …

Read More

Singularity Institute Announces Research Associates Program

From SIAI blog:

The Singularity Institute is proud to announce the expansion of our research efforts with our new Research Associates program!

Research associates are chosen for their excellent thinking ability and their passion for our core mission. Research associates are not salaried staff, but we encourage their Friendly AI-related research outputs by, for example, covering their travel costs for conferences at which they present academic work relevant to our mission.

Our first three research associates are:

Daniel Dewey, an AI researcher, holds a B.S. in computer science from Carnegie Mellon University. He is presenting his paper ‘Learning What to Value’ at the AGI-11 conference this August.

Vladimir Nesov, a decision theory researcher, holds an M.S. in applied mathematics and physics from Moscow Institute of Physics and Technology. He helped Wei …

Read More

Most Popular Posts This Year So Far

1. Amusing Ourselves to Death
2. Ten Futuristic Materials
3. Top 10 Transhumanist Technologies
4. Brain-Computer Interfaces for Manipulating Dreams
5. The Benefits of a Successful Singularity
6. Six Places to Nuke for Multiplier Effects
7. Response to Charles Stross’ “Three arguments against the Singularity”
8. How Can I Incorporate Transhumanism into my Daily Life?
9. A Nuclear Reactor in Every Home
10. Wish
11. Terraformed Mars
12. Why “Transhumanism” is Unnecessary
13. Hard Takeoff Sources
14. X-Seed 4000
15. Kurzweil’s 2009 Predictions
16. The Illusion of Control in an Intelligence Amplification Singularity
17. Collaborative Map of Transhumanists Worldwide
18. Continuing Discussion with Mr. Knapp at Forbes
19. Paul Graham’s Disagreement Hierarchy
20. The Final Weapon

Read More

The Singularity is Far: A Neuroscientist’s View

I haven’t read this; I’m just posting it because other people are talking about it.

Ray Kurzweil, the prominent inventor and futurist, can’t wait to get nanobots into his brain. In his view, these devices will be equipped with a variety of sensors and stimulators and will communicate wirelessly with computers outside of the body. In addition to providing unprecedented insight into brain function at the cellular level, brain-penetrating nanobots would provide the ultimate virtual reality experience.

Read More

The Last Post Was an Experiment

+1 for everyone who saw through my lie.

I thought it would be interesting to say stuff not aligned with what I believe to see the reaction.

The original prompt was that I was wondering why no one was contributing to our Humanity+ matching challenge grant.

Maybe because many futurist-oriented people don’t think transhumanism is very important.

They’re wrong. Without a movement, the techno-savvy and existential risk mitigators are just a bunch of unconnected chumps, or isolated little cells of 4–5 people. With a movement, hundreds or even thousands of people can provide many thousands of dollars’ worth of mutual value in “consulting” and cooperative work to one another on a regular basis, which gives us the power to spread our ideas and stand up to competing movements, like Born Again bioconservatism, which would have us all die by age 110.

I believe the “Groucho Marxes,” who “won’t join any club that will have them,” are sidelining themselves from history. Organized transhumanism is very important. …

Read More

Why “Transhumanism” is Unnecessary

Who needs “transhumanism”? Millions of dollars are going into fields such as brain-computer interfacing, robotics, AI, and regenerative medicine without the influence of “transhumanists”. Wouldn’t transhumanism be better off if we relinquished the odd name and just marketed ourselves as “normal”?

Wild transhumanist ideas such as cryonics, molecular nanotechnology, hard takeoff, Jupiter Brains, and the like, distract our audience from the incremental transhumanist advances occurring on an everyday basis in labs at universities around the world. Brain implants exist, gene sequencing exists, regenerative medicine exists — why is this any different than normal science and medicine?

Motivations such as the desire to raise one’s father from the dead are clearly examples of theological thinking. Instead of embracing theology, we need to face the nitty-gritty of the world here and now, with all of its blemishes and problems.

Instead of working towards blue-sky, neo-apocalyptic discontinuous advances, we need to preserve democracy by promoting incremental advances to ensure that every citizen has a voice in every important societal change, and the ability to democratically reject those changes if desired.

Read More

$18.5 Million for Brain-Computer Interfacing

Another university is opening a BCI lab: the University of Washington. It makes sense because it’s near the Allen Institute for Brain Science, among other reasons. Did I mention that Christof Koch, the new Chief Science Officer of the Allen Institute, will be speaking at Singularity Summit?

Here’s an excerpt of the news release:

The National Science Foundation today announced an $18.5 million grant to establish an Engineering Research Center for Sensorimotor Neural Engineering based at the University of Washington.

“The center will work on robotic devices that interact with, assist and understand the nervous system,” said director Yoky Matsuoka, a UW associate professor of computer science and engineering. “It will combine advances in robotics, neuroscience, electromechanical devices and computer science to restore or augment the body’s ability for sensation and movement.”

The text is pretty generic boilerplate; it’s the action that is important. We will likely have to wait a year or more before any interesting breakthroughs from this lab hit the news.

Read More