Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

5Aug/11

Eliezer Yudkowsky at the Winter Intelligence Conference at Oxford: “Friendly AI: Why It’s Not That Simple”

Winter Intelligence Conference 2011 - Eliezer Yudkowsky from Future of Humanity Institute on Vimeo.

25Jul/11

Global Catastrophic Risk Research Page

From Seth Baum:

Global catastrophic risks (GCR) are risks of events that could significantly harm or even destroy human civilization at the global scale. GCR is related to the concept of existential risk, which is risk of events that would cause humanity to no longer exist. (Note that Nick Bostrom, who coined the term existential risk, defines it in a slightly different way.) Prominent GCRs include climate change, nuclear warfare, pandemics, and artificial general intelligence. Due to the breadth of the GCRs themselves and the issues that GCRs raise, the study of GCR is quite interdisciplinary.

According to a range of ethical views, including my views, reducing GCR should be our top priority as individuals and as a society. In short, if a global catastrophe occurs, then not much else matters, since so much of what we might care about (such as human wellbeing, the wellbeing of non-human animals, or the flourishing of ecosystems) would be largely or entirely wiped out by the catastrophe. The details about prioritizing GCR are a bit more complicated than this (and are part of ongoing research), but GCR does nonetheless remain a (or the) top priority from a range of views.

Seth Baum is one of the few academics working on existential risks. Last December I attended the Society for Risk Analysis annual meeting at his invitation and gave a talk on molecular nanotechnology risk. I also summarized what the Singularity Institute does.

Seth Baum has attracted attention and interest from a few leading figures in the risk analysis community, but he needs more momentum to have a larger impact. If you are an academic, you should consider partnering with him. The UK has a well-established existential risk research group in the Future of Humanity Institute, but the US lacks one. We have SIAI and the Lifeboat Foundation, but SIAI is focused on AGI, and the Lifeboat Foundation doesn't have any research staff.

Filed under: risks 4 Comments
25Jul/11

Robert Ettinger has been Cryopreserved

Robert Ettinger, a hero among many transhumanists for fathering the cryonics movement, has been cryopreserved at age 92 in Clinton Township, Michigan. He died on Saturday, July 23.

The Cryonics Institute press release is here. There are a few obituaries online, including one from the Telegraph. Chronopause, a cryonics blog, reviews the history of Ettinger and cryonics.

Ettinger's 1962 book The Prospect of Immortality and his 1972 book Man into Superman have inspired many transhumanists to think beyond the "inevitability" of death.

Ben Best was quoted by KurzweilAI.net on the suspension:

“Robert Ettinger deanimated [Saturday] at around 4 p.m. Eastern Time,” said Ben Best, president of the Cryonics Institute. “He was under hospice care and had an ice bath sitting by his bedside. His pronouncement and initiation of cooling was very rapid. The perfusion went well and he is now in the cooling box. Much more later.”

Ettinger's 1962 book was a turning point in human history: it marked the first time people acquired the ambition to preserve the fine-grained structure of the human brain at death. Although Ben Franklin had imagined suspended animation centuries earlier, it wasn't until Ettinger's 1962 work that the idea became a concrete program. Ettinger participated in the first cryonic suspension in 1967.

Ettinger's first book was republished by Doubleday after it was sent to Isaac Asimov, who said that the concept was scientifically sound.

I hope that Ettinger is revived in the not-too-distant future to "taste the wine of centuries unborn".

Filed under: cryonics 56 Comments
22Jul/11

Singularity Institute Announces Research Associates Program

From SIAI blog:

The Singularity Institute is proud to announce the expansion of our research efforts with our new Research Associates program!

Research associates are chosen for their excellent thinking ability and their passion for our core mission. Research associates are not salaried staff, but we encourage their Friendly AI-related research outputs by, for example, covering their travel costs for conferences at which they present academic work relevant to our mission.

Our first three research associates are:

Daniel Dewey, an AI researcher, holds a B.S. in computer science from Carnegie Mellon University. He is presenting his paper 'Learning What to Value' at the AGI-11 conference this August.

Vladimir Nesov, a decision theory researcher, holds an M.S. in applied mathematics and physics from the Moscow Institute of Physics and Technology. He helped Wei Dai develop updateless decision theory, in pursuit of one of the Singularity Institute's core research goals: developing a 'reflective decision theory.'

Peter de Blanc, an AI researcher, holds an M.A. in mathematics from Temple University. He has written several papers on goal systems for decision-theoretic agents, including 'Convergence of Expected Utility for Universal AI' and 'Ontological Crises in Artificial Agents' Value Systems.'

We're excited to welcome Peter, Vladimir, and Daniel to our team!

Filed under: friendly ai, SIAI 8 Comments
21Jul/11

Most Popular Posts This Year So Far

1. Amusing Ourselves to Death
2. Ten Futuristic Materials
3. Top 10 Transhumanist Technologies
4. Brain-Computer Interfaces for Manipulating Dreams
5. The Benefits of a Successful Singularity
6. Six Places to Nuke for Multiplier Effects
7. Response to Charles Stross' "Three arguments against the Singularity"
8. How Can I Incorporate Transhumanism into my Daily Life?
9. A Nuclear Reactor in Every Home
10. Wish
11. Terraformed Mars
12. Why "Transhumanism" is Unnecessary
13. Hard Takeoff Sources
14. X-Seed 4000
15. Kurzweil's 2009 Predictions
16. The Illusion of Control in an Intelligence Amplification Singularity
17. Collaborative Map of Transhumanists Worldwide
18. Continuing Discussion with Mr. Knapp at Forbes
19. Paul Graham's Disagreement Hierarchy
20. The Final Weapon

Filed under: meta 7 Comments
21Jul/11

The Singularity is Far: A Neuroscientist’s View

I haven't read this; I'm just posting it because other people are talking about it.

Ray Kurzweil, the prominent inventor and futurist, can't wait to get nanobots into his brain. In his view, these devices will be equipped with a variety of sensors and stimulators and will communicate wirelessly with computers outside of the body. In addition to providing unprecedented insight into brain function at the cellular level, brain-penetrating nanobots would provide the ultimate virtual reality experience.

Article.

Filed under: singularity 18 Comments
20Jul/11

The Last Post Was an Experiment

+1 for everyone who saw through my lie.

I thought it would be interesting to say things not aligned with what I believe and see the reaction.

The original prompt was that I was wondering why no one was contributing to our Humanity+ matching challenge grant.

Maybe because many futurist-oriented people don't think transhumanism is very important.

They're wrong. Without a movement, the techno-savvy and existential risk mitigators are just a bunch of unconnected chumps, or isolated little cells of 4-5 people. With a movement, hundreds or even thousands of people can regularly provide one another with many thousands of dollars' worth of mutual value in "consulting" and cooperative work. That gives us the power to spread our ideas and stand up to competing movements, like Born Again bioconservatism, which would have us all die by age 110.

I believe the "Groucho Marxes" -- those who "won't join any club that will have them" -- are sidelining themselves from history. Organized transhumanism is very important.

I thought quoting Margaret Somerville would pretty much give it away, but apparently not.

To me, cybernetics etc. are just a tiny skin on the peach that is the Singularity and the post-Singularity world. To my mind, SL4 transhumanism is pretty damn cool and important. I've written hundreds of thousands of words on why I think so, but there must be something I'm missing.

To quote Peter Thiel, those not looking closely at the Singularity and the potentially discontinuous impacts of AI are "living in a fantasy world".

17Jul/11

Why “Transhumanism” is Unnecessary

Who needs "transhumanism"? Millions of dollars are going into fields such as brain-computer interfacing, robotics, AI, and regenerative medicine without the influence of "transhumanists". Wouldn't transhumanism be better off if we relinquished the odd name and just marketed ourselves as "normal"?

Wild transhumanist ideas such as cryonics, molecular nanotechnology, hard takeoff, Jupiter Brains, and the like, distract our audience from the incremental transhumanist advances occurring on an everyday basis in labs at universities around the world. Brain implants exist, gene sequencing exists, regenerative medicine exists -- why is this any different than normal science and medicine?

Motivations such as the desire to raise one's father from the dead are clearly examples of theological thinking. Instead of embracing theology, we need to face the nitty-gritty of the world here and now, with all of its blemishes and problems.

Instead of working towards blue-sky, neo-apocalyptic discontinuous advances, we need to preserve democracy by promoting incremental advances to ensure that every citizen has a voice in every important societal change, and the ability to democratically reject those changes if desired.

To ensure that there is not a gap between the enhanced and the unenhanced, we should let true people -- Homo sapiens -- vote on whether certain technological enhancements are allowed. Anything else would be irresponsible.

As Margaret Somerville recently wrote in the Vancouver Sun:

Another distinction that might help to distinguish ethical technoscience interventions from unethical ones is whether the intervention affects the intrinsic being or essence of a person -- for instance, their sense of self or consciousness -- or is external to that. The former, I propose, are always unethical, while the latter may not be.

The intrinsic essence and being of a person is not something to be taken for granted -- it has been shaped carefully by millions of years of evolution. If we start picking arbitrary variables and trying to optimize them, the consequences could be very unpredictable. Our lust for pleasure and power could quickly lead us to a dark road of narcissistic self-enhancement and disenfranchisement of the majority of humanity.

Filed under: transhumanism 93 Comments
14Jul/11

$18.5 Million for Brain-Computer Interfacing

Another university, the University of Washington, is opening a BCI lab. It makes sense, given its proximity to the Allen Institute for Brain Science, among other reasons. Did I mention that Christof Koch, the new Chief Science Officer of the Allen Institute, will be speaking at Singularity Summit?

Here's an excerpt of the news release:

The National Science Foundation today announced an $18.5 million grant to establish an Engineering Research Center for Sensorimotor Neural Engineering based at the University of Washington.

“The center will work on robotic devices that interact with, assist and understand the nervous system,” said director Yoky Matsuoka, a UW associate professor of computer science and engineering. “It will combine advances in robotics, neuroscience, electromechanical devices and computer science to restore or augment the body’s ability for sensation and movement.”

The text is pretty generic boilerplate; it's the action that is important. We will likely have to wait a year or more before any interesting breakthroughs from this lab hit the news.

Filed under: BCI 12 Comments
7Jul/11

Dale Carrico Classics

Just in case there are new readers, I want to refer them to the writings of Dale Carrico, probably the best critic of transhumanism thus far. He's a lecturer at Berkeley. (Maybe The New Atlantis should try hiring him, though I sort of doubt they'd get along.) I especially enjoy this post responding to my "Transhumanism Has Already Won" post:

The Robot Cultists Have Won?

When did that happen?

In something of a surprise move, Singularitarian Transhumanist Robot Cultist Michael Anissimov has declared victory. Apparently, the superlative futurologists have "won." The Robot Cult, it would seem, has prevailed over the ends of the earth.

Usually, when palpable losers declare victory in this manner, the declaration is followed by an exit, either graceful or grumbling, from the stage. But I suspect we will not be so lucky when it comes to Anissimov and his fellow victorious would-be techno-transcendentalizers.

Neither can we expect them "to take their toys and go home," as is usual in such scenes. After all, none of their toys -- none of their shiny robot bodies, none of their sentient devices, none of their immortality pills, none of their immersive holodecks, none of their desktop nanofactories, none of their utility fogs, none of their comic book body or brain enhancement packages, none of their kindly or vengeful superintelligent postbiological Robot Gods -- none of them exist now for them to go home with any more than they ever did, they exist only as they always have done, as wish-fulfillment fancies in their own minds.

You can read the whole thing at Dale's blog.

Filed under: transhumanism 37 Comments
7Jul/11

Matter, Antimatter Origin Theories — Baryogenesis

I remember reading somewhere that one possibility is that in the early universe a tremendous amount of matter and antimatter both formed, most of it mutually annihilated, and the small amount that remained became our present matter-dominated universe. From a few casual Google searches I have not been able to find this reference; it was probably some popular physics book written in the 1990s. Possibility one in the summary below would appear to correspond to this scenario, however.

The question is that of baryogenesis, which is not well understood. Here's the background from Wikipedia:

The Dirac equation, formulated by Paul Dirac around 1928 as part of the development of relativistic quantum mechanics, predicts the existence of antiparticles along with the expected solutions for the corresponding particles. Since that time, it has been verified experimentally that every known kind of particle has a corresponding antiparticle. The CPT Theorem guarantees that a particle and its antiparticle have exactly the same mass and lifetime, and exactly opposite charge. Given this symmetry, it is puzzling that the universe does not have equal amounts of matter and antimatter. Indeed, there is no experimental evidence that there are any significant concentrations of antimatter in the observable universe.

There are two main interpretations for this disparity: either the universe began with a small preference for matter (total baryonic number of the universe different from zero), or the universe was originally perfectly symmetric, but somehow a set of phenomena contributed to a small imbalance in favour of matter over time. The second point of view is preferred, although there is no clear experimental evidence indicating either of them to be the correct one. The preference is based on the following point of view: if the universe encompasses everything (time, space, and matter), nothing exists outside of it and therefore nothing existed before it, leading to a total baryonic number of 0. From a more scientific point of view, there are reasons to expect that any initial asymmetry would be wiped out to zero during the early history of the universe. One challenge then is to explain how the total baryonic number is not conserved.
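For scale (this figure is my addition, not part of the Wikipedia excerpt above): the surviving imbalance is tiny, and is usually quantified by the baryon-to-photon ratio measured from the cosmic microwave background:

```latex
\eta \;=\; \frac{n_B - n_{\bar{B}}}{n_\gamma} \;\approx\; 6 \times 10^{-10}
```

In other words, for roughly every billion matter-antimatter pairs that annihilated in the early universe, about one baryon survived; all the ordinary matter we see today is that one-part-in-a-billion residue.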

I've been told that a lot of stuff exists outside of our local universe, but I don't want to make this more complicated than it already is.

Filed under: physics 10 Comments
7Jul/11

Experimental Support for Monkey Self-Agency

Here's a contemporary press release relevant to my recent debate with Alex Knapp: "Rhesus monkeys have a form of self awareness not previously attributed to them."

In the first study of its kind in an animal species that has not passed a critical test of self-recognition, cognitive psychologist Justin J. Couchman of the University at Buffalo has demonstrated that rhesus monkeys have a sense of self-agency -- the ability to understand that they are the cause of certain actions -- and possess a form of self awareness previously not attributed to them.

The study, which will be published July 6 in Biology Letters, a journal of the Royal Society, may illuminate apparent self-awareness deficits in humans with autism, schizophrenia, Alzheimer's disease and developmental disabilities.

Rhesus monkeys are one of the best-known species of Old World monkeys, and have been used extensively in medical and biological research aimed at creating vaccines for rabies, smallpox and polio and drugs to manage HIV/AIDS; analyzing stem cells and sequencing the genome. Humans have sent them into space, cloned them and planted jellyfish genes in them.

Couchman, a PhD candidate at UB, is an instructor at UB and at the State University of New York College at Fredonia. He points out that previous research has shown that rhesus monkeys, like apes and dolphins, have metacognition, or the ability to monitor their own mental states. Nevertheless, the monkeys consistently fail the mirror self-recognition test, which assesses whether animals can recognize themselves in a mirror and is an important measure of self-awareness.

"We know that in humans, the sense of self-agency is closely related to self-awareness," Couchman says, "and that it results from monitoring the relationship between pieces of intentional, sensorimotor and perceptual information.

"Based on previous findings in comparative metacognition research, we thought that even though they fail the mirror test, rhesus monkeys might have some other form of self-awareness. In this study we looked at whether the monkeys have a sense of self agency, that is, the understanding that some actions are the consequence of their own intentions."

Continued.

Filed under: intelligence 2 Comments