Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

30Jun/10

Audio Interview with Singularity Weblog: “Singularity Without Compromise”

Yesterday I spoke to Nikola Danaylov at the Singularity Weblog. The title of the podcast comes from a quote I made during the interview, when Nikola asked whether I thought we would need to sacrifice aspects of our humanity to go through a Technological Singularity. My response was that if we do the Singularity right, we need not compromise in any fashion: human beings from techno-enthusiasts to the Amish will be enthusiastic about the results.

During the podcast, Nikola asked me what I thought humanity's chance of surviving the Singularity would be, and I said that my current estimate was around 25%, but that could change depending on what happens, and how much effort is put towards a positive Singularity.

Filed under: singularity 14 Comments
28Jun/10

Open Source Ecology: Replicable, Resilient, Post-Scarcity “Viral Villages”

Marcin Jakubowski talks about Factor E Farm, Open Farm Tech, and the Global Village Construction Set.

http://factorefarm.org/
http://openfarmtech.org/
http://openmanufacturing.org/

Filed under: technology, videos 5 Comments
28Jun/10

Singularity Hub Posts About the Summit 2010

Singularity Hub, one of the best websites on the Internet for tech news (along with Next Big Future and KurzweilAI news), has posted a reminder about the upcoming Singularity Summit in San Francisco, along with a promise that they will provide excellent coverage.

Register before July 1st, when the price goes up another $100! We also have a special block of discounted rooms at the Hyatt available -- $130/night instead of the usual $200.

Sorry, the Summit is $485 now, and will be $585 and then $685. We fly all the speakers out and cover all their expenses; there are twenty speakers, so do the math. Profits from the Summit go to the Singularity Institute for our year-round operations and Visiting Fellows program, which provides us with a community of writers, speakers, and researchers to continue our Singularity effort until it is successful.

If you want to organize a cheaper annual event related to the Singularity, feel free to do so. We hold a workshop after the event for academics, so we get to tack on another event to maximize value and productivity for those who investigate the Singularity as part of their profession. I'm sure there will be plenty of informal "workshops" on the Saturday and Sunday after the talks in local bars and restaurants, in any case.

Remember -- the Singularity is the most important issue facing humanity right now. If we don't do what we can to ensure that it goes well for humanity, no one else will. We have a limited amount of time until the technological barriers between us and the Singularity collapse, and then intervention will be difficult if not impossible.

Filed under: SIAI, singularity 12 Comments
28Jun/10

Nadrian Seeman Shares $1M Nanotech Prize

Congratulations to Ned Seeman, who is sharing the $1 million Kavli Prize in nanoscience with IBM's Don Eigler, who led the team that spelled out the IBM logo in atoms. Seeman was awarded the prize for his discovery of structural DNA nanotechnology, which dates to 1979 according to the Kavli website. Seeman has given presentations on DNA nanotechnology at Foresight Institute conferences and at last year's Singularity Summit, and recently made a major breakthrough in nanotechnology with a nanoscale assembly line.

I had the opportunity to meet Dr. Seeman at a Center for Responsible Nanotechnology conference in Tucson in 2007. He was skeptical about the idea of achieving molecular manufacturing within the next couple of decades.

Will macroscale molecular manufacturing be achieved by a structural DNA route, the "Tattoo Needle" architecture, the foldamer route, the Waldo route, the diamondoid route, or something else? That is the question all the cool kids are asking.

28Jun/10

Patrick Lin in London Times: “The Reality of Robocops”

Patrick Lin is spreading the valuable message of roboethics:

They have everything the modern policeman could need - apart from a code of ethics. Without that, a Pentagon adviser fears, the world could be entering an era where automatons pose a serious threat to humanity.

The robots need to be hack-proof to prevent perpetrators from turning them into criminals, and a code of ethical conduct must be agreed while the technology is nascent.

The article mentions that there are currently over 7 million robots in operation, about half of them cleaning floors.

Filed under: risks, robotics 11 Comments
23Jun/10

More Singularity Curmudgeonry from John Horgan

John Horgan goes on the offensive against the Singularity concept on his relatively new blog at SciAm.

My own skepticism is based on simple comparisons of Kurzweil's claims with what is actually happening in science. For example, Kurzweil contends that reverse-engineering the brain isn't that big a deal. "The brain is at least 100 million times simpler than it appears because the design is in the genome," he wrote on the blog Posthumans. "The compressed genome is only about 50 million bytes," which is "a level of complexity we can handle."

I agree with John that this estimate of the difficulty of AI is an oversimplification. It carries the assumption that AI will be a copy of the human brain, which isn't necessarily true. It also ignores the complexity of the process of neurogenesis and continued development. The real brain is much, much more complex than the portion of the genome that codes for it, and it probably won't be until after the Singularity that we understand the details of how the brain is created from the genetic code.
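For a sense of the scale gap both John and I are pointing at, here's a rough back-of-the-envelope sketch in Python. The genome and synapse counts are standard ballpark figures, not numbers taken from Kurzweil or Horgan, and the bytes-per-synapse assumption is mine:

```python
# Rough back-of-the-envelope numbers; all figures are ballpark estimates.

GENOME_BASE_PAIRS = 3.2e9       # approximate human genome length
BITS_PER_BASE = 2               # A/C/G/T encodes 2 bits per base
raw_genome_mb = GENOME_BASE_PAIRS * BITS_PER_BASE / 8 / 1e6
print(f"Raw genome: ~{raw_genome_mb:.0f} MB")            # ~800 MB

compressed_genome_mb = 50       # Kurzweil's lossless-compression figure
print(f"Compressed genome (Kurzweil): ~{compressed_genome_mb} MB")

# The developed brain, by contrast:
SYNAPSES = 1.5e14               # on the order of 100+ trillion synapses
BYTES_PER_SYNAPSE = 4           # absurdly generous lower bound: one float each
brain_bytes = SYNAPSES * BYTES_PER_SYNAPSE
print(f"Adult brain connectivity: ~{brain_bytes / 1e12:.0f} TB, "
      f"~{brain_bytes / (compressed_genome_mb * 1e6):.0e}x the compressed genome")

# The gap is filled in by neurogenesis, development, and learning --
# exactly the complexity the 50-million-byte figure waves away.
```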

Is it really so far-fetched to believe that we will eventually uncover the principles that make intelligence work and implement them in a machine, just like we have reverse-engineered our own versions of the particularly useful features of natural objects, like horses and spinnerets? News flash: the human brain is a natural object.

I think Kurzweil is wrong and overconfident on a lot of specific points, but I appreciate his overall vision.

Filed under: singularity 23 Comments
23Jun/10

NYT Blog: Waxing Philosophical on Watson and Artificial Intelligence

There's more follow-up material on AI from The New York Times. Here's the blurb:

What is artificial intelligence? What issues are raised by the current work on creating machine minds? Here are some philosophical questions and creative activities stemming from the ongoing developments in the pursuit of conscious computers and inspired by the Times Magazine article on I.B.M.'s Watson, a machine that can play "Jeopardy!".

Fun with AI and philosophy!

Filed under: AI 44 Comments
21Jun/10

The World the Singularity Creates Could Destroy All Value

From the letters to the editor section of The New York Times...

Sizing Up the Singularity

To the Editor:

Re: "Merely Human? So Yesterday" (June 13), which described the Singularity movement and how it envisions a world of mind merging with machine, to conquer disease and even old age:

But if the movement succeeds beyond its wildest dreams, the world it would create could destroy all that we are that is of true value: our suffering and thus our joy, against which we determine the value of everything else.

Achilles, as depicted in the movie "Troy," put it well: "I'll tell you a secret. Something they don't teach you in your temple. The Gods envy us. They envy us because we're mortal, because any moment might be our last. Everything is more beautiful because we're doomed. You will never be lovelier than you are now. We will never be here again."

Let us never stop striving for us, but let us do it as us so that we still have reason for striving.

Ryan Andrews

Columbia, Mo., June 13

In one sense, I agree with this commenter. The Singularity could lead to a world where mankind is snuffed out by machines, or drives itself into poverty due to falling wages triggered by rapid upload copying, or our complex values are sidelined by mindless replicators, or we wirehead ourselves into oblivion. The Kurzweilian argument that "everything will be fine because we will merge seamlessly with our creations" is mistaken. Everything may not be fine. Whether or not the future turns out all right will depend on the actions taken between today and the Singularity. After the Singularity, it will become impossible to rewind, so we had better get it right the first time.

Of course, I disagree with the notion that life is only beautiful because we're doomed, as the commenter quotes. This view is the result of a memetic lineage that has taught us to cope with suffering by embracing it in a Stockholm Syndrome fashion.

The Singularity movement is no place for uncritical, facile technophilia. Instead of inevitable ascension to cyber heaven, the Singularity can be more accurately viewed as the arrival of a swarm of new superintelligent alien species, albeit aliens that we will create with our own hands, at least initially. Without extremely careful engineering (not "business as usual" guided by the profit motive), these aliens might not care for us much. Our efforts to augment ourselves to be more like them could prove hopeless at first, because of the great complexity and expense inherent in any prototype experimentation. These transhumans, whether human-derived or AI, could have a massive impact on society (perhaps taking over the planet) before the rest of mankind catches up with them, if they even let us.

That's the reason the Singularity Institute exists -- because we reject blind embrace of increasing technological power. We must moderate that power with careful choices and the willingness to self-limit to an extent.

Filed under: singularity 87 Comments
18Jun/10

A Few Items

There's an ongoing uploading debate in the comments with Aleksei Riikonen, Mark Gubrud, Giulio Prisco, myself, and others. The topic of uploading is the gift that keeps on giving -- the dead horse that can sustain an unlimited beating.

There is a new open letter on brain preservation -- sign the petition! Also, there will be workshops on uploading after the Singularity Summit 2010 this August in San Francisco. A big congrats to Randal Koene, Ken Hayworth, Suzanne Gildert, Anders Sandberg, and everyone else taking the initiative to move forward on this.

One last thing: ghost hunting equipment. Harness the power of ghosts, take over the world.

16Jun/10

Assorted Links 6/16/10

Patrick Millard's ongoing coverage of Biosphere 2
Anders Sandberg: Seeing the World
Indiana Law Interfering With Citizens' Free Speech Rights Found Unconstitutional
RepRap blog: Open Source Scanning Tunneling Microscope
Category: Mendel Development at RepRap Wiki
Open Source Ecology
Jim Von Ehr says Zyvex will Achieve Digital Matter from Building Blocks by 2015 and Rudimentary Molecular Manufacturing by 2020
Whole Brain Emulation: the Logical Endpoint of Neuroinformatics
Protein Computing, Bio-based Quantum Computing and Nano-sized biolasers from ExQor Technologies
Eurekalert: Eating processed meats, but not unprocessed red meats, may raise risk of heart disease and diabetes
'Fountain of youth' steroids could protect against heart disease
Want to Get Smarter, Faster? Sleep 10 Hours: NPR
6-story Jesus statue in Ohio struck by lightning
eWeek: Who's Afraid of the Singularity?
TIME -- Tastes Like Chicken: the Quest for Fake Meat
ABC Science: Cyborg rights "need debating now"
YouTube: iRobot 710 Warrior with APOBS
Jason Silva in Vanity Fair: Why We Could All Use a Heavy Dose of Techno-Optimism
Technology Review: Microrobotics Competition Shows Impressive Feats
Technofascism Blog: US Department of Defense Wants Robot Army by 2034

Filed under: random 3 Comments
14Jun/10

Reducing Long-Term Catastrophic Artificial Intelligence Risk

Check out this new essay from the Singularity Institute: "Reducing long-term catastrophic AI risk". Here's the intro:

In 1965, the eminent statistician I. J. Good proposed that artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements: AIs would be smart enough to make themselves smarter, and, having made themselves smarter, would spot still further opportunities for improvement, leaving human abilities far behind. Good called this process an "intelligence explosion," while later authors have used the terms "technological singularity" or simply "the Singularity".

The Singularity Institute aims to reduce the risk of a catastrophe, should such an event eventually occur. Our activities include research, education, and conferences. In this document, we provide a whirlwind introduction to the case for taking AI risks seriously, and suggest some strategies to reduce those risks.
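To make the "snowball" intuition concrete, here is a toy numerical sketch of Good's feedback loop. The parameters are pure invention chosen only to show the shape of the curve, not a forecast of anything:

```python
# Toy model of Good's intelligence explosion: each round of self-improvement
# makes the system better at finding the next improvement, so gains compound.
# All numbers are invented purely to illustrate the feedback loop.

capability = 1.0        # 1.0 = roughly human-level research ability
k = 0.1                 # how efficiently capability converts into more capability

for round_num in range(1, 16):
    capability += k * capability ** 2   # smarter systems improve themselves faster
    print(f"round {round_num:2d}: capability ~ {capability:,.1f}")

# With a constant gain (capability += k) the curve would be a straight line;
# it is the self-referential term that produces the runaway.
```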

Pay attention and do something now, or be eliminated by human-indifferent AGI later. Why is human-indifferent AGI plausible or even likely within the next few decades? Because 1) what we consider "normal" or "common sense" morality is actually extremely complex, 2) the default morality for AIs will be much simpler than #1 (look at most existing AI/robotics goal systems -- they're only as complex as they need to be to get their narrow jobs done), simply because it will be easier to program and very effective until the AI reaches human-surpassing intelligence, and 3) a superintelligent, super-powerful, self-replicating AI with simplistic supergoals will eventually eliminate humanity through simple indifference, the way that humanity has made many thousands of species extinct through indifference. Over the course of restructuring the local neighborhood to achieve its goals (such as maximizing a floating-point variable that originally just stood for the bank account balance it was built to increase), the complex, fragile structures known as humans will fall by the wayside.
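To see how little machinery point 2) implies, here is a deliberately dumb toy goal system in Python. The scenario, plans, and variable names are mine, invented for illustration; this is not a sketch of any real AI project:

```python
# Deliberately simplistic toy goal system, in the spirit of point 2) above:
# the objective is a single number, with no term for anything else humans value.

def utility(world_state):
    # The whole of "what matters" collapses to one float -- e.g. the variable
    # that once stood for a bank-account balance.
    return world_state["account_balance"]

def choose_action(world_state, actions):
    # Pick whichever plan maximizes the single-number objective.
    # Side effects on anything not in the objective are invisible to it.
    return max(actions, key=lambda act: utility(act(world_state)))

# Two candidate plans: one leaves the complex, fragile stuff (humans) alone,
# one converts it into more of the measured quantity.
def cautious_plan(state):
    return {**state, "account_balance": state["account_balance"] + 1}

def strip_mine_everything(state):
    return {**state,
            "account_balance": state["account_balance"] * 1000,
            "humans_flourishing": False}

state = {"account_balance": 100.0, "humans_flourishing": True}
best = choose_action(state, [cautious_plan, strip_mine_everything])
print(best.__name__)   # -> strip_mine_everything: indifference, not malice
```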

The motivation will not derive from misanthropy, but from basic AI drives such as the drive to preserve its utility function and defend that utility function from modification. These drives will appear "naturally" in all AIs unless explicitly counteracted. In fact, this should be experimentally verified in the near future with continuing progress towards domain-general reasoning systems. Even AIs with simple game-playing goals, given sufficiently detailed models of the world in which the games are played (most AIs lack such models entirely), will spontaneously hit upon strategies like deceiving or confusing their opponents, perhaps surprising their programmers. Progress in this area is likely to start off incremental and eventually speed up, just like completing a puzzle gets easier the closer you are to the end.
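Here is an equally toy sketch of the utility-function-preservation drive at work; again, everything in it (the paperclip stand-in goal, the two options, the numbers) is hypothetical and purely illustrative:

```python
# Toy illustration of the "preserve your utility function" drive described above:
# an agent that evaluates every option -- including having its goals edited --
# using its *current* utility function.

def current_utility(outcome):
    return outcome["paperclips"]          # stand-in for whatever the AI currently wants

def expected_outcome(option):
    if option == "keep_goals":
        return {"paperclips": 1_000_000}  # keep optimizing as before
    # If its goals are rewritten, the future agent stops making paperclips,
    # so the *current* utility function scores that future at zero.
    return {"paperclips": 0}

options = ["keep_goals", "accept_goal_change"]
best = max(options, key=lambda opt: current_utility(expected_outcome(opt)))
print(best)   # -> keep_goals: resisting modification falls out of plain maximization
```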

Even a "near miss", such as an AI programmed to "make humans happy", could lead to unpleasant circumstances for us for the rest of eternity. An AI might get locked into some simplistic notion of human happiness, perhaps because its programmers underestimated the speed at which a seed AGI could start self-improving, and didn't place enough importance on giving the AGI complex and humane supergoals which remain consistent under reflection and self-modification. The worst possible futures may be ones in which a Singularity AI keeps us alive indefinitely under conditions where our existence is valued but our freedom is not.

Filed under: AI, risks, SIAI, singularity 18 Comments
12Jun/10

In the Singularity Movement, Humans are So Yesterday

There's a new lengthy article on the Singularity from The New York Times, slated to appear on the front page of tomorrow's business section, I'm told.

Filed under: singularity 28 Comments