Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

10 Feb 2009

The Three Singularity Schools, Kurzweil, and Superintelligence

So, there's been some interesting debate lately about Singularity University, which George Dvorsky has kindly summarized for us here. I'm not going to weigh in on that debate, because I think that a list of the academic tracks isn't enough to pass judgement, and that we actually have to wait for the course materials (which, according to David Orban, citing personal communication with Ray Kurzweil, will be released under a Creative Commons Attribution license) to say anything meaningful. Otherwise, I think that early reactions to the idea are mostly based on one's prior opinion of Kurzweil's work rather than on anything genuinely new.

The point of this article is to remind the reader that there are three schools of Singularity thought -- this is so fundamental, yet so few people are aware of it. It should be the first thing people learn when introduced to the concept. As I argued in 2007, the word "Singularity" has lost all meaning, but if we're stuck with it, we should at least pull apart three of the major meanings it tends to have. (Though the number of meanings it has in practice is almost unlimited, generated by Silicon Valley socialite types who are trying to look cool but only know about the Singularity from a few short blurbs on places like CNET.) The three schools are Accelerating Change, Event Horizon, and Intelligence Explosion. The full talk on the subject is available here.

When I was introduced to the Accelerating Change school by reading The Age of Spiritual Machines at age 16, I thought the concept had a lot of explanatory power. I still do. However, Mr. Kurzweil's presentation of the idea gives it a misleading aura of inevitability, for instance predicting AGI in precisely the year 2029 and a rupture in the fabric of our understanding in 2045. He addresses the concept of existential risk at length in The Singularity is Near, which tempers the implication of inevitability, but his less thoughtful fans have a tendency to miss this.

When I bought this domain in 2003, I was still very excited about the Accelerating Change school of the Singularity, hence the name "accelerating future". Since then, I've become more moderate in my enthusiasm for the idea. One problem is that the value of different varieties of technological advancement, or even of technological advancement in general, is highly subjective. So even if a given technological metric is advancing at a loose exponential, this is not impressive to someone who sees linear practical returns from that particular advancing metric. The highly quantitative nature of the Accelerating Change analysis is also especially likely to provoke and alienate those averse to technological determinism. Still, I think the world is far more technologically deterministic than most humanities types would like to believe.
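To make that point concrete, here is a minimal Python sketch (using my own hypothetical numbers, not figures from anyone's forecast) of how a capability metric doubling every two years can still feel like merely linear progress to someone whose practical benefit scales with the logarithm of that metric:

```python
import math

# Hypothetical illustration: a raw capability metric doubling every two years,
# while the practical value a user perceives grows only logarithmically in it.
DOUBLING_PERIOD_YEARS = 2.0

def capability(year: float) -> float:
    """Raw metric, growing exponentially from an arbitrary baseline of 1.0."""
    return 2 ** (year / DOUBLING_PERIOD_YEARS)

def perceived_value(metric: float) -> float:
    """Toy model of subjective benefit: logarithmic in the raw metric."""
    return math.log2(metric)

for year in range(0, 21, 4):
    m = capability(year)
    print(f"year {year:2d}: metric = {m:8.1f}x  perceived value = {perceived_value(m):4.1f}")
```

The metric explodes a thousandfold over twenty years while the perceived value climbs by a steady couple of units per step -- an exponential curve that feels linear from the inside.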

In the end, I find the Event Horizon and Intelligence Explosion schools of the Singularity more acutely relevant to our future than the Accelerating Change analysis. These other schools point to the unique transformative power of superintelligence as a discrete technological milestone. Is technology speeding up, slowing down, staying still, or moving sideways? Doesn't matter -- the creation of superintelligence would have a huge impact no matter what the rest of technology is doing. To me, the relevance of a given technology to humanity's future is largely determined by whether it contributes to the creation of superintelligence or not, and if so, whether it contributes to the creation of friendly or unfriendly superintelligence. The rest is just decoration.

Take space colonization for example. Does it matter to the future of humanity if we spend billions of dollars on building space stations and missions to the Moon? Only insofar as it influences how and whether superintelligence is created, and as far as I can tell, it doesn't. That exclusive focus on superintelligence has caused some to question my sanity, but that's the same reaction I would expect if I were an intelligent Homo habilis in an alternate universe, ranting about how developments in technology were only relevant insofar as they gave us the ability to produce Homo sapiens or something even smarter. People are so preoccupied with the impact of humans that they fail to realize that the creation of transhumans would sideline much of our ongoing impact in the global sense.

That's the thing about superintelligence that so offends human sensibilities. Its creation would mean that we're no longer the primary force of influence on our world or light cone. It's funny how people then make the non sequitur that our lack of primacy would immediately mean our subjugation or general unhappiness. This comes from thousands of years of cultural experience of tribes constantly killing each other. Fortunately, superintelligence need not have the crude Darwinian psychology of every organism crafted by biological evolution, so such assumptions do not hold in all cases. Of course, superintelligence might be created with just that selfish psychology, in which case we would likely be destroyed before we even knew what happened. Prolonged wars between beings of qualitatively different processing speeds and intelligence levels are science fiction, not reality.

It's interesting that quotes originally fielded to back up the Intelligence Explosion school find themselves repurposed in The Singularity is Near to argue for the Accelerating Change school. There is actually somewhat of a merger between the two schools in the book, to the point where one might find it difficult to disentangle them and might condemn one idea for the flaws of the other. For instance, on page 10 of the book is a quote from me, taken from a 2003 interview with Phil Bowermaster: "When the first transhuman intelligence is created and launches itself into recursive self-improvement, a fundamental discontinuity is likely to occur, the likes of which I can't even begin to predict." Well, that quote is at odds with Mr. Kurzweil's presentation in the rest of the book, which argues that AI will surpass the human brain around 2029 but that the rupture of predictability won't occur until 2045. According to my quote, that rupture would occur almost immediately after the first transhuman intelligence appears.

A related quote by Eliezer Yudkowsky appears on page 35: "Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve... [T]here are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from 'impossible' to 'obvious'. Move a substantial degree upwards, and all of them become obvious." These quotes hammer home the point that superintelligence is the main thing; all that tangential stuff about genetic engineering, nanotechnology, robotics, and so on, is just the invention of merely human-level thinkers, and in the long run its impact will be measured by its contribution (or lack thereof) to superintelligence.

I hope the differences between my thinking and Mr. Kurzweil's have been made clear.

10 Feb 2009

Drexler on How Nanotech Animations Should Be Slower

As you might have heard, Eric Drexler started a blog a while ago (last October) and has been producing some nice content. One interesting post from December explains why Drexler considers diamond mechanosynthesis (atom-by-atom diamond fabrication) a bad short-term objective, arguing instead that we use protein or pyrite, which are easier to work with. He dispels the notion that protein would be a ridiculous material to use for mechanosynthesis (was there one?), encouraging us to think of protein as horn, made of hard keratin, rather than meat, which is over a million times weaker. Horns and meat -- who said nanotechnology wasn't exciting?

Today's post has to do with how the colorful molecular-machinery GIFs floating around on the web mislead scientists into thinking that the very idea of MNT -- and, by extension, Drexler himself -- is nutty, because the gears in those animations are running at speeds comparable to their (apparent) thermal motion, which would cause them to overheat and break down almost immediately. Examples of such animations can be found on Drexler's blog, at CRN, and in a past post of mine here. The one he uses on his blog looks even more sped up than usual. Drexler provides a video from J. Storrs Hall showing realistic gear motion -- very slow relative to random thermal motion. The bearing in that video is rotating at 1 GHz.

This most recent post is related to a post of his from December that points out some really cool biomolecular videos and remarks that "the videos lie because they must". He writes, "They lie about how biomolecular machines move. Where they show smooth, purposeful-looking mechanical movement, the reality is instead a frenetic dance of Brownian motion." There's a further page on Nanorex about the "stroboscopic illusion": an animation's limited frame rate samples the atoms' motion at only 24 frames per second, even though they are actually dancing around far faster than that. To display the machine's purposeful motion on a timescale that wouldn't bore the viewer, you have to "fast-forward" the video, producing the illusory effect that thermal vibration is slow relative to the purposeful motion.
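As a rough sanity check on the numbers -- only the 1 GHz bearing figure comes from the post above; the thermal-vibration frequency and frame rate are assumed order-of-magnitude values -- here is a minimal Python sketch of how badly any watchable animation must distort the timescales:

```python
# Back-of-the-envelope sketch of the "stroboscopic illusion". All figures are
# rough order-of-magnitude assumptions except the 1 GHz bearing from the post.
FRAME_RATE_HZ = 24            # typical animation frame rate
THERMAL_VIBRATION_HZ = 1e13   # ~10 THz: rough scale of atomic thermal vibration
BEARING_ROTATION_HZ = 1e9     # the 1 GHz bearing in the J. Storrs Hall video

# Thermal oscillations that elapse during one rendered frame at real-time playback:
vibrations_per_frame = THERMAL_VIBRATION_HZ / FRAME_RATE_HZ

# How much faster thermal vibration really is than the bearing's rotation:
true_ratio = THERMAL_VIBRATION_HZ / BEARING_ROTATION_HZ

print(f"Thermal oscillations per frame at real-time playback: {vibrations_per_frame:.1e}")
print(f"Thermal vibration vs. bearing rotation (true ratio):  {true_ratio:.0f}x")
# To make the 1 GHz rotation visible at all (say, one turn per second of video),
# the footage must be retimed by a factor of ~1e9, and the aliased thermal jitter
# then looks only ~24x faster than the rotation instead of ~10,000x.
```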

Filed under: nanotechnology
9 Feb 2009

Partial Design for a Macro-Scale Machining Self-Replicator

I'm reading a somewhat obscure document: a 1995 post to the sci.nanotech newsgroup by Chris Phoenix titled "Partial design for macro-scale machining self-replicator". It's really interesting.

The design was prompted by another poster, Will Ware, who wrote "Speaking as a practicing engineer, I think there would actually be a lot of value to making macroscopic replicators, even if there's no new science involved. The absence of new science does not mean an engineering job is trivial."

Phoenix's design is based around the idea of a substance whose cured form is hard enough to easily machine its uncured form. The uncured form is converted selectively into the cured form through exposure to UV rays from the Sun. Phoenix describes his design:

Here's the intended capability: To machine blocks of soft material into complex parts, with typical dimensions of a few inches, maximum dimension 20 inches, precision 1/100 inch, smooth circles of any diameter (made by rotating a platform with a cutter held off-center); minimum hole/concave curve 1/16 inch diameter, plus an ability to cut narrow V grooves. To assemble parts into machines with volume of up to a yard cubed. To execute a long, complex program for these operations, with the possibility of detecting and correcting errors.

The rest is quite fascinating; I suggest you read it. There's a potential science fair project or multi-billion-dollar company idea in there for anyone who wants to use it.

I'm pretty interested in the idea of macro-scale self-replicators in general. I've read practically all the material that's out there on the topic, including all of Kinematic Self-Replicating Machines. (By the way, if you join the Lifeboat 500 -- those who give $1,000/year to the Lifeboat Foundation -- you get a free copy of this book.) Now I'm at the point of reading whatever obscure stuff I can find and thinking about original content. If you know about more obscure material or have interesting original ideas, please post them in the comments.

I was also poking around in another email discussion linked from the Wikipedia page on self-replicating machines; this one was likewise from 1995, but it took place on the Extropians list. Anthony Napier said:

Is a macro-scale self replicating system feasible?

The only existing one I know of is our entire Earth wide industrial complex. Nothing smaller than Earth scale, nothing bigger than molecular level.

Is that really true? I would think that at least one blacksmith has used all the tools in his shop to make a completely new set of tools, and then had a child who became a blacksmith, thereby closing the loop. (With a little help from Nature, of course.) So we can point to self-replicators that are quite a bit smaller than the Earth scale.

Since we know so little about machine self-replication, it's premature to assume that we know what size scale it will first emerge on. I used to assume nano (because that's where most biological self-replicating machinery exists), but now I think that macro or micro might have easier solutions.

A self-replicating device might be much larger than anyone would suppose. For instance, it might involve a population of thousands of small, specialized robots, resembling a little industrial village. No individual robot would be able to fabricate all the parts that make it up, but in cooperation, the population achieves 100% closure.
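To illustrate what "closure" means here, a toy Python sketch -- the robots, parts, and capabilities are entirely hypothetical -- that computes the fraction of required part types a population can fabricate for itself:

```python
# Toy illustration of fabrication "closure" for a population of specialized
# robots. Part names and capabilities are hypothetical, purely for illustration.
required_parts = {"gear", "motor", "frame", "controller", "sensor", "fastener"}

robot_capabilities = {
    "machinist_bot":   {"gear", "frame", "fastener"},
    "winder_bot":      {"motor"},
    "electronics_bot": {"controller", "sensor"},
}

def closure(required: set, robots: dict) -> float:
    """Fraction of required part types the robot population can make for itself."""
    makeable = set().union(*robots.values())
    return len(required & makeable) / len(required)

print(f"Closure: {closure(required_parts, robot_capabilities):.0%}")
# 100% here: no single robot can make every part, but collectively they can.
```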

Feel free to brainstorm concepts for self-replicating machines at any scale. If such a machine were developed, it would prove helpful to our existential risk mitigation agenda by demonstrating that such a thing is possible and that regulations are necessary. It could also have a profoundly beneficial economic impact, by allowing the cheap fabrication of spinoff products built out of materials in the self-replication loop.

Filed under: robotics
9 Feb 2009

Power Density Graph for Nanotechnology Products

[Graph: power density of nanotechnology products]

I posted this graph about two years ago, but I'm going to post it again for more exposure. It maps products against their power density -- power divided by volume, i.e., power per unit volume. Throughout the Industrial Age, the maximum and average power density of products has been increasing. The graph includes nanobots, microbots, MNT "superproducts", and MNT "ultrastructures" -- things which either don't exist right now or are in the earliest stages of development. See this page from the Center for Responsible Nanotechnology for more background.
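For readers who want a feel for the units, a quick Python sketch; the power and volume figures are my own rough order-of-magnitude approximations, not values taken from the graph:

```python
# Power density = power / volume. The figures below are rough order-of-magnitude
# approximations for illustration, not values read off the graph above.
products = {
    # name: (power in watts, volume in cubic meters)
    "human body": (100.0, 7e-2),
    "car engine": (1e5,   1e-1),
    "laptop CPU": (30.0,  1e-6),
}

for name, (power_w, volume_m3) in products.items():
    density = power_w / volume_m3  # W/m^3
    print(f"{name:12s}: {density:.1e} W/m^3")
```

Even among everyday products the spread covers several orders of magnitude, which is why the graph has to be plotted on a log scale.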

6 Feb 2009

Sorry, We’re All Dumb

Hey, human philosophers -- I've got some bad news. It turns out that Homo sapiens probably isn't the qualitatively smartest possible being. Homo erectus was way dumber than humans (they couldn't even build boats!), yet sits at a very low Levenshtein distance from Homo sapiens genetically; combine that with what we know about the probable space of possible mind designs, and it suggests not only that we're dumb, but that we're way dumber than what's possible. This bodes ill for the quality of our interpretation of the universe relative to the best possible interpretation, or even an "average" interpretation by multiverse standards.
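Levenshtein distance is literally an edit-distance metric on strings; a minimal Python sketch (with toy strings, not real genome data) shows the idea the metaphor borrows -- a handful of edits can separate two very different things:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Toy strings, not real genome data: only a few edits separate them.
print(levenshtein("homo erectus", "homo sapiens"))  # -> 6
```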

There are many reasons we're dumb. First off, we're just about as dumb as it's possible to be and still start a civilization. How do I know? Well, most other members of the genus Homo had plenty of time to build agricultural civilizations, but they were too unintelligent to get one off the ground. Homo sapiens was just barely smart enough to do the trick. And as with a self-replicating machine that moves from 99.9% closure to 100% closure, the payoff was big.

Evolution doesn't care about optimal solutions. It cares about "good enough" solutions. Good enough to intimidate your neighbor. Good enough to survive the blizzard. Good enough to get a clue about the state of the world. Unfortunately, popular mythology about a Benevolent Creator has permeated every aspect of popular culture and personal thought, misleading us into believing that we were created in the image of Optimality, when Darwin knows that no such thing ever happened.

We are bags of meat. Fun, thoughtful, conscious bags of meat, but bags of meat nonetheless. In retrospect, our value will likely be determined by how well we moved ourselves out of the bag-o'-meat stage. If we fail, there will be no one left around to laugh at us. If we succeed, our descendants (or future selves) will breathe a sigh of relief at how we barely avoided annihilating ourselves.

From the superintelligent perspective, our philosophical musings, cultural creations, and works of genius will be viewed as mere scratchings on the cave wall. To assume otherwise is to act like a Homo erectus who is damn proud of himself and his kind because he knows how to make a dozen stone tools and has memorized the effects of 100 different forest plants. The apparent magnitude of our accomplishments, including those of Einstein, is merely a side-effect of how low our standards are. To another species on another world, whose intelligence was crafted in the furnace of selection pressures more intense than ours, quantum mechanics is obvious from the get-go. The only thing funnier than how long we took to figure it out is our self-importance at having finally done so.

Coming to terms with how dumb we are is a necessary part of our maturation as a species. It doesn't mean that we should be disrespectful to one another. You can be a Gandhi or a Martin Luther King and still realize that the intelligence of the human species ain't all that. There isn't a grain of misanthropy in it. Compared to the worms of the earth and the shrimp in the sea, we're fricking geniuses. Go us.

If we're not the smartest beings possible, does that destroy universal human rights, as some say it does? Well, yes, if a superintelligence that doesn't care about us takes control. Luckily, we get to build the first superintelligence, and the way the superintelligence evolves will be a function of its initial state, so let's build a superintelligence that cares about us. If all else fails, we can just take the nicest person we can find and upgrade them to superintelligence first.

Of course, there's also the possibility of attempting to prevent the creation of superintelligence indefinitely. Only, it would fail. If we concede that the creation of superintelligence is inevitable, then we have to determine how that process will go. If you're only interested in criticizing present proposals, please offer your own instead. If you're skeptical that superintelligence will be created in the next century, then why are you here? Perhaps you followed the wrong link.

6 Feb 2009

Transhumanism — It’s Small

[Image: diagram showing transhumanists at the intersection of the nanotech, biotech, and other advanced-technology communities]

Check out the above image I drew in MS Paint.

As you can see, the image is somewhat self-explanatory. Transhumanists are a small minority, sitting at the intersection of the people who care about nanotech, biotech, and other advanced technologies.

What does this mean? Well, one interesting implication is that if someone becomes interested in all three areas, they're much more likely to be a transhumanist. There are a few transhumanists interested in only one or two of the fields (they aren't portrayed in this image), but they're rare.

Another obvious implication is that transhumanists are people interested in science. This subjects them to the same anti-science, anti-intellectual prejudice that is extremely common everywhere in the world. Ever tried finding the science section at a large bookstore? The way it takes up less than 1% of the total shelf space is something of a giveaway as to how popular science really is with the public.

The other implication is that a disproportionately large number of people in positions of control and power in nanotech, biotech, and other advanced fields are transhumanists or have transhumanist sympathies. That's one of the ways that transhumanists can personally influence the future. If transhumanists had no control over scientific development, then we'd just be people who talk and speculate. It would still make for interesting discussion, but it wouldn't directly influence the future.

Because of the controversial nature of transhumanism, many people in these powerful positions keep quiet about their association with the movement. This makes sense, because a lot of people don't really like transhumanism. They see it as unnatural.

Filed under: transhumanism
6 Feb 2009

Professor Drell: Eliminating the Threat of Nuclear Arms

Just in the news on EurekAlert today, more people agreeing that nuclear arms control is a big deal and needs to be addressed immediately:

President Barack Obama has made his intention of eliminating all nuclear weapons a tenet of his administration's foreign policy. Professor Sidney Drell, a US theoretical physicist and arms-control expert, explains in February's Physics World what Obama needs to do to make that honourable intention a reality.

Professor Drell, a professor emeritus at the SLAC National Accelerator Laboratory, a senior fellow at Stanford University's Hoover Institution and an adviser on technical national security and arms-control for the US Government, has recently co-authored a report called Nuclear Weapons in 21st Century US National Security, in collaboration with the American Association for the Advancement of Science, the American Physical Society and the Center for Strategic and International Studies.

In his article for Physics World, he explains how and why there is need now, more than ever, to introduce globally ratified systems to control the spread of nuclear arms.

Professor Drell explains: "The world is teetering on the edge of a new and more perilous nuclear era, facing a growing danger that nuclear weapons – the most devastating instrument of annihilation ever invented – may fall into the hands of 'rogue states' or terrorist organizations that do not shrink from mass murder on an unprecedented scale."

His article makes two recommendations to Obama and his team. The first is to 'revisit Reykjavik' – Reykjavik hosted a summit in 1986 where former US President Ronald Reagan and then Soviet premier Mikhail Gorbachev agreed to begin reducing the size of their respective countries' nuclear arsenals. As the US and Russia still possess more than 90 per cent of the world's nuclear warheads, it is imperative that they take the lead, Drell says.

Drell's second recommendation is that the new Obama administration should adopt a process for bringing the Comprehensive Test Ban Treaty (CTBT) into effect. "The new administration should initiate a timely bipartisan, congressional review of the value of the CTBT for US security," he says.

Drell concludes: "With these two steps outlined above, President Obama has a historic opportunity to start down a practical path towards achieving his stated goal of 'eliminating all nuclear weapons.'"

Will the doves beat the hawks on this one? We can only hope.

Filed under: nuclear, risks
5 Feb 2009

Changes with the Singularity Institute

Anyone who reads Overcoming Bias regularly already knows this, but a new site for the Bayesian-wannabe community is being created, titled Less Wrong, which will feature user-submitted posts and a system for voting up popular content.

Also announced in that post was that Tyler Emerson is leaving the Singularity Institute (SIAI) as Executive Director, while Michael Vassar is coming on board as President. I just wanted to say congratulations to Tyler for an amazing tenure. Thanks to his efforts, we have the Singularity Summit, which has brought SIAI much credibility and connected many people together. Tyler has also networked with some of the brightest minds in Silicon Valley and around the world to boost their awareness of the Singularity meme. His leadership of the annual SIAI Challenge Grants has kept the organization moving forward, especially by funding the important work of Eliezer Yudkowsky.

Meanwhile, Michael Vassar has the difficult task of raising funds and manpower for the most important goal -- building a Friendly seed AI before someone else builds a self-improving AI that is morally vacant. This goal will be extremely challenging -- as everyone knows, Artificial General Intelligence is no walk in the park. Thankfully, the SIAI has at its disposal several math and programming geniuses who are devoted to the goal of Friendly AGI.

Therefore, assuming continuing growth, I am optimistic that the goal might be achieved within 30-50 years (there is much uncertainty in these things) with funding of $3-5 million per year. Because it seems extremely unlikely that any large government or grant-giving organization will take Friendly AI seriously before it's too late, contributions must come from individuals like you. Remember that even small donations count, because they set a precedent and play an essential part in getting the social momentum rolling. Oftentimes, a millionaire or billionaire won't donate a substantial amount until they see hundreds or thousands of other people contributing smaller amounts.

Meanwhile, advocates of Friendly AI (and there are many) need to speak up and express their support in "mainstream" communities (like Digg, Reddit, the WIRED crowd, Silicon Valley hacker/entrepreneur groups, etc.). Natural shyness and unwillingness to be bold do damage in this area. Very smart people, the kind who might support Friendly AI as a cause, have a tendency to be slightly or significantly socially awkward, but I believe that this tendency can be overcome in many of them through self-confidence and passion for their lives and goals.

Filed under: SIAI