Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

30Nov/09

Naked Mole Rats Return

Naked mole rats -- is there anything they can't do? A University of Illinois at Chicago press release reminds us that mole rats can withstand oxygen deprivation for up to 30 minutes, which may give us clues for protecting the brain from stroke.

Another recent brain-related news item concerned therapeutic hypothermia to minimize trauma to injured brain tissue. It seems there is a wave of research in this direction.

Filed under: science No Comments
30Nov/09

Foreign Policy Lists Cascio, Kurzweil, and Bostrom Among Their List of “Top 100 Global Thinkers”

Wow! Congratulations to Nick Bostrom, Ray Kurzweil, and Jamais Cascio for being selected for Foreign Policy's first annual list of Top 100 Global Thinkers. Their associated writeups can be found here.

Ray Kurzweil: "for advancing the technology of eternal life."
Jamais Cascio: "for being our moral guide to the future."
Nick Bostrom: "for accepting no limits on human potential."

Two transhumanists and one "non-transhumanist transhumanist" on the list!

Scanning the list, other notable names include Richard Dawkins, Christopher Hitchens, Steven Chu, Henry Kissinger, Peter Singer, Linus Torvalds, and Larry Summers.

#19 is Gladwell. He has been keeping his igon values well calibrated, I see. Here's a quote from his associated write-up:

By making surprising arguments seem obvious, Gladwell has added a serious dose of empiricism to long-form journalism and changed how we think about thought itself.

This sentence causes me pain. I am shocked that Gladwell is perceived as a scientific writer by the broader public, even by intellectuals, when his arguments contradict the science. It gives me a window into the scientific standards of the "intellectual elite" behind Foreign Policy.

Filed under: policy 18 Comments
30Nov/09

Good.is: Building the “Everything Machine”

My latest article (#3) in the Singularity series on Good.is is up: a piece on exponential manufacturing titled Building the "Everything Machine". Meanwhile, Roko's article "Why the Fuss About Intelligence?" is the second most discussed article on the site in the last week. I will repost my article here for further discussion, but I also encourage you to register on the site and comment there. Here it is:

Building the "Everything Machine"

Nanotechnology and exponential manufacturing could help us make whatever humanity needs, atom by atom.

Part three in a GOOD miniseries on the singularity by Michael Anissimov and Roko. New posts every Monday from November 16 to January 23.

Last week, Roko talked about how human intelligence made civilization possible, and how genuinely smarter-than-human intelligence—what some call “superintelligence”—would change everything, by magnifying nearly all of our capabilities. 

It is important to note that organizations or countries are not smarter-than-human intelligences, any more than a tribe of chimps is a smarter-than-chimp intelligence. We are talking about thinkers with fundamentally improved cognitive architectures, achieved either through brain-computer interfacing or through the creation of creative, flexible, brilliant artificial intelligence: engineered intelligences with greater memory, creativity, pattern-matching capabilities, decision-making skills, self-transparency, and self-modification abilities.

This category of enhanced intelligences may not be as far away as you think. MIT scientists are already working on optically triggered brain-computer interfaces that could link many thousands of neurons to computers in the near future. Ed Boyden, who works at the MIT Media Lab, has called for the creation of an "exocortex": an external, artificial cognitive assistant, also called a "co-processor," that would augment our natural brains. We may even discover drugs or gene therapies that qualitatively improve intelligence by increasing the speed at which neurons can communicate, as was recently done with the rat Hobbie-J.

When discussions of superintelligence crop up, a common question is: "Okay, these entities are smarter than human, but wouldn't they still be very limited by their environment and by the intelligence of the humans they have to work with?" Couldn't we just pull the plug on a very clever artificial intelligence? Wouldn't an enhanced human intelligence be limited by the slower people around it?

Not necessarily. One way superintelligent entities could leapfrog human industrial infrastructure and communication time lag would be by creating self-replicating manufacturing units, which might be based on synthetic biology or just on sophisticated robotics. A self-replicating manufacturing unit already exists today: RepRap (short for Replicating Rapid-prototyper), developed by a team at the University of Bath in Britain. The machine can print out practically all of its own parts, except for a few standard components like computer chips; it just requires human assistance for assembly. Completely autonomous self-replication is on the horizon.

The ultimate self-replicating manufacturing unit would be based on nanoscale fabrication—the rapid manipulation of individual atoms to build large products from raw materials. In 1959, the legendary physicist Richard Feynman gave a talk to the American Physical Society called "There's Plenty of Room at the Bottom." During the talk, he said, "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom." Since Feynman's talk, we have advanced by leaps and bounds toward the goal of bottom-up manufacturing, building tiny robotic arms that can manipulate single atoms, molecular switches, gears, "nanocars," and even a nanoscale walking biped.

If we could design and fabricate the appropriate nanoscale machines and put them into a system capable of building all its own parts, we’d have something called a nanofactory, or to put it another way, an “everything machine.” The earliest nanofactories might only build products out of a couple types of atoms, say carbon and hydrogen, but they would have a tremendous impact because they would be automated by necessity, could self-replicate, and would be capable of building almost any chemically stable structure (as long as it used atoms the machine could handle) with atomic precision. Powered by the Sun and using purified natural gas for feedstock molecules, these nanofactories could quickly and easily build huge numbers of residences, greenhouses, appliances, medical equipment, water purification equipment, and much more, at a cost thousands of times lower than the manufacturing technology of today.

Humans are making progress towards nanofactories today, but I’ll bet that smarter-than-human intelligences could make much more rapid progress. In fact, it’s possible that the most direct route to nanofactories is through smarter-than-human intelligence.

And if you combine a smarter-than-human intelligence with self-replication and nanoscale production, it’s difficult to put a limit on how quickly superintelligence could change the world.

Michael Anissimov is a futurist and evangelist for friendly artificial intelligence. He writes a Technorati Top 100 Science blog, Accelerating Future. Michael currently serves as Media Director for the Singularity Institute for Artificial Intelligence (SIAI) and is a co-organizer of the annual Singularity Summit.

30Nov/09

Hanson: Philosophy Kills

Robin Hanson found a skeptical audience in Bryan Caplan when he explained his position on cryonics. ("The more I furrowed my brow, the more earnestly he spoke.") Caplan said:

What disturbed me was when I realized how low he set his threshold for [cryonics] success. Robin didn’t care about biological survival. He didn’t need his brain implanted in a cloned body. He just wanted his neurons preserved well enough to “upload himself” into a computer. To my mind, it was ridiculously easy to prove that “uploading yourself” isn’t life extension. “An upload is merely a simulation. It wouldn’t be you,” I remarked. …

“Suppose we uploaded you while you were still alive. Are you saying that if someone blew your biological head off with a shotgun, you’d still be alive?!” Robin didn’t even blink: “I’d say that I just got smaller.” … I’d like to think that Robin’s an outlier among cryonics advocates, but in my experience, he’s perfectly typical. Fascination with technology crowds out not just philosophy of mind, but common sense.

Hanson responded with an articulate explanation of causal functionalism and the illusory quality of the mind/matter distinction:

Bryan, you are the sum of your parts and their relations. We know where you are and what you are made of; you are in your head, and you are made out of the signals that your brain cells send each other. Humans evolved to think differently about minds versus other stuff, and while that is a useful category of thought, really we can see that minds are made out of the same parts, just arranged differently. Yes, you “feel,” but that just tells you that stuff feels, it doesn’t say you are made of anything besides the stuff you see around and inside you.

Although the argument may seem to be about cryonics on the surface, it is really about the viability of uploading.

Filed under: philosophy 10 Comments
28Nov/09

On Gardner’s Multiple Intelligences

As somewhat of an aside, Mr. Lynch criticized my critique of Gardner's theory of "multiple intelligences" as "irreverent". This is extremely unfair. All I said was that his theory is "something that doesn't stand up to scientific scrutiny." I criticize an ad hoc, unscientific theory that has practically no empirical evidence to support it, and whose popular appeal derives entirely from its egalitarian, inclusive political flavor, and I get called irreverent.

Calling Gardner's theory of multiple intelligences unscientific is not nearly the most irreverent thing I've said. It shouldn't even be considered irreverent, period. Theories of this sort, which have great appeal to the public and practically zero appeal to cognitive psychologists, should be regarded as guilty until proven innocent. Skepticism should be our default mode. Rain on as many unscientific parades as you can.

Filed under: intelligence, IQ 32 Comments
28Nov/09

Dudley Lynch on the Singularity

Dudley Lynch, a self-described "non-scientific observer of what's being said and written about The Singularity at the moment", has written up an article on the Singularity. Conclusion: "I suspect it's still going to be awhile before anyone has an idea about The Singularity worth keeping."

I get a cameo in his write-up:

Michael Anissimov of the Singularity Institute for Artificial Intelligence and one of the movement's most articulate voices, continues to warn that "a singleton, a Maximillian, an unrivaled superintelligence, a transcending upload--you name it" could arrive very quickly and covertly.

Let me add a qualification to that. I do not think that such an entity could arrive quickly and covertly starting from today as a reference point, unless there are extremely well-funded secret projects that have already been working with brilliant researchers and theoreticians for a decade or more (not likely at all). The point I keep making is just that an entity could go quickly from slightly human-surpassing intelligence to superintelligence, a concept known as a "hard takeoff". Getting from here to slightly human-surpassing intelligence could take a while, probably more than 10 years but less than 40 (though who knows), and would require a project with an annual budget in the millions (maybe tens of millions, but probably not hundreds of millions, is my guess). The brain is not magic, and we are learning a tremendous amount about it all the time.

I especially stress this point with respect to AI. Even "merely" human-equivalent AI would have a tremendous number of advantages over human thinkers -- the ability to copy itself, absorb information more readily, customize and overclock its cognitive modules, design new cognitive modules from scratch, accelerate its thinking speed, avoid the empirically demonstrated biases in reasoning that afflict all humans, explore the entire state space of cognitive features that evolution didn't think of, blend together deliberative and autonomous cognitive processes, create multiple spheres of attention, and much more. Many of these features are listed in part 3 of "Levels of Organization in General Intelligence", a Singularity Institute paper.

When we Singularitarians say that an intelligence could potentially bootstrap itself from just-barely-smarter-than-human to much-much-smarter-than-human relatively quickly, our reasons aren't "magic" or "it sounds cool". We have scientific and rational reasons; it's just that they don't fit into soundbites, and there are few people articulate enough to present the arguments in an accessible way.
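
To make the shape of that argument concrete, here is a toy model (my own sketch for this post, with every parameter made up; it illustrates compounding self-improvement, not a forecast of dates or rates). If each design cycle's fractional gain scales with the system's current capability, progress crawls for years and then explodes:

```python
# Toy model of a "hard takeoff": assume an AI's rate of self-improvement
# scales with its current capability, so progress compounds. All numbers
# here are arbitrary illustrations of the shape of the curve.

capability = 1.0        # 1.0 = roughly human-level intelligence
gain_per_cycle = 0.01   # fractional self-improvement per design cycle at human level
cycle_years = 0.1       # wall-clock time per design cycle
years = 0.0
milestones = [2, 10, 100, 1000]

while milestones:
    capability *= 1.0 + gain_per_cycle * capability  # smarter => bigger gains
    years += cycle_years
    while milestones and capability >= milestones[0]:
        print(f"{milestones.pop(0):>4}x human level after {years:4.1f} years")
```

Run it and the curve spends most of its time looking unimpressive: the first doubling takes roughly half the total run, while the final jump from 100x to 1000x happens almost instantly. That is the hard takeoff intuition in miniature.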

I don't personally buy into Kurzweil's 2029 date -- it's very speculative. The key point is that intelligence operates based on principles and rules that will eventually be reverse-engineered, and once we understand those principles, we'll have the ability to "teach a rock to think", to paraphrase Michael Vassar. The ability to teach a rock to think would be no small thing -- it could transform the world practically overnight.

Mr. Lynch, here are two ideas about the Singularity worth keeping: one, that artificial intelligences will not behave anthropomorphically, and two, that advanced artificial intelligences will be a risk even if we do not program them malignly.

28Nov/09

Audio and Video of “There’s Plenty of Room at the Bottom”

Audio and video of Richard Feynman's classic "There's Plenty of Room at the Bottom" lecture (1959), which presented the vision of molecular nanotechnology for the first time, are available from Photosynthesis.com, an audio site. There are other archival recordings available, including complete audio and video from the 4th Foresight Conference on Molecular Nanotechnology, held in 1995. Apple Computer was a key sponsor of the conference.

Back then, it seems, a lot of people thought that molecular nanotech would be closer by now (I remember hearing "about 20 years", so roughly 2015), but they were obviously wrong. My guess is that the innovation and economic activity in the tech sector around that time made them overoptimistic about progress in general.

Filed under: nanotechnology 7 Comments
28Nov/09

Hanson: Make More Than GPA

Robin Hanson on how students are too obsessed with GPA and should instead focus on original, independent research:

Students seem overly obsessed with grades and organized activities, both relative to standardized tests and to what I'd most recommend: doing something original. You don’t have to step very far outside scheduled classes and clubs to start to see how very different the world is when you have to organize it yourself.

For example, if you try to study a subject in depth without following a textbook or review, you'll have to decide for yourself which sources seem how relevant to your topic. If you try to add something to the subject you'll have to decide what changes are how feasible and interesting. Doing these may feel awkward at first, but they will be very useful skills later in life. Similar skills come from writing your own game or starting your own business or composing your own album.

Like many other things Professor Hanson says, this should be obvious, yet neglecting it is nearly universal. Why are so many of the "smart people" we all know so focused on activities organized for them by other people?

Filed under: random 4 Comments
25Nov/09

“Futurisms”: Anti-Transhumanist Intellectuals

Futurisms, the anti-transhumanist blog over at The New Atlantis, has been posting regularly with decent content. In the blogosphere, that can be hard to come by.

They posted Roger Holzberg's "Saying no to aging will require a bold gesture from each of us" image in a post titled "Transhumanist Resentment Watch", seemingly expressing confusion over who Roger was flipping off, when his anger is clearly directed at the aging process. Here's a quote:

Beyond the strangeness of that self-loathing, the transhumanists bizarrely seem to be personifying human nature itself in order to antagonize it.

Yes, we do this from time to time.

But we also glorify the parts of humanity that we want to preserve and magnify with transhumanist technologies, like compassion, pleasure, and intelligence. Here is a list of human problems, which we are trying to antagonize and eliminate.

There is another post on the combative rhetoric of transhumanists, which singles out Eliezer Yudkowsky:

The worst example of this was in the stage appearances by Eliezer Yudkowsky, as I noted here and here.

Eliezer responds in the comments:

*Laughs*

Of *course* we're fighting the human condition! Bill McKibben? You think our fury is directed at Bill McKibben? What on Earth did you think we were fighting? Death and frailty, darkness and despair, all the ills to which the flesh is heir! Duly acknowledged! Thank you for asking!

Charles, the author of that post, asks, "Who is it that they think they're sticking it to?" Good question. Personally, I think of Mother Nature, about whom Nick Bostrom said, "Had Mother Nature been a real parent, she would have been in jail for child abuse and murder." Or, to personify less, I think of evolution in general, which should be relatively easy to overthrow once we get going, since it is an unconscious process that operates slowly.

The other potential target would be God. God represents the worship of the status quo, and mass murder and punishment for trying to transcend our own limitations. God represents the endless lists of arbitrary rules found in Judaism and Islam in particular, but also Christianity. God represents the notion that the human body is inherently divine rather than an incomplete work. God represents the unfair bias towards the Holy Family (Adam's family line, which allegedly includes David and Jesus) rather than equal love towards all human beings. God represents the ethic of "do as I say, not as I do".

To quote from an h+ magazine article:

Their argument isn’t actually that death is good. Their argument is that heaven is good. All prominent anti-transhumanists — Fukuyama, Kass, McKibben -- are religious. Their sense of meaning springs from a faith that through suffering they will enter paradise after they are dead. If a bunch of nonbelievers creates a real deathless paradise here in reality, it will ruin that fantasy. It will be like when all the bad kids on your block get better presents from Santa. To work so gleefully for immortality and cessation of pain is to thumb your nose at ancient sources of meaning. Success will demonstrate that such deep sources of meaning are not eternal, but technical solvable problems. That’s a real faith-shaker.

These guys want the same damn thing we do; they just think they can get it through magic, while we think we actually have to achieve it ourselves.

Filed under: transhumanism 25 Comments
24Nov/09

Henry Markram of EPFL’s Blue Brain Project: IBM’s Cat Brain Claim is a “HOAX”

Over at Next Big Future, BoingBoing, and many other venues, Henry Markram of the EPFL's Blue Brain Project has a comment up on the recent IBM cat brain simulation announcement.

IBM's claim is a HOAX.

This is a mega public relations stunt - a clear case of scientific deception of the public. These simulations do not even come close to the complexity of an ant, let alone that of a cat. IBM allows Mohda to mislead the public into believing that they have simulated a brain with the complexity of a cat - sheer nonsense.

Here are the scientific reasons why it is a hoax:

(Read them.)

He also sent a letter to IBM's CTO and CC'd the media.

New Zealand PC World has an article that summarizes some of the points.

IBM responded by issuing a statement:

"IBM stands by the scientific integrity of the announcement on cognitive computing led by IBM in collaboration with Stanford University, University of Wisconsin-Madison, Cornell University, Columbia University Medical Center, University of California-Merced and Lawrence Berkeley National Laboratory," the IBM statement reads. "The cognitive computing team has achieved two milestones that indicate the feasibility of building a computing system that requires much less energy than today's supercomputers, and is modeled after the cognition of the brain. This is important interdisciplinary exploratory research bringing together computational neuroscience, microelectronics and neuroanatomy, and this work has been commented on favorably by others in the scientific community."

If Markram is telling the truth in his allegations (I don't know about all of them because many of the details he mentions are not addressed in the IBM paper, but some of the claims seem obviously true to me), then IBM has lost all credibility.

IBM says the simulation is "modeled after the cognition of the brain", but what the hell does that mean? Point neurons, like Markram says, most likely. It also seems as if Modha's web page and the text of the press release are explicitly designed to further the delusion that they have created a cat-complexity brain.
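
For a sense of what "point neurons" means, here is a minimal leaky integrate-and-fire sketch (my own illustration of the general concept; the equations in the IBM simulation are somewhat richer, but the level of abstraction is the same: one state variable per neuron):

```python
import numpy as np

# A minimal leaky integrate-and-fire "point neuron" update. Everything a
# real neuron is -- morphology, ion channels, neurotransmitter chemistry --
# collapses into one voltage per cell plus a threshold rule. Parameters
# are illustrative textbook values, not numbers from the IBM paper.

dt = 1.0          # timestep (ms)
tau = 20.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # post-spike reset potential (mV)

def step(v, input_current):
    """Advance all neurons one timestep; return updated voltages and spike flags."""
    v = v + (dt / tau) * (v_rest - v + input_current)
    spiked = v >= v_thresh
    v[spiked] = v_reset
    return v, spiked

v = np.full(1000, v_rest)                          # 1,000 neurons, one number each
v, spikes = step(v, 20.0 * np.random.rand(1000))   # one simulated millisecond
```

One scalar and one update rule per neuron. Blue Brain's detailed models, by contrast, simulate each neuron as hundreds of coupled compartments with active ion-channel dynamics, which is why Markram finds "cat-scale" claims built on point neurons so misleading.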

"Whole Brain Emulation: a Roadmap" has a more realistic and comprehensive estimate of the complexity required to simulate a brain.

Filed under: AI 26 Comments
24Nov/09

Joe Forgas: “When Sad is Better than Happy: Negative Affect Can Improve the Quality and Effectiveness of Persuasive Messages and Social Influence Strategies”

When popular science writers actually reference scientific literature, good things can happen, like this article by Mark Peters: "A Happy Writer Is a Lousy Writer?"

Transhumanists who are tremendously shocked and dissatisfied with the current state of the world relative to other possibilities can tap into this effect to improve their writing. Twinkly-eyed techno-utopian transhumanists can continue to produce poor writing.

Filed under: random 2 Comments
24Nov/09

Greg Fish: Against Causal Functionalism

Greg Fish, a science writer with a popular blog who contributes to places like Business Week and Discovery News, has lately been advancing a Searleian criticism of causal functionalism. For instance, here and here. Here is an excerpt from the latter:

A Computer Brain is Still Just Code

In the future, if we model an entire brain in real time on the level of every neuron, every signal, and every burst of the neurotransmitter, we’ll just end up with a very complex visualization controlled by a complex set of routines and subroutines.

These models could help neurosurgeons by mimicking what would happen during novel brain surgery, or provide ideas for neuroscientists, but they’re not going to become alive or self aware since as far as a computer is concerned, they live as millions of lines of code based on a multitude of formulas and rules. The real chemistry that makes our brains work will be locked in our heads, far away from the circuitry trying to reproduce its results.

Now, if we built a new generation of computers using organic components, the simulations we could run could have some very interesting results.

On his blog, he says:

The actual chemical reactions that decide on an action or think through a problem don’t take place and the biological wiring that’s the crucial part of how the whole process takes place isn’t there, just a statistical approximation of it.

This is just another version of vitalism: computers supposedly lack the "vital spark" necessary to create the "soul", even if they implement the functions of intelligence and self-reflection even more effectively than the biological entity that inspired their creation. But those functions are what create intelligence and self-reflection, not magic chemistry-that-can-never-ever-be-simulated-even-in-principle.

There is quite a bit of fuzziness in chemical reactions themselves, and not all of this fuzziness is necessary to implement intelligence or "self-awareness".

Say we have a molecular dynamics simulation of the brain in complete and utter detail. It behaves exactly the same as the intelligence it is "simulating". You can say "it's just a simulation", but it can achieve all the same things the original can, including being your friend or even possibly killing you. In such circumstances, "it's just a simulation" is pointless hairsplitting. Certainly, some atomic configurations are conscious and others are not, but there is no vital force that biological molecules possess that high-resolution simulations of those molecules would not also possess.

If it walks like a duck, and quacks like a duck, it's still possible that it's not a duck, but if it has a perfect emulation of a duck brain and can walk around in a duck body, then it may as well be a duck.

Filed under: AI, philosophy 22 Comments