Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

17 May 2010

What Does a Buckyball Undergoing Unimolecular Dissociation by Use of Extremely High Levels of Vibrational Excitation Look Like?

How about a C60/C240 collision at 300 eV?

H/t Machine Phase.

16 May 2010

Eyes Flashing, Robot Presides at Japanese Wedding

Speaking of technology intruding into the traditional role of religious authorities...

Coverage and a gallery can be found at AP.

Filed under: robotics 4 Comments
16 May 2010

A Christian Perspective on the Singularity Movement

This was published late last year at Metanexus by its founder, William Grassie: "Millennialism at the Singularity: Reflections on Metaphors, Meanings, and the Limits of Exponential Logic". Here's a quote to pique interest:

This is a very technical discussion in computer science, but the short of it is that many problems simply don't compute. There are also other theoretical and practical limits to computation. These are called intractable problems because they "require hopelessly large amounts of time even for relatively small inputs." Computer encryption depends on this second fact. It may be that the genome, in dynamic relationship with proteins and its environment, is in some sense "encrypted." It may be that the mind-brain is similarly "encrypted." In which case, we will never be able to fully understand, let alone reliably control, life and mind, no matter how exponentially our scientific knowledge grows nor how fast technological know-how accelerates.
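The "hopelessly large amounts of time" point is easy to make concrete. Here's a toy brute-force calculation (my numbers, not Grassie's; a sketch assuming a machine that checks a billion keys per second):

    # Toy illustration: brute-force search over an n-bit key space takes
    # 2**n trials in the worst case. Even at a billion trials per second,
    # modest key sizes become "hopelessly large".
    SECONDS_PER_YEAR = 3600 * 24 * 365
    for n in (32, 64, 128, 256):
        trials = 2 ** n
        years = trials / 1e9 / SECONDS_PER_YEAR
        print(f"{n:>3}-bit key: ~{years:.3g} years of brute force")
    # ->  32-bit key: ~1.36e-07 years (a few seconds)
    # ->  64-bit key: ~585 years
    # -> 128-bit key: ~1.08e+22 years
    # -> 256-bit key: ~3.67e+60 years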

Here's another quote:

Of course, anytime we talk about the future, our hopes or our fears, we are in the realm of religions.

...

Nowhere is this religious dimension of the Singularity movement more readily apparent than in their uncritical enthusiasm for life extension research, as if this was an obvious good.

Wanting to live longer than 90 years = religion.

When theists call the Singularity movement "religious", they are essentially saying, "Oh no, this scientifically-informed philosophy is intruding on our traditional turf!"

For the tl;dr version, see this quote from Sister Miriam Godwinson:

"Men in their arrogance claim to understand the nature of creation, and devise elaborate theories to describe its behavior. But always they discover in the end that God was quite a bit more clever than they thought."
Sister Miriam Godwinson, "We Must Dissent"

Filed under: singularity 67 Comments
15 May 2010

Dangers of Molecular Nanotechnology, Again

Over at IEET, Jamais Cascio and Mike Treder essentially argue that the future will be slow and boring -- or rather, that it will seem slow and boring, because people will get used to advances as quickly as they occur. I heartily disagree. There are at least three probable events that could make the future seem traumatic, broken, and out of control, and not slow by anyone's standards: 1) a Third World War or an atmospheric EMP detonation, 2) an MNT revolution with accompanying arms races, and 3) superintelligence. In response to Jamais' post, I commented:

I disagree. I don't think that Jamais understands how abrupt an MNT revolution could be once the first nanofactory is built, or how abrupt a hard takeoff could be once a human-equivalent artificial intelligence is created.

Read Nanosystems, then "Design of a Primitive Nanofactory", and look where nanotechnology is today.

For AI, you can do simple math that shows once an AI can earn enough money to pay for its own upkeep and then some, it would quickly gain the ability to take over most of the world economy.
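To spell out that "simple math" (a sketch with made-up numbers; the only real input is the arithmetic of doubling):

    import math

    # Toy model: every figure below is an illustrative assumption.
    start = 1e6        # hypothetical seed capital: $1 million
    target = 6e13      # roughly world GDP circa 2010, ~$60 trillion
    doublings = math.log2(target / start)
    print(f"~{doublings:.0f} doublings from seed capital to world GDP")

    # The timeline depends entirely on the doubling time -- which is
    # exactly the parameter a self-improving AI could compress.
    for label, dt_years in [("yearly", 1.0), ("monthly", 1 / 12), ("weekly", 1 / 52)]:
        print(f"doubling {label}: ~{doublings * dt_years:.1f} years to target")
    # -> ~26 doublings; 26 years if yearly, ~2.2 years if monthly,
    #    ~0.5 years if weekly.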

Have Giulio or Jamais read "Design of a Primitive Nanofactory" or Nanosystems?

Knowledge of where we are today in nanotechnology, plus Nanosystems, plus "Design of a Primitive Nanofactory", equals scary.

Where we are today: basic molecular assembly lines
The most important breakthrough: a reprogrammable universal assembler
Shortly thereafter: a basic nanofactory
Shortly thereafter: every nation with nanofactory technology magnifies its manufacturing potential by a factor of hundreds or more.

Chris Phoenix gets it. Jürgen Altmann gets it. Mark Gubrud gets it. Thomas Vandermolen gets it. Eric Drexler seems to have gotten it a long time ago. Michio Kaku, Annalee Newitz, and many others have called molecular nanotechnology "the next Industrial Revolution".

When will others get it? Here's a quote from the CRN page on the dangers of molecular nanotechnology:

Molecular manufacturing raises the possibility of horrifically effective weapons. As an example, the smallest insect is about 200 microns; this creates a plausible size estimate for a nanotech-built antipersonnel weapon capable of seeking and injecting toxin into unprotected humans. The human lethal dose of botulism toxin is about 100 nanograms, or about 1/100 the volume of the weapon. As many as 50 billion toxin-carrying devices--theoretically enough to kill every human on earth--could be packed into a single suitcase. Guns of all sizes would be far more powerful, and their bullets could be self-guided. Aerospace hardware would be far lighter and higher performance; built with minimal or no metal, it would be much harder to spot on radar. Embedded computers would allow remote activation of any weapon, and more compact power handling would allow greatly improved robotics. These ideas barely scratch the surface of what's possible.

Will weapons like these in the hands of every backwater terrorist and militia lead to a future that is "slow" or "boring"? They could lead to a future where numerous major cities become essentially uninhabitable.
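CRN's numbers hold up to a quick sanity check. Here's the back-of-envelope arithmetic (my calculation; it assumes the toxin is roughly the density of water and ignores packing efficiency):

    # Sanity-checking the CRN quote's figures (illustrative arithmetic).
    toxin_grams = 100e-9                  # 100 ng lethal dose, per the quote
    toxin_vol_m3 = toxin_grams * 1e-6     # at ~1 g/cm^3, grams -> m^3
    weapon_vol_m3 = 100 * toxin_vol_m3    # toxin is ~1/100 of weapon volume
    side_microns = weapon_vol_m3 ** (1 / 3) * 1e6
    fleet_vol_m3 = 50e9 * weapon_vol_m3   # 50 billion devices
    print(f"one weapon: ~{side_microns:.0f} microns across")   # ~215
    print(f"50 billion weapons: ~{fleet_vol_m3:.1f} m^3")      # ~0.5
    # A large suitcase holds roughly 0.25 m^3, so the "single suitcase"
    # claim is the right order of magnitude.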

Here's a potentially illuminating quote:

"Revolutions are cruel precisely because they move too fast for those whom they strike."
Jacob Bronowski

14 May 2010

John Horgan Attacks the “Artificial Brain” Projects

John Horgan, the eminent science journalist who previously called me a cultist, is back on track with a guest post at ScientificAmerican.com titled "Artificial brains are imminent... not!" And hey, guess what -- I totally agree with him. (Especially as far as the "cat brain" is concerned.) If AI comes about within the next two decades, I wager that it will be because we discovered the operating principles of intelligence and instantiated them in a machine, not because we copied a brain.

(Additional note: Markram has claimed that he has simulated a neocortical column with biologically realistic fidelity, but until he demonstrates this more thoroughly, there is no way to know whether the claim is true. A commenter, Jordan, pointed out that Horgan misrepresented Markram's attitude.)

Here's a big quote from John's post:

Sejnowski is a very smart guy, whom I've interviewed several times over the years about the mysteries of the brain. But I respectfully--hell, disrespectfully, Terry can take it--disagree with his prediction that artificial brains are imminent. Sejnowski's own article shows how implausible his prediction is. He describes two projects--both software programs running on powerful supercomputers--that represent the state of the art in brain simulation. On the one hand, you have the "cat brain" constructed by IBM researcher Dharmendra Modha; his simulation contains about as many neurons as a cat's brain does, organized into roughly the same architecture. On the other hand, you have the Blue Brain Project of Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne.

Markram's simulation contains neurons and synaptic connections that are much more detailed than those in Modha's program. Markram recently bashed Modha for "mass deception," arguing that Modha's neurons and synapses are so simple that they don't deserve to be called simulations. Modha's program is "light years away from a cat brain, not even close to an ant's brain in complexity," Markram complained.

Talk about the pot calling the kettle black. Last year Markram stated, "It is not impossible to build a human brain and we can do it in 10 years." If Modha's simulation is "light years" away from reality, so is Markram's. Neither program includes "sensory inputs or motor outputs," Sejnowski points out, and their neural-signaling patterns resemble those of brains sleeping or undergoing an epileptic seizure. In other words, neither Modha nor Markram can mimic even the simplest operations of a healthy, awake, embodied brain.

The simulations of Modha and Markram are about as brain-like as one of those plastic brains that neuroscientists like to keep on their desks. The plastic brain has all the parts that a real brain does, it's roughly the same color and it has about as many molecules in it. OK, say optimists, the plastic brain doesn't actually perceive, emote, plan or decide, but don't be so critical! Give the researchers time! Another analogy: Current brain simulations resemble the "planes" and "radios" that Melanesian cargo-cult tribes built out of palm fronds, coral and coconut shells after being occupied by Japanese and American troops during World War II. "Brains" that can't think are like "planes" that can't fly.

Yes -- especially with respect to IBM. (As I noted above, I don't know all the facts about Markram's simulation because he hasn't demonstrated it.) Earlier, on May 3rd, when I criticized the IBM "cat brain" nonsense here on this blog, an IBM employee in the comments went ballistic with ad hominem attacks, completely avoiding any discussion of the science. One quick point about John's claim that the "cat brain" is organized into "roughly the same architecture" as a cat's: not really. Even the paper doesn't claim that.
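For readers wondering what separates "simple" neurons from detailed ones: at the simple end of the spectrum, a point-neuron model like the leaky integrate-and-fire unit below is one state variable and one update rule per neuron, while biologically detailed models track thousands of compartment and ion-channel variables per neuron. A minimal sketch of the simple end (illustrative only, not IBM's actual model):

    import random

    # Leaky integrate-and-fire: about as simple as a "neuron" gets.
    # One state variable (membrane voltage, in volts) per neuron.
    dt, tau = 1e-3, 20e-3                 # 1 ms steps, 20 ms time constant
    v_rest, v_thresh, v_reset = -0.070, -0.054, -0.080
    v, spikes = v_rest, []
    for step in range(1000):              # one second of simulated time
        drive = 0.002 * random.random()   # toy random input, volts per step
        v += dt / tau * (v_rest - v) + drive   # leak toward rest, add input
        if v >= v_thresh:                 # threshold crossing = a "spike"
            spikes.append(step * dt)
            v = v_reset
    print(f"{len(spikes)} spikes in 1 s of simulated time")

Wire up a billion units like this and you have something Modha-scale; it still tells you nothing about how a cat sees a mouse.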

There are many important reasons why this issue is worth addressing:

1) Overblown claims today lead to public disillusionment tomorrow. For people who care about the future of real artificial intelligence, like me, there is already enough disillusionment, and I'm not going to stand around while another cycle of overblown AI promises occurs.

2) Simple dishonesty. The people behind the projects know that the "brains" lack the low-level structure that makes even the most rudimentary forms of thinking possible.

3) Scientific support. Even though I'm not a scientist myself, I know that any neuroscientist exposed to the details of these simulations would realize that they are not even vaguely close to biological brains.

4) I believe that the future of AI is in uncovering and implementing the operating principles of intelligence rather than copying the brain, like how the Wright Brothers uncovered and implemented the operating principles of flight rather than copying a pigeon.

5) If we can't question this obvious farce now, then what will we do years down the road, when more subtle AI deceptions are foisted upon science-ignorant journalists and the public?

In his article, John Horgan even brings a smile to my face when he shows that he is not an ideologue:

Go back a decade or two--or five or six--and you will find artificial intelligence pioneers like Marvin Minsky and Herbert Simon proclaiming, because of exciting advances in brain and computer science: Artificial brains are coming! They're going to save us! Or destroy us! Someday, these prophecies may come true, but there is no reason to believe them now.

Someday, these prophecies might come true... did you hear that? Sweet validation. This statement is consistent with my research, which suggests that the number of academics who have published papers or books arguing that AI is impossible in principle is roughly zero. (What Computers Can't Do is the only candidate that comes to mind, but I haven't read it.) Mr. Horgan is not an academic, but he is a smart guy, so I would expect him to acknowledge the possibility of AI in principle while not being duped by claims that neural networks displaying oscillations represent a major advance in whole brain emulation.

The assumption that each neuron is an on-off switch performing exactly 200 operations per second is wrong on so many levels. The answer is more subtle -- there could be single neurons that require tens of thousands of operations per second to simulate, and perhaps entire neural aggregations that can be modeled with just one operation per second. The truth will be complex, nothing like the superficial calculation, "100 billion neurons times 1,000 synapses times 200 spikes per second equals 2 × 10^16 ops/sec of computing power equals the human brain". In the comments, Jordan pointed out that Markram uses an entire CPU for each neuron, so that may be on the right track, but we still can't confirm it because nothing has been released.
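To see how wildly the estimate swings with the choice of assumptions, vary the per-neuron cost (every figure below is illustrative, not an established number):

    # How much computation "equals the human brain"? It depends entirely
    # on what you assume one neuron costs to simulate.
    neurons = 1e11                          # ~100 billion neurons
    scenarios = {
        "on/off switch at 200 Hz":                 200,
        "1,000 synapses x 200 Hz":                 1_000 * 200,
        "detailed biophysics (~1 CPU per neuron)": 1e9,
    }
    for name, ops_per_neuron in scenarios.items():
        print(f"{name}: ~{neurons * ops_per_neuron:.0e} ops/sec")
    # -> 2e+13, 2e+16, and 1e+20: estimates spanning nearly seven orders
    #    of magnitude, from "one 2010 supercomputer" to "far beyond any
    #    existing machine".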

Note that my skepticism of these brain-building projects is inconsistent with the popular misconception of Singularitarians as insufferable nerds awaiting a Techno Rapture. So is my skepticism of claims of accelerating technological progress.

Filed under: AI 12 Comments
14 May 2010

Nature: “A proximity-based programmable DNA nanoscale assembly line”

io9 has coverage of Nadrian Seeman's latest work in nanotechnology: the first nanoscale assembly line! This is big news. If you were at Singularity Summit 2009 back in October and listening very carefully, you might have heard Seeman mention this device seven months in advance of its formal announcement! Now that's foresight.

The full Nature article describing the device is here.

Filed under: nanotechnology 4 Comments
14 May 2010

Survey: Hiding Risks Can Hurt Public Support for Nanotechnology

Here's an interesting news item from Eurekalert:

A new national survey on public attitudes toward medical applications and physical enhancements that rely on nanotechnology shows that support for the technology increases when the public is informed of the technology's risks as well as its benefits – at least among those people who have heard of nanotechnology. The survey, which was conducted by researchers at North Carolina State University and Arizona State University (ASU), also found that discussing risks decreased support among those people who had never previously heard of nanotechnology – but not by much.

"The survey suggests that researchers, industries and policymakers should not be afraid to display the risks as well as the benefits of nanotechnology," says Dr. Michael Cobb, an associate professor of political science at NC State who conducted the survey. "We found that when people know something about nanotechnologies for human enhancement, they are more supportive of it when they are presented with balanced information about its risks and benefits."

The survey was conducted by Cobb in collaboration with Drs. Clark Miller and Sean Hays of ASU, and was funded by the Center for Nanotechnology in Society at ASU.

However, talking about risks did not boost support among all segments of the population. Those who had never heard of nanotechnology prior to the survey were slightly less supportive when told of its potential risks.

In addition to asking participants how much they supported the use of nanotechnology for human enhancements, they were also asked how beneficial and risky they thought these technologies would be, whether they were worried about not getting access to them, and who should pay for them – health insurance companies or individuals paying out-of-pocket. The potential enhancements addressed in the survey run the gamut from advanced cancer treatments to bionic limbs designed to impart greater physical strength.

If you are someone who writes or speaks on the topic of nanotechnology, this means that you shouldn't be afraid to discuss the risks. In fact, mentioning the risks should be part of your default spiel. Engines of Creation was not afraid to discuss some of the risks. The Center for Responsible Nanotechnology, when it was more active, had a crucial role in making the risks of nanotechnology more widely known, but the vast majority of contemporary organizations and publications that discuss nanotechnology shy away from the immense risks.

I've previously written at length about the dangers of advanced nanotechnology, and frequently recommend the book Military Nanotechnology as a guide to some of these risks. Essential essays or pages include "Molecular Nanotechnology and the World System" by Tom McCarthy, "Nanotechnology and International Security" by Mark Gubrud, "Military, Arms Control, and Security Aspects of Nanotechnology" by Altmann and Gubrud, CRN's dangers page, and my page enumerating additional dangers.

Next time you're in the audience at a talk or see a blog post extolling the benefits of nanotechnology (especially molecular nanotechnology), consider making a comment that you'd like to see more thought on the risks. I believe that some of the purveyors of molecular nanotechnology are actively avoiding discussing its grave potential risks.

Filed under: nanotechnology 1 Comment
13 May 2010

Gary Marcus at Singularity Summit 2009: The Fallibility and Improvability of the Human Mind

Gary Marcus at Singularity Summit 2009 -- The Fallibility and Improvability of the Human Mind from Singularity Institute on Vimeo.

Gary Marcus is Professor of Psychology at New York University, director of the NYU Center for Child Language, and author of The Birth of the Mind and Kluge.

12 May 2010

Dr. Brian Wowk: Suspended Animation by Vitrification

This is from the Alcor video gallery.

Filed under: cryonics 3 Comments
10 May 2010

A Little Perspective from the Deep Past

Check out the short post by Reason at Fight Aging, "A Little Perspective from the Deep Past":

The growth in health, welfare, and wealth of 18th century Europe was a glittering spire when set against any measure of the grand history of humanity. A pinnacle set abruptly at the end of a very long, very gentle upward slope.


10 May 2010

Professor John McGinnis on Friendly AI at the Northwestern University Law Colloquy

Found via a Google Alert for "Friendly AI" on Concurring Opinions, a legal scholarship blog:

Professor John McGinnis discusses a recent major media interest, Artificial Intelligence, and what the best government response to its development should be. He argues that, rather than prohibition or heavy regulation, the government should support the development of so-called "friendly AI," to both prevent potential threats and develop the many benefits of it.

Here is the essay, and a quote from the beginning:

These (New York Times) articles encapsulate the twin fears about AI that may impel regulation in this area--the existential dread of machines that become uncontrollable by humans and the political anxiety about machines' destructive power on a revolutionized battlefield. Both fears are overblown. The existential fear is based on the mistaken notion that strong artificial intelligence will necessarily reflect human malevolence.

No. The "existential fear" is based on the legitimate notion that universal drives towards acquiring greater resources and control will emerge in AIs as subgoals of an extremely wide range of possible supergoals. No matter what your goal is, greater power can help facilitate that goal. Only when not acquiring unlimited power is an explicit part of the supergoal can we ever hope that self-improving AIs will be self-limiting. Otherwise, they will tend to expand without limit. This analysis is not based on anthropomorphic thinking, but the cold logic of subgoal formation in a cognitive environment with much more flexibility than the Homo sapiens mind.
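The subgoal logic can be made concrete with a toy expected-utility calculation (a cartoon of the argument, not a model of any real AI architecture): whatever the terminal goal, if resources raise the probability of achieving it, a rational planner rates resource acquisition as the best first move.

    # Toy illustration of instrumental convergence. All numbers invented.
    def p_success(resources):
        # More resources -> higher odds of achieving the goal.
        return resources / (resources + 10.0)

    def expected_utility(action, payoff, resources=1.0):
        if action == "acquire resources":
            resources *= 10                   # grab power first...
        return payoff * p_success(resources)  # ...then pursue the goal

    goals = [("cure cancer", 100), ("make paperclips", 1), ("prove theorems", 7)]
    for goal, payoff in goals:
        best = max(["proceed directly", "acquire resources"],
                   key=lambda a: expected_utility(a, payoff))
        print(f"{goal}: best first move is to {best}")
    # Every goal, whatever its payoff, picks "acquire resources" first.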

There may be a bit of a challenge here on McGinnis' end, because the concepts detailing the risk are drawn from decision theory and cognitive science, fields that lawyers tend not to be familiar with.

Filed under: friendly ai 5 Comments
6 May 2010

Charles Lindbergh: Early Transhumanist

I somehow missed this when it was news in 2008. Apparently Charles Lindbergh wanted to live forever and become a cyborg. Here's the beginning of the story, as told by the BBC in "Lindbergh's deranged quest for immortality":

In the 1930s, after his historic flight over the Atlantic, Lindbergh hooked up with Alexis Carrel, a brilliant surgeon born in France but who worked in a laboratory at the Rockefeller Institute in Manhattan. Carrel - who was a mystic as well as a scientist - had already won a Nobel Prize for his pioneering work on the transplantation of blood vessels. But his real dream was a future in which the human body would become, in Friedman's words, "a machine with constantly reparable or replaceable parts".

This is where Lindbergh entered the frame. Carrel hoped that his own scientific nous combined with Lindbergh's machine-making proficiency (Lindbergh had, after all, already helped design a plane that flew non-stop to Paris) would make his fantasy about immortal machine-enabled human beings a reality.

Lindbergh also admired the Nazis, and Carrel was an old-school eugenicist. At this point, transhumanists' critics will say, "Lindbergh was a transhumanist, and admired the Nazis, therefore all transhumanists admire Nazis, no matter what they say. Nyah nyah nyah!"

The truth is more along the lines described by Khannea Suntzu in a recent blog post:

Nevertheless, I'd be the first to admit I have "neo-eugenic" sympathies, but not in the manner described above. I repeat: nowhere near the eugenic ideals held by fascists. Contrast the historical eugenics movement with the following, and see if you can spot the differences:

Parents should be able to self-determine their own children, free to imbue them with ability, or free to not imbue them with ability.
Society should intervene if parents abuse or neglect their children.
All people are morally obliged to care for the disabled and vulnerable and provide them with a humane and dignified existence.
Withholding safe treatments that cure hereditary genetic disabilities is a form of neglect.

Here is the somewhat longer version.

Continuing in her description of transhumanist attitudes towards our increasing genetic choices, she writes, in response to a critic:

The old eugenical movement is dead, and transhumanism is not a continuation of that historical monstrosity. As in -- not by a long shot. The author of the article, "LVB", can beat his chest all he likes; repeating it over and over doesn't make it any less of a lie. The old eugenicists were racists who regarded other races and the disabled as an undesirable subspecies of humanity that needed expedient extermination. Transhumanism doesn't make many statements about races -- it just charts human imperfections and proposes how to improve them, in a climate of maximum personal freedom. Sort of like granting everyone the personal freedom to have or not to have smallpox. Comparing transhumanism and eugenics in this regard is like comparing dolphins and fish -- since they both swim in the sea. It's demonization, the rather familiar "guilt by association" shtick LVB has been using like a jackhammer throughout the article.

More bad form!

My personal position is separate from all that -- as a disabled person myself, I often would have preferred that my parents had been able to screen my genes before conception, remove any of the serious disorders that plague my life, and have me born "fixed". I support giving prospective parents the freedom (and hopefully the wisdom) to have sound, safe, and tested therapies remove clearly pathological qualities from the genomes of their children. I'd even go beyond that and say that not using these treatments is a form of severe neglect, and in extreme cases child abuse. A parent who knows he or she has a heritable genetic disease and still breeds without consideration for the life of the child is a pretty awful parent and an awful human being -- and an irresponsible citizen who knowingly saddles society with significant costs in terms of care. Parents rarely pay the full costs involved when a child is born with a severe birth defect; more often than not, society is left holding the bill. Someone who takes that risk is doing something really wrong. We should always care for those disabled who are born, but I'd rather they had all been born as healthy as Brangelina, and then some.

There may be quite a bit of debate in the future about exactly which characteristics qualify as "genetic disease" and which are just "unique traits", but just because the future poses new questions does not mean we should run away from it.

Filed under: transhumanism 40 Comments