A Christian Perspective on the Singularity Movement

This was published late last year at Metanexus by the foundation’s founder, William Grassie: “Millennialism at the Singularity: Reflections on Metaphors, Meanings, and the Limits of Exponential Logic”. Here’s a quote to pique interest:

This is a very technical discussion in computer science, but the short of it is that many problems simply don’t compute. There are also other theoretical and practical limits to computation. These are called intractable problems because they “require hopelessly large amounts of time even for relatively small inputs.” Computer encryption depends on this second fact. It may be that the genome, in dynamic relationship with proteins and its environment, is in some sense “encrypted.” It may be that the mind-brain is similarly “encrypted.” In which case, we will never be able to fully understand, let alone reliably control, life and mind no matter how exponentially our scientific knowledge grows nor how fast technological know-how accelerates.
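
To make the intractability point concrete, here’s a minimal sketch (my illustration, not from Grassie’s essay) of why encryption leans on this fact: the worst-case work to brute-force an n-bit key doubles with every added bit, so even “relatively small inputs” produce hopeless running times. The trillion-guesses-per-second rate is an assumption chosen for round numbers.

```python
# Minimal sketch: brute-forcing an n-bit key takes ~2**n trials in
# the worst case, so running time grows exponentially with key size.

def brute_force_trials(key_bits: int) -> int:
    """Worst-case number of guesses needed to recover an n-bit key."""
    return 2 ** key_bits

TRIALS_PER_SECOND = 10 ** 12  # assumed rate: one trillion guesses/sec

for bits in (40, 80, 128, 256):
    seconds = brute_force_trials(bits) / TRIALS_PER_SECOND
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{bits:>3}-bit key: ~{years:.3g} years of brute force")
```

Even at that generous rate, a 128-bit key takes on the order of 10^19 years; that is the sense in which some problems “simply don’t compute” on any realistic timescale.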

Here’s another quote:

Of course, anytime we talk about the future, our hopes or our fears, we are in the realm of religions.

Nowhere is …

Read More

Dangers of Molecular Nanotechnology, Again

Over at IEET, Jamais Cascio and Mike Treder essentially argue that the future will be slow and boring, or rather will seem slow and boring, because people will get used to advances as quickly as they occur. I heartily disagree. There are at least three probable events that could make the future seem traumatic, broken, out of control, and not slow by anyone’s standards: 1) a Third World War or an atmospheric EMP detonation, 2) an MNT revolution with accompanying arms races, and 3) superintelligence. In response to Jamais’ post, I commented:

I disagree. I don’t think that Jamais understands how abrupt an MNT revolution could be once the first nanofactory is built, or how abrupt a hard takeoff could be once a human-equivalent artificial intelligence is created.

Read Nanosystems, then “Design of a Primitive Nanofactory”, and look where nanotechnology is today.

For AI, you can do simple math that shows once an AI can earn enough money to pay for its own upkeep and then some, it would quickly gain …
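
For illustration, here’s a hypothetical back-of-the-envelope version of that math; the starting capital, return, and upkeep figures below are my assumptions, not numbers from the original comment. Once daily earnings exceed daily upkeep and the surplus is reinvested in additional capacity, resources compound exponentially.

```python
# Hypothetical sketch: an AI that earns more than its upkeep and
# reinvests the surplus grows its resources by compound interest.

def resources_after(days: int, capital: float = 1_000.0,
                    daily_return: float = 0.010,   # 1.0% earned per day
                    daily_upkeep: float = 0.002) -> float:
    """Capital after `days` of reinvesting net (return - upkeep) earnings."""
    for _ in range(days):
        capital += capital * (daily_return - daily_upkeep)
    return capital

for d in (30, 365, 5 * 365):
    print(f"day {d:>4}: ${resources_after(d):,.0f}")
```

At a net 0.8% per day, $1,000 becomes about $18,000 in a year and roughly $2 billion in five; the particular numbers are arbitrary, but the shape of the curve is the point.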

Read More

John Horgan Attacks the “Artificial Brain” Projects

John Horgan, the eminent science journalist who previously called me a cultist, is back on track with a guest post at ScientificAmerican.com titled “Artificial brains are imminent…not!” And hey, guess what: I totally agree with him. (Especially as far as the “cat brain” is concerned.) If AI comes about within the next two decades, I wager it will be because we discovered the operating principles of intelligence and instantiated them in a machine, not because we copied a brain.

(Additional note: Markram has claimed to have simulated a neocortical column with biologically realistic fidelity, but until he demonstrates this more thoroughly, there is no way to know whether the claim is true. A commenter, Jordan, pointed out that Horgan misrepresented Markram’s attitude.)

Here’s a big quote from John’s post:

Sejnowski is a very smart guy, whom I’ve interviewed several times over the years about the mysteries of the brain. But I respectfully–hell, disrespectfully, Terry can take it–disagree with his prediction that artificial brains are imminent. Sejnowski’s own article shows how …

Read More

Nature: “A proximity-based programmable DNA nanoscale assembly line”

io9 has coverage of Nadrian Seeman’s latest work in nanotechnology: the first nanoscale assembly line! This is big news. If you were at Singularity Summit 2009 back in October and listening very carefully, you might have heard Seeman mention this device seven months in advance of its formal announcement! Now that’s foresight.

The full Nature article describing the device is here.

Read More

Survey: Hiding Risks Can Hurt Public Support for Nanotechnology

Here’s an interesting news item from EurekAlert:

A new national survey on public attitudes toward medical applications and physical enhancements that rely on nanotechnology shows that support for the technology increases when the public is informed of the technology’s risks as well as its benefits – at least among those people who have heard of nanotechnology. The survey, which was conducted by researchers at North Carolina State University and Arizona State University (ASU), also found that discussing risks decreased support among those people who had never previously heard of nanotechnology – but not by much.

“The survey suggests that researchers, industries and policymakers should not be afraid to display the risks as well as the benefits of nanotechnology,” says Dr. Michael Cobb, an associate professor of political science at NC State who conducted the survey. “We found that when people know something about nanotechnologies for human enhancement, they are more supportive of it when they are presented with balanced information about its risks and benefits.”

The survey was conducted by Cobb in collaboration with Drs. Clark Miller and …

Read More

Professor John McGinnis on Friendly AI at the Northwestern University Law Colloquy

Found via a Google Alert for “Friendly AI” on Concurring Opinions, a legal scholarship blog:

Professor John McGinnis discusses a recent major media interest, Artificial Intelligence, and what the best government response to its development should be. He argues that, rather than prohibition or heavy regulation, the government should support the development of so-called “friendly AI,” to both prevent potential threats and develop the many benefits of it.

Here is the essay, and a quote from the beginning:

These (New York Times) articles encapsulate the twin fears about AI that may impel regulation in this area–the existential dread of machines that become uncontrollable by humans and the political anxiety about machines’ destructive power on a revolutionized battlefield. Both fears are overblown. The existential fear is based on the mistaken notion that strong artificial intelligence will necessarily reflect human malevolence.

No. The “existential fear” is based on the legitimate notion that universal drives towards acquiring greater resources and control will emerge in AIs as subgoals of an extremely wide range of possible …

Read More

Charles Lindbergh: Early Transhumanist

I somehow missed this when it was news in 2008. Apparently Charles Lindbergh wanted to live forever and become a cyborg. Here’s the beginning of the story, as told by the BBC in “Lindbergh’s deranged quest for immortality”:

In the 1930s, after his historic flight over the Atlantic, Lindbergh hooked up with Alexis Carrel, a brilliant surgeon born in France but who worked in a laboratory at the Rockefeller Institute in Manhattan. Carrel – who was a mystic as well as a scientist – had already won a Nobel Prize for his pioneering work on the transplantation of blood vessels. But his real dream was a future in which the human body would become, in Friedman’s words, “a machine with constantly reparable or replaceable parts”.

This is where Lindbergh entered the frame. Carrel hoped that his own scientific nous combined with Lindbergh’s machine-making proficiency (Lindbergh had, after all, already helped design a plane that flew non-stop to Paris) would make his fantasy about immortal machine-enabled human beings a reality.

Lindbergh also admired the Nazis, and Carrel was an …

Read More