Lawrence Lessig Abandons Transparency Fundamentalism, Finally

Oh my god… unlimited transparency, openness, and “democratization” are not automatically good things? That’s the conclusion that Lawrence Lessig seems to have finally come to, years and years late, in a recent article at The New Republic. Here’s a quote:

How could anyone be against transparency? Its virtues and its utilities seem so crushingly obvious. But I have increasingly come to worry that there is an error at the core of this unquestioned goodness. We are not thinking critically enough about where and when transparency works, and where and when it may lead to confusion, or to worse. And I fear that the inevitable success of this movement — if pursued alone, without any sensitivity to the full complexity of the idea of perfect openness — will inspire not reform, but disgust. The “naked transparency movement,” as I will call it here, is not going to inspire change. It will simply push any faith in our political system over the cliff.

You have “come to worry” only now? At the end of …

Read More

The Miami Herald’s Nutty Nanotech Intro: “Tiny Technology May Yield Major Finds — and Possible Perils”

See here for a humorous and somewhat sensationalistic introduction to nanotechnology.

Point 1: Sci-fi authors should not be quoted in an ostensibly scientific introduction to anything unless their statements have been endorsed by experts.

Point 2: Grey goo is a small issue. This is Nanotechnology 101. Why don’t they teach this in nanotechnology classes in schools?

Point 3: Quote your sources. The article says: “Some fear toxicity to human lungs as lethal as that from asbestos. Others fear mini-weapons of mass destruction in the hands of terrorists.” Who are “some” and “others”? Why do reporters obfuscate or just make things up to sell a story?

In any case, I hope that there are continued investigations into the possible toxic properties of nanoparticles, but sensationalistic comparisons to asbestos are probably not helpful. As for grey goo, numerous articles have been published since 2003 and earlier debunking this risk, especially from the Center for Responsible Nanotechnology. Any scientists …

Read More

Mass Production of Artificial Skin Within Two Years?

There’s a news item on work towards the mass production of artificial skin from May that I missed. The Fraunhofer Institute for Interfacial Engineering and Biotechnology is working on the process. They expect to finish their “skin factory” about two years from when the article was published, so approximately May 2011. Good luck! Artificial skin could have a particularly pleasant utilitarian impact because it might free animals from some forms of chemical testing.

Read More

Courtney Boyd Myers: “The Transhumanists Arrive”

Courtney Boyd Myers has another short article up about the Summit, this time at Forbes. It is interesting how often she uses the T word given that, as far as I can tell, the word appeared nowhere in our program and wasn’t invoked onstage. (I might have missed it.) The use of the word must, then, be due to independent inference — putting the pieces together and realizing the obvious fact that many of the people attending the Summit have a transhumanist bent or can be described as transhumanists (in some cases, whether they like it or not). It’s unusual, though, how many transhumanists are still afraid of calling themselves transhumanists even though they are de facto transhumanists. It reminds me of the phenomenon of obvious goths who insist they are not goths. They keep insisting, yet no one is fooled.

Read More

This is Your Brain on Cryonics

While we’re on the topic of cryonics, I am reminded of a letter I wrote to Alcor a while back:


I’m a cryonicist and life extension advocate. To help promote the idea of cryonics, I think it would be a good idea to have available on the Internet micrograph images of frozen and unfrozen brain tissue, to show the difference. Do you have any available, or know where I could get some?

Thank you, Michael

Dr. Brian Wowk kindly responded:

Hi Michael. There are lots of cryopreserved brain micrographs on the Alcor website. Some of them are after rewarming, and others were obtained actually in the cryopreserved state by a technique called freeze-substitution.

Regards, Brian

From the quotes page, here is an image of vitrified hippocampus:

Read More

Joshua Fox Answers 10 Questions

Joshua Fox steps forward as the first SIAI supporter to answer Popular Mechanics’ 10 questions. Regarding the notion of missing the Singularity, a quote by Dan Clemmensen comes to mind:

Sorry Arthur, but I’d guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon.

A superintelligent AI might actually choose to announce itself much more subtly — we don’t know. I just doubt it’s something we’d miss.

Read More

Survivalist References

Since Popular Mechanics is focusing on survivalism, now is a good time to reference Nuclear War Survival Skills and Patriots. The latter was written by a right-wing Christian bigot, so apply salt as necessary, but many of its logistical points address what would be necessary to survive if there were a nuclear war or a hydrogen bomb were detonated over the US (EMP, lol!) It would be hard. In fact, I know it’s impossible for me to both maximize my effectiveness toward the Singularity and care too much about survivalism. Survivalism is important to consider, however, because the fact is that human society and civilization are delicate things. Food and water go away, and you have millions of psychos — fast.

For a real underground survivalist text, see The Killer Karavans by Kurt Saxon. Again, written by a bigot, but still, very …

Read More

Popular Mechanics on Singularity Summit 2009

Popular Mechanics has coverage of Singularity Summit. Slightly weird coverage, but, uh, whatever. Having been exposed to the idea of the Singularity since 2001, I consider it normal and boring (even annoying) rather than weird or fantastic. Homo sapiens surpassed lesser intelligences — why is it such a shocker that Homo sapiens will eventually be surpassed intellectually? I guess it is the hard takeoff version that inspires evaluations of weirdness. Well, if it were physically impossible to develop manufacturing and robotics technology vastly more powerful than human beings, then there would be no hard takeoff, but that doesn’t seem to be the case. Humans are midway on the “Great Chain of Being,” which makes sense given that we should expect ourselves to be “typical” intelligences for anthropic reasons.


You can see how believable and even plausible a technological singularity seems once you take a few things for granted. If it were possible to improve your memory with a digital device, for example, then everybody would want one, because not having such a device would …

Read More

Singularity Skeptic at

Here is a skeptical view on the Singularity by “fledgling otaku”, from about a year and a half ago.

The article actually does a nice job of debunking emergent AI from neural nets. However, it also makes the mistake of assuming an AI must be just like a human brain, which is like thinking a plane ought to be just like a bird. The author even admits to a partly religious view of the human mind.

Then, a major portion of the critique expresses discomfort with uploading. I suspect that at least 50% of humanity will require actually experiencing uploading before they feel comfortable with it. This is reasonable.

Read More

Raeflin On the Utility of Game-Changing Technologies

Raeflin has some thoughts on the comparative utility of game-changing technologies.

The point about Kurzweil is particularly important — Kurzweil is not working directly towards AGI, though he has helped sponsor AGI conferences. However, SIAI Director of Research Ben Goertzel, Dr. Itamar Arel, Associate Professor of Electrical Engineering and Computer Science at The University of Tennessee, and programmer Scott Livingston will be meeting soon with other researchers to formulate a roadmap to AGI. This was mentioned in their respective talks at Singularity Summit 2009.

SIAI also has an internal research project that has been ongoing for almost a decade, which strongly features attempted improvements on decision theory, including timeless decision theory and reflective decision theory. Developing a mathematically consistent improved decision theory is essential for AGI because the only alternative is to throw a bunch of heuristics together, like Eurisko, which isn’t …

Read More