Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

14Oct/09

Lawrence Lessig Abandons Transparency Fundamentalism, Finally

Oh my god... unlimited transparency, openness, and "democratization" are not automatically good things? That's the conclusion that Lawrence Lessig seems to have finally come to, years and years late, in a recent article at The New Republic. Here's a quote:

How could anyone be against transparency? Its virtues and its utilities seem so crushingly obvious. But I have increasingly come to worry that there is an error at the core of this unquestioned goodness. We are not thinking critically enough about where and when transparency works, and where and when it may lead to confusion, or to worse. And I fear that the inevitable success of this movement -- if pursued alone, without any sensitivity to the full complexity of the idea of perfect openness -- will inspire not reform, but disgust. The "naked transparency movement," as I will call it here, is not going to inspire change. It will simply push any faith in our political system over the cliff.

You have "come to worry", only now? At the end of 2009? Not years ago, when the arguments were already out there that maybe transparency should be conducted intelligently and selectively rather than applied universally and unconditionally? You are the intellectual leader of the "naked transparency movement". You'd better speak more harshly to your followers in numerous other articles, or they won't get the point.

I am slapping my forehead right now. Transparency and openness have become a cult. Corporate marketing campaigns pander to this cult relentlessly. It is the ultimate ego trip. The thinking goes like this: everything is better when the common person, namely me, can stick my fingers in every pie and contribute to every decision-making process. This way of thinking is profoundly wrong. It assumes that everyone is equally good at everything. There is a reason we have experts and specialists. Though in some domains, such as clinical psychology, experts perform no better than anyone else, in many domains expert knowledge and skills matter.

Lawrence Lessig has been the #1 promoter of "perfect openness" pursued "without any sensitivity to the full complexity of the idea" for years. Now he is backpedaling. For an example of Lessig's run-amok openness obsession, see his debate with Andrew Keen, where he behaves like a rude asshole. The core of Lessig's fanbase is a culture of nerds who think that everything is better when they personally get to control part of it.

Several months ago, I remember a transhumanist blogger remarking "what about the power of crowdsourcing?", or "so much for the power of crowdsourcing", or something along those lines, when a democratic poll on some topic obviously produced a crappy answer. It's as if he were genuinely shocked that "crowdsourcing" (a silly buzzword if I ever saw one) didn't automatically lead to the best answer. Surprise! We're in an era and memetic environment where even suggesting that unlimited transparency and "crowdsourcing" aren't obviously good things is certain to generate accusations of elitism and even Luddism.

One argument goes like this: the Internet has been making things more open, and the Internet has made a lot of things better. Therefore, more openness is always better, and both openness and betterness will continue indefinitely and inevitably. This (mistaken) way of thinking is called Whig history. The truth is that the Internet has made certain things more open and certain things better, but the correlation between the two is loose, and just because something is a historical trend doesn't make it benevolent.

Would you prefer the blueprints for an atom bomb to be transparent? How about the 1918 Spanish flu genome? The latter has already happened. Some believe the former circulate as well.

13Oct/09

The Miami Herald's Nutty Nanotech Intro: "Tiny Technology May Yield Major Finds -- and Possible Perils"

See here for a humorous and somewhat sensationalistic introduction to nanotechnology.

Point 1: Sci-fi authors should not be quoted in an ostensibly scientific introduction to anything unless their statements have been endorsed by experts.

Point 2: Grey goo is a small issue. This is Nanotechnology 101. Why don't they teach this in nanotechnology classes in schools?

Point 3: Quote sources. They say: "Some fear toxicity to human lungs as lethal as that from asbestos. Others fear mini-weapons of mass destruction in the hands of terrorists." Who are "some" and "others"? Why do reporters obfuscate or just make things up to sell a story?

In any case, I hope that there are continued investigations into the possible toxic properties of nanoparticles, but sensationalistic comparisons to asbestos are probably not helpful. As for grey goo, numerous articles going back to 2003 and earlier have debunked this risk, especially from the Center for Responsible Nanotechnology. Any scientist or researcher interested in productive nanosystems knows that systems of specialized, stationary assemblers would make much more sense (and be far cheaper and easier to engineer) than free-floating self-replicating assemblers.
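To make that design point concrete, here is a toy sketch (all quantities invented purely for illustration; it models no real proposal): an unchecked population of free-floating replicators grows exponentially, while a stationary assembler array produces at a constant rate and simply stops when its feedstock is cut.

```python
# Toy comparison (all numbers invented): free-floating self-replicators
# versus a stationary assembler array. The replicator population doubles
# each period; the factory runs at a constant rate and halts the moment
# its feedstock is cut off.

def replicator_population(initial: int, doubling_periods: int) -> int:
    # Unchecked self-replicators double every period.
    return initial * 2 ** doubling_periods

def factory_output(assemblers: int, units_per_period: int, periods: int) -> int:
    # A stationary array produces linearly and is trivially shut down.
    return assemblers * units_per_period * periods

print(replicator_population(initial=1, doubling_periods=40))
# 1099511627776 -- about a trillion, from a single seed unit

print(factory_output(assemblers=10_000, units_per_period=100, periods=40))
# 40000000 -- large, but bounded and controllable
```

The exponential case is the grey goo scenario; the linear case is what anyone actually engineering a nanofactory would build, which is why CRN and others treat grey goo as a bad design choice rather than an inevitability.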

Filed under: nanotechnology
12Oct/09

Mass Production of Artificial Skin Within Two Years?

There's a news item from May, which I missed, on work towards the mass production of artificial skin. The Fraunhofer Institute for Interfacial Engineering and Biotechnology is working on the process. They expect to finish their "skin factory" about two years from when the article was published, so approximately May 2011. Good luck! Artificial skin could have a particularly pleasant utilitarian impact because it might free animals from some forms of chemical testing.

12Oct/09

Courtney Boyd Myers: “The Transhumanists Arrive”

Courtney Boyd Myers has another short article up about the Summit, this time at Forbes. It is interesting how often she uses the T word given that, as far as I can tell, the word appeared nowhere in our program and wasn't invoked onstage. (I might have missed it.) The use of the word must, then, be due to independent inference: putting the pieces together and realizing the obvious, that many of the people attending the Summit have a transhumanist bent or can be described as transhumanists (in some cases, whether they like it or not). It's striking, though, how many transhumanists are still afraid of calling themselves transhumanists even though they are de facto transhumanists. It reminds me of the phenomenon of obvious goths who insist they are not goths. They keep insisting, yet no one is fooled.

Filed under: transhumanism
12Oct/09

This is Your Brain on Cryonics

While we're on the topic of cryonics, I am reminded of a letter I wrote to Alcor a while back:

Hello,

I'm a cryonicist and life extension advocate. To help promote the idea of
cryonics, I think it would be a good idea to have available on the Internet
micrograph images of frozen and unfrozen brain tissue, to show the
difference. Do you have any available, or know where I could get some?

Thank you,
Michael

Dr. Brian Wowk kindly responded:

Hi Michael. There are lots of cryopreserved brain micrographs
on the Alcor website. Some of them are after rewarming, and others
were obtained actually in the cryopreserved state by a technique
called freeze-substitution.

http://www.alcor.org/AboutCryonics/index.html

http://www.alcor.org/sciencefaq.htm

http://www.alcor.org/Library/html/braincryopreservation1.html

http://www.alcor.org/Library/html/cambridge.html

http://www.alcor.org/Library/html/annals.html

http://www.alcor.org/notablequotes.html

http://www.alcor.org/Library/html/biology.html

Regards,
Brian

From the quotes page, here is an image of vitrified hippocampus:

The page says: "This is 'your brain on cryonics': Transmission electron micrograph of tissue rewarmed from -130 °C after in-situ vitrification of a whole mammalian brain. This is essentially normal looking brain tissue (hippocampal region). Not only is there no 'intracellular goo,' no 'hamburger,' and no 'pulverization and destruction,' there is no ice damage whatsoever!"

So when Dale, in his post on cryonics, talks about the brain being "hamburgerized", he is making no sense. Vitrified brains don't get "hamburgerized". Dale probably knows about vitrification, so he is just forwarding propaganda because he is politically and morally uncomfortable with cryonics. That is because cryonics symbolizes the affirmation of the individual and the potential avoidance of death in a way that can be offensive to hyper-socialistic, here-and-now-and-nothing-else politics. Well, too bad.

Filed under: cryonics
11Oct/09

Joshua Fox Answers 10 Questions

Josh Fox steps forward as the first SIAI supporter to answer Popular Mechanics' 10 questions. Regarding the notion of missing the Singularity, a quote by Dan Clemmensen comes to mind:

Sorry Arthur, but I'd guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon.

A superintelligent AI might actually choose to announce itself much more subtly -- we don't know. I just doubt it's something we'd miss.

Filed under: singularity
11Oct/09

Post-Political Utilitarianism

Robert Wiblin takes a look at the ideas I presented in this post and develops them on his own, even including a diagram.

Filed under: politics
11Oct/09

Omni-Directional Treadmill

This has obvious VR applications.

Filed under: videos
10Oct/09

Survivalist References

Since Popular Mechanics is focusing on survivalism, now is a good time to reference Nuclear War Survival Skills and Patriots. The latter was written by a right-wing Christian bigot, so apply salt as necessary, but many of the logistical points address what would be necessary to survive a nuclear war or a hydrogen bomb detonated over the US (EMP, lol!). It would be hard. In fact, I know it's impossible for me to both maximize my effectiveness in working towards the Singularity and care too much about survivalism. Survivalism is important to consider, however, because the fact is that human society and civilization are delicate things. Food and water go away, and you have millions of psychos -- fast.

For a real underground survivalist text, see The Killer Karavans by Kurt Saxon. Again, written by a bigot, but still very realistic and sad. :( It could happen tomorrow. Cities need a constant stream of trucks bringing in food, water, and gasoline; otherwise everyone gets desperate fast.

Nuclear War Survival Skills is also funny/sad because it points out that essentially all bomb shelters built during the Cold War had insufficient ventilation. Had there been a nuclear war, it would have been hell on Earth, as people in bomb shelters would have been forced to kick each other out into the fallout just to have enough oxygen to breathe.
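A back-of-envelope calculation shows why ventilation, not space, is the binding constraint in a sealed shelter. The sketch below uses rough generic physiology figures I'm assuming for illustration, not numbers taken from Kearny's book:

```python
# Back-of-envelope sketch: time for CO2 in an unventilated shelter to
# reach a dangerous concentration. The physiology figures are rough
# illustrative assumptions: a resting adult exhales very roughly 0.35 L
# of CO2 per minute, and concentrations around 3% cause serious distress.

def hours_to_co2_limit(shelter_m3: float, people: int,
                       co2_lpm_per_person: float = 0.35,
                       limit_fraction: float = 0.03) -> float:
    """Hours until exhaled CO2 reaches limit_fraction of the shelter air."""
    dangerous_litres = shelter_m3 * 1000 * limit_fraction  # m^3 -> litres
    total_lpm = people * co2_lpm_per_person  # litres of CO2 per minute
    return dangerous_litres / total_lpm / 60

# A 50 m^3 shelter holding 10 resting adults:
print(round(hours_to_co2_limit(50.0, 10), 1))  # ~7.1 hours, unventilated
```

Even a roomy shelter becomes unlivable in well under a day without forced air exchange, hence the book's emphasis on improvised ventilation pumps.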

Filed under: nuclear
10Oct/09

Popular Mechanics on Singularity Summit 2009

Popular Mechanics has coverage of Singularity Summit. Slightly weird coverage, but, uh, whatever. Having been exposed to the idea of the Singularity since 2001, I consider it normal and boring (even annoying) rather than weird or fantastic. Homo sapiens surpassed lesser intelligences -- why is it such a shocker that Homo sapiens will eventually be surpassed intellectually? I guess it is the hard takeoff version that inspires evaluations of weirdness. Well, if it were physically impossible to develop manufacturing and robotics technology vastly more powerful than human beings, then there would be no hard takeoff, but that doesn't seem to be the case. Humans are midway on the "Great Chain of Being", which makes sense given that we should expect ourselves to be "typical" intelligences for anthropic reasons.

Quote:

You can see how believable and even plausible a technological singularity seems once you take a few things for granted. If it were possible to improve your memory with a digital device, for example, then everybody would want one, because not having such a device would put you at a disadvantage to those who had such technology. Then an escalation of biodigital enhancement would naturally occur until some people were walking around with more microchips than neurons. At some point the hand off between human intelligence and machine intelligence would have occurred. And that's just one possible singularity scenario.

Um, duh?

10Oct/09

Singularity Skeptic at Haibane.info

Here is a skeptical view on the Singularity by "fledgling otaku", from about a year and a half ago.

The article actually does a nice job of debunking emergent AI from neural nets. However, it also makes the mistake of assuming an AI must be just like a human brain, which is like thinking a plane ought to be just like a bird. The author even admits to a partly religious view of the human mind.

Then, a major portion of the critique expresses discomfort with uploading. I suspect that at least 50% of humanity will need to actually experience uploading before they feel comfortable with it. This is reasonable.

Filed under: singularity
10Oct/09

Raeflin On the Utility of Game-Changing Technologies

Raeflin has some thoughts on the comparative utility of game-changing technologies.

The point about Kurzweil is particularly important -- Kurzweil is not working directly towards AGI, though he has helped sponsor AGI conferences. However, SIAI Director of Research Ben Goertzel; Dr. Itamar Arel, Associate Professor of Electrical Engineering and Computer Science at The University of Tennessee; and programmer Scott Livingston will soon be meeting with other researchers to formulate a roadmap to AGI. This was mentioned in their respective talks at Singularity Summit 2009.

SIAI also has an internal research project, ongoing for almost a decade, that strongly features attempted improvements on decision theory, including timeless decision theory and reflective decision theory. Developing a mathematically consistent, improved decision theory is essential for AGI because the only alternative is to throw a bunch of heuristics together, like Eurisko -- an approach that is not only unlikely to work, but would be very dangerous if it did.
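To give a flavor of what these decision theories are trying to fix, here is a minimal Python sketch of Newcomb's problem, the standard toy case motivating alternatives to causal decision theory. This is strictly illustrative and assumes nothing about SIAI's actual, unpublished formalism: the causal agent treats the predictor's move as fixed and two-boxes, while a timeless-style agent chooses the policy whose predicted adoption pays best.

```python
# Toy Newcomb's problem: a predictor fills an opaque box with $1,000,000
# only if it predicts the agent will take just that box; a transparent
# box always holds $1,000. Purely illustrative -- not SIAI's formalism.

def payoff(action: str, predicted_action: str) -> int:
    opaque = 1_000_000 if predicted_action == "one-box" else 0
    transparent = 1_000
    return opaque if action == "one-box" else opaque + transparent

def causal_agent(predicted_action: str) -> str:
    # CDT treats the prediction as already fixed, so two-boxing dominates:
    # it adds $1,000 no matter what the opaque box contains.
    return "two-box"

def timeless_style_agent() -> str:
    # A TDT-style agent picks the *policy* whose predicted adoption pays
    # best, knowing the accurate predictor models that same policy.
    policies = ["one-box", "two-box"]
    return max(policies, key=lambda p: payoff(p, p))

# With an accurate predictor, the prediction matches the chosen policy:
cdt_action = causal_agent(predicted_action="two-box")
tdt_action = timeless_style_agent()
print(payoff(cdt_action, cdt_action))  # 1000
print(payoff(tdt_action, tdt_action))  # 1000000
```

Against an accurate predictor, the policy-level reasoner walks away with the million; deriving that answer from a consistent general theory, rather than a patched heuristic, is the hard part.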

Filed under: singularity