My friend and associate Peter de Blanc has an interesting post up recently, on how the point-estimate nature of popular futurist prediction signifies a fundamentally non-probabilistic way of thinking about the future and possible future technologies. We tend to think in terms of black-and-white, yes-or-no, rather than probabilities, because it’s easier for us to handle. For instance, most people don’t represent the likelihood of catastrophic climate change as a probability — they tend to think in terms of “it will happen” or “it won’t”. I find myself falling into this way of thinking constantly, and have to exert deliberate effort to preserve a probabilistic frame of mind.
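The contrast between point-estimate and probabilistic thinking can be made concrete with a toy decision problem. A minimal sketch, assuming made-up illustrative probabilities and costs (not real climate estimates):

```python
# A minimal sketch of the difference between a point estimate and a
# probabilistic forecast. The probabilities and costs below are
# invented illustrative numbers, not real estimates of anything.

def expected_cost(scenarios):
    """Expected cost over a set of (probability, cost) scenarios."""
    return sum(p * cost for p, cost in scenarios)

# Point-estimate thinking: pick the single most likely outcome
# ("it won't happen") and plan as if the rest have zero weight.
most_likely_cost = 0

# Probabilistic thinking: weight every outcome by its probability,
# so rare-but-huge outcomes still influence the answer.
scenarios = [
    (0.70, 0),       # no catastrophe
    (0.25, 100),     # moderate damage
    (0.05, 10_000),  # catastrophic outcome
]

print(most_likely_cost)          # -> 0
print(expected_cost(scenarios))  # -> 525.0
```

The point is that the yes-or-no frame throws away exactly the low-probability, high-stakes tail that dominates the expected-value calculation.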
So, a negative review of the Summit has finally been posted, by Maxwell Barbakow and Jacob Albert at the Yale Daily News, the student paper.
Reading the beginning of the article, it seems as if Max and Jacob were prodded into going by an associate or something, because they show that they have no clue about the entire topic, and are negatively predisposed to it from the starting line. This is demonstrated by the quote:
Though they seemed incomprehensible at the time, we came to a better understanding of the attendees’ motives for schlepping from various parts of the country to New York, once we got a better grasp of the tenets behind the Singularity.
There’s nothing wrong with that… but then, why are you going to a semi-advanced conference on a topic you dislike? Why should your review be taken seriously if you openly admit that you got yourself in over your head by going in the first place? Isn’t it clear that a negative predisposition from the start is going to influence how you look at …
Oh my god… unlimited transparency, openness, and “democratization” are not automatically good things? That’s the conclusion that Lawrence Lessig seems to have finally come to, years and years late, in a recent article at The New Republic. Here’s a quote:
How could anyone be against transparency? Its virtues and its utilities seem so crushingly obvious. But I have increasingly come to worry that there is an error at the core of this unquestioned goodness. We are not thinking critically enough about where and when transparency works, and where and when it may lead to confusion, or to worse. And I fear that the inevitable success of this movement – if pursued alone, without any sensitivity to the full complexity of the idea of perfect openness – will inspire not reform, but disgust. The “naked transparency movement,” as I will call it here, is not going to inspire change. It will simply push any faith in our political system over the cliff.
You have “come to worry”, only now? At the end of 2009? Not years ago, when the …
The Miami Herald’s Nutty Nanotech Intro: “Tiny Technology May Yield Major Finds — and Possible Perils”
See here for a humorous and somewhat sensationalistic introduction to nanotechnology.
Point 1: Sci-fi authors should not be quoted in an ostensibly scientific introduction to anything unless their statements have been endorsed by experts.
Point 2: Grey goo is a small issue. This is Nanotechnology 101. Why don’t they teach this in nanotechnology classes in schools?
Point 3: Quote sources. They say: “Some fear toxicity to human lungs as lethal as that from asbestos. Others fear mini-weapons of mass destruction in the hands of terrorists.” Who are “some” and “others”? Why do reporters obfuscate or just make things up to sell a story?
In any case, I hope that there are continued investigations into the possible toxic properties of nanoparticles, but sensationalistic comparisons to asbestos are probably not helpful. As for grey goo, numerous articles have been published since 2003 and earlier debunking this risk, especially from the Center for Responsible Nanotechnology. Any scientists or researchers interested in productive nanosystems know that systems of specialized, stationary assemblers would make much more …
There’s a news item on work towards the mass production of artificial skin from May that I missed. The Fraunhofer Institute for Interfacial Engineering and Biotechnology is working on the process. They expect to finish their “skin factory” about two years from when the article was published, so approximately May 2011. Good luck! Artificial skin could have a particularly pleasant utilitarian impact because it might free animals from some forms of chemical testing.
Courtney Boyd Myers has another short article up about the Summit, this time at Forbes. It is interesting how often she uses the T word given that, as far as I can tell, the word appeared nowhere in our program and wasn’t invoked onstage. (I might have missed it.) The use of the word must, then, be due to independent inference: putting the pieces together and realizing the obvious fact that many of the people attending the Summit have a transhumanist bent or can be described as transhumanists (in some cases, whether they like it or not). It’s unusual, though, how many transhumanists are still afraid of calling themselves transhumanists even though they are de facto transhumanists. It reminds me of the phenomenon of obvious goths who insist they are not goths. They keep insisting, yet no one is fooled.
While we’re on the topic of cryonics, I am reminded of a letter I wrote to Alcor a while back:
I’m a cryonicist and life extension advocate. To help promote the idea of cryonics, I think it would be a good idea to have available on the Internet micrograph images of frozen and unfrozen brain tissue, to show the difference. Do you have any available, or know where I could get some?
Thank you, Michael
Dr. Brian Wowk kindly responded:
Hi Michael. There are lots of cryopreserved brain micrographs on the Alcor website. Some of them are after rewarming, and others were obtained actually in the cryopreserved state by a technique called freeze-substitution.
From the quotes page, here is an image of vitrified hippocampus:
(Click for larger.) The page says, “This is ‘your brain on cryonics’: Transmission electron micrograph of tissue rewarmed from -130°C after in-situ vitrification …
Sorry Arthur, but I’d guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon.
A superintelligent AI might actually choose to announce itself much more subtly — we don’t know. I just doubt it’s something we’d miss.
Since Popular Mechanics is focusing on survivalism, now is a good time to reference Nuclear War Survival Skills and Patriots. The latter was written by a right-wing Christian bigot, so apply salt as necessary, but many of its logistical points address what would be necessary to survive a nuclear war or a hydrogen bomb detonated over the US (EMP, lol!). It would be hard. In fact, I know it’s impossible for me to both maximize my effectiveness toward the Singularity and care too much about survivalism. Survivalism is important to consider, however, because the fact is that human society and civilization are delicate things. Take away food and water, and you have millions of psychos, fast.
For a real underground survivalist text, see The Killer Karavans by Kurt Saxon. Again, written by a bigot, but still very realistic and sad. :( It could happen tomorrow. Cities need constant trucks to bring in food, water, and gasoline; otherwise everyone will get desperate.
Popular Mechanics has coverage of the Singularity Summit. Slightly weird coverage, but, uh, whatever. Having been exposed to the idea of the Singularity since 2001, I consider it normal and boring (even annoying) rather than weird or fantastic. Homo sapiens surpassed lesser intelligences — why is it such a shocker that Homo sapiens will eventually be surpassed intellectually? I guess it is the hard takeoff version that inspires evaluations of weirdness. Well, if it were physically impossible to develop manufacturing and robotics technology vastly more powerful than human beings, then there would be no hard takeoff, but that doesn’t seem to be the case. Humans are midway on the “Great Chain of Being”, which makes sense given that we should expect ourselves to be “typical” intelligences for anthropic reasons.
You can see how plausible a technological singularity seems once you take a few things for granted. If it were possible to improve your memory with a digital device, for example, then everybody would want one, because not having such a device would put you at a …