Nanowerk always has interesting news items closely related to the subject matter of this blog. Here are some recent ones.
“One consideration that should be taken into account when deciding whether to promote the development of superintelligence is that if superintelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day have to take the gamble of superintelligence no matter what. But once in existence, a superintelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible.”
– Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence”
Lanier’s latest eye-roller is up at The Chronicle of Higher Education.
Decay in the belief in self is driven not by technology, but by the culture of technologists, especially the recent designs of antihuman software like Facebook, which almost everyone is suddenly living their lives through. Such designs suggest that information is a free-standing substance, independent of human experience or perspective. As a result, the role of each human shifts from being a “special” entity to being a component of an emerging global computer.
Uh, OK. I agree in some sense… on Facebook, I’ve said in response to David Pearce that the site “makes us more trivial people than ever” and shortens our attention spans. I often find myself agreeing with “Luddite” Andrew Keen, who is unfairly put down by open-everything fanatic and geek darling Larry Lessig. Even from this natural “Luddite” perspective that I hold, Lanier’s article still seems odd.
Used in moderation, Facebook does have the potential to enrich lives and humanness rather than turn everything into information. If you know …
IEEE Spectrum has an interview with Ratan Kumar Sinha, who designed India’s new thorium reactor.
The popular website “The Big Think” has a couple of transhumanist writers, Parag and Ayesha Khanna. Their latest article, “Can Hollywood Redesign Humanity?”, continues the H+/Hollywood connection previously promoted by Jason Silva and others. “Documentaries Ponder the Future” is another of their articles.
My opinion of the post is that it confuses Drexlerian nanotech with nanotechnology “in general” and makes many major errors, including denying the existence of micromachines and nano-sized elements that drive larger systems.
The article is also wrong when it claims that, in his book, Eric Drexler merely ports macroscale designs to the nano-world. The entire work (Nanosystems) takes great pains to analyze the differences between the nanoscale and the macroscale and to introduce engineering innovations that could serve as a starting point for true molecular manufacturing. Another error is the suggestion that Drexler dismisses biology as a toolkit for nanomachines, which is ironic considering that Drexler advocates “molecular and biomolecular design and self-assembly” approaches to molecular nanotechnology and often discusses the protein folding path on his blog.
Drexler posted a response to Locklin in the comments section:
In my view, …
In the comments, Martin said:
I wonder how accurate it is. Uncle Fester became underground famous in the 90s when he published books on meth and acid manufacture, but other clandestine chemists criticized his syntheses for being inaccurate.
From this small snippet, it sounds like he wants you to go out and find the right Clostridium species and strains in soil and culture them yourself, which sounds as impractical as his suggestion in the acid book to grow acres of ergot-infested rye. :)
Any more comments on why this is impractical? It sounds much simpler than growing acres of ergot-infested rye. He describes how he would isolate spores: first by heating the culture (this kills anything that is not a spore), then by encouraging growth in an anoxic environment (which kills anything that is not anaerobic). This leaves only anaerobic bacteria derived from spores.
The book does claim that botulinum germs are “fussy about what they like to grow in, its pH, and its temperature” and that “This need to exclude air from the environment where the …
Properly delivered from a plane, a few grams of botulinum toxin could kill hundreds of thousands, if not more, in a major city.
Silent Death by “Uncle Fester” has the full process instructions, including details on optimal delivery.
The LD-50 of botulinum injected into chimpanzees is 50 nanograms.
Combine it with effective microbots, and you have a situation where anyone can kill anyone without accountability.
One of the reasons I want a Friendly AI “god” (really more like a machine) to watch over me is that the dangers will simply multiply beyond human capability to manage.
Here’s a bit of an excerpt from my version of Silent Death:
Botulin is the second most powerful poison known, taking the runner up position to a poison made by an exotic strain of South Pacific coral bacteria. The fatal dose of pure botulin is in the neighborhood of 1 microgram, so there are 1 million fatal doses in a gram of pure botulin.
The bacteria that makes botulin, Clostridia botulinum, is found all over …