Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

15 Sep 2010

Michael Anissimov: “Don’t Fear the Singularity, but Be Careful: Friendly AI Design” at Foresight 2010 Conference

Michael Anissimov: "Don't Fear the Singularity, but Be Careful: Friendly AI Design" at Foresight 2010 Conference from Foresight Institute on Vimeo.

Filed under: AI, videos 1 Comment
10 Sep 2010

Nanowerk Links Selection 9/10/10

Nanowerk always has interesting news items closely related to the subject matter of this blog. Here are some recent ones.

Gene-silencing nanoparticles may put end to mosquito pest

iGEM team helps prevent rogue use of synthetic biology

Nanotechnology coatings produce 20 times more electricity from sewage

Team designs artificial cells that communicate and cooperate like biological cells

NanoRidge Materials Signs Contract for New Defense Armor

Wear-a-BAN - Unobtrusive wearable human to machine wireless interface

Toward a new generation of superplastics

Scientists discover a way to use a gallium arsenide nanodevice as a signal processor at terahertz speeds

8 Sep 2010

Another Nick Bostrom Quote

"One consideration that should be taken into account when deciding whether to promote the development of superintelligence is that if superintelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day have to take the gamble of superintelligence no matter what. But once in existence, a superintelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible."

-- Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence"
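One way to see the structure of Bostrom's argument is as a toy probability sketch: treat surviving each transition as an independent gamble, and assume that a successful superintelligence removes the later nanotech gamble. The numbers below are purely hypothetical placeholders of my own, not figures from the paper.

```python
# Toy model of the ordering argument in the Bostrom quote above.
# The probabilities are hypothetical placeholders, not estimates.

p_survive_superintelligence = 0.8  # hypothetical: chance of surviving the superintelligence transition
p_survive_nanotech = 0.7           # hypothetical: chance of surviving advanced nanotech without superintelligence

# Superintelligence first: a successful superintelligence is assumed to
# mitigate the later nanotech risk, so only one gamble is faced.
p_si_first = p_survive_superintelligence

# Nanotechnology first: both gambles must be survived in sequence.
p_nano_first = p_survive_nanotech * p_survive_superintelligence

print(f"P(survival | superintelligence first) = {p_si_first:.2f}")
print(f"P(survival | nanotechnology first)    = {p_nano_first:.2f}")

# Since p_nano_first = p_si_first * p_survive_nanotech <= p_si_first,
# the 'superintelligence first' ordering never does worse in this toy model.
```

Under these (illustrative) assumptions, the ordering claim falls out of simple multiplication; the real dispute is over the probabilities themselves.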

7 Sep 2010

Jaron Lanier: the End of Human Specialness

Lanier's latest eye-roller is up at The Chronicle of Higher Education. A sample:

Decay in the belief in self is driven not by technology, but by the culture of technologists, especially the recent designs of antihuman software like Facebook, which almost everyone is suddenly living their lives through. Such designs suggest that information is a free-standing substance, independent of human experience or perspective. As a result, the role of each human shifts from being a "special" entity to being a component of an emerging global computer.

Uh, OK. I agree in some sense: regarding Facebook, I've said in response to David Pearce that the site "makes us more trivial people than ever" and that it shortens our attention spans. I often find myself agreeing with "Luddite" Andrew Keen, who is unfairly put down by open-everything fanatic and geek darling Larry Lessig. Yet even from this natural "Luddite" perspective of mine, Lanier's article seems odd.

When used in moderation, Facebook has the potential to enrich lives and humanness rather than turn everything into information. If you know any teenagers, you can see how easily and seamlessly they integrate online messaging with real-world mutual interest and even obsession. If anything, technology enables a kind of hyper-sociality for them that makes most people over 35 uncomfortable.

Also, it's different when you're part of the club versus outside it. I have noticed a syndrome whereby famous people tend to shy away from Facebook, as if it were too plebeian for their tastes. They never even really try it. As far as I can tell from simple searches, Lanier is too cool to have a Facebook account at all.

Even Andrew Keen has a Facebook page, with the humorous tagline "the anti-christ of Silicon Valley".

Lanier writes:

This shift has palpable consequences. For one thing, power accrues to the proprietors of the central nodes on the global computer. There are various types of central nodes, including the servers of Silicon Valley companies devoted to searching or social-networking, computers that empower impenetrable high finance (like hedge funds and high-frequency trading), and state-security computers. Those who are not themselves close to a central node find their own cognition gradually turning into a commodity. Someone who used to be able to sell commercial illustrations now must give them away, for instance, so that a third party can make money from advertising. Students turn to Wikipedia, and often don't notice that the acceptance of a single, collective version of reality has the effect of eroding their personhood.

Wikipedia has some problems, but by and large it massively increases knowledge. If I smashed every American's television and made them read Wikipedia in the time they spent watching TV or movies, people in bars and on the street would be a lot less boring to talk to. Not everyone is as wealthy as Mr. Lanier and can buy as many books as they like. Still, there definitely and most obviously is a place for knowledge outside of Wikipedia. Wikipedia, when used as a starting point and not the final word, is a fantastic tool. Just because some people lazily use it as the final word does not mean that it is universally bad. The same people would use dead-tree encyclopedias as the final word anyway.

Lanier continues:

This shift in human culture is borne by software designs, and is driven by a new sort of "nerd" religion based around a core belief that a global brain is not only emerging but will replace humanity. It is often claimed, in the vicinity of institutions like Silicon Valley's Singularity University, that the giant global computer will upload the contents of human brains to grant them everlasting life in the computing cloud.

Interestingly, I may be part of the "nerd religion" Lanier is describing, if the religion consists of feeling that human-friendly Artificial General Intelligence could do a tremendous amount of good in the world and is worth pursuing vigorously. However, I consider talk of global brains to be essentially nonsense. A choir is only as good as its worst member, and human cognition and organizations are constrained by similar rules. No single unit of contribution to any project can be greater or better than the brilliance of the smartest human, and the only reason we're so oblivious to this is that humans are the only general intelligences we have evolved to model and think about. We also don't like to think thoughts that make ourselves and our society seem less than awesome.

The problem is that social feelings create such positive affect that we want to ignore the simple truth that a group of humans is just that -- a group of humans -- and not a superintelligence as defined by Bostrom or Vinge.

Still, I do think it would be cool to be an upload in some kind of computing cloud, so maybe there is a connection here.

Lanier again:

There is right now a lot of talk about whether to believe in God or not, but I suspect that religious arguments are gradually incorporating coded debates about whether to even believe in people anymore.

Maybe this signifies movement towards non-anthropocentric theories of personhood and ethics? If so, sounds swell to me.

Filed under: AI, transhumanism 18 Comments
5 Sep 2010

Assorted Links September 6th, 2010

Robin Hanson on Who Should Exist? and Ways to Pay to Exist.

IEEE Spectrum has an interview with Ratan Kumar Sinha, who designed India's new thorium reactor.

The popular website Big Think has a couple of transhumanist writers, Parag and Ayesha Khanna. Their latest article, "Can Hollywood Redesign Humanity?", carries forward the H+/Hollywood connection previously promoted by Jason Silva and others. "Documentaries Ponder the Future" is another of their articles.

5 Sep 2010

Scott Locklin on Nanotechnology and Drexler

Some of you may have been following Scott Locklin's "reality check" on nanotechnology, which was linked by CrunchGear and Hacker News.

My opinion of the post is that it confuses Drexlerian nanotech with nanotechnology "in general" and makes many major errors, including denying the existence of micromachines and nano-sized elements that drive larger systems.

The article is also wrong when it claims that, in his book, Eric Drexler merely ports macroscale designs to the nano-world: the entire work (Nanosystems) takes great pains to analyze the differences between the nanoscale and the macroscale and to introduce engineering innovations that could be a good starting point for true molecular manufacturing. Another error is the suggestion that Drexler dismisses using biology as a tool for nanomachines, which is ironic considering that Drexler advocates "molecular and biomolecular design and self-assembly" approaches to molecular nanotechnology and often discusses the protein folding path on his blog.

Drexler posted a response to Locklin in the comments section:

Hi Scott,

In my view, molecular and biomolecular design and self-assembly are the most promising directions for lab research in atomically precise nanotechnology. There's been enormous progress -- complex, million-atom atomically-precise frameworks, etc.  -- but much of the work isn't called "nanotechnology," and this leaves many observers of the field confused about where it stands. I follow this topic in my blog, Metamodern.com.

Regarding the longer-term prospects for this branch of nanotechnology, there's a publication that offers good starting point for serious discussion.

The technical analysis that I presented in my book Nanosystems: Molecular Machinery, Manufacturing, and Computation, (it's based on my MIT dissertation) was examined in a report issued by the National Academy of Sciences, on "The Technical Feasibility of Site-Specific Chemistry for Large-Scale Manufacturing". The report finds no show-stoppers. It notes uncertainties regarding potential system performance and "optimum research paths", however, and closes with a call for funding experimental research.

This report was prepared by a scientific committee convened by the U.S. National Research Council in response to a request from Congress. It is based on the scientific literature, and on an NRC committee workshop with a range of invited experts and extensive follow-on discussion and evaluation.

I think that this report (and the Battelle/National Labs technology roadmap) deserves more attention from serious thinkers. It deflates a lot of mythology about a topic that just might be real and important.

If either of these publications has been mentioned above, I missed it.

In general, I think Locklin's post is a very well-designed piece of flamebait, and I commend him for the attention he has drawn to it. Some people really love talking about nanotechnology and need an outlet, and this is the outlet of the week. Locklin is right that a lot of nanotechnology is just chemistry or materials science with a cool name slapped on it, but certainly not all of it.

Funny quote from the comments thread: "any sufficiently advanced technology is indistinguishable from a rigged demo".

5 Sep 2010

Doubt Thrown on Uncle Fester’s Botulism Recipe

In the comments, Martin said:

I wonder how accurate it is. Uncle Fester became underground famous in the 90s when he published books on meth and acid manufacture, but other clandestine chemists criticized his syntheses for being inaccurate.

From this small snippet, it sounds like he wants you to go out and find the right Clostridium species and strains in soil and culture them yourself, which sounds as impractical as his suggestion in the acid book to grow acres of ergot-infested rye. :)

Any more comments on why this is impractical? It sounds much simpler than growing acres of ergot-infested rye. He describes how he would isolate spores, first by heating the culture (this kills anything that is not a spore), then encouraging growth in an anoxic environment (kills anything that is not anaerobic). This leaves only anaerobic bacteria derived from spores.

The book does claim that botulinum germs are "fussy about what they like to grow in, its pH, and its temperature" and that "This need to exclude air from the environment where the germs are growing is the most difficult engineering challenge to the aspiring cultivator of Clostridia botulinum", so he's not saying that it's a cakewalk.

Of course, many of these underground books (Anarchist Cookbook...) are rife with misinformation. Anyone serious about producing botulism toxin would need actual biochemical knowledge and multiple corroborating sources. Still, there's a lot of information in this particular book that would at least provide a compelling starting point.

It's worth noting that Uncle Fester probably never synthesized all the compounds described in his book, which includes over half a dozen different types of nerve gas. He repeatedly points out that synthesizing these chemicals is a risk to the life of the person performing the synthesis. In some parts of the book, he names sources, like literature released by the military, but the vast majority of his book lacks citations.

Filed under: biology, risks 40 Comments
3 Sep 2010

Instructions for Mass Manufacture of Botulinum Toxin Freely Available Online

Properly delivered from a plane, a few grams of botulinum toxin could kill hundreds of thousands, if not more, in a major city.

Silent Death by "Uncle Fester" has the full process instructions, including details on optimal delivery.

The LD-50 of botulinum injected into chimpanzees is 50 nanograms.

Combine it with effective microbots, and you have a situation where anyone can kill anyone without accountability.

One of the reasons I want a Friendly AI "god" (really more like a machine) to watch over me is that the dangers will simply multiply beyond human capability to manage.

Here's a bit of an excerpt from my version of Silent Death:

Botulin is the second most powerful poison known, taking the runner up position to a poison made by an exotic strain of South Pacific coral bacteria. The fatal dose of pure botulin is in the neighborhood of 1 microgram, so there are 1 million fatal doses in a gram of pure botulin.

The bacteria that makes botulin, Clostridia botulinum, is found all over the world. A randomly chosen soil sample is likely to contain quite a few spores of this bacteria. Spores are like seeds for bacteria, and can withstand very harsh treatment. This property will come in very handy in any attempt to grow botulism germs, because other germs can be wiped out by heating in hot water, leaving the spores to germinate and take over once they cool down. Much more on this later.

Another very important property of botulism germs is that they can't survive exposure to air. The oxygen in it kills them, but does not kill their spores. Whatever toxin the germs made before their demise also survives. This need to exclude air from the environment where the germs are growing is the most difficult engineering challenge to the aspiring cultivator of Clostridia botulinum.

Finally, all botulism germs are not created equal. There are subgroups within the species that make toxins that vary immensely in their potency. They are called types: A, B, C, D, E, F and 84. Type A is by far the most deadly, followed by type B and 84. The other ones we won't even bother to discuss. Also within a single type, there are individual differences in how much toxin a given strain will produce. Breeding and gene manipulation have a lot to do with this, and our government (and the Russkies as well) have put a lot of effort into picking out strains that make an inordinate amount of toxin. The champion as of about 30 years ago was the Hall strain, but I'm sure that they've come up with something better since then. The Hall strain of type A was able to make 300 human fatal doses of botulin per ml of broth it grew in.

Here we will explore the two major levels of use for botulin as an attack weapon: the individual or small group assassination, and the large scale assault with the poison in a manner similar to nerve gas.

Very informative! As a Russian, I love the "Russkies" anachronism.

99.9% of the population will dismiss the above as not a big deal, due to wishful thinking. It's all just words on the page, until people start dying.

Filed under: risks 77 Comments