The transhumanist club network, h+, has recently made a major update to its website. The site now includes information on starting your own h+ chapter, a blog, a gallery (including flyers and posters), and an events page with dynamic content from Google Calendar. It is an excellent site with a clean design.
1) Very light jets (microjets)
Very light jets (VLJs), small jets that use regional airports and carry only about 10 people, have already begun to compete with commercial airliners. Two companies, Dayjet and Linear Air, have recently started service in the United States and Canada. Because they have much lower overhead than major airliners, very light jets could decentralize air travel and make it even more widely available. By bypassing crowded airports, microjets could cut one to three hours off a typical plane trip.
2) Quiet Supersonic Transport (QSST)
The Quiet Supersonic Transport (QSST) is a supersonic version of the very light jet, with a speed between Mach 1.6 and 1.8 (1,056 to 1,188 mph). The QSST uses an aerodynamically contoured fuselage to create multiple quiet sonic booms rather than a single loud boom, giving it a sonic wake about a hundred times milder than the Concorde’s. Instead of taking six hours to make it from Los Angeles to New York, a cross-country flight in the QSST would only require a
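The mph figures quoted above are easy to sanity-check: they correspond exactly to the Mach numbers multiplied by roughly 660 mph. A minimal sketch of the arithmetic (the 660 mph per Mach figure is an assumption inferred from the quoted numbers; the actual speed of sound varies with altitude and temperature):

```python
# Conversion factor implied by the quoted figures (assumed, not exact):
# Mach 1 ~ 660 mph at typical cruise altitude.
MPH_PER_MACH = 660

for mach in (1.6, 1.8):
    # Multiply Mach number by the assumed speed of sound in mph.
    print(f"Mach {mach} ~ {mach * MPH_PER_MACH:.0f} mph")
```

This reproduces the 1,056 to 1,188 mph range given for the QSST's Mach 1.6 to 1.8 cruise.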
An interesting article by Nick Turse on the possible weaponization of artificially controlled insects:
“Biological weapons delivered by cyborg insects. It sounds like a nightmare scenario straight out of the wilder realms of science fiction, but it could be a reality, if a current Pentagon project comes to fruition.
Right now, researchers are already growing insects with electronics inside them. They’re creating cyborg moths and flying beetles that can be remotely controlled. One day, the U.S. military may field squadrons of winged insect/machine hybrids with on-board audio, video or chemical sensors. These cyborg insects could conduct surveillance and reconnaissance missions on distant battlefields, in far-off caves, or maybe even in cities closer to home, and transmit detailed data back to their handlers at U.S. military bases.”
Read Managing Magic by the Center for Responsible Nanotechnology. Here’s how it begins:
“It seems like magic. A small appliance, about the size of a washing machine, that is able to manufacture almost anything. It is called a nanofactory. Fed with simple chemical stocks, this amazing machine breaks down molecules, and then reassembles them into any product you ask for. Packed with nanotechnology and robotics, weighing 200 pounds and standing half as tall as a person, it can produce two tons per day of products. Control is simple: a touch screen selects the type and number of products to produce. It costs very little to operate, just the price of materials fed into it. In one hour, $20 worth of chemicals can be converted into 100 pairs of shoes, or 50 shovels, or 200 cell phones, or even a duplicate nanofactory!”
I disagree with Dr. Vinge on the point that a hard takeoff would necessarily be scary. If the people in the bootstrap group care about human welfare, they’d be careful not to disrupt the world too much in too short a time, as most humans would probably find rapid disruption disorienting. If a hard takeoff would necessarily be objectionable to most humans, the bootstrap group could artificially stretch it into a slow takeoff.
Interview by David Orban.
Thanks to Bob Mottram for initially posting these.
On the AGI mailing list, Ben Goertzel, CEO of Novamente, got into a discussion with a businessman who claimed that AGI researchers would be more likely to work towards artificial general intelligence if there were more financial gain involved, and that current AI researchers are in the business only for financial gain, as they’re only human. Ben’s response sheds light on how AGI researchers actually think:
“Singularitarian AGI researchers, even if operating largely or partly in the business domain (like myself), value the creation of AGI far more than the obtaining of material profits.
I am very interested in deriving $$ from incremental steps on the path to powerful AGI, because I think this is one of the better methods available for funding AGI R&D work.
But deriving $$ from human-level AGI really is not a big motivator of mine. To me, once human-level AGI is obtained, we have something of dramatically more interest than accumulation of any amount of wealth.
Yes, I assume that if I succeed in creating a human-level …
Perhaps you’ve heard of MEMS, microelectromechanical systems, a field in which governments and corporations are investing heavily. MEMS components are usually between 10 and 100 microns in size. Using MEMS, you can build gear systems smaller than a dust mite. The military is looking into MEMS to build spy-bots the size of the smallest bugs.
Beyond MEMS there is NEMS, nanoelectromechanical systems, an area scientists and engineers are just beginning to investigate. NEMS are about 1,000 times smaller than MEMS, with components between 10 and 100 nanometers in size. With NEMS, you could build a complex machine the size of a red blood cell or smaller. Transhumanists hope to use NEMS to improve our health and expand our sensory and motor capabilities.
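The factor-of-1,000 scale difference follows directly from the component ranges quoted above. A quick back-of-the-envelope check, converting both ranges to nanometers:

```python
# Component size ranges as quoted in the text.
NM_PER_MICRON = 1000  # 1 micron = 1,000 nanometers

mems_low_nm, mems_high_nm = 10 * NM_PER_MICRON, 100 * NM_PER_MICRON  # 10-100 microns
nems_low_nm, nems_high_nm = 10, 100                                  # 10-100 nanometers

# Comparing the ranges end-to-end gives the cited factor of 1,000.
print(mems_low_nm // nems_low_nm)    # -> 1000
print(mems_high_nm // nems_high_nm)  # -> 1000
```

For comparison, a red blood cell is roughly 7 to 8 microns across, which is why a complex NEMS machine could plausibly fit inside that footprint.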
The Holy Grail of nanotechnology is designing a NEMS that can build other NEMS. This goal has been called molecular nanotechnology (MNT), and it is a topic of controversy within the nanotechnology community. Some futurists and scientists believe MNT is impossible, while others consider it very likely.
There will be no sex in the future.
Or maybe there will be. I just like saying the above phrase to troll people.
It exposes a fallacy in futurist thinking — that “The Future” is this foreign entity that forces everyone to be a certain way, even if we hate it. This is nonsense.
There are almost seven billion people on this planet, and in principle, they could each react in a different way to any given scientific advance or social movement. They all have the human cognitive architecture in common, and it’s important not to underestimate the boundaries this places on our behavioral flexibility, but there is plenty of room for variation within that space.
When a futurist says something about the future — “robots will become smarter than humans in 2030”, for instance — the listener may take this as a threat, or unquestioningly assume that the futurist wants the outcome to happen rather than merely predicting it. They see the futurist as a force trying to push the world in that direction, …
I have given six interviews in the last year. All are audio except for the most recent.
Changesurfer Radio – May 5, 2007 Existential risks, AI, genetic engineering and space exploration
Podcasting the Singularity – September 18, 2007 A conversation with Michael Anissimov
The Future and You – March 5, 2008 Episode: Michael Anissimov
FastForward Radio – March 16, 2008 Conversation with Michael Anissimov
Future Blogger – March 24, 2008 Interview: Michael Anissimov