Twin AI breakthroughs are circulating this week. The first is being covered pretty much everywhere:
Researchers at Aberystwyth University in Wales and England's University of Cambridge report in Science today that they designed Adam, a robot 16.4 feet (5 meters) long and 9.8 feet (3 meters) in height and width, to perform basic biology experiments with minimal human intervention. They describe how the bot operates by relating one of its tasks: investigating the genetic makeup of the baker's yeast Saccharomyces cerevisiae, an organism that scientists use to model more complex life systems.
The same day, another AI breakthrough was published in the journal Science. According to Wired Science, "In just over a day, a powerful computer program accomplished a feat that took physicists centuries to complete: extrapolating the laws of motion from a pendulum's swings."
The details are in "Distilling Free-Form Natural Laws from Experimental Data" by Michael Schmidt and Hod Lipson, and the accompanying essay "Automating Science" by David Waltz and Bruce Buchanan, both in Science, Vol. 324, April 3, 2009.
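To give a flavor of what "distilling laws from data" means: Schmidt and Lipson's system evolves free-form symbolic expressions and keeps those that best predict the experimental data. The toy sketch below is not their algorithm, just a minimal illustration of the same idea, searching a tiny space of assumed candidate power laws for the one that fits simulated pendulum period measurements.

```python
import math

# Toy illustration only -- NOT Schmidt and Lipson's method. Their system
# evolved free-form expressions; here we merely search a small, hand-picked
# space of power laws T = c * L**p for the best fit to pendulum data.

g = 9.81  # m/s^2, assumed gravitational acceleration

# "Experimental" data: pendulum lengths (m) and small-angle periods (s)
lengths = [0.1 * i for i in range(1, 21)]
periods = [2 * math.pi * math.sqrt(L / g) for L in lengths]

def fit_coefficient(exponent):
    """Least-squares coefficient c for the model T = c * L**exponent,
    plus the resulting sum of squared errors."""
    xs = [L ** exponent for L in lengths]
    c = sum(x * t for x, t in zip(xs, periods)) / sum(x * x for x in xs)
    err = sum((c * x - t) ** 2 for x, t in zip(xs, periods))
    return c, err

# Candidate exponents -- the hypothesis space of "laws" we search
candidates = [0.25, 0.5, 1.0, 2.0]
best_p, (best_c, best_err) = min(
    ((p, fit_coefficient(p)) for p in candidates),
    key=lambda pc: pc[1][1],  # rank candidates by fit error
)

print(f"best model: T = {best_c:.4f} * L^{best_p}")
```

The search correctly picks the exponent 1/2 with coefficient 2π/√g, recovering the familiar T = 2π√(L/g). The real system's search space is vastly larger, which is what makes its result notable.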
Can the mainstream start caring about safety issues around advanced self-improving AI now? How many breakthroughs will be necessary before that happens?
Obviously these projects are not AGI, but the question is: how close will we need to get to AGI before people take it seriously? Will it take standing on AGI's doorstep, or will we actually show some foresight?