In the News Everywhere: AI Breakthroughs

Twin AI breakthroughs are circulating this week. The first is being covered pretty much everywhere:

Researchers at Aberystwyth University in Wales and England’s University of Cambridge report in Science today that they designed Adam, a robot 16.4 feet (five meters) long and 9.8 feet (three meters) in height and width, to perform basic biology experiments with minimal human intervention. They describe how the bot operates by relating how it carried out one of its tasks: investigating the genetic makeup of the baker’s yeast Saccharomyces cerevisiae, an organism that scientists use to model more complex life systems.

See coverage from Scientific American, Discover, Yahoo News, WIRED (“Robot Makes Scientific Discovery All by Itself”), and SEED.

The same day, another AI breakthrough was published in the journal Science. According to Wired Science, “In just over a day, a powerful computer program accomplished a feat that took physicists centuries to complete: extrapolating the laws of motion from a pendulum’s swings.”

The details are in “Distilling Free-Form Natural Laws from Experimental Data”, by Michael Schmidt and Hod Lipson in Science, Vol. 324, April 3, 2009, and “Automating Science”, by David Waltz and Bruce Buchanan in Science, Vol. 324, April 3, 2009.
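The core idea reported in the Schmidt–Lipson paper is an evolutionary search over candidate equations, keeping the ones that best explain the measured data. As a loose, hypothetical illustration of that idea only (not their actual algorithm; every name and parameter below is invented for the sketch), here is a toy evolutionary search that recovers the pendulum period law T = 2π·sqrt(L/g) from synthetic data:

```python
import math
import random

random.seed(0)

# Synthetic "experimental" data: pendulum period T = 2*pi*sqrt(L/g).
g = 9.81
lengths = [0.1 * i for i in range(1, 21)]
periods = [2 * math.pi * math.sqrt(L / g) for L in lengths]

def mse(c, p):
    """Mean squared error of the candidate law T = c * L**p against the data."""
    return sum((c * L**p - T) ** 2 for L, T in zip(lengths, periods)) / len(lengths)

# Evolutionary search over (c, p): keep the fittest candidates, mutate them.
pop = [(random.uniform(0.1, 5.0), random.uniform(0.1, 2.0)) for _ in range(20)]
for _ in range(300):
    pop.sort(key=lambda cp: mse(*cp))
    survivors = pop[:5]  # elitism: the 5 best candidate laws survive
    pop = survivors + [
        (c + random.gauss(0, 0.05), p + random.gauss(0, 0.05))
        for c, p in survivors
        for _ in range(3)  # each survivor spawns 3 mutated offspring
    ]

best_c, best_p = min(pop, key=lambda cp: mse(*cp))
# The optimum of this search is p = 0.5 and c = 2*pi/sqrt(g), about 2.006.
```

The real system is far more general: it evolves free-form symbolic expressions rather than fitting a fixed functional form, and it uses partial derivatives of candidate expressions against numerical derivatives of the data to score invariants.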

Can the mainstream start caring about safety issues around advanced self-improving AI now? How many breakthroughs will be necessary before that happens?

Obviously these projects are not AGI, but the question is, “how close will we need to be to AGI before people take AGI seriously?” Will it take AGI on our doorstep, or will we actually have some foresight?

Comments

  1. The MSM will start caring as soon as they can actually understand the dangers. That will be some time after they have returned to college and taken some science and technology courses. The MSM operates by simplifying to such an extent as to remove all information content from their stories; i.e., they tell stories rather than attempt to convey understanding.

  2. Can the mainstream start caring about safety issues around advanced self-improving AI now?

    You still haven’t shown the connecting bridge between the former and the latter. Just because we have designed algorithms that research the world around us doesn’t mean that we have AIs that are recursively self-improving.

    SURE — you and I see it. But then, we already did. These aren’t going to show that sort of thing to anyone who doesn’t already.

  3. DB

    Have you read the paper on the Cornell website, Michael? The link to the website is
    http://ccsl.mae.cornell.edu/natural_laws ; a link to the paper is at the top of the page.
    This is a very clever combination of calculus and evolutionary algorithms. No good or evil AIs are going to pop out of this work.

  4. Obviously, DB, the point is not that this research is going to lead to AGI; the question is how many breakthroughs will be necessary before people start looking to the likely future rather than ignoring it.

  5. A breakthrough that results in a “thinking computer” would be what it would take to get people to take this sort of thing seriously, IMO.

    Thankfully, given the way of these things, that “thinking computer” is likely to be neither recursively self-improving (mostly it isn’t likely to have access to its own source code, and will be heavily hardware-reliant) nor equivalent to human-level intelligence.

    That’ll give us a few years’ breathing room.

  6. It’s going to be hard. Constant incremental improvements in Google and other software might desensitize people.

    For a “thinking computer”, it would depend on how it’s demonstrated. Passing the Turing test? Then it can be dismissed as a fancy chatbot.

  7. Almost enough, but not quite enough for a SEED.

    It’s probably just Otto Lilienthal’s model.

  8. Blade O'grass

    We are the humans. We can do amazing things. We will work with the A.I. and it will work with us.

    Besides, if all else fails…there is nothing a good blaster won’t solve.

    Or a rock.

  9. Stuart

    The minute that the media think they can turn it into a bogey man that is exactly what they’ll do. That isn’t taking the safety issues seriously.

    People will fear AIs at the point where they are recognisable entities. Nobody is scared of the automated lab robot in the story – it looks and acts nothing like an animal or person. This isn’t taking the safety issues seriously either. Blind fear is a pointless response anyway.

    The real response to safety issues will happen the same way it always does with people – after the fact. We get it wrong and make mistakes, and then we make rules so that we don’t make the same mistakes twice. People die and the world still turns, none of this is new.

    I don’t subscribe to the lab robot of today becoming the terminator of tomorrow; I don’t subscribe to any form of apocalyptic fear. It is as ridiculous as a techno-Rapture is. The future is at once as ordinary and as unpredictable as it has ever been. The one thing that you can count on when it comes to predictions: the majority are going to be hilariously wrong.

    I think it’s far more likely that any AI that is forced to endure the stupidity of the human race will either figure out interstellar travel and leave, or commit suicide. Can you imagine being born into a world of self-centered imbecile primates whose first response to you is to assume that they are important enough (or any sort of threat) for you to want to kill them?

  10. It’s not the number of breakthroughs.

    It’s the one breakthrough that makes a robot humanlike enough that the mainstream will be naturally enticed by it.

    Humans can’t think logically worth a shit.

    All we’ve got is a primitive brain with which to easily recognize faces.

    Okay, that’s oversimplifying things a little bit… but if you catch my drift…

  11. These developments don’t make the kinds of scenarios you’re imagining any more or any less likely in my mind. They are damn interesting though in their own right.

  12. Bob Mottram

    There have been so many bogus claims made about AI in the past that I doubt that people will believe that AGI has arrived until it’s “on the doorstep”. That is, that there exists some tangible demonstration of AGI which cannot be dismissed as just another expert system operating in some narrow problem domain.

  13. Roko — I’ll take a stab at it. Might even be inspirational enough to make me dust off my blog and fire her up again. :)

  14. Blade O'grass

    @ IConrad.

    Ha! Extropians and Gargoyles! And here I was calling myself a netizen and a geek, when all this time there were adjectives that described me clearly.

    But why a few years breathing room? If a government grew an A.I., said government wouldn’t release it to the wild (I think said government would lobotomize the poor thing), instead, I think said government would keep the thing sandboxed so it couldn’t fix anything. The only hope for A.I. is going to have to be emergence from the internet.

