Six legs, insect-inspired locomotion. By takram design.
Imagine building tens of thousands of these, equipping them with botulinum toxin darts and gecko feet, then airdropping them at random points on a battlefield. Better shoot 'em all!
Give them the ability to burrow and self-replicate and then you have something like PKD's "Second Variety" (filmed as "Screamers").
H/t Pink Tentacle.
Jamais Cascio recently appeared on the History Channel's program "That's Impossible!" The episode, only the second of the series, was titled "Real Terminators". Here is a series of clips in which he appears:
Great stuff, Jamais.
Singularity-relevant press release on PhysOrg, primarily about the "Robobusiness" conference in Boston:
Robots are gaining on us humans. Thanks to exponential increases in computer power -- which is roughly doubling every two years -- robots are getting smarter, more capable, more like flesh-and-blood people.
Matching human skills and intelligence, however, is an enormously difficult -- perhaps impossible -- challenge.
Nevertheless, robots guided by their own computer "brains" now can pick up and peel bananas, land jumbo jets, steer cars through city traffic, search human DNA for cancer genes, play soccer or the violin, find earthquake victims or explore craters on Mars.
At a "Robobusiness" conference in Boston last week, companies demonstrated a robot firefighter, gardener, receptionist, tour guide and security guard.
You name it, a high-tech wizard somewhere is trying to make a robot do it.
A Japanese housekeeping robot can move chairs, sweep the floor, load a tray of dirty dishes in a dishwasher and put dirty clothes in a washing machine.
Intel, the worldwide computer-chip maker, headquartered in Santa Clara, Calif., has developed a self-controlled mobile robot called Herb, the Home Exploring Robotic Butler. Herb can recognize faces and carry out generalized commands such as "please clean this mess," according to Justin Rattner, Intel's chief technology officer.
In a talk last year titled "Crossing the Chasm Between Humans and Machines: the Next 40 Years," the widely respected Rattner lent some credibility to the often-ridiculed effort to make machines as smart as people.
"The industry has taken much greater strides than anyone ever imagined 40 years ago," Rattner said. It's conceivable, he added, that "machines could even overtake humans in their ability to reason in the not-so-distant future."
Programming a robot to perform household chores without breaking dishes or bumping into walls is hard enough, but creating a truly intelligent machine still remains far beyond human ability.
Artificial intelligence researchers have struggled for half a century to imitate the staggering complexity of the brain, even in creatures as lowly as a cockroach or fruit fly. Although computers can process data at lightning speeds, the trillions of ever-changing connections between animal and human brain cells surpass the capacity of even the largest supercomputers.
"One day we will create a human-level artificial intelligence," wrote Rodney Brooks, a robot designer at the Massachusetts Institute of Technology, in Cambridge, Mass. "But how and when we will get there -- and what will happen after we do -- are now the subjects of fierce debate."
"We're in a slow retreat in the face of the steady advance of our mind's children," agreed Paul Saffo, a technology forecaster at Stanford University in Stanford, Calif. "Eventually, we're going to reach the point where everybody's going to say, 'Of course machines are smarter than we are.'
"The truly interesting question is what happens after if we have truly intelligent robots," Saffo said. "If we're very lucky, they'll treat us as pets. If not, they'll treat us as food."
Some far-out futurists, such as Ray Kurzweil, an inventor and technology evangelist in Wellesley Hills, a Boston suburb, predict that robots will match human intelligence by 2029, only 20 years from now. Other experts think that Kurzweil is wildly over-optimistic.
According to Kurzweil, robots will prove their cleverness by passing the so-called "Turing test." In the test, devised by British computing pioneer Alan Turing in 1950, a human judge chats casually with a concealed human and a hidden machine. If the judge can't tell which responses come from the human and which from the machine, the machine is said to show human-level intelligence.
"We can expect computers to pass the Turing test, indicating intelligence indistinguishable from that of biological humans, by the end of the 2020s," Kurzweil wrote in his 2005 book, "The Singularity Is Near."
To Kurzweil, the "singularity" is when a machine equals or exceeds human intelligence. It won't come in "one great leap," he said, "but lots of little steps to get us from here to there."
Kurzweil has made a movie, also titled "The Singularity Is Near: A True Story About the Future," that's due in theaters this summer.
Intel's Rattner is more conservative. He said that it would take at least until 2050 to close the mental gap between people and machines. Others say that it will take centuries, if it ever happens.
Some eminent thinkers, such as Steven Pinker, a Harvard cognitive scientist, Gordon Moore, a co-founder of Intel, and Mitch Kapor, a leading computer scientist in San Francisco, doubt that a robot can ever successfully impersonate a human being.
It's "extremely difficult even to imagine what it would mean for a computer to perform a successful impersonation," Kapor said. "While it is possible to imagine a machine obtaining a perfect score on the SAT or winning 'Jeopardy' -- since these rely on retained facts and the ability to recall them -- it seems far less possible that a machine can weave things together in new ways or ... have true imagination in a way that matches everything people can do."
Nevertheless, roboticists are working to make their mechanical creatures seem more human. The Japanese are particularly fascinated with "humanoid" robots, with faces, movements and voices resembling their human masters.
A fetching female robot model from the National Institute of Advanced Industrial Science and Technology lab in Tsukuba, Japan, sashays down a runway, turns and bows when "she" meets a real girl.
"People become emotionally attached" to robots, Saffo said. Two-thirds of the people who own Roombas, the humble floor-sweeping robots, give them names, he said. One-third take their Roombas on vacation.
At a technology conference last October in San Jose, Calif., Cynthia Breazeal, an MIT robot developer, demonstrated her attempts to build robots that mimic human and social skills. She showed off "Leonardo," a rabbity creature that reacts appropriately when a person smiles or scowls.
"Robot sidekicks are coming," Breazeal said. "We already can see the first distant cousins of R2-D2," the sociable little robot in the "Star Wars" movies.
Other MIT researchers have developed an autonomous wheelchair that understands and responds to commands to "go to my room" or "take me to the cafeteria."
So far, most robots are used primarily in factories, repeatedly performing single tasks. The Robotics Institute of America estimates that more than 186,000 industrial robots are being used in the United States, second only to Japan. It's estimated that more than a million robots are being used worldwide, with China and India rapidly expanding their investments in robotics.
ON THE WEB
Video of Herb, the Home Exploring Robotic Butler: http://personalrobotics.intel-research.net/projects/herb.php
Videos of Japanese housekeeping robots: http://www.physorg.com/news153079697.html
Video of Cynthia Breazeal demonstrating her sociable robots: http://intelligence.org/media/singularitysummit2008/cynthiabreazeal
Video of Rodney Brooks discussing obstacles to robot intelligence: http://tinyurl.com/d3vnrs
Video of Ray Kurzweil lecture on machines matching human intelligence: http://tinyurl.com/dk4cxc
Saffo again... he's popping up everywhere. He recently started a blog on SF Gate, as well. To Saffo, I must say: it's probably not a matter of luck. How we create the first general AI(s) will have a profound influence on their continued development. Acting as if it's entirely luck implies either superior winning power on www.WorldPokerTour.com, or more probably, that the motivational content of these AI minds does not matter.
Interesting how the Singularity Summit is referred to in this press release as "a technology conference". I guess you could call it that, though it's quite an atypical one.
Though Ray Kurzweil is called a "far out" futurist, he seems to be getting more attention than any other prominent futurist in the world recently.
I'm reading a somewhat obscure document: a 1995 post to the sci.nanotech newsgroup by Chris Phoenix titled "Partial design for macro-scale machining self-replicator". It's really interesting.
The design was prompted by another poster, Will Ware, who wrote "Speaking as a practicing engineer, I think there would actually be a lot of value to making macroscopic replicators, even if there's no new science involved. The absence of new science does not mean an engineering job is trivial."
Phoenix's design is based around the idea of a substance whose cured form is hard enough to easily machine its non-cured form. The uncured form is converted selectively into the cured form through exposure to UV rays from the Sun. Phoenix describes his design:
Here's the intended capability: To machine blocks of soft material into complex parts, with typical dimensions of a few inches, maximum dimension 20 inches, precision 1/100 inch, smooth circles of any diameter (made by rotating a platform with a cutter held off-center); minimum hole/concave curve 1/16 inch diameter, plus an ability to cut narrow V grooves. To assemble parts into machines with volume of up to a yard cubed. To execute a long, complex program for these operations, with the possibility of detecting and correcting errors.
The rest is quite fascinating; I suggest you read it. There's a potential science fair project or multi-billion dollar company idea in there for anyone who wants to use it.
I'm pretty interested in the idea of macro-scale self-replicators in general. I've read practically all the material that's out there on the topic, including all of Kinematic Self-Replicating Machines. (By the way, if you join the Lifeboat 500, the people who give $1,000/year to the Lifeboat Foundation, you get a free copy of this book.) Now I'm at the point of reading whatever obscure stuff I can find and thinking about original content. If you know about more obscure stuff or have interesting original ideas, please post them in the comments.
I was poking around in another email discussion that was linked from the Wikipedia page on self-replicating machines; this one, also from 1995, was on the Extropians list. Anthony Napier said:
Is a macro-scale self replicating system feasible?
The only existing one I know of is our entire Earth wide industrial complex. Nothing smaller than Earth scale, nothing bigger than molecular level.
Is that really true? I would think that at least one blacksmith has used all the tools in his shop to make a completely new set of tools, then he had a child that became a blacksmith, thereby closing the loop. (With a little help from Nature, of course.) So we can point to self-replicators that are quite a bit smaller than the Earth scale.
Since we know so little about machine self-replication, it's premature to assume that we know what size scale it will first emerge on. I used to assume nano (because that's where most biological self-replicating machinery exists), but now I think that macro or micro might have easier solutions.
A self-replicating device might be much larger than anyone would suppose. For instance, it might involve a population of thousands of little specialized robots, resembling a little industrial village. No individual robot would be able to fabricate all of its own parts, but in cooperation, the population achieves 100% closure.
Feel free to brainstorm concepts for self-replicating machines at any scale. If such a machine were developed, it would prove helpful to our existential risk mitigation agenda by demonstrating that such a thing is possible and that regulations are necessary. It could also have a profoundly beneficial economic impact, by allowing the cheap fabrication of spinoff products built out of materials in the self-replication loop.
Consider this -- a worm robot that burrows through the top layer of soil and is capable of converting it into additional modular segments of itself as quickly as possible. A worm with a 1 cm maw that tunnels through a meter of earth every hour would process roughly 78.5 cc of earth per hour, or 1,884 cc (115 cu in) per day. At a conversion efficiency of just 1%, that yields about 0.785 cc of building material per hour; assuming 7.85 cc of material is needed to build one robotic segment 1 cm long, we get a growth rate of 0.1 cm per hour, or 2.4 cm (1 in) per day. Nothing shocking, really, but the numbers are contrived to be conservative. If the worms could divide (which would be possible if each segment, or a short row of segments, can be self-sustaining), then exponential replication could quickly overwhelm an ecosystem even if the growth rate is relatively slow. I doubt many predators would be interested in consuming a robot.
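For the skeptical, here's the arithmetic behind those growth figures, using the same contrived assumptions (all the input numbers are made up for the thought experiment, not measurements of any real robot):

```python
# Back-of-the-envelope check of the worm-robot growth figures.
# Every input is a contrived assumption from the thought experiment.
import math

maw_diameter_cm = 1.0
tunnel_speed_cm_per_hr = 100.0   # one meter of tunnel per hour
efficiency = 0.01                # 1% of processed soil becomes robot material
material_per_segment_cc = 7.85   # building material per 1 cm segment

cross_section_cc_per_cm = math.pi * (maw_diameter_cm / 2) ** 2  # ~0.785 cm^2

processed_per_hr = cross_section_cc_per_cm * tunnel_speed_cm_per_hr  # ~78.5 cc
processed_per_day = processed_per_hr * 24                            # ~1,884 cc
usable_per_hr = processed_per_hr * efficiency                        # ~0.785 cc

growth_cm_per_hr = usable_per_hr / material_per_segment_cc  # ~0.1 cm/hr
growth_cm_per_day = growth_cm_per_hr * 24                   # ~2.4 cm/day
```

Plug in a faster tunneling speed or a better efficiency and the daily growth scales linearly, which is why the "conservative" framing matters.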
Why brainstorm worm robots? Well, the worm motif seems very popular in evolution, and is shared by a number of different evolutionary lineages. The worm body type is the precursor from which all bilateral and complex animals evolved! (Only cnidarians and sponges didn't evolve from worms.) The body cavity inherent in the worm body plan provides a number of benefits that others have gone over many times. So, it makes sense that a worm robot might be one of the earliest macroscopic self-replicating robots that could thrive in nature.
Where would such worms get food? The same way that regular worms do, by eating other organisms, just like that insidious fly-eating robot that was developed in 2004.
The worm robot starting point brings up a number of interesting observations and questions. First, how much of a threat could these little buggers be to an ecosystem? Of course, it depends on the growth rate and how well the robot fares in competition with the natives. But let us consider the bare minimum necessary to be an annoyance.
First off, the worm robot could prove to be a major nuisance by converting the earth into something difficult for other organisms to break down. There are probably several million types of microbes in a typical tonne of earth, but if they all fail to break something down, then it is likely to remain for a very long time. There are many examples of this decomposition-resistance in nature, notably the sponge, which defends itself not so much by aggressive means as by its manifest lack of nutritious value relative to other organisms and the caltrop-shaped calcareous or siliceous spicules embedded throughout its body. Relative to defenses that a human engineer might conjure up by probing the supra-organic design space, this is pretty boring, but it has worked for over 600 million years.
Still, without getting into anything complicated, note that significantly compressing a unit of earth would probably be enough to lower its palatability to microorganisms by a significant margin. Passing around energy currency in a form that bacteria and archaea can't digest (i.e., not glucose or sucrose) could also potentially circumvent most efforts at consumption. Processing the earth into a state whereby an exoskeleton and set of crude membranes can physically exclude microorganisms, accompanied by local microbicidal action at interfaces, could likely make the robot much more difficult to break down, both in action and when deactivated. By thinking outside the boundaries inherent to natural biology, robotics engineers will be able to create new "species" of life capable of shoving aside obstacles and continuing on their merry way.
The problem such robots pose for nature-lovers is the way they'd entirely destroy the environment. One day, lush Amazon Rainforest; three years later, a writhing mass of robotic worms and over a million extinct species. One 1-kg worm robot that reproduces just once every ten days could convert itself into roughly 69 billion of the little monsters (69 million tonnes worth) in just a year. Especially if it intertwined itself with the ecosystem, the only way to kill all of them would be to nuke the whole damn place. Building hunter-killer worm robots wouldn't work, because by the time they were deployed, the original worm robots would have a major advantage.
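That population figure is just repeated doubling -- one reproduction per worm every ten days means the population doubles every ten days, giving 36 doublings in a year:

```python
# Doubling math for the replication scenario: a 1 kg worm robot whose
# population doubles every ten days (each worm builds one copy per cycle).
days = 360
doubling_period_days = 10
mass_per_worm_kg = 1.0

doublings = days // doubling_period_days        # 36 doublings in ~a year
population = 2 ** doublings                     # ~6.9e10 worms
total_mass_tonnes = population * mass_per_worm_kg / 1000.0  # ~69 million tonnes
```

This is the standard unsettling property of exponentials: halve the doubling period (reproduce every five days) and you don't get twice the worms, you get the square of the population -- around 4.7 billion trillion of them.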
Implausible, you might say? Negative. Rudimentary worm robots have already been built, and the chemical reactions necessary to convert soil organisms into energy-storing molecules are widely known. All that would be required are advances in MEMS (no molecular manufacturing needed) that allow the worm to distribute nutrients throughout its body and build new segments effectively. In mollusks (as well as worms), the simplest "complex" organisms, cilia are used as an all-purpose mechanism for ferrying nutrients about the body and waste out the anus. Looking at the contemporary lower mollusks, along with their ancestors in the small shelly fauna, one can see that the "concept of a mollusk" is simple at its essence, but it works very well. When the enabling technology is present, these designs will be copied by roboticists with interdisciplinary knowledge in biology.
The only way I can even begin to imagine to address such problems is universal transparency and inbuilt safeguards on all "3D printers" ever manufactured. Of course, there will always be those with excessive confidence in nature to repel synthetic threats (even though microbes can't eat plastic), and to those folks this won't be an issue, but to others, it gives cause for worry. (Another objection would be the even more inane, "why would someone do this?") It may be a matter of trading privacy for security, a pill many find hard to swallow, but I think the events and pundits of the future will have an answer for you -- deal with it.
Researching the current state of "roboethics" (a lame term that marginalizes "AI ethics", a more-relevant superset of roboethics), I find a bunch of references to a South Korean project to draft a Robot Ethics Charter. All these references occur in March 2007, and they promised the ethics charter would be released in April 2007 and subsequently adopted by the government. However, I can't find it anywhere. Anyone have a clue about where it went? One article summarized the effort as follows:
The prospect of intelligent robots serving the general public brings up an unprecedented question of how robots and humans should be expected to treat each other. South Korea's Ministry of Commerce, Industry and Energy has decided that a written code of ethics is in order.
Starting last November, a team of five members, including a science-fiction writer, has been drafting a Robot Ethics Charter to address and prevent "robot abuse of humans and human abuse of robots." Some of the sensitive subject areas covered in the charter include human addiction to robots, humans treating robots like a spouse, and prohibiting robots from ever hurting a human.
Critics of the charter say that the charter is premature and may not have a practical application once robots are really an integral part of society. Says Mark Tilden, the designer of the toy RoboSapien, "From experience, the problem is that giving robots morals is like teaching an ant to yodel. We're not there yet, and as many of Asimov's stories show, the conundrums robots and humans would face would result in more tragedy than utility."
"Asimov" refers to science-fiction author Isaac Asimov, who created a robot code of ethics for one of his stories. His Three Rules were: (1) a robot could not hurt a human or through inaction allow a human to be harmed, (2) a robot must obey human orders unless those orders would make it violate rule number one, and (3) a robot must protect itself unless that protection would violate the first two rules. These apparently served as inspiration for the South Korean Robot Ethics Charter.
However, South Korea's Ministry of Information and Communication plans to have a robot in every household by 2020. "Personally, I wish to accomplish that objective by 2010," said Oh Sang Rok, head of the ministry's project.
Personally, I think Asimov's Three Laws are a terrible inspiration for any roboethics code. The laws were created to be used as a plot device. When they disintegrated, a story came out of it. Unfortunately, they've actually been taken seriously as a possible solution to the problem of human-unfriendly robots and AI for many decades now. But Asimov himself said, "There was just enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the 61 words of the Three Laws."
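For what it's worth, the "letter" of the Three Laws reduces to a strict precedence check over three conditions -- a toy sketch that shows just how little machinery is actually there, and why all the interesting failures live in deciding what counts as "harm" in the first place:

```python
# Toy encoding of the strict precedence in Asimov's Three Laws.
# The three boolean inputs are a cartoon: in reality, deciding whether an
# action "harms a human" is the entire unsolved problem.

def action_permitted(harms_human: bool,
                     disobeys_order: bool,
                     harms_self: bool) -> bool:
    if harms_human:      # First Law outranks everything
        return False
    if disobeys_order:   # Second Law yields only to the First
        return False
    if harms_self:       # Third Law yields to both
        return False
    return True

# Note the precedence consequence: a robot ordered to destroy itself must
# comply, since the Second Law outranks the Third.
```

Sixty-one words compress to a dozen lines; everything Asimov got thirty years of stories out of hides inside those three input bits.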
Back in summer 2004, the Singularity Institute launched a website project, "Three Laws Unsafe", a critique of Asimov's Laws riding on the publicity of the "I, Robot" movie. Check out the articles section, which includes a submission of mine.
But yeah, anyone know where that Robot Ethics Charter is, or the names of anyone who was working on it? We need to get our magnifying glasses out and scrutinize that shit.
Got to love the odd, somewhat tweaky way in which the researcher denies the possibility of these robots taking over. And the humorous way that the question is always brought up with advanced robotics research.
Obviously, these robots wouldn't take over -- they're probably dumber than unicellular organisms, at present.
But does that mean that robots of this type won't be the new superior weapon, 20 years or so down the line? No -- they very well could be. Especially for more subtle applications than blowing things up, such as spying, or sabotage.
Also, it gives us a look into how advanced Artificial Intelligences could use robotics to influence the world, a decade or two from now. No need to view this through the lens of sci-fi hysteria, but the prospect of AIs orchestrating swarms of robots will be a near-future reality.