This idea sounds familiar. From the website of the US Air Force:
8/10/2009 - HANSCOM AIR FORCE BASE, Mass. (AFNS) -- The convergence of "exponentially advancing technologies" will form a "super-intelligence" so formidable that it could avert war, according to one of the world's leading futurists.
Dr. James Canton, CEO and chairman of the Institute for Global Futures, a San Francisco-based think tank, is author of the book "The Extreme Future" and an adviser to leading companies, the military and other government agencies.
He is consistently listed among the world's leading speakers and has presented to diverse audiences around the globe.
It's good to hear that the world's leading futurists are slowly catching up to the position that I've been arguing for since 2001, when I was still a teenager.
Canton seems familiar with the singleton concept and views the US as rushing towards an unchallenged status:
"The superiority of convergent technologies will prevent war," Doctor Canton said, claiming their power would present an overwhelming deterrent to potential adversaries. While saying that the U.S. will build these super systems faster and better than other nations, he acknowledged that a new arms race is already under way.
If things continue as they have, with no third parties entering the game, the US military could eventually create a superintelligence, but it would be a different beast than the hundred-billion-odd humans that came before it. A superintelligence is something fundamentally new. I predict that a superintelligence will only play games it knows it can win, and will probably keep itself a relative secret until it has already won. A superintelligence can hold bigger ideas in its head than you can. Much depends on what sort of superintelligence we're talking about, but an AI-derived superintelligence in particular might be able to rapidly integrate spare processing power into its cognitive functions. Human working memory can hold only 5-7 items at once; a superintelligence's working memory might hold millions of complex symbols simultaneously.
Why no war?
"The fundamental macroeconomics on the planet favor peace, security, capitalism and prosperity," he said. Doctor Canton projects that nations, including those not currently allied, will work together in using these smart technologies to prevent non-state actors from engaging in disruptive and deadly acts.
In the long term, yes, but in some cases short-term war might be necessary to create a secure environment. To a human-indifferent superintelligence with no "moral common sense", but rather a goal system hacked together in a half-assed way, a "secure environment" is one where all humans are dead and the world is arranged in precisely the way it wants, perhaps consisting of quadrillions of paper clips or computers displaying animated gifs of smiley faces.
Now for the part that mentions advanced AI specifically:
"There's no way for the human operator to look at an infinite number of data streams and extract meaning," he said. "The question then is: How do we augment the human user with advanced artificial intelligence, better software presentation and better visual frameworks, to create a system that is situationally aware and can provide decision options for the human operator, faster than the human being can?"
He said he believes the answers can often be found already in what he calls 'edge cultures.'
We've got your edge culture right here. What was once an obscure concern of a few transhumanists in the 90s has now become a mainstream interest among futurists, AI researchers, and even military strategists.
Doctor Canton said he believes that more sophisticated artificial intelligence applications will transform business, warfare and life in general. Many of these are already embedded in systems or products, he says, even if people don't know it.
In terms of robotics, he predicts "a real sea change" will come as we move from semi-autonomous to fully-autonomous units.
"That will be accompanied by a great debate, because of the 'Terminator' model," he said. "It scares people." But he doesn't think people should be alarmed by the prospect of independently functioning robots.
He goes on to say that robots won't be given superhuman intelligence, though superhuman intelligence will presumably come into existence on non-robotic platforms. What Canton needs to realize is that the division between non-robotic and robotic IT systems is already blurry, and it will only continue to fade. Independently functioning robots are an inevitability, and if they aren't infused with human-equivalent or human-surpassing kindness and morality, we are screwed.
"Robots will help fight and prevent wars," he said, noting that they will have the ability to sense, analyze, authenticate and engage, but that humans will always be in position to check their power.
Ha ha ha ha, right. Our tribe will always be #1. No one can stand up to us. We da best. Go Team Human!
It's not too late, Dr. Canton! Instead of sweeping the challenge of machine morality under the carpet, you can address it as the tangled problem that it is, and encourage the world to contribute resources to solving it.