Hi, I just thought I’d repeat some general points about AI and our future.
If human-equivalent AI is possible, this is a huge, huge deal. It would basically mean that you could turn inanimate matter into intelligence. Say that it requires about 500 teraflops (Tflops), roughly equivalent to one of the fastest supercomputers today, to run a human-equivalent AI program.
A really fast supercomputer costs about $100 million. As you may know, the cost of computing power tends to fall exponentially with time. Even if this trend doesn’t continue forever, it seems likely to hold until at least 2020.
Sometime around 2020 to 2030, it seems reasonable to say that a 500 Tflop computer would cost in the ballpark of $1,000, if not less. If such a computer were sufficient to run a human-level AI, it would make sense for any given company to buy these computers and run them alongside conventional staff. Such AI workers would be substantially cheaper than human employees. After all, these AIs could think all day and night without food, and their cognitive architectures could be boosted by direct access to number-crunching capabilities. They could share thoughts in a common format, instantaneously.
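As a back-of-the-envelope check on this timeline, here is a quick sketch in Python. The $100 million starting price, the $1,000 target, and the 18-month halving time for cost per flop are assumptions for illustration, not sourced figures:

```python
import math

# Assumed figures (from the argument above, plus an assumed halving time):
cost_today = 100e6      # dollars for a ~500 Tflop supercomputer
cost_target = 1_000     # target price for the same capability
halving_years = 1.5     # assumed time for cost per flop to halve

# How many price halvings are needed, and roughly how long would that take?
halvings_needed = math.log2(cost_today / cost_target)
years_needed = halvings_needed * halving_years

print(f"{halvings_needed:.1f} halvings, ~{years_needed:.0f} years")
# → 16.6 halvings, ~25 years
```

Under these assumptions the $1,000 price point arrives in roughly 25 years; with a faster assumed halving time of 12 months, it arrives in under 17. Either way, the answer lands within a couple of decades, which is the point of the argument.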
If one of the AIs became a genius, the others could just copy the cognitive features that gave the original AI those capabilities. The entire collective would never be far behind the leader, in contrast to human collectives, where our wetware is static and cannot be improved.
To influence the world directly, it would be helpful for these AIs to develop robotic avatars, built with the robotics of 2020-2030. It seems reasonable to assume that the robotics available would be quite flexible and capable, especially since the AIs themselves could assist in the reprogramming, fine-tuning, research, and development.
Bacteria are idiotic, yet capable of turning a tonne of organic waste into bacterial biomass over the course of hours. Human-equivalent AIs would be smart, and would have strong incentives to convert raw materials into robotic or biological bodies for their own habitation. Such AIs might even uncover the principles of thought and boost themselves beyond the human level, even if their access to computing power remained roughly static.
Combine AI with advanced robotics, add the motivation to improve both, and you have a potentially abrupt and disruptive transition on your hands.
What baffles me is when pundits say: “surely, such AIs would lack the capability to become major players in the human world in any short-term timeframe.”
My question would be: “how do you know?”
We humans cannot put ourselves in the shoes of an intelligence that has complete access to its source code, can rearrange its cognitive architecture to optimize its performance on narrow problems, share thoughts with its comrades at the speed of light, transfer itself from point to point on the globe at the speed of light, directly integrate itself with scientific instruments as sensory modalities, blend together autonomic and deliberative processes in a thousand ways that humans can’t, form beliefs and update them in mathematically rigorous ways, and so on.
Such an intelligence could come a long way in a very short time, or perhaps not. The point is that we don’t know. When a pundit expresses skepticism about the idea, their opinion is more likely to reflect the limitations they know apply to humans, not limitations that apply to AIs.
It seems easier to argue that human-equivalent AI is flatly impossible than it is to argue that human-equivalent AI wouldn’t have a huge impact on the world once developed. It seems most reasonable to proceed as if it would.