In late 2008, tech luminary Kevin Kelly, the founding executive editor of Wired magazine, published a critique of what he calls “thinkism” — the idea of smarter-than-human Artificial Intelligences with accelerated thinking and acting speeds developing science, technology, civilization, and physical constructs at faster-than-human rates. The argument over “thinkism” is central to the question of whether Artificial Intelligence could quickly transform the world once it passes a certain threshold of intelligence, a possibility known as the “intelligence explosion” scenario.
Kelly begins his blog post by stating that “thinkism doesn’t work”, specifically meaning that he doesn’t believe that a smarter-than-human Artificial Intelligence could rapidly develop infrastructure to transform the world. After using the Wikipedia definition of the Singularity, Kelly writes that Vernor Vinge, Ray Kurzweil and others view the Singularity as deriving from smarter-than-human Artificial Intelligences (superintelligences) developing the skills to make themselves smarter, doing so at a rapid rate. Then, “technical problems are quickly solved, so that society’s overall …
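To make the scenario Kelly is attacking concrete, here is a small toy model (my own illustration with made-up numbers, not anything from Kelly, Vinge, or Kurzweil) of the feedback loop the “intelligence explosion” posits: each round of self-improvement multiplies capability, and a more capable system finishes its next round faster, so the improvement times sum to a finite limit.

```python
# Toy model of the "intelligence explosion" feedback loop (illustrative only).
# Assumption: every self-improvement step multiplies capability by a constant
# factor, and a more capable system completes its next step proportionally
# faster. With these (made-up) numbers the step times form a geometric series,
# so total elapsed time converges to a finite limit (~11 "years" here) while
# capability grows without bound -- the intuition behind a finite-time runaway.

def intelligence_explosion(initial_capability=1.0,
                           improvement_factor=1.1,
                           initial_step_years=1.0,
                           max_steps=50):
    """Return (time, capability) pairs for a finite number of improvement steps."""
    capability = initial_capability
    step = initial_step_years
    t = 0.0
    history = [(t, capability)]
    for _ in range(max_steps):
        t += step
        capability *= improvement_factor
        # A faster, smarter system finishes its next redesign sooner.
        step /= improvement_factor
        history.append((t, capability))
    return history


if __name__ == "__main__":
    for t, cap in intelligence_explosion()[:12]:
        print(f"year {t:6.3f}: capability {cap:8.3f}")
```

Kelly’s “thinkism” objection, in these terms, is that in the real world the step time is set by physical experiments and infrastructure rather than by thinking speed, so it does not shrink the way this toy loop assumes.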
Here’s a writeup.
Embedded below is an interview conducted by Adam A. Ford at The Rational Future. Topics covered included:
- What is the Singularity?
- Is there a substantial chance we will significantly enhance human intelligence by 2050?
- Is there a substantial chance we will create human-level AI before 2050?
- If human-level AI is created, is there a good chance vastly superhuman AI will follow via an “intelligence explosion”?
- Is acceleration of technological trends required for a Singularity? – Moore’s Law (hardware trajectories), AI research progressing faster?
- What convergent outcomes in the future do you think will increase the likelihood of a Singularity? (i.e. emergence of markets, evolution of eyes?)
- Does AI need to be conscious or have human-like “intentionality” in order to achieve a Singularity?
- What are the potential benefits and risks of the Singularity?
The key discovery of human history is that minds are ultimately mechanical and operate according to physical principles, and that there is no fundamental distinction between the bits of organic matter that process thoughts and bits of organic matter elsewhere. This is called reductionism (in the second sense below):
Reductionism can mean either (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents. This can be said of objects, phenomena, explanations, theories, and meanings.
This discovery is interesting because it implies that 1) minds, previously thought to be mystical, can in principle be mass-produced in factories, and 2) the human mind is just one possible type of mind and can theoretically be extended or permuted in millions of different ways.
Because of the substantial economic, creative, and moral value …
A new paper by Eliezer Yudkowsky is online on the SIAI publications page, “Complex Value Systems are Required to Realize Valuable Futures”. This paper was presented at the recent Fourth Conference on Artificial General Intelligence, held at Google HQ in Mountain View.
Abstract: A common reaction to first encountering the problem statement of Friendly AI (“Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”) is to propose a single moral value which allegedly suffices; or to reject the problem by replying that “constraining” our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a “positive outcome”, despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least …
I haven’t read this, I’m just posting it because other people are talking about it.
Ray Kurzweil, the prominent inventor and futurist, can’t wait to get nanobots into his brain. In his view, these devices will be equipped with a variety of sensors and stimulators and will communicate wirelessly with computers outside of the body. In addition to providing unprecedented insight into brain function at the cellular level, brain-penetrating nanobots would provide the ultimate virtual reality experience.
+1 for everyone who saw through my lie.
I thought it would be interesting to say stuff not aligned with what I believe to see the reaction.
The original prompt is that I was sort of wondering why no one was contributing to our Humanity+ matching challenge grant.
Maybe because many futurist-oriented people don’t think transhumanism is very important.
They’re wrong. Without a movement, the techno-savvy and existential risk mitigators are just a bunch of unconnected chumps, or in isolated little cells of 4-5 people. With a movement, hundreds or even thousands of people can provide many thousands of dollars worth of mutual value in “consulting” and work cooperation to one another on a regular basis, which gives us the power to spread our ideas and stand up to competing movements, like Born Again bioconservatism, which would have us all die by age 110.
I believe the “Groucho Marxes” — those who “won’t join any club that will have them” — are sidelining themselves from history. Organized transhumanism is very important. …
What is the point of a beneficial Singularity? It’s a challenging question, because there are so many potential benefits. The benefits I would enjoy most might not be the same as the ones you would enjoy. People can disagree.
What kind of Singularity happens depends on what kind of singleton we end up with, but we can be wistful and optimistic, right? The Singularity I’m working towards would have the following components:
1) Invention of molecular nanotechnology or superior manufacturing technology, enabling the production of near-unlimited food, housing, clean water, and other products.
2) Enforcement of local “volitional bubbles” that reduce the rate of non-consensual violent crime to zero. I’d be curious to see how altruistic superintelligence or the CEV output would handle cases where people join “fight clubs” where the risk of death is part of the bylaws.
3) Unless the current overall system is objectively optimal even to an altruistic superintelligence, presumably this would be rearranged for the better as well, though exactly how and in light of what drives and freedoms is hard to say. Probably …
Does Knapp know anything about the way existing AI works? It’s not based around trying to copy humans, but often around improving this abstract mathematical quality called inference.
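As an aside for readers unfamiliar with the term, here is a deliberately simple sketch of what “inference” means as an abstract mathematical quality (my own illustration with made-up numbers, not something from the exchange): Bayesian updating of beliefs from evidence, the kind of operation much of modern AI research tries to approximate efficiently and at scale.

```python
# A minimal sketch of probabilistic inference: Bayesian updating.
# Hypothetical numbers; the point is that the machinery is pure math,
# with no attempt to copy human (or primate) cognition.

def bayes_update(prior, likelihoods, evidence):
    """Return posterior P(hypothesis | evidence) for each hypothesis."""
    # P(H | E) is proportional to P(E | H) * P(H)
    unnormalized = {h: likelihoods[h][evidence] * prior[h] for h in prior}
    total = sum(unnormalized.values())  # P(E), by total probability
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses about a coin: fair, or biased toward heads.
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {
    "fair":   {"heads": 0.5, "tails": 0.5},
    "biased": {"heads": 0.9, "tails": 0.1},
}

posterior = prior
for flip in ["heads", "heads", "heads", "tails"]:
    posterior = bayes_update(posterior, likelihoods, flip)

print(posterior)  # belief shifts toward "biased" after mostly heads
```

Nothing in this update rule refers to neurons, embodiment, or primate psychology; that is the sense in which existing AI is built around improving inference rather than copying humans.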
I think you missed my point. My point is not that AI has to emulate how the brain works, but rather that before you can design a generalized artificial intelligence, you have to have at least a rough idea of what you mean by that. Right now, the mechanics of general intelligence in humans are, actually, mostly unknown.
What’s become an interesting area of study in the past two decades are two fascinating strands of neuroscience. The first is that animal brains and intelligence are much better and more complicated than we thought even in the 80s.
The second is that humans, on a macro level, think very differently from animals, even the smartest problem solving animals. We haven’t begun to scratch the surface.
From Mr. Knapp’s recent post:
If Stross’ objections turn out to be a problem in AI development, the “workaround” is to create generally intelligent AI that doesn’t depend on primate embodiment or adaptations. Couldn’t the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?
But Anissimov’s first point here is just magical thinking. At the present time, a lot of the ways that human beings think is simply unknown. To argue that we can simply “workaround” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence. Indeed, it’s become pretty clear that even human thinking and animal thinking is quite different. For example, it’s clear that apes, octopii, dolphins and even parrots are, to certain degrees, quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different than that of humans. And I don’t mean on a different level — I mean actually different. …
super-intelligent AI is unlikely because, if you pursue Vernor’s program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it’s unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we’re likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.
“Human-equivalent AI is unlikely” is a ridiculous comment. Human-level AI is extremely likely by 2060, if it ever arrives at all. (I’ll explain why in the next post.) Stross might not understand that …