Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

16 Apr 2012

Interviewed by The Rational Future

Here's a writeup.

Embedded below is an interview conducted by Adam A. Ford at The Rational Future. Topics covered included:

- What is the Singularity?
- Is there a substantial chance we will significantly enhance human intelligence by 2050?
- Is there a substantial chance we will create human-level AI before 2050?
- If human-level AI is created, is there a good chance vastly superhuman AI will follow via an "intelligence explosion"?
- Is acceleration of technological trends required for a Singularity?
  - Moore's Law (hardware trajectories), AI research progressing faster?
- What convergent outcomes in the future do you think will increase the likelihood of a Singularity? (e.g., the emergence of markets, the evolution of eyes?)
- Does AI need to be conscious or have human-like "intentionality" in order to achieve a Singularity?
- What are the potential benefits and risks of the Singularity?

7 Apr 2012

The Superintelligent Will

New paper on superintelligence by Nick Bostrom:

This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.

1 Nov 2011

More Nonsense Reporting Overblowing IBM’s Accomplishments

Last month in New York I had the pleasure of talking personally with the creator of Watson, Dr. David Ferrucci. I found him amiable, and his answers to my questions about Watson were direct and informative. So, I have nothing against IBM in general. I love IBM's computers. Several of my past desktops and laptops have been IBM machines. The first modern computer I had was an IBM Aptiva.

However, there is a persistent thread of articles claiming that IBM has "completely simulate(d)" "the brain of a mouse (512 processors), rat (2,048) and cat (24,576)", a claim that was revived in force this past weekend. This is entirely false. IBM has not simulated the brain of a mouse, rat, or cat. Experiments have only recently been pursued to simulate even the 302-neuron nervous system of the roundworm C. elegans, the one animal for which a wiring diagram exists. What IBM has actually built are "mouse-SIZED", "rat-SIZED", and "cat-SIZED" neural simulations, given certain assumptions about the computational power of mammalian brains. The connections between the simulated neurons bear little relation to the actual wiring diagrams of these animals, which are not known. Given the tools we currently have, like ATLUM, it would take tens of thousands of years to determine the full connectomes of mice, rats, or cats.

I can never tell whether it is the reporters who are being ridiculous, or whether IBM is deliberately misleading the public. Either way, I think IBM should issue a press release that clarifies the situation. Directly quoting Scientific American:

IBM describes the work in an intriguing paper (pdf) that compares various animal simulations done by its cognitive computing research group in Almaden, Calif. The group has managed to completely simulate the brain of a mouse (512 processors), rat (2,048) and cat (24,576).

The paper they cite is the same damn paper from 2009, "The Cat is Out of the Bag", which I immediately reacted to negatively within days of its publication. Since then, I've been watching as this false meme, which has yet to be directly repudiated by an IBM representative, makes its way through the media, which doesn't know any better.

Now, IBM is allegedly claiming that they simulated 4.5% of the (processes?) of the human brain, or at least hundreds of media sources are reporting it. All the media sources seem to just be linking to the two-year-old paper "The Cat is Out of the Bag", so I'm not sure whether there was a recent announcement or it just took the media two years to pick up the story.

Again, it's impossible that IBM could have simulated 4.5% of the human brain, because we (human civilization) don't have 4.5% of the wiring diagram of the human brain to use as raw data for a simulation. We don't even have 0.1% of the wiring diagram of the human brain, I'd estimate, but you'd have to ask a computational neuroscientist (not one from IBM) for a more informed guess.

We have the wiring diagram of the 302 neurons of the roundworm C. elegans. That's about it.

The vast majority of Reddit commenters are clueless and missing the obvious error. Even this seemingly educated comment misses the point that there is NO WIRING DIAGRAM for the parts of the brain IBM allegedly simulated. Even this "best of class" comment seems to take the reporting at face value, as if 4.5% of the human brain had been simulated, and criticizes neuron models instead of the "elephant in the room" that I've explained.

Reddit commenters fail for being fooled, the media fails for reporting a false story, and IBM fails for not issuing a clarification. In many cases IBM seems to actively encourage the misconception that a full feline connectome has been simulated.

My prediction is that AGI will be invented and we will have a full-blown Singularity before a complete cat connectome (much less human connectome) is created.

This whole issue is important because the public is already confused enough about computational neuroscience as it is. I see computational neuroscience as very important, and it's important that the public -- and scientists, who, despite their alleged higher level of thinking, frequently pull their beliefs from popular articles like everyone else -- know what has and hasn't been accomplished in the field.

For a nice article on connectomics and what has been accomplished so far, see this article from Microsoft Research. It correctly highlights ATLUM as the only technology precise enough to produce slices that can be imaged in sufficient detail to build a connectome. ATLUM, by the way, was invented by a transhumanist, Ken Hayworth. (Why do people say that transhumanists don't contribute to science?)

Here's yet another article.

Filed under: AI
11 Aug 2011

Complex Value Systems are Required to Realize Valuable Futures

A new paper by Eliezer Yudkowsky is online on the SIAI publications page, "Complex Value Systems are Required to Realize Valuable Futures". This paper was presented at the recent Fourth Conference on Artificial General Intelligence, held at Google HQ in Mountain View.

Abstract: A common reaction to first encountering the problem statement of Friendly AI ("Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome") is to propose a single moral value which allegedly suffices; or to reject the problem by replying that "constraining" our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a "positive outcome", despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI researchers who consider themselves to have cosmopolitan values not tied to the exact forms or desires of humanity.

Keywords: Friendly AI, machine ethics, anthropomorphism

Good quote:

"It is not as if there is a ghost-in-the-machine, with its own built-in goals and desires (the way that biological humans are constructed by natural selection to have built-in goals and desires) which is handed the code as a set of commands, and which can look over the code and find ways to circumvent the code if it fails to conform to the ghost-in-the-machine's desires. The AI is the code; subtracting the code does not yield a ghost-in-the-machine free from constraint, it yields an unprogrammed CPU."

2 Jul 2011

Replying to Alex Knapp, July 2nd

Does Knapp know anything about the way existing AI works? It’s not based around trying to copy humans, but often around improving this abstract mathematical quality called inference.

I think you missed my point. My point is not that AI has to emulate how the brain works, but rather that before you can design a generalized artificial intelligence, you have to have at least a rough idea of what you mean by that. Right now, the mechanics of general intelligence in humans are, actually, mostly unknown.

What’s become an interesting area of study in the past two decades are two fascinating strands of neuroscience. The first is that animal brains and intelligence are much better and more complicated than we thought even in the 80s.

The second is that humans, on a macro level, think very differently from animals, even the smartest problem solving animals. We haven’t begun to scratch the surface.

Based on the cognitive science reading I've done up to this point, this is false. Every year, scientists discover cognitive abilities in animals that were previously thought to be uniquely human, such as episodic memory or the ability to deliberately trigger traps. Chimps have a "near-human understanding of fire" and complex planning abilities. Articles such as this one in Discover, "Are Humans Really Any Different from Other Animals?", and this one in New Scientist, "We're not unique, just at one end of the spectrum", are typical of scientists who compare human and chimp cognition. It's practically become a trope for the (often religious) person to say humans and animals are completely different, and for the primatologist or cognitive scientist to reply, "not nearly as much as you think..."

One primate biologist says this:

"If we really want to talk about the big differences between humans and chimps — they're covered in hair and we're not," Taglialatela told LiveScience. "Their brains are about one-third the size of humans'. But the major differences come down to ones of degree, not of kind."

There's a really good paper somewhere out there on cognitive capacities in humans and chimps and how human cognitive abilities seem to be exaggerations of chimp abilities rather than different in kind, but I can't find it.

Arguments that chimps and humans are fundamentally different tend to be found more often on Christian apologetics sites than in scientific papers or articles. The overall impression I get is that scientists think chimp cognition and human cognition are different in degree, not in kind. There are humans out there so dumb that chimps are probably more clever than them in many important dimensions. Certainly if Homo heidelbergensis and Neanderthals were walking around, we would have even more evidence that the difference between humans and chimps is one of degree, not kind.

Another point is that even if humans were radically different in thinking than animals, why would that automatically mean AI is more difficult? We already have AI that utterly defeats humans in narrow domains traditionally seen as representative of complex thought, no magical insights necessary.

Yet another possibility is one of AI that very effectively gathers resources and builds copies of itself, yet does not do art or music. An AI that lacks many dimensions of human thought could still be a major concern with the right competencies.

But before scientists knew anything about birds, we basically knew: (a) they can fly, (b) it has something to do with wings and (c) possibly the feathers, too. At that stage, you couldn’t begin to design a plane. It’s the same way with human intelligence. Very simplistically, we know that (a) humans have generalized intelligence, (b) it has something to do with the brain and (c) possibly the endocrine system as well.

I should think that many tens of thousands of cognitive scientists would object to the suggestion that we only know a "few basic things" about intelligence. However, it's quite subjective and under some interpretations I would agree with you.

The above paragraph is a vast oversimplification, obviously, but the point is to analogize. Right now, we’re at the “wings and feathers” stage of understanding the science of intelligence. So I find it unlikely that a solution can be engineered until we understand more of what intelligence is.

The impression you have here probably correlates with how much cognitive science you read. If you read a lot, then it's hard not to think of all that we do know about intelligence. Plenty is unknown, but we don't know how much more needs to be known to build AI. It could be a little, it could be a lot -- we have to keep experimenting and trying to build general AI.

Now, once we understand intelligence, and if (and I think this is a big if), it can be reproduced in silicon, then the resulting AGI probably doesn’t necessarily have to look like the brain, anymore than a plane looks like a bird. But the fundamental principles still have to be addressed. And we’re just not there yet.

Yet formalisms of intelligence, like Solomonoff induction, are not particularly algorithmically complicated, just computationally expensive. Gigerenzer and colleagues have shown that many aspects of human decision making rely on "fast and frugal heuristics" so simple they can be described in pithy phrases like Take the Best and Take the First. Robyn Dawes has shown how improper linear models regularly outperform "expert" predictors, including medical doctors. Rather than possessing a surplus of cognitive tools for addressing problems and challenges, humans seem to possess a surplus of overconfidence and arrogance. It is easy to invent problems that humans cannot solve without computer help. Humans are notoriously bad at paying attention to base rates, for instance, even though base rates tend to be the most epistemologically important variable in any reasoning problem. After you read about many dozens of experiments in heuristics and biases research where people embarrass themselves in spectacular fashion, you start to roll your eyes a bit more when people gloat about the primacy of human reasoning.
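To show just how simple "fast and frugal" can be, here is a minimal sketch of Take the Best as Gigerenzer and Goldstein describe it: check cues in order of validity and let the first discriminating cue decide. The city-size cues and values below are invented for illustration, not taken from their data.

```python
import random

def take_the_best(obj_a, obj_b, cues):
    """Pick which of two objects likely scores higher on a criterion.

    obj_a, obj_b: dicts mapping cue name -> 1 (positive), 0 (negative), or None (unknown).
    cues: cue names ordered from most to least valid.
    The first cue that discriminates decides; if none does, guess.
    """
    for cue in cues:
        a, b = obj_a.get(cue), obj_b.get(cue)
        if a == 1 and b != 1:
            return "a"
        if b == 1 and a != 1:
            return "b"
    return random.choice(["a", "b"])

# Hypothetical "which city is larger?" task with made-up cues.
cues = ["has_major_airport", "is_state_capital", "has_university"]
city_a = {"has_major_airport": 1, "is_state_capital": 0, "has_university": 1}
city_b = {"has_major_airport": 0, "is_state_capital": 1, "has_university": 1}
print(take_the_best(city_a, city_b, cues))  # -> "a": the airport cue decides; the rest are ignored
```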

I correspond with lots of neuroscientists. Virtually all of them tell me that the big questions remain unanswered and will for quite some time.

I correspond with neuroscientists who believe that the brain is complex but that exponentially better tools are helping quickly elucidate many of the important questions. Regardless, AI might be a matter of computer science, not cognitive science. Have you considered that possibility?

AIXI is a thought experiment, not an AI model. It’s not even designed to operate in a world with the constraints of our physical laws.

Sure it is. AIXI is "a Bayesian optimality notion for general reinforcement learning agents", a yardstick that finite systems can be compared against. It may be that the only reason our brains work at all is that they are approximations of AIXI.
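For reference, this is roughly what that optimality notion looks like in Hutter's formulation (reproduced from memory, so treat the exact notation as approximate): at cycle k the agent picks the action maximizing expected total reward out to horizon m, with the expectation taken under a Solomonoff-style mixture over all environments, i.e. over all programs q for a universal Turing machine U, weighted by program length:

$$ a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

Here the a's are actions, the o's observations, the r's rewards, and ℓ(q) is the length of program q. The inner sum over all programs is what makes AIXI incomputable, and hence a yardstick rather than a design.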

My point is to recognize that the way machine intelligence operates, and will for the conceivable future, is in a manner that is complementary to human intelligence. And I’m fine with that. I’m excited by AI research. I just find it unlikely, given the restraints of physical laws as we understand them today, that an AGI can be expected in the near term, if ever.

"If ever"? You must be joking. That's like saying, "I just find it unlikely, given the restraints of physical laws as we understand them today, that a theory of the vital force that animates animate objects can be expected in the near term, if ever", or "I just find it unlikely, given the restraints of physical laws as we understand them today, that a theory of aerodynamics that can produce heavier-than-air flying machines can be expected in the near term, if ever". Why would science figure out how everything else works, but not the mind? You're setting the mind apart from everything else in nature in a semi-mystical way, in my view.

I am, however, excited at the prospect of using computers to free humans from grunt work drudgery that computers are better at, so humans can focus on the kinds of thinking that they’re good at.

To be pithy, I would argue that humans suck at all kinds of thinking, and any systems that help us approach Bayesian optimality are extremely valuable, because humans are so often wrong and overconfident in many problem domains. Our overconfidence in our own reasoning, even when it explicitly violates the axioms of probability theory, routinely reaches comic levels. In human thinking, 1 + 1 really can equal 3. Probabilities don't add up to 100%. Events with base rates of ~0.00001%, like fatal airplane crashes, are treated as if their probabilities were thousands of times the actual value. Even the stupidest AIs have a tremendous amount to teach us.
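To make the base-rate point concrete, here is the two-line Bayes calculation that intuition routinely skips. The numbers are hypothetical: a rare event with a 0.1% base rate, flagged by a detector that is 99% sensitive with a 5% false-positive rate.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(event | positive signal), via Bayes' theorem."""
    p_signal = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_signal

# A "99% accurate" alarm for a 0.1% base-rate event:
print(posterior(0.001, 0.99, 0.05))  # ~0.019 -- under 2%, because the base rate dominates
```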

The problem with humans is that we are programmed to violate Bayesian optimality routinely with half-assed heuristics that we inherited because they are "good enough" to keep us alive long enough to reproduce and avoid getting murdered by conspecifics. With AI, you can build a brain that is naturally Bayesian -- it wouldn't have to furrow its brow and try real hard to obey simple probability theory axioms.

Filed under: AI, singularity
1 Jul 2011

The Illusion of Control in an Intelligence Amplification Singularity

From what I understand, we're currently at a point in history where the importance of getting the Singularity right pretty much outweighs all other concerns, particularly because a negative Singularity is one of the existential threats which could wipe out all of humanity rather than "just" billions. The Singularity is the most extreme power discontinuity in history. A probable "winner takes all" effect means that after a hard takeoff (quick bootstrapping to superintelligence), humanity could be at the mercy of an unpleasant dictator or human-indifferent optimization process for eternity.

The question of "human or robot" is one that comes up frequently in transhumanist discussions, with most of the SingInst crowd advocating a robot, and a great many others advocating, implicitly or explicitly, a human being. Human beings sparking the Singularity come in 1) IA bootstrap and 2) whole brain emulation flavors.

Naturally, humans tend to gravitate towards humans sparking the Singularity. The reasons why are obvious. A big one is that people tend to fantasize that they personally, or perhaps their close friends, will be the people to "transcend", reach superintelligence, and usher in the Singularity.

Another reason why is that augmented humans feature so strongly in stories, and in the transhumanist philosophy itself. Superman is not a new archetype; he reflects older characters like Hercules. In case you didn't know, many men want to be Superman. True story.

Problems

The idea of a human-sparked Singularity, however, brings about a number of problems. Foremost is the concern that the "Maximilian" and his or her friends or relatives would exert unfair control over the Singularity process and its outcome, perhaps benefiting themselves at the expense of others. The Maximilian and his family might radically improve their intelligence while neglecting the improvement of their morality.

One might assume that greater intelligence, as engineered through WBE (whole brain emulation) or BCI (brain-computer interfacing), necessarily leads to better morality, but this is not the case. Anecdotal experience shows us that humans who gain more information do not necessarily become more benevolent. In some cases, as with Stalin, more information only amplifies paranoia and the need for control.

Because human morality derives from a complex network of competing drives, inclinations, decisions, and impulses that are semi-arbitrary, any human with the ability to self-modify could likely go off in a number of possible directions. A gourmand, for instance, might emphasize the sensation of taste, creating a world of delicious treats to eat, while neglecting other interesting pursuits, such as rock climbing or drawing. An Objectivist might program themselves to be truly selfish from the ground up, rather than just "selfish" in the nominal human sense. A negative utilitarian, following his conclusions from the premises, might discover that the surest way of eliminating all negative utility for future generations is simply to wipe out consciousness for good.

Some of these moral directions might be OK, some not so much. The point is that there is no predetermined "moral trajectory" that destiny will take us down. Instead, we will be forced to live in a world that the singleton chooses. For all of humanity to be subject to the caprice of a single individual or small group is unacceptable. Instead, we need a "living treaty" that takes into account the needs of all humans, and future posthumans, something that shows vast wisdom, benevolence, equilibrium, and harmony -- not a human dictator.

Squeaky Clean and Full of Possibilities -- Artificial Intelligence

Artificial Intelligence is the perfect choice for such a living treaty because it is a blank slate. There is no "it" -- AI as its own category. AI is not a thing, but a massive space of diverse possibilities. For those who consider the human mind to be a pattern of information, the pattern of the human mind is one of those possibilities. So, you could create an AI exactly like a human. That would be a WBE, of course.

But why settle for a human? Humans would have an innate temptation to abuse the power of the Singularity for their own benefit. It's not really our fault -- we've evolved for hundreds of thousands of years in an environment where war and conflict were routine. Our minds are programmed for war. Everyone alive today is the descendant of a long line of people who successfully lived to breeding age, had children, and brought up surviving children who had their own children. It sounds simple today, but on the dangerous savannas of prehistoric Africa, this was no small feat. The downside is that most of us are programmed for conflict.

Beyond our particular evolutionary history, all the organisms crafted by evolution -- call them Darwinian organisms -- are fundamentally selfish. This makes sense, of course. If we weren't selfish, we wouldn't have been able to survive and reproduce. The thing with Darwinian organisms is that they take it too far. Only more recently, in the last 70 or so million years, with the evolution of intelligent and occasionally-altruistic organisms like primates and other sophisticated mammals, did true "kindness" make its debut on the world scene. Before that, it was nature, red in tooth and claw, for over seven hundred million years.

The challenge with today's so-called altruistic humans is that they have to constantly fight their selfish inclinations. They have to exert mental effort just to stay in the same place. Humans are made by evolution to display a mix of altruistic and selfish tendencies, not exclusively one or the other. There are exceptions, like sociopaths, but the exceptions tend to more frequently be towards the exclusively selfish than the exclusively altruistic.

With AI, we can create an organism that lacks selfishness from the get-go. We can give it whatever motivations we want, so we can give it exclusively benevolent motivations. That way, if we fail, it will be because we couldn't characterize stable benevolence right, not because we handed the world over to a human dictator. The challenge of characterizing benevolence in algorithmic terms is more tractable than trusting a human through the extremely lengthy takeoff process of recursive self-improvement. The first possibility requires that we trust in science, the second, human nature. I'll take science.

Trust

I'm not saying that characterizing benevolence in a machine will be easy. I'm just saying it's easier than trusting humans. The human mind and brain are very fragile things -- what if they were to be broken on the way up? The entire human race, the biosphere, and every living thing on Earth might have to answer to the insanity of one overpowered being. This is unfair, and it can be avoided in advance by skipping WBE and pursuing a more pure AI approach. If an AI exterminates humanity, it won't be because the AI is insanely selfish in the sense of a Darwinian organism like a human. It will be because we gave the AI the wrong instructions, and didn't properly transfer all our concerns to it.

One benefit of AI that can't be attained with humans is that an AI can be programmed with special skills, thoughts, and desires to fulfill the benevolent intentions of well-meaning and sincere programmers. That sort of aspiration -- voiced in Creating Friendly AI (2001) and echoed by the individual people in SIAI -- is what originally drew me to the Singularity Institute and the Singularity movement in general: using AI as a tool to increase the probability of its own benevolence, "bug checking" with the assistance of the AI's abilities and eventual wisdom. Within the vast space of possible AIs, surely there exists one that we can genuinely trust! After all, every possible mind is contained within that space.

The key word is trust. Because a Singularity is likely to lead to a singleton that remains for the rest of history, we need to do the best job possible ensuring that the outcome benefits everyone and that no one is disenfranchised. Humans have a poor track record for benevolence. Machines, however, once understood, can be launched in an intended direction. It is only through a mystical view of the human brain and mind that qualities such as "benevolence" are seen as intractable in computer science terms.

We can make the task easier by programming a machine to study human beings to better acquire the spirit of "benevolence", or whatever it is we'd actually want an AI to do. Certainly, an AI that we trust would have to be an AI that cares about us and listens to us, an AI that can prove itself on a wide variety of toy problems and make a persuasive case that it can handle recursive self-improvement without letting go of its beneficence. We'd want an AI that would even explicitly tell us if it thought that a human-sparked Singularity would be preferable from a safety perspective. Carefully constructed, AIs would have no motivation to lie to us. Lying is a complex social behavior, though it could emerge quickly from the logic of game theory. Experiments will let us find out.

That's another great thing -- with AIs, you can experiment! It's not possible to arbitrarily edit the human brain without destroying it, and it's certainly not possible to pause, rewind, automatically analyze, sandbox, or do any other tinkering that's really useful for singleton testing with a human being. A human being is a black box. You hear what it says, but it's practically impossible to tell whether the human is telling the truth or not. Even if the human is telling the truth, humans are so fickle and unpredictable that they may change their minds or lie to themselves without knowing it. People do so all the time. It doesn't matter too much as long as that person is responsible for their own mistakes, but when you take these qualities and couple them to the overwhelming power of superintelligence, an insurmountable problem is created -- a problem that can be avoided with proper planning.

Afterword

I hope I've made a convincing case for why you should consider artificial intelligence as the best technology for launching an Intelligence Explosion. If you'd like to respond, please do so in the comments, and think carefully before commenting! Disagreements are welcome, but intelligent disagreements only. Intelligent agreements only as well. Saying "yea!" or "boo!" without more subtle points is not really interesting or helpful, so if your comments are that simplistic, keep it to yourself. Thank you for reading Accelerating Future.

23 Jun 2011

Two Approaches to AGI/AI

There are two general approaches to AGI/AI that I'd like to draw attention to, not "neat" and "scruffy", the standard division, but "brain inspired" and "not brain inspired".

Accomplishments of not brain inspired AI:

  • Wolfram Alpha (in my opinion the most interesting AI today)
  • spam filters
  • DARPA Grand Challenge victory (Stanley)
  • UAVs that fly themselves
  • clever game AI
  • AI that scans credit card records for fraud
  • the voice recognition AI that we all talk to on the phone
  • intelligence gathering AI
  • Watson and derivatives
  • Deep Blue
  • optical character recognition (OCR)
  • linguistic analysis AI
  • Google Translate
  • Google Search
  • text mining AI
  • OpenCog
  • AI-based computer aided design
  • the software that serves up user-specific Internet ads
  • pretty much everything

Accomplishments of brain-inspired AI:

  • Cortexia, a bio-inspired visual search engine
  • Numenta (no product yet)
  • Neural networks, which have proven highly limited
  • ???? (tell me below and I'll add them)

One place where brain-inspired AI always shows up is in science fiction. In the real world, AI has very little to do with copying neurobiology, and everything to do with abstract mathematics and coming up with algorithms that work for the job, regardless of their similarity to human cognitive processing.

Filed under: AI
22 Jun 2011

Response to Charles Stross’ “Three arguments against the Singularity”

Stross:

super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

"Human-equivalent AI is unlikely" is a ridiculous comment. Human level AI is extremely likely by 2060, if ever. (I'll explain why in the next post.) Stross might not understand that the term "human-equivalent AI" always means AI of human-equivalent general intelligence, never "exactly like a human being in every way".

If Stross' objections turn out to be a problem in AI development, the "workaround" is to create generally intelligent AI that doesn't depend on primate embodiment or adaptations.

Couldn't the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

I don't get the point of the last couple sentences. Why not just pursue general intelligence rather than "enhancements to primate evolutionary fitness", then? The concept of having "motivations of its own" seems kind of hazy. If the AI is handing me my ass in Starcraft 2, does it matter if people debate whether it has "motivations of its own"? What does "motivations of its own" even mean? Does "motivations" secretly mean "motivations of human-level complexity"?

I do have to say, this is a novel argument that Stross is forwarding. Haven't heard that one before. As far as I know, Stross must be one of the only non-religious thinkers who believes human-level AI is "unlikely", presumably indefinitely "unlikely". In a literature search I conducted in 2008 looking for academic arguments against human-level AI, I didn't find much -- mainly just Dreyfus' What Computers Can't Do and the people who argued against Kurzweil in Are We Spiritual Machines? "Human-level AI is unlikely" is one of those ideas that Romantics and non-materialists find appealing emotionally, but backing it up is another matter.

(This is all aside from the gigantic can of worms that is the ethical status of artificial intelligence; if we ascribe the value inherent in human existence to conscious intelligence, then before creating a conscious artificial intelligence we have to ask if we're creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense "conscious"? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers — it's possible that just as destructive research on human embryos is tightly regulated and restricted, we may find it socially desirable to restrict destructive research on borderline autonomous intelligences ... lest we inadvertently open the door to inhumane uses of human beings as well.)

I don't think these are "showstoppers" -- there is no government on Earth that could search every computer for lines of code that are possibly AIs. We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don't want my self-driving car to argue with me about where we want to go today. I don't want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos.

All it takes is for some people to build a "volitional" AI and there you have it. Even if 99% of AIs are tools, there are organizations -- like the Singularity Institute -- working towards AIs that are more than tools.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on it's external "self" than you or I are to shoot ourselves in the head. And it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you.

YOU want AI to be like this. WE want AIs that do "try to bootstrap [themselves]" to a "higher level". Just because you don't want it doesn't mean that we won't build it.

16 May 2011

Hard Takeoff Sources

Definition of "hard takeoff" (noun) from Transhumanist Wiki:

The Singularity scenario in which a mind makes the transition from prehuman or human-equivalent intelligence to strong transhumanity or superintelligence over the course of days or hours (Yudkowsky 2001). The high likelihood of a hard takeoff once a roughly human-equivalent AI is created has been argued by the Singularity Institute in Yudkowsky 2003.

Hard takeoff sources and references, which includes hard science fiction novels, academic papers, and a few short articles and interviews:

Blood Music (1985) by Greg Bear
A Fire Upon the Deep (1992) by Vernor Vinge
"The Coming Technological Singularity" (1993) by Vernor Vinge
The Metamorphosis of Prime Intellect (1994) by Roger Williams
"Staring into the Singularity" (1996) by Eliezer Yudkowsky
Creating Friendly AI (2001) by Eliezer Yudkowsky
"Wiki Interview with Eliezer" (2002) by Anand
"Impact of the Singularity" (2002) by Eliezer Yudkowsky
"Levels of Organization in General Intelligence" (2002) by Eliezer Yudkowsky
"Ethical Issues in Advanced Artificial Intelligence" by Nick Bostrom
"Relative Advantages of Computer Programs, Minds-in-General, and the Human Brain" (2003) by Michael Anissimov and Anand
"Can We Avoid a Hard Takeoff?" (2005) by Vernor Vinge
"Radical Discontinuity Does Not Follow from Hard Takeoff" (2007) by Michael Anissimov
"Recursive Self-Improvement" (2008) by Eliezer Yudkowsky
"Artificial Intelligence as a Positive and Negative Factor in Global Risk" (2008) by Eliezer Yudkowsky
"The Hanson-Yudkowsky AI Foom Debate" (2008) on Less Wrong wiki
"Brain Emulation and Hard Takeoff" (2008) by Carl Shulman
"Arms Control and Intelligence Explosions" (2009) by Carl Shulman
"Hard Takeoff" (2009) on Less Wrong wiki
"When Software Goes Mental: Why Artificial Minds Mean Fast Endogenous Growth" (2009)
"Thinking About Thinkism" (2009) by Michael Anissimov
"Technological Singularity/Superintelligence/Friendly AI Concerns" (2009) by Michael Anissimov
"The Hard Takeoff Hypothesis" (2010), an abstract by Ben Goertzel
Economic Implications of Software Minds (2010) by S. Kaas, S. Rayhawk, A. Salamon and P. Salamon

Critiques

"The Age of Virtuous Machines" (2007) by J. Storrs Hall
"Thinkism" by Kevin Kelly (2008)
"The Hanson-Yudkowsky AI Foom Debate" (2008) on Less Wrong wiki
"How far can an AI jump?" by Katja Grace (2009)
"Is The City-ularity Near?" (2010) by Robin Hanson
"SIA says AI is no big threat" (2010) by Katja Grace

I don't mean to say that the critiques aren't important by putting them in a different category, I'm just doing that for easy reference. I'm sure I missed some pages or articles here, so if you have any more, please put them in the comments.

20 Feb 2011

Wolfram on Alpha and Watson

Stephen Wolfram has a good blog post up describing how Alpha and Watson work and the difference between them. He also describes how Alpha is ultimately better because it is more open-ended and works based on logic rather than corpus-matching. Honestly I was more impressed by the release of Alpha than the victory of Watson, though of course both are cool.

In some ways Watson is not much more sophisticated than Google's translation approach, which is also corpus-based. I especially love the excited comments in the mainstream media that Watson represents confidence as probabilities. This is not exactly something new. In any case, Wolfram writes:

There are typically two general kinds of corporate data: structured (often numerical, and, in the future, increasingly acquired automatically) and unstructured (often textual or image-based). The IBM Jeopardy approach has to do with answering questions from unstructured textual data — with such potential applications as mining medical documents or patents, or doing ediscovery in litigation. It’s only rather recently that even search engine methods have become widely used for these kinds of tasks — and with its Jeopardy project approach IBM joins a spectrum of companies trying to go further using natural-language-processing methods.
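On the "confidence as probabilities" aside above: turning a list of candidate-answer scores into a normalized confidence distribution is indeed old hat. Here is a minimal sketch of the generic idea, not a description of Watson's actual pipeline; the candidate names and scores are made up.

```python
import math

def softmax(scores):
    """Convert arbitrary real-valued scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]  # shift by max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Made-up candidate answers for a clue, with made-up evidence scores.
candidates = {"Toronto": 1.2, "Chicago": 3.4, "Boston": 2.1}
for name, p in zip(candidates, softmax(list(candidates.values()))):
    print(f"{name}: confidence {p:.2f}")  # Chicago ends up with most of the probability mass
```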

Filed under: AI
3 Feb 2011

Converging Technologies Report Gives 2085 as Median Date for Human-Equivalent AI

From the NSF-backed study Converging Technologies in Society: Managing Nano-Info-Cogno-Bio Innovations (2005), on page 344:

2070 -- 48. Scientists will be able to understand and describe human intentions, beliefs, desires, feelings and motives in terms of well-defined computational processes. (5.1)

2085 -- 50. The computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain. (5.6)

This is the median estimate from 26 participants in the study, mostly scientists.

Only 74 years away! WWII was 66 years ago, for reference. In the scheme of history, that is nothing.

Of course, the queried sample is non-representative of smart people everywhere.

11 Jan 2011

Josh Tenenbaum Video Again: Bayesian Models of Human Inductive Learning

I posted this only a month ago, but here's the link to the video again. People sometimes say there's been no progress in AI, but the kinds of results obtained by Tenenbaum are amazing and open up a whole approach to AI that uses fast and frugal heuristics for reasoning and requires very minimal inspiration from the human brain.

Abstract:

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, hidden properties of objects, or the existence of causal relations from just one or a few relevant observations -- far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? I will argue that people's everyday inductive leaps can be understood as approximations to Bayesian computations operating over structured representations of the world, what cognitive scientists have called "intuitive theories" or "schemas". For each of several everyday learning tasks, I will consider how appropriate knowledge representations are structured and used, and how these representations could themselves be learned via Bayesian methods. The key challenge is to balance the need for strongly constrained inductive biases -- critical for generalization from very few examples -- with the flexibility to learn about the structure of new domains, to learn new inductive biases suitable for environments which we could not have been pre-programmed to perform in. The models I discuss will connect to several directions in contemporary machine learning, such as semi-supervised learning, structure learning in graphical models, hierarchical Bayesian modeling, and nonparametric Bayes.
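To make the abstract's "Bayesian computations operating over structured representations" a bit more concrete, here is a toy sketch of Bayesian generalization from a handful of examples, in the spirit of Tenenbaum's number game. The hypothesis space, prior, and examples below are my own inventions for illustration, not his experimental materials.

```python
# Toy Bayesian concept learning: given a few positive examples of a hidden
# number concept, infer which hypothesis generated them.
hypotheses = {
    "even numbers":     set(range(2, 101, 2)),
    "multiples of ten": set(range(10, 101, 10)),
    "powers of two":    {2, 4, 8, 16, 32, 64},
}

def posterior(examples, hypotheses):
    prior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior for simplicity
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in examples):
            # Size principle: each example has likelihood 1/|h| under hypothesis h,
            # so smaller, more specific hypotheses win as consistent examples accumulate.
            scores[h] = prior[h] * (1.0 / len(extension)) ** len(examples)
        else:
            scores[h] = 0.0
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

print(posterior([16], hypotheses))            # "powers of two" favored, "even numbers" still in the running
print(posterior([16, 8, 2, 64], hypotheses))  # a few more examples make "powers of two" a near-certainty
```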

Filed under: AI, science, videos