From what I understand, we're currently at a point in history where the importance of getting the Singularity right pretty much outweighs all other concerns, particularly because a negative Singularity is one of the existential threats that could wipe out all of humanity, rather than "just" billions. The Singularity is the most extreme power discontinuity in history. A probable "winner takes all" effect means that after a hard takeoff (quick bootstrapping to superintelligence), humanity could be at the mercy of an unpleasant dictator or a human-indifferent optimization process for eternity.
The question of "human or robot" is one that comes up frequently in transhumanist discussions, with most of the SingInst crowd advocating a robot, and a great many others advocating, implicitly or explicitly, a human being. Human-sparked Singularity scenarios come in two flavors: 1) intelligence augmentation (IA) bootstrapping and 2) whole brain emulation.
Naturally, humans gravitate towards the idea of humans sparking the Singularity, and the reasons are obvious. A big one is that people tend to fantasize that they personally, or perhaps their close friends, will be the ones to "transcend", reach superintelligence, and usher in the Singularity.
Another reason is that augmented humans feature so strongly in stories, and in transhumanist philosophy itself. Superman is not a new archetype; he reflects older characters like Hercules. In case you didn't know, many men want to be Superman. True story.
The idea of a human-sparked Singularity, however, brings with it a number of problems. Foremost is the concern that the "Maximilian" -- the first human to transcend -- and his or her friends or relatives would exert unfair control over the Singularity process and its outcome, perhaps benefiting themselves at the expense of others. The Maximilian and his family might radically improve their intelligence while neglecting the improvement of their morality.
One might assume that greater intelligence, as engineered through WBE (whole brain emulation) or BCI (brain-computer interfacing), necessarily leads to better morality, but this is not the case. Anecdotal experience shows us that humans who gain more information do not necessarily become more benevolent. In some cases, as with Stalin, more information only amplifies paranoia and the need for control.
Because human morality derives from a complex network of competing drives, inclinations, decisions, and impulses that are semi-arbitrary, any human with the ability to self-modify could go off in any number of possible directions. A gourmand, for instance, might emphasize the sensation of taste, creating a world of delicious treats to eat while neglecting other interesting pursuits, such as rock climbing or drawing. An Objectivist might program themselves to be truly selfish from the ground up, rather than just "selfish" in the nominal human sense. A negative utilitarian, following the conclusions of his premises, might decide that the surest way of eliminating all negative utility for future generations is simply to wipe out consciousness for good.
Some of these moral directions might be OK; some, not so much. The point is that there is no predetermined "moral trajectory" that destiny will take us down. Instead, we will be forced to live in whatever world the singleton -- the single decision-making power that emerges from the Singularity -- chooses. For all of humanity to be subject to the caprice of a single individual or small group is unacceptable. Instead, we need a "living treaty" that takes into account the needs of all humans and future posthumans, something that shows vast wisdom, benevolence, equilibrium, and harmony -- not a human dictator.
Squeaky Clean and Full of Possibilities -- Artificial Intelligence
Artificial Intelligence is the perfect choice for such a living treaty because it is a blank slate. There is no single "it" -- no one thing that constitutes AI as a category. AI is not a thing, but a massive space of diverse possibilities. For those who consider the human mind to be a pattern of information, the pattern of the human mind is one of those possibilities. So, you could create an AI exactly like a human. That would be a WBE, of course.
But why settle for a human? Humans would face an innate temptation to abuse the power of the Singularity for their own benefit. It's not really our fault -- we evolved for hundreds of thousands of years in an environment where war and conflict were routine. Everyone alive today is the descendant of a long line of people who successfully lived to breeding age, had children, and raised surviving children who had children of their own. That sounds simple today, but on the dangerous savannas of prehistoric Africa it was no small feat. The downside is that most of us are programmed for conflict.
Beyond our particular evolutionary history, all organisms crafted by evolution -- call them Darwinian organisms -- are fundamentally selfish. This makes sense, of course: if we weren't selfish, we wouldn't have been able to survive and reproduce. The trouble with Darwinian organisms is that they take it too far. Only recently, in the last 70 or so million years, with the evolution of intelligent and occasionally altruistic organisms like primates and other sophisticated mammals, did true "kindness" make its debut on the world scene. Before that, it was nature, red in tooth and claw, for over seven hundred million years.
The challenge with today's so-called altruistic humans is that they must constantly fight their selfish inclinations -- they have to exert mental effort just to stay in the same place. Evolution made humans to display a mix of altruistic and selfish tendencies, not exclusively one or the other. There are exceptions, like sociopaths, but the exceptions more frequently fall on the exclusively selfish side than the exclusively altruistic one.
With AI, we can create an organism that lacks selfishness from the get-go. We can give it whatever motivations we want, so we can give it exclusively benevolent motivations. That way, if we fail, it will be because we couldn't characterize stable benevolence correctly, not because we handed the world over to a human dictator. Characterizing benevolence in algorithmic terms is a more tractable challenge than trusting a human through the entire takeoff process of recursive self-improvement. The first possibility requires that we trust in science; the second, human nature. I'll take science.
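To make the "give it whatever motivations we want" point concrete, here is a toy sketch in Python. Nothing here is a real AI, and the two utility functions are invented for this example; the point is only that a software agent's goals can be an explicit, swappable, inspectable piece of code rather than an evolved given:

```python
# Toy illustration: two "agents" that differ only in the explicit
# utility function we hand them.

def selfish_utility(outcome):
    """Cares only about its own payoff, like a Darwinian organism."""
    return outcome["self"]

def benevolent_utility(outcome):
    """Cares about everyone's payoff equally."""
    return outcome["self"] + outcome["others"]

outcomes = [
    {"self": 10, "others": 0},   # grab everything
    {"self": 6,  "others": 8},   # share and cooperate
]

def choose(utility):
    # The agent simply picks the outcome its utility function ranks highest.
    return max(outcomes, key=utility)

print(choose(selfish_utility))     # -> {'self': 10, 'others': 0}
print(choose(benevolent_utility))  # -> {'self': 6, 'others': 8}
```

The hard part, of course, is that real benevolence is nothing like a one-line function; the sketch only shows that motivations in software are explicit objects we can write, read, and test.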
I'm not saying that characterizing benevolence in a machine will be easy -- just that it's easier than trusting humans. The human mind and brain are very fragile things; what if they were to break on the way up? The entire human race, the biosphere, and every living thing on Earth might then have to answer to the insanity of one overpowered being. That is unfair, and it can be avoided in advance by skipping WBE and pursuing a purer AI approach. If an AI exterminates humanity, it won't be because the AI is insanely selfish in the sense of a Darwinian organism like a human; it will be because we gave the AI the wrong instructions and failed to properly transfer all our concerns to it.
One benefit of AI that can't be attained with humans is that an AI can be programmed with special skills, thoughts, and desires to fulfill the benevolent intentions of well-meaning and sincere programmers. That aspiration, voiced in Creating Friendly AI (2001) and echoed by the individual people in SIAI, is what originally drew me to the Singularity Institute and the Singularity movement in general: using AI as a tool to increase the probability of its own benevolence, "bug checking" with the assistance of the AI's abilities and eventual wisdom. Within the vast space of possible AIs, surely there exists one that we can genuinely trust! After all, every possible mind is contained within that space.
The key word is trust. Because a Singularity is likely to lead to a singleton that remains for the rest of history, we need to do the best job possible of ensuring that the outcome benefits everyone and that no one is disenfranchised. Humans have a poor track record for benevolence. Machines, however, once understood, can be launched in an intended direction. Only a mystical view of the human brain and mind makes qualities such as "benevolence" seem intractable in computer science terms.
We can make the task easier by programming a machine to study human beings in order to acquire the spirit of "benevolence", or whatever it is we'd actually want an AI to do. Certainly, an AI that we trust would have to be an AI that cares about us and listens to us -- an AI that can prove itself on a wide variety of toy problems and make a persuasive case that it can handle recursive self-improvement without letting go of its benevolence. We'd want an AI that would even explicitly tell us if it thought that a human-sparked Singularity would be preferable from a safety perspective. Carefully constructed, AIs would have no motivation to lie to us. Lying is a complex social behavior, though it could emerge quickly from the logic of game theory; experiments will let us find out.
That's another great thing: with AIs, you can experiment! It's not possible to arbitrarily edit the human brain without destroying it, and it's certainly not possible to pause, rewind, automatically analyze, sandbox, or do any of the other tinkering that's really useful for singleton testing on a human being. A human being is a black box. You hear what it says, but it's practically impossible to tell whether the human is telling the truth. Even a truthful human is fickle and unpredictable enough to change their mind, or lie to themselves without knowing it -- people do so all the time. That doesn't matter much as long as the person is responsible for their own mistakes, but couple those qualities to the overwhelming power of superintelligence and you create an insurmountable problem -- a problem that can be avoided with proper planning.
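As a minimal sketch of what "pause and rewind" means for a software mind (the ToyAgent class is invented for this example; a real AI's state would be vastly larger, but the principle is the same):

```python
import copy

class ToyAgent:
    """A trivially inspectable 'mind': its entire state is one dict."""
    def __init__(self):
        self.state = {"steps": 0, "memory": []}

    def step(self, observation):
        self.state["steps"] += 1
        self.state["memory"].append(observation)

agent = ToyAgent()
agent.step("first observation")

checkpoint = copy.deepcopy(agent.state)  # pause: snapshot the full state

agent.step("second observation")
assert agent.state["steps"] == 2

agent.state = checkpoint                 # rewind: restore the snapshot
assert agent.state["steps"] == 1         # the second step never "happened"
assert agent.state["memory"] == ["first observation"]
```

None of these operations -- snapshot, inspect, restore -- has any analogue for a living human brain, which is exactly the asymmetry at issue.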
I hope I've made a convincing case for why you should consider artificial intelligence the best technology for launching an Intelligence Explosion. If you'd like to respond, please do so in the comments, and think carefully before commenting! Disagreements are welcome, but intelligent disagreements only -- and the same goes for agreements. Saying "yea!" or "boo!" without more subtle points isn't interesting or helpful, so if your comment is that simplistic, keep it to yourself. Thank you for reading Accelerating Future.