One version of the uploading idea: take a preserved dead brain, slice it into very thin sections, scan the slices, and build a computer simulation of the entire brain.
If this process manages to give you a sufficiently accurate simulation...
Prof. Myers objected vociferously, writing, "It won’t. It can’t.", and then launched into a reasonable attack on the notion of scanning a living human brain at nanoscale resolution with current fixation technology. The trouble is that Prof. Myers is criticizing one highly specific proposal, exhaustively simulating every axon and dendrite in a live brain, as if it were the only proposal, or even the central proposal, put forward by Sandberg and Bostrom. In fact, on page 13 of the report, the authors present a table of 11 progressively more detailed "levels of emulation", ranging from simulating the brain with high-level representational "computational modules" down to simulating the quantum behavior of individual molecules. In his post, Myers writes as if the 5th level of detail, simulating all axons and dendrites, were the only path to whole brain emulation (WBE) proposed in the report (it isn't), and as if the authors were proposing that WBE of the human brain is possible with present-day fixation techniques (they aren't).
In fact, the report presents Whole Brain Emulation as a technological goal with a wide range of possible routes to its achievement. The narrow method that Myers criticizes is only one approach among many, and not one that I would think is particularly likely to work. In the comments section, Myers concurs that another approach to WBE could work perfectly well:
This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.
An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.
But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule.
But the report does not mandate that a "brute force dismantling and reconstruction of every cell in the brain" is the only way forward for uploading. This makes it look as if Myers did not read the report, even though he claims, "I read the paper."
Slicing and scanning a brain will be necessary but by no means sufficient to create a high-detail Whole Brain Emulation. Surely, it is difficult to imagine how the salient features of a brain could be captured without scanning it in some way.
What Myers seems to be objecting to is a dogmatically reductionist, "brain in, emulation out" direct-scanning approach that is not actually being advocated by the authors of the report. The report is non-dogmatic, stating that a two-phase approach to WBE is required, where "The first phase consists of developing the basic capabilities and settling key research questions that determine the feasibility, required level of detail and optimal techniques. This phase mainly involves partial scans, simulations and integration of the research modalities." In this first phase, there is ample room for figuring out what the tissue actually does. That understanding can then be used to simplify the scanning and representation process. How much understanding is required, versus blind scan-and-simulate, is up for debate, but few would claim that our current level of neuroscientific understanding suffices.
Describing the difficulties of comprehensive scanning, Myers writes:
And that’s another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue?
Measuring the epigenetic state of every nucleus is not likely to be required to create convincing, useful, and self-aware Whole Brain Emulations. No neuroscientist familiar with the idea has ever claimed this. The report does not claim this, either. Myers seems to be inferring this claim himself through his interpretation of Hallquist's brusque 2-sentence summary of the 130-page report. Hallquist's sentences need not be interpreted this way -- "slicing and scanning" the brain could be done simply to map neural network patterns rather than to capture the epigenetic state of every nucleus.
Next, Myers objects to the idea that brain emulations could operate at faster-than-human speeds. He responds to a passage in "Intelligence Explosion: Evidence and Import", another paper cited in the Hallquist post which claims, "Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly." To this, Myers says:
You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed… how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.
On a first read, this objection almost sounds as if Prof. Myers does not understand that software runs faster on a faster computer. Reading the post carefully, that does not seem to be what he actually means, but since the connotation is there, the point is worth addressing directly.
Software is a series of electrical signals passing through logic gates. It is agnostic to the processing speed of the underlying computer: the pattern of computation is the same whether the clock speed of the processor is 2 kHz or 2 GHz. When software is ported from a 2 kHz computer to a 2 GHz computer, it does not stand up and object to this "tweaking [of] the clock speed". No "waving of hands" is required. The software may well be unable to detect that the substrate has changed, and even if it can, the change will have no impact on its functioning unless the programmers specifically write code that makes it react.
Changing the speed at which software runs is routine. If the hardware can support the speed change, speeding the software up is as simple as pressing a button. This is an elementary point.
The crux of Myers' objection seems to actually be about the interaction of the simulation with the environment. This objection makes much more sense. In the comments, Carl Shulman responds to Myers' objection:
This seems to assume, contrary to the authors, running a brain model at increased speeds while connected to real-time inputs. For a brain model connected to inputs from a virtual environment, the model and the environment can be sped up by the same factor: running the exact same programs (brain model and environment) on a faster (serial speed) computer gets the same results faster. While real-time interaction with the outside would not be practicable at such speedup, the accelerated models could still exchange text, audio, and video files (and view them at high speed-up) with slower minds.
Here, there seems to be a simple misunderstanding on Myers' part, where he is assuming that Whole Brain Emulations would have to be directly connected to real-world environments rather than virtual environments. The report (and years of informal discussion on WBE among scientists) more or less assumes that interaction with the virtual environment would be the primary stage in which the WBE would operate, with sensory information from an (optional) real-world body layered onto the VR environment as an addendum. As the report describes, "The environment simulator maintains a model of the surrounding environment, responding to actions from the body model and sending back simulated sensory information. This is also the most convenient point of interaction with the outside world. External information can be projected into the environment model, virtual objects with real world affordances can be used to trigger suitable interaction etc."
It is unlikely that an arbitrary WBE would be running at a speed that lines it up precisely with the 200 Hz firing rate of human neurons, the rate at which we think. More realistically, the emulation is likely to be much slower or much faster than the characteristic human rate, which exists as a tiny sliver in a wide expanse of possible mind-speeds. It would be far more reasonable -- and just easier -- to run the WBE in a virtual environment with a speed suited to its thinking speed. Otherwise, the WBE would perceive the world around it running at either a glacial pace or a hyper-accelerated one, and have a difficult time making much sense of either.
Since the speed of the environment can be smoothly scaled with the speed of the WBE, the problems that Myers cites with respect to "turn[ing] it up to 11" can be duly avoided. If the mind is turned up to 11, which is perfectly possible given adequate computational resources, then the virtual environment can be turned up to 11 as well. After all, the computational resources required to simulate a detailed virtual environment would pale in comparison to those required to simulate the mind itself. Thus, the mind can be turned up to 11, 12, 13, 14, or far beyond with the push of a button, to whatever level the computing hardware can support. Given the historic progress of computing hardware, this may well eventually be thousands or even millions of times the human rate of thinking. Considering minds that think and innovate a million times faster than us might be somewhat intimidating, but there it is, a direct result of the many intriguing and counterintuitive consequences of physicalism.
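To make Shulman's point concrete, here is a toy sketch (the update rules below are arbitrary stand-ins I made up, not anything from the report): a "mind" and an "environment" stepping in lockstep produce exactly the same trajectory no matter how fast the hardware executes each tick.

```python
import time

def run_coupled(steps, seconds_per_tick):
    """Step a toy 'mind' and 'environment' in lockstep.

    seconds_per_tick stands in for hardware speed: faster hardware
    simply reaches each tick sooner. The state trajectory is identical.
    """
    mind, env = 1, 0
    for _ in range(steps):
        env = (env + mind) % 97           # environment reacts to the mind
        mind = (mind + 3 * env + 1) % 89  # mind reacts to the environment
        time.sleep(seconds_per_tick)      # hardware speed, not program logic
    return mind, env

fast = run_coupled(300, 0.0)     # "2 GHz" hardware
slow = run_coupled(300, 0.0001)  # "2 kHz" hardware
assert fast == slow              # same computation, different wall-clock time
```

The only observer who can tell the two runs apart is one outside the simulation measuring wall-clock time, which is exactly why real-time interaction with the outside world is the one thing that doesn't scale.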
There isn't enough in the world.
Not enough wealth to go around, not enough space in cities, not enough medicine, not enough intelligence or wisdom. Not enough genuine fun or excitement. Not enough knowledge. Not enough solutions to global problems.
What we need is more. And we need it soon. The world population is doubling every 34 years. Instead of turning back the clock, we must move towards the future.
There is a bare minimum that we should demand out of the future. Without this bare minimum, we're just running in place. Here is what I think that minimum is:
1) More space
2) More health
3) More water
4) More time
5) More intelligence
There is actually a lot of space on this earth. About 90 million square kilometers of land isn't covered in snow or mountains. That's about 5,000 times larger than the New York City metro area. Less than 1% of this land has any appreciable population density. Everywhere outside of Europe, there are vast districts the size of Texas with no more than a few thousand people. The world is "crowded" because of logistics, not space.
The main constraints on space are transportation and infrastructure rather than lack of actual land. Most population centers are around the coast and its natural harbors. Is this because the rest of the land is uninhabitable? No. It's because being on the coast drives the local economy. What if you can drive the economy without the coast?
With better technologies, we can decentralize infrastructure and spread out more. The most important factors are energy and water. If you can secure these and cheap transportation, many areas can be made habitable.
For energy, the only way to get around centralized solutions is to generate your own. Looking forward 10-20 years, this means solar panels. Only solar has the versatility needed to work anywhere; full-spectrum solar panels can generate energy even when the sky is grey. Right now, the main barrier is cost, but the cost of solar panels has been dropping by 7% annually for the last 30 years. If this trend continues, by 2030 solar electricity will cost half as much as coal electricity.
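A quick compounding calculation shows how the 7% figure cashes out. The assumption that solar currently costs about twice as much as coal is mine, purely for illustration:

```python
def cost_after(initial, annual_decline, years):
    """Project a cost that falls by a fixed fraction each year."""
    return initial * (1 - annual_decline) ** years

ratio_now = 2.0  # assumed: solar costs roughly 2x coal today
ratio_2030 = cost_after(ratio_now, 0.07, 18)  # 18 more years of 7% declines
# 0.93**18 is about 0.27, so the ratio falls to roughly 0.54:
# solar at about half the cost of coal.
```

The interesting property of exponential decline is that the conclusion is insensitive to the starting point: whatever solar costs today, eighteen years of 7% declines cut it to about a quarter.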
The other limitation is transportation. Physical distance creates expense and stress. Yet, better technologies over the last hundred years have revolutionized transportation and completely changed the face of cities. The vast majority of adults either drive or use efficient mass transit systems. A hundred years ago, we used horses. In twenty years, we will use self-driving cars. In forty years, better navigational AI and nanotech will allow aircars. Flying cars will definitely be developed -- they will just need to be piloted by software programs smart enough to do all the work.
To spread out without destroying the environment, our manufacturing processes will have to be made clean. There are two ideal ways to go about this: growing products with synthetic biology and molecular manufacturing. In the event that both of these methods prove intractable, advanced robotics alone will allow for highly automated and precise manufacturing processes without waste.
The planet is not crowded! Our technology just sucks. More efficient technology is also better for the environment. This is not a choice. We either develop the technology we need to live anywhere, or suffer in increasingly cramped cities.
Human health, however, is sorely lacking. People in developing countries suffer from terrible diseases, while many in developed countries are overweight and cannot exert themselves. Only the wealthiest 1% of the world can afford healthy, diverse, flavorful foods. Each day, 150,000 people die from age-related disease: 20,000 from heart disease and 17,000 from stroke. Another 3,450 die in traffic accidents and 3,400 from malaria.
What can cure these maladies? Science and medicine.
The key to medicine is making people who don't get sick to begin with. Many of the plagues on human health can be viewed as special cases of a general problem: the cells of the body are not reprogrammable or replaceable. The body naturally reprograms and replaces cells, but is eventually overcome. We must amplify the body's natural ability to deal with disease and the ravages of aging. There are two ways in which this might be thoroughly accomplished: artificial cells or microscale robots.
Tiny machines called MEMS (microelectromechanical systems) have already been implanted into the human body thousands of times, and they have a wide range of desirable properties for medicine. To augment the immune system, though, these machines will have to be much more sophisticated. Robert Freitas has designed a wide variety of microscale machines for improving human health, including artificial red and white blood cells. Fabricating these machines will require nanoscale manufacturing.
An artificial cell with a non-standard design might be made impervious to pathogens, which rely on certain biological universals which could be modified in artificial cells. Cells artificially produced using the patient's genetic code would be at home in the body and provide superior disease immunity and longevity. Artificial stem cells could be introduced to tissue to produce these new cells indefinitely. Yet, artificial cells do not currently exist. PACE, programmable artificial cell evolution, a project funded by the EU in 2004-2008, did some interesting work, but a true programmable artificial cell is still a ways off. Given the tremendous demand there would be for such cells, their eventual development seems highly likely. Cells are already being genetically reprogrammed for a variety of purposes in plants and animals. Artificial cells are the next step. If we can reprogram our own cells quickly, disease can be averted, possibly completely.
The most crucial necessity of life is water. Millions suffer and die without it. Vast tracts of good land lie empty and dead for lack of it. In some areas, good water is expensive. Some geopolitical experts foresee wars fought over water.
Though you might think that developed countries like the United States have the issue of water squared away, we certainly don't. Here's an example: this spring, it looked like the US corn crop was going to be a record-breaker. By late July, a combination of drought and heat had shriveled the crop to a 10-year low. US corn is the foundation on which much of the world's food supply is based. Cheaper and more plentiful water could have saved the corn from the drought. Furthermore, water demand is expected to exceed supply in more than 10 US cities by 2050.
Civilization is closely tied to the plentiful availability of fresh water. Modern societies cannot exist without it. Because water is such a foundational aspect of human existence, technologies that increase its availability can improve quality of life greatly. Making water more available would also allow us to colonize more remote places, addressing the issue of open space.
A few examples of currently existing water technologies that could be game-changing include nano-filters, machines that extract water from the air, and waterproof sand. And once you have water, you can usually grow food.
Perhaps the most exciting water technology is the machine that extracts water from the air, the atmospheric water generator. These devices were only invented in 2006; DARPA put millions towards getting them developed, and after years of little progress, there was finally a breakthrough. A $50,000 machine can extract 10,000 liters of water from the air per day! In arid regions such as the deserts of Iraq, a similar machine can extract 2,200 gallons a day. A machine costing a mere $1,300 can extract 20 liters a day -- enough for plenty of applications. A machine that extracts 5,000 liters a day costs $170,000, and a version that runs entirely on solar power -- no power input needed! -- is $250,000. This is brand new technology, and very exciting. It's enough to make an oasis in the middle of a desert. The Sahara Forest Project is doing exactly that.
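For comparison's sake, here is the per-liter-of-daily-capacity arithmetic on the price figures quoted above (capital cost only; energy and maintenance are ignored):

```python
# Dollars of purchase price per liter of daily output, from the figures above.
machines = {
    "10,000 L/day unit": 50_000 / 10_000,  # $5 per L/day of capacity
    "5,000 L/day unit": 170_000 / 5_000,   # $34 per L/day of capacity
    "20 L/day unit": 1_300 / 20,           # $65 per L/day of capacity
}
```

As usual, capacity gets cheaper in bulk: the largest unit delivers a liter per day of capacity at a thirteenth the price of the smallest one.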
Death is eventually inevitable, but staving it off for as long as possible seems like a good plan. In ancient Rome, the average lifespan was 28. In 1900 in the US, the average lifespan was only 47. By 2010, it was 78. This means that the average lifespan during the 20th century increased by more than a quarter of a year per year!
This didn't happen by magic -- it happened through science. Vaccines, antibiotics, modern agriculture, and many hundreds of thousands of facets of modern medicine were all developed throughout the 20th century. And the process isn't slowing down. The longer we live, the longer we continue to live. Someone born in 1980 may expect to live 70 years, to 2050, but if lifespans continue lengthening at the historic rate, that person's expected lifespan would actually be 100, allowing them to live all the way to 2080!
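The arithmetic behind that projection can be made explicit with a simple fixed-point model (my formulation, for illustration): if every calendar year of medical progress adds r extra years of expected lifespan, then someone with a static expectation of 70 years actually expects about 70/(1-r).

```python
# Years of average lifespan gained per calendar year in the US, 1900-2010:
r = (78 - 47) / 110  # about 0.28

# If each year lived adds r more years of expectancy, total lifespan L
# satisfies L = 70 + r * L, whose fixed point is:
lifespan = 70 / (1 - r)  # about 97 years -- close to the "100" above
```

The model assumes the historical trend continues unbroken, which is of course the whole question.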
Life is a beautiful thing. The human body is just a complex machine. If it can be repaired faster than it breaks down, we could live very long lives -- perhaps hundreds of years, maybe even thousands. There are no fundamental principles of nature preventing this from happening. We are just taught a lot of comforting lies about the metaphysical meaning of death to make it easier to swallow. The zeitgeist gives our current lifespans a level of inherent mystique and meaning they don't actually have.
Our bodies break down and age for seven clearly definable reasons: cancer, mutations in mitochondria, junk inside cells, junk outside cells, cell loss, cells losing the ability to divide, and extracellular crosslinks. The last of these was discovered in the 1970s, and not a single additional source of age-related damage has been identified in all of medicine since then, so it seems likely that this list is comprehensive. If we can "just" solve these problems, then we may find ourselves in a society where people only die from diseases, war, or accidents. If that could be achieved, the average lifespan could be 800-900 or more, depending on the frequency of war and disease.
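The 800-900 figure is what a constant-hazard survival model gives: if aging is solved and the remaining risks (accidents, disease, war) kill a constant fraction p of people per year, the mean lifespan is 1/p. The 1-in-850 annual rate below is a number chosen to match that ballpark, not a measured statistic:

```python
def mean_lifespan(annual_death_rate):
    """Mean of a geometric survival model with a constant annual hazard."""
    return 1 / annual_death_rate

def survival_probability(annual_death_rate, years):
    """Chance of still being alive after a given number of years."""
    return (1 - annual_death_rate) ** years

p = 1 / 850                   # assumed: 1-in-850 chance of death per year
mean_lifespan(p)              # 850 years on average
survival_probability(p, 100)  # roughly 89% chance of reaching age 100
```

Note the counterintuitive shape of such a life: death never gets more likely with age, it just eventually happens.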
"Immortality" is actually not all that rare in nature. A feature on turtles in the June 2002 issue of Discover magazine, "Can Turtles Live Forever?", explored the possibility that turtles do not age conventionally, but simply die from accidents and diseases that strike turtles of all ages equally. An influential monograph published in 2008 developed the theory behind this in detail. Caleb Finch, a professor of the neurobiology of aging at USC, has proposed rockfish, turtles, and bristlecone pines as candidate species that do not age.
It could be that we are not far from beginning to develop cures for the major causes of aging. Within a few years, the genomes for all the candidate species exhibiting very long lifespans will be sequenced, and we will gain insight into what gives these species such long lives. There may be certain proteins or metabolic tricks that they utilize to stave off age-related decline. If these are identified, they could lead to drugs or other therapies that radically extend human lifespans. Not all age-related damage needs to be repaired for organisms to live indefinitely -- damage must simply be repaired at a faster rate than it accumulates. Thus, people living in a society where average lifespan is extended by more than one year per year would enjoy indefinite lifespans, even though no "elixir of immortality" or any such thing had been developed. Some of us alive today might live to enjoy such a society.
For more scientific detail on this, see an article I wrote in 2009.
If we had all the space, health, water, and lifespan in the world, what would we be missing? Intelligence. Not just intelligence as in book smarts, but intelligence in the more sublime sense of understanding each other and the world around us on a visceral level. "Compassion" is a sub-category of the kind of intelligence I am talking about.
We tend to think of "intelligence" as running on a scale from village idiot to Einstein, but in reality, all of humanity is just a little dot on a huge scale of intelligence ranging from worms to posthuman superintelligences. The fact that our species found itself at this particular level of intelligence is just a cosmic accident. If our planet were a more dangerous place, humans would have been forced to evolve higher levels of intelligence just to cope with the perils of leaving the forest. Why can we store only 4-9 items in working memory and not 27-30? Because we live on an arbitrary planet, and humanity happened to reach an arbitrary level of intelligence -- one just sufficient to build a civilization.
Why can't we reprogram our own brains? Why didn't we launch the Industrial Revolution 100,000 years ago instead of 350 years ago? Why don't we immediately "get" complex concepts? We contrive mystical-sounding reasons for explaining away our characteristic level of intelligence as a species, and rarely even think about it because everyone in the species has the same limitations.
These limitations need not last forever. Imagine being able to perceive 50-dimensional objects, or colors in the infrared and ultraviolet ranges. Imagine being able to appreciate the subtle connections between millions of different domains of art or science rather than a few dozen. In principle, all of this could be possible. We'd have to augment our brains somehow, possibly with brain-computer interfaces, or maybe through more organic approaches. This is a line of research that is already in progress, and interesting results are being achieved every year.
Although the concept of brain-computer interfaces makes some of us squirm, the brain-computer interfaces of the future would have to be non-invasive and safe to be practical at all. To interface with millions of micron-sized neurons, a system would have to be delicate and sophisticated. It may be possible to coax the natural gene expression networks in the brain to produce more neurons or configure them in better arrangements. What nature gave us is not necessarily the most ideal brain or mind -- just what was practical for it at the time. We should regard the intellect of Homo sapiens as a good first draft -- but improvements on that draft are inevitable.
People alive today are different than the generations that come before us -- we have greater expectations of the world and reality itself. Instead of merely surviving, we strive towards a higher cosmic purpose rooted in science and logic instead of superstition and dogma. Science and technology are giving us the tools to create a paradise on Earth. We can use them for that, or use them to blow each other to smithereens. The choice is ours.
super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.
"Human-equivalent AI is unlikely" is a ridiculous comment. Human-level AI is extremely likely by 2060, if it is ever developed at all. (I'll explain why in the next post.) Stross may not understand that the term "human-equivalent AI" always means AI of human-equivalent general intelligence, never "exactly like a human being in every way".
If Stross' objections turn out to be a problem in AI development, the "workaround" is to create generally intelligent AI that doesn't depend on primate embodiment or adaptations.
Couldn't the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?
I don't get the point of the last couple sentences. Why not just pursue general intelligence rather than "enhancements to primate evolutionary fitness", then? The concept of having "motivations of its own" seems kind of hazy. If the AI is handing me my ass in Starcraft 2, does it matter if people debate whether it has "motivations of its own"? What does "motivations of its own" even mean? Does "motivations" secretly mean "motivations of human-level complexity"?
I do have to say, this is a novel argument that Stross is forwarding. Haven't heard that one before. As far as I know, Stross may be one of the only non-religious thinkers who believes human-level AI is "unlikely", presumably indefinitely "unlikely". In a literature search I conducted in 2008 looking for academic arguments against human-level AI, I didn't find much -- mainly just Dreyfus' What Computers Can't Do and the people who argued against Kurzweil in Are We Spiritual Machines? "Human-level AI is unlikely" is one of those ideas that Romantics and non-materialists find emotionally appealing, but backing it up is another matter.
(This is all aside from the gigantic can of worms that is the ethical status of artificial intelligence; if we ascribe the value inherent in human existence to conscious intelligence, then before creating a conscious artificial intelligence we have to ask if we're creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense "conscious"? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers -- it's possible that just as destructive research on human embryos is tightly regulated and restricted, we may find it socially desirable to restrict destructive research on borderline autonomous intelligences ... lest we inadvertently open the door to inhumane uses of human beings as well.)
I don't think these are "showstoppers" -- there is no government on Earth that could search every computer for lines of code that are possibly AIs. We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.
We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don't want my self-driving car to argue with me about where we want to go today. I don't want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos.
All it takes is for some people to build a "volitional" AI and there you have it. Even if 99% of AIs are tools, there are organizations -- like the Singularity Institute -- working towards AIs that are more than tools.
If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to -- in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on it's external "self" than you or I are to shoot ourselves in the head. And it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you.
YOU want AI to be like this. WE want AIs that do "try to bootstrap [themselves]" to a "higher level". Just because you don't want it doesn't mean that we won't build it.
For billions of years on this planet, there were no rules. In many places there still are not. A wolf can dine on the entrails of a living doe he has brought down, and no one can stop him. In some species, rape is a more common variety of impregnation than consensual sex. Nature is fucked up, and anyone who argues otherwise has not actually seen nature in action.
This modern era, with its relative orderliness and safety, at least in the West, is an aberration. A bizarre phenomenon, rarely before witnessed in our solar system since its creation. Planetwide coordination is something that just didn't happen until the invention of the telegraph and radio made it possible.
America and Western Europe are full of the most security-deluded people of all. The most recent generations, growing up without any major global conflict -- Generation X and Y -- are practically as ignorant as you can get. Thousands of generations of tough-as-nails people underwent every manner of horrors to incrementally build the orderly and safe society many of us have the luxury of inhabiting today, and the vast majority of Generation X and Y neither appreciate nor understand that.
Wilsonian idealism, in particular, proved to be a turning point in the way Americans think about social interaction on a wide scale. Wilson was one of the first leaders to argue that national actions should be based on approximating some benevolent global goal or ideal rather than narrow national interest. This is not a terrible idea in principle, but without the brutal threat of military force or economic intimidation, it can't be carried out. High ideals are a luxury purchased with the currency of de facto and de jure power. De jure power itself is just a fabrication, a consensual illusion that draws all its strength from de facto power to persist, just as a flower depends on its roots and stem.
A defenseless peasant of the Middle Ages could talk all he wanted about treating thy neighbor as thyself, kindness, sharing, reasonableness -- whatever. It wouldn't necessarily stop a power-mad knight from riding onto his land the next day, chopping off his head, taking his wife, and setting fire to his house and fields.
Benevolence, to flourish, should be promoted with words and ideas, but also with force. Ultimately, people often choose to be stubborn and ignore all words. There are also those who pretend to go along but coordinate to violate norms discreetly, usually with thinly veiled humor.
Security is the foundation of everything else. Free speech, including the ability to criticize the government and military, only exists because the highest power in the land permits it. If it's a God-given right, God had a funny way of implementing it, denying it to all his subjects by default for thousands of years as they lived under feudal rule and local warlords or strongmen.
Security does not come easy, since there are many people who will violate it every chance they get for personal gain. Perhaps there exist some aliens who naturally cooperate peacefully, but we are not them. If anything, human beings are more bloodthirsty and warlike than most species, not less. Or, you could say we have a wider variance of behavior -- the ability to be highly cooperative as well as highly uncooperative.
Humanity's tendency to break apart unless it maintains constant vigilance will become an even greater liability for us when the Pandora's Box of Transhumanism is finally opened in the 2030s and 2040s. There are many people in the world interested in technology for only one reason -- to give them a better opportunity to screw over their enemies.
This urge in humanity is simply too omnipresent and intense to be reconciled or eliminated in the very short 20 or 30 years we have before things start to get more intense technologically. We can count on it being there, just as it has been there for thousands of years. The question is what sort of order, or disorder, will emerge when some human beings become radically more powerful than others.
There is a reason why conservatives are afraid of change. If the status quo is seen as acceptable, then change makes things worse -- and most possible changes, arguably, do. Every improvement is necessarily a change, however, so change is necessary if we are going to improve.
Some transhumanists confront the challenge of massive power asymmetry like children. They see nanotechnology, life extension, and AI as a form of candy, and reach for them longingly. Like children, they throw a temper tantrum at any suggestion that the candy could have negative effects as well as positive ones.
Transhumanists have to grow up. The world is not your candy basket. The technologies we are pushing towards could lead to our demise just as easily as our salvation. You and everything you love could be eliminated by the technologies you were so excited about in the 2010s and 2020s.
A cognitive transhuman, in particular, will be a bewitching thing. Someone who thinks faster than you, understands what your microexpressions mean, and has superior predictive theories of both the natural and artificial world will be able to solve "impossible" problems with some regularity. Detectives and the FBI do not primarily solve cases with guns, but with their minds. Superior transhuman minds will run circles around the merely human minds in law enforcement, unless the latter have equivalent or better intelligence enhancement technology.
The intelligence arms race has the potential to get uglier faster than any merely physical arms race before it. An intelligence with access to its own mind, under threat, will have an incentive to actually boost its paranoia through neural self-modification. Psychological extremes never imagined will become routine states for the most experimental and ambitious of the new self-enhancers. They will have every incentive to downplay their accomplishments, hide their abilities, and they will succeed.
The second we create an intelligence superior to ourselves, the world could become fundamentally unsafe in a new way. The delicate balance of roughly human-level intelligence will be broken. All rules will be thrown out the window. Transhumans will not feel intimidated by the threats of humans. This is a really good thing if they are on our side, a really bad thing if not. The choices we make in creating the first transhumans will determine whether they are on our side or not in the longer term. The great tree of the Transhuman World will be grown by the seed we plant today.
The future is not exciting and optimistic. The future is dark and uncertain, imbued with the heavy sense of responsibility we personally have to make things go well. Reflecting back on this century, if we survive, we will care less about the fun we had, and more about the things we did to ensure that the most important transition in history went well for the weaker ambient entities involved in it. The last century didn't go too well for the weak -- just ask the victims of Hitler and Stalin. Hitler and Stalin were just men, goofballs and amateurs in comparison to the new forms of intelligence, charisma, and insight that cognitive technologies will enable.
Yale math major Thomas McCabe, 19, is applying for a Thiel grant. McCabe hopes to commercialize low-cost 3-D printers that now make a range of plastic goods on demand. "We are living among the ruins of a fallen civilization," he says, sounding a lot like Thiel must have 24 years ago. "Take all of the basic infrastructure, our roads and bridges and so on that we built in the 1950s and '60s. If we tried to build them now we couldn't do it." But with a grubstake from Thiel we might get a little closer.
Fun to see ideas that begin as quirky conversations among SIAI employees and visiting fellows find their way into cover stories on Forbes!
The second paragraph of the article indirectly references SENS, Seasteading Institute, Singularity Institute, and Halcyon Molecular. In the last few years, I have worked or consulted for all these orgs except the Seasteading Institute.
It would be easy to write off Thiel as a "wackaloon," as one political blogger has called him. Indeed, Thiel is putting serious money behind companies and groups bent on extending life, colonizing on ocean platforms, commercializing space, promoting so-called friendly artificial intelligence and leapfrogging DNA sequencing, among other causes. Freedom, he has said, is incompatible with democracy. In one of his most provocative acts, he has offered hundreds of thousands of dollars to college kids if they drop out of school and start a business or pursue a breakthrough. "People think of the future as something other people do," Thiel says backstage at a December philanthropic fundraiser in San Francisco. "But there's something weirder about a society where people don't think about the future."
That's the society we live in. "Contemplating the future" consists of wondering what you're going to do next weekend.
In a recent letter written to John Rennie responding to his recent critique of Ray's predictions, Kurzweil defended himself and his predictions, and most importantly, linked to this. This huge document is over 150 pages long and packed with cool images and facts.
Kurzweil hits back at Rennie:
While I appreciate some of the things John Rennie has to say, his review of my predictions is filled with inaccuracies, including misquotes of mine, and misunderstandings of the meaning of my words and the reality of today's technology. For starters, he takes note of my point about selection bias, but his entire article suffers from this bias. While he acknowledges that I wrote over 100 predictions for 2009, in a book I wrote in the late 1990s, he only talks about a handful of them. And he persistently gets these wrong. He writes that I predicted "widespread, foolproof, real-time speech translation." We do in fact have real-time speech translation in the form of popular phone apps. But who ever said anything about "foolproof?" Rennie just made that up like a lot of the factoids in this article. Not even human translators are foolproof. Apparently that has now been removed from the online version.
It's true that the only way to really figure anything out is to look at each prediction one by one, as Ray has now done. I haven't read the analysis yet but it looks very impressive. Here's the punchline on page 5:
As I discuss in detail below, I made 147 predictions for 2009 in ASM, which I wrote in the 1990s. Of these, 115 (78 percent) are entirely correct as of the end of 2009, and another 12 (8 percent) are "essentially correct" (see below) -- a total of 127 predictions (86 percent) are correct or essentially correct. Another 17 (12 percent) are partially correct, and 3 (2 percent) are wrong.
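As a quick sanity check on the arithmetic in the quoted tally (the counts below are taken straight from the quote, not from the report itself), the four categories do sum to 147 and the percentages round as stated:

```python
# Kurzweil's self-reported tally of his 147 predictions for 2009 in
# The Age of Spiritual Machines (ASM), as quoted above.
total = 147
counts = {"entirely correct": 115, "essentially correct": 12,
          "partially correct": 17, "wrong": 3}

# The four categories should account for every prediction.
assert sum(counts.values()) == total

# Recompute each percentage in the quote, rounded to the nearest percent.
for label, n in counts.items():
    print(f"{label}: {n} ({round(100 * n / total)} percent)")

# "Correct or essentially correct": 115 + 12 = 127 predictions.
print(round(100 * (115 + 12) / total))  # 86
```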
I have a fairly simple idea for a new kind of wheel that I will describe to you now. It's not really possible to build a very good one with today's technology, but it seems as if it could be possible with more advanced fullerene-based robotics.
I got the idea for this wheel while reading about Usain Bolt and the possible limits of human speed. One of the obvious factors that determines speed is the total amount of force applied to the ground per time interval. Humans and other animals with legs can only contact the ground as many times as they have legs per running cycle, limiting the amount of force they can apply.
The classic workaround to this limitation is the wheel, which can apply constant force to the ground as long as its power source holds out. Of course, the wheel has its weaknesses. A wheel can't operate efficiently over uneven ground, and can't scale certain obstacles. The solution is to create a "wheel" that consists of a bundle of tentacles, or "whiskers" which can lock together, become rigid, and behave like a solid wheel while moving over flat ground, but can unlock and independently articulate when moving over rougher terrain.
This concept takes the strengths of the wheel and bipedal/quadrupedal/tentacle locomotion and merges them into a single system. The idea wouldn't work too well with present-day robotics because 1) the fine coordination and control required between the whiskers to merge into a wheel or detach from one another and articulate smoothly over uneven terrain would be a huge challenge by current standards, 2) miniaturization and nanotechnology have not yet advanced to the point where a thin, strong tendril or whisker can be changed from flexible to rigid in a fraction of a second (magnets are not good enough; it needs to be mechanical), 3) the idea works best when the power-to-weight ratio of engines can be improved beyond today's standards, and when engines can be made small enough to be installed into the whiskers themselves.
If all these requirements were met, however, you'd have quite a system. Locomotion based on tentacles alone would be very effective for scrambling over rough terrain, locomotion based on wheels alone would be good for the highway, but what if I need both? The whiskerwheel can adjust to be more wheel-like or more tentacle-like based on the demands of the moment. Nano-cilia and lubricants could be used to keep the interfaces between the whiskers clean so they slide past each other fluidly when necessary. The whisker format would also allow the wheel to increase the surface area of its contact with the ground beyond that of a typical wheel fitting in the same space, improving traction and increasing the total force applied to the ground, and thus speed.
You could even build a robotic system that simply is a whiskerwheel, rather than using a whiskerwheel with a conventional axle-based mounting. A system like that would be a sort of robotic shoggoth.
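To make the mode-switching idea concrete, here is a toy sketch of the control logic. Every name and threshold in it is hypothetical, invented for illustration rather than taken from any real robotics system: below some terrain-roughness estimate the whiskers lock together into a rigid disc, and above it they unlock to articulate independently.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    WHEEL = "wheel"        # whiskers locked together, rolling as a rigid disc
    TENTACLE = "tentacle"  # whiskers unlocked, articulating independently

@dataclass
class Whisker:
    rigid: bool = True
    angle: float = 0.0     # joint angle (radians) when articulating freely

class WhiskerWheel:
    """Toy controller that switches between rigid-wheel and tentacle modes
    based on a terrain roughness estimate in [0, 1]."""

    def __init__(self, n_whiskers: int = 12, roughness_threshold: float = 0.3):
        self.whiskers = [Whisker() for _ in range(n_whiskers)]
        self.threshold = roughness_threshold
        self.mode = Mode.WHEEL

    def update(self, roughness: float) -> Mode:
        if roughness > self.threshold:
            # Rough ground: unlock whiskers for independent articulation.
            self.mode = Mode.TENTACLE
            for w in self.whiskers:
                w.rigid = False
        else:
            # Flat ground: lock whiskers back into a solid wheel.
            self.mode = Mode.WHEEL
            for w in self.whiskers:
                w.rigid = True
                w.angle = 0.0
        return self.mode

wheel = WhiskerWheel()
print(wheel.update(0.1).value)  # wheel
print(wheel.update(0.8).value)  # tentacle
```

A real controller would of course need per-whisker force sensing and continuous blending between the two modes rather than a hard threshold, but the sketch shows the basic state switch the concept relies on.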
The future will not take care of itself, our success is not inevitable, and it is your responsibility to help craft a wonderful future. Neglect this task, and there will be plenty of consequences to go around.
Here's the article from yesterday's San Jose Mercury News:
Silicon Valley billionaire Peter Thiel worries that people aren't thinking big enough about the future.
So he's convening an unusual philanthropic summit Tuesday night, where he'll introduce other wealthy tech figures to nonprofit groups exploring such futuristic -- some might say "far out" -- ideas as artificial intelligence, the use of "rejuvenation biotechnologies" to extend human life and the creation of free-floating communities on the high seas.
"We're living in a world where people are incredibly biased toward the incremental," said Thiel, explaining that he wants to challenge his peers to pursue more "radical breakthroughs" in their philanthropy, by supporting nonprofit exploration of technological innovations that carry at least the promise of major advances for the human condition.
"Obviously there are a lot of questions about the impact of these things," he added. "If you have radical life extension, that could obviously lead to repercussions for society. But I think that's a problem we want to have."
The 43-year-old financier and philanthropist, who made a fortune as co-founder of PayPal and an early backer of Facebook, will make his pitch to more than 200 well-heeled entrepreneurs and techies during an invitation-only dinner at the Palace of Fine Arts in San Francisco.
I'm missing this event because I'm attending the Society for Risk Analysis annual conference in SLC, where I just gave a talk. I wish the best to all my colleagues attending the event, however. Here's another Thiel quote I liked:
"One of the things that's gone strangely wrong in the United States is that the future is not really being thought about as a major idea anymore," he added.
Simple but true. I wasn't alive in the 50s or 60s so I don't know exactly what it was like, but from what I've read, people cared a lot more about the future. From the 70s onward, the emphasis seems to be more on the past.
Our minds have two very different modes (and a range between). We model important things nearby in more detail than less important things far away. The more nearby aspects we notice in a thing, the more other nearby aspects and relevant detail we assume it has. On the other hand, the more far aspects we see in something, the more other far aspects we assume it has, and the more we reason about it via broad categories and relations.
Since the future is far in time, thinking about it tends to invoke a far mode of thought, which introduces other far mode defaults into our image of the future. And thinking about the far future makes us think especially far. Of course many other considerations influence any particular imagined future, but it can help to understand the assumptions your mind is primed to make about the far future, regardless of whether those assumptions are true.
Here's the article, by John Rennie. Quote:
It seems only fair to allow some latitude for interpretation on the dates. But even then, it is hard to define the rightness or wrongness of Kurzweil's predictions.
Kurzweil himself has no such difficulty, however. He knows precisely how well he's doing. Last January, Michael Anissimov of the Accelerating Future Web site posted an item in which he suggested that seven of Kurzweil's predictions for 2009 seemed to be wrong. Kurzweil replied with a note that argued it was wrong to single out merely seven predictions when he had actually made 108 in The Age of Spiritual Machines.
"I am in the process of writing a prediction-by-prediction analysis of these, which will be available soon and I will send it to you," he wrote. "But to summarize, of these 108 predictions, 89 were entirely correct by the end of 2009." Another 13 were "essentially correct," by which he meant that they would be realized within just a few years. "Another 3 are partially correct, 2 look like they are about 10 years off, and 1, which was tongue in cheek anyway, was just wrong," he wrote. So by his own scoring, he is at least 94.4 percent accurate.
Brian Wang says, "IEEE Spectrum tries to hold Ray Kurzweil to a high prediction standard but does not apply that standard to themselves". Brian, this is Rennie's first article at IEEE, so the criticism doesn't exactly apply. IEEE is not necessarily a unified entity; it's a forum where people of sufficiently high status can post. A lot of organizations are like that, including the World Future Society. They have no unified identity.
I'm a big fan of Ray Kurzweil. Visiting his website got me involved in the Singularity Institute and put me where I am today. He inspired me, deeply. Anyone who doesn't read The Singularity is Near is not a serious futurist. Still, plenty of his predictions for 2010 were obviously premature. I consider it probable that most if not all of them will come true by 2020, however.
The thing about futurism is that the traditional success rate is so abysmal that even a success rate of 60-70% ought to be considered extremely high.
Al Fin's comments on the PZ Myers/Kurzweil tiff:
Lost in all the ballyhoo is the obvious fact that in reality, neither Kurzweil nor Myers understand very much about the brain. But is that clear fact of mutual brain ignorance relevant to the underlying issue -- Kurzweil's claim that science will be able to "reverse-engineer" the human brain within 20 years? In other words, Ray Kurzweil expects humans to build a brain-functional machine in the next 2 decades based largely upon concepts learned from studying how brains/minds think.
Clearly Kurzweil is not claiming that he will be able to understand human brains down to the most intricate detail, nor is he claiming that his new machine brain will emulate the brain down to its cell signaling proteins, receptors, gene expression, and organelles. Myers seems to become a bit bogged down in the details of his own objections to his misconceptions of what Kurzweil is claiming, and loses the thread of his argument -- which can be summed up by Myers' claim that Kurzweil is a "kook."
But Kurzweil's amazing body of thought and invention testifies to the fact that Kurzweil is probably no more a kook than any other genius inventor/visionary. Calling someone a "kook" is apparently considered clever in the intellectual circles in which Mr. Myers and the commenters on his blog travel, but in the thinking world such accusations provide too little information to be of much use.
Past a certain level of popularity, the intellectual standards go to crap. Kurzweil is misleading on some points (the primary factor here is that Gizmodo misquoted him), but it also doesn't take much of a deep critique from Myers to get his readers to chortle in affirmation.