One version of the uploading idea: take a preserved dead brain, slice it into very thin slices, scan the slices, and build a computer simulation of the entire brain.
Will this process manage to give you a sufficiently accurate simulation?
Prof. Myers objected vociferously, writing, "It won’t. It can’t.", and subsequently launched into a reasonable attack on the notion of scanning a living human brain at nanoscale resolution with current fixation technology. The confusion is that Prof. Myers is criticizing a highly specific idea, the notion of exhaustively simulating every axon and dendrite in a live brain, as if that were the only proposal, or even the central proposal, forwarded by Sandberg and Bostrom. In fact, on page 13 of the report, the authors present a table of 11 progressively more detailed "levels of emulation", ranging from simulating the brain using high-level representational "computational modules" to simulating the quantum behavior of individual molecules. In his post, Myers writes as if the 5th level of detail, simulating all axons and dendrites, were the only path to whole brain emulation (WBE) proposed in the report (it isn't), and as if the authors were proposing that WBE of the human brain is possible with present-day fixation techniques (they aren't).
In fact, the report presents Whole Brain Emulation as a technological goal with a wide range of possible routes to its achievement. The narrow method that Myers criticizes is only one approach among many, and not one that I would think is particularly likely to work. In the comments section, Myers concurs that another approach to WBE could work perfectly well:
This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.
An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.
But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule.
But the report does not mandate that a "brute force dismantling and reconstruction of every cell in the brain" is the only way forward for uploading. This makes it look as if Myers did not read the report, even though he claims, "I read the paper".
Slicing and scanning a brain will be necessary but by no means sufficient to create a high-detail Whole Brain Emulation. Surely, it is difficult to imagine how the salient features of a brain could be captured without scanning it in some way.
What Myers seems to be objecting to is a kind of dogmatic, reductionist "brain in, emulation out" direct-scanning approach that is not actually being advocated by the authors of the report. The report is non-dogmatic, stating that a two-phase approach to WBE is required, where "The first phase consists of developing the basic capabilities and settling key research questions that determine the feasibility, required level of detail and optimal techniques. This phase mainly involves partial scans, simulations and integration of the research modalities." In this first phase, there is ample room for figuring out what the tissue actually does. That data can then be used to simplify the scanning and representation process. The required level of understanding versus blind scan-and-simulate is up for debate, but few would claim that our current level of neuroscientific understanding suffices.
Describing the difficulties of comprehensive scanning, Myers writes:
And that’s another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue?
Measuring the epigenetic state of every nucleus is not likely to be required to create convincing, useful, and self-aware Whole Brain Emulations. No neuroscientist familiar with the idea has ever claimed this. The report does not claim this, either. Myers seems to be inferring this claim himself through his interpretation of Hallquist's brusque 2-sentence summary of the 130-page report. Hallquist's sentences need not be interpreted this way -- "slicing and scanning" the brain could be done simply to map neural network patterns rather than to capture the epigenetic state of every nucleus.
Next, Myers objects to the idea that brain emulations could operate at faster-than-human speeds. He responds to a passage in "Intelligence Explosion: Evidence and Import", another paper cited in the Hallquist post which claims, "Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly." To this, Myers says:
You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed… how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.
At first read, this objection almost makes it seem as if Prof. Myers does not understand that software can run faster when it runs on a faster computer. After reading the post carefully, that doesn't seem to be what he actually means, but since the connotation is there, the point is worth addressing directly.
Software is a series of electrical signals passing through logic gates on computers. The pattern of computation is agnostic to the processing speed of the underlying hardware: it is the same pattern whether the clock speed of the processor is 2 kHz or 2 GHz. When and if software is ported from a 2 kHz computer to a 2 GHz computer, it does not stand up and object to this "tweaking of the clock speed". No "waving of hands" is required. The software may very well be unable to detect that the substrate has changed, and even if it can detect the change, the change will have no impact on its functioning unless the programmers specifically write code that makes it react.
Speeding software up is perfectly allowed. If the hardware can support the higher speed, it takes nothing more than pressing a button. This is a simple point.
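The point can be made concrete with a toy sketch (my own illustration, not from the report): a deterministic program produces bit-identical results no matter how fast the underlying hardware runs it.

```python
import time

def simulate(steps, throttle_s=0.0):
    """A deterministic toy 'mind': the final state depends only on the
    update rule and the number of steps, never on execution speed."""
    state = 1.0
    for _ in range(steps):
        state = 3.9 * state * (1.0 - state / 4.0)  # fixed, arbitrary update rule
        if throttle_s:
            time.sleep(throttle_s)  # pretend this is slower hardware
    return state

fast = simulate(1000)                    # "2 GHz" machine
slow = simulate(1000, throttle_s=1e-5)   # artificially throttled machine
assert fast == slow  # identical result; only the wall-clock time differed
```

The two runs differ only in how long they take in the real world; the computation itself cannot tell the difference.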
The crux of Myers' objection seems to actually be about the interaction of the simulation with the environment. This objection makes much more sense. In the comments, Carl Shulman responds to Myers' objection:
This seems to assume, contrary to the authors, running a brain model at increased speeds while connected to real-time inputs. For a brain model connected to inputs from a virtual environment, the model and the environment can be sped up by the same factor: running the exact same programs (brain model and environment) on a faster (serial speed) computer gets the same results faster. While real-time interaction with the outside would not be practicable at such speedup, the accelerated models could still exchange text, audio, and video files (and view them at high speed-up) with slower minds.
Here, there seems to be a simple misunderstanding on Myers' part, where he is assuming that Whole Brain Emulations would have to be directly connected to real-world environments rather than virtual environments. The report (and years of informal discussion on WBE among scientists) more or less assumes that interaction with the virtual environment would be the primary stage in which the WBE would operate, with sensory information from an (optional) real-world body layered onto the VR environment as an addendum. As the report describes, "The environment simulator maintains a model of the surrounding environment, responding to actions from the body model and sending back simulated sensory information. This is also the most convenient point of interaction with the outside world. External information can be projected into the environment model, virtual objects with real world affordances can be used to trigger suitable interaction etc."
It is unlikely that an arbitrary WBE would be running at a speed that lines it up precisely with the 200 Hz firing rate of human neurons, the rate at which we think. More realistically, the emulation is likely to be much slower or much faster than the characteristic human rate, which exists as a tiny sliver in a wide expanse of possible mind-speeds. It would be far more reasonable -- and just easier -- to run the WBE in a virtual environment with a speed suited to its thinking speed. Otherwise, the WBE would perceive the world around it running at either a glacial pace or a hyper-accelerated one, and have a difficult time making much sense of either.
Since the speed of the environment can be smoothly scaled with the speed of the WBE, the problems that Myers cites with respect to "turn[ing] it up to 11" can be duly avoided. If the mind is turned up to 11, which is perfectly possible given adequate computational resources, then the virtual environment can be turned up to 11 as well. After all, the computational resources required to simulate a detailed virtual environment would pale in comparison to those required to simulate the mind itself. Thus, the mind can be turned up to 11, 12, 13, 14, or far beyond with the push of a button, to whatever level the computing hardware can support. Given the historic progress of computing hardware, this may well eventually be thousands or even millions of times the human rate of thinking. Considering minds that think and innovate a million times faster than us might be somewhat intimidating, but there it is, a direct result of the many intriguing and counterintuitive consequences of physicalism.
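Shulman's point can be sketched numerically (a toy model of my own, not from the report): if the mind and its virtual environment are stepped in lockstep, faster hardware changes only the wall-clock duration, never the simulated history.

```python
def run_coupled(steps, hardware_speed=1.0, step_cost=1.0):
    """Toy coupled mind/environment loop. hardware_speed scales only the
    wall-clock cost of each step; the simulated trace is untouched because
    mind and environment are advanced together."""
    mind_state, env_state = 0, 100
    trace, wall_clock = [], 0.0
    for _ in range(steps):
        action = (mind_state + env_state) % 7   # mind reacts to its virtual environment
        env_state += action - 3                 # environment reacts to the action
        mind_state += 1
        trace.append((action, env_state))
        wall_clock += step_cost / hardware_speed
    return trace, wall_clock

slow_trace, slow_time = run_coupled(100, hardware_speed=1.0)
fast_trace, fast_time = run_coupled(100, hardware_speed=11.0)  # "turned up to 11"
assert slow_trace == fast_trace   # identical subjective history for the mind
assert fast_time < slow_time      # but it finished ~11x sooner in real time
```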
There isn't enough in the world.
Not enough wealth to go around, not enough space in cities, not enough medicine, not enough intelligence or wisdom. Not enough genuine fun or excitement. Not enough knowledge. Not enough solutions to global problems.
What we need is more. And we need it soon. The world population is doubling every 34 years. Instead of turning back the clock, we must move towards the future.
There is a bare minimum that we should demand out of the future. Without this bare minimum, we're just running in place. Here is what I think that minimum is:
1) More space
2) More health
3) More water
4) More time
5) More intelligence
There is actually a lot of space on this earth. About 90 million square kilometers of land isn't covered in snow or mountains. That's about 5,000 times larger than the New York City metro area. Less than 1% of this land has any appreciable population density. Everywhere outside of Europe, there are vast districts the size of Texas with no more than a few thousand people. The world is "crowded" because of logistics, not space.
The main constraints on space are transportation and infrastructure rather than lack of actual land. Most population centers are around the coast and its natural harbors. Is this because the rest of the land is uninhabitable? No. It's because being on the coast drives the local economy. What if you can drive the economy without the coast?
With better technologies, we can decentralize infrastructure and spread out more. The most important factors are energy and water. If you can secure these and cheap transportation, many areas can be made habitable.
For energy, the only way to get around centralized solutions is to make your own. Looking forward 10-20 years, this means solar panels. Only solar panels have the versatility needed to work anywhere. Full-spectrum solar panels can generate energy even if the sky is grey. Right now, the main barrier is cost, but the cost of solar panels has been dropping by 7% annually for the last 30 years. If this trend continues, by 2030 solar electricity will cost half that of coal electricity.
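As a rough check on that projection (my own arithmetic, assuming the 7% annual decline holds and that solar currently costs roughly twice as much per kilowatt-hour as coal; both the starting ratio and the 2012 baseline year are assumptions, not figures from this post's sources):

```python
import math

decline = 0.07  # assumed annual cost decline for solar panels
halving_time = math.log(2) / -math.log(1 - decline)
print(round(halving_time, 1))  # ~9.6 years for the cost to halve

years = 2030 - 2012                        # horizon from roughly the time of writing
ratio_2030 = 2.0 * (1 - decline) ** years  # assumed starting ratio: solar = 2x coal
print(round(ratio_2030, 2))  # ~0.54 -- about half the cost of coal by 2030
```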
The other limitation is transportation. Physical distance creates expense and stress. Yet, better technologies over the last hundred years have revolutionized transportation and completely changed the face of cities. The vast majority of adults either drive or use efficient mass transit systems. A hundred years ago, we used horses. In twenty years, we will use self-driving cars. In forty years, better navigational AI and nanotech will allow aircars. Flying cars will definitely be developed -- they will just need to be piloted by software programs smart enough to do all the work.
To spread out without destroying the environment, our manufacturing processes will have to be made clean. There are two ideal ways to go about this: growing products with synthetic biology and molecular manufacturing. In the event that both of these methods prove intractable, advanced robotics alone will allow for highly automated and precise manufacturing processes without waste.
The planet is not crowded! Our technology just sucks. More efficient technology is also better for the environment. This is not a choice. We either develop the technology we need to live anywhere, or suffer in increasingly cramped cities.
Human health, however, is sorely lacking. Those in developing countries suffer from terrible diseases, while many in developed countries are overweight and cannot exert themselves. Only the wealthiest 1% in the world can afford healthy, diverse, flavorful foods. Each day, 150,000 people die from age-related disease, including 20,000 from heart disease and 17,000 from stroke; another 3,450 die in traffic accidents and 3,400 from malaria.
What can cure these maladies? Science and medicine.
The key to medicine is making people who don't get sick to begin with. Many of the plagues on human health can be viewed as special cases of the general problem that the cells of the body are not reprogrammable or replaceable. The body naturally reprograms and replaces cells, but is eventually overcome. We must amplify the body's natural ability to deal with disease and the ravages of aging. There are two ways in which this may be thoroughly accomplished: artificial cells or microscale robots.
Tiny machines called MEMS have already been implanted into the human body thousands of times, and have a wide range of desirable properties for medicine. To augment the immune system, these machines will have to be much more sophisticated. Robert Freitas has designed a wide variety of microscale machines for improving human health, including artificial red and white blood cells. To fabricate these machines will require nanoscale manufacturing.
An artificial cell with a non-standard design might be made impervious to pathogens, which rely on certain biological universals that could be modified in artificial cells. Cells artificially produced using the patient's genetic code would be at home in the body and provide superior disease immunity and longevity. Artificial stem cells could be introduced into tissue to produce these new cells indefinitely. Yet, artificial cells do not currently exist. PACE (programmable artificial cell evolution), a project funded by the EU in 2004-2008, did some interesting work, but a true programmable artificial cell is still a ways off. Given the tremendous demand there would be for such cells, their eventual development seems highly likely. Cells are already being genetically reprogrammed for a variety of purposes in plants and animals. Artificial cells are the next step. If we can reprogram our own cells quickly, disease can be averted, possibly completely.
The most crucial necessity of life is water. Millions suffer and die without it. Vast tracts of good land are empty and dead because of its absence. In some areas, good water can be expensive. Some geopolitical experts foresee wars fought over water.
Though you might think that developed countries like the United States have the issue of water squared away, we certainly don't. Here's an example: this spring, it looked like the US corn crop was going to be a record-breaker. By late July, a combination of drought and heat had caused the US corn crop to shrivel to a 10-year low. US corn is a foundation on which much of the world's food supply is based. Cheaper and more plentiful water could have saved the corn from the drought. Furthermore, water demand is expected to exceed supply in more than 10 US cities by 2050.
Civilization is closely tied to the plentiful availability of fresh water. Modern societies cannot exist without it. Because water is such a foundational aspect of human existence, technologies that increase its availability can improve quality of life greatly. Making water more available would also allow us to colonize more remote places, addressing the issue of open space.
A few examples of currently existing water technologies that could be game-changing include nano-filters, machines that extract water from the air, and waterproof sand. And once you have water, you can usually grow food.
Perhaps the most exciting water technology is the machine that extracts water from air, called an atmospheric water generator. These devices were only invented in 2006; DARPA put millions towards getting them developed, and after years of little progress, there was finally a breakthrough. A $50,000 machine can extract 10,000 liters of water from the air a day! In arid regions such as the deserts of Iraq, a similar machine can extract 2,200 gallons (about 8,300 liters) a day. A machine that costs a mere $1,300 can extract 20 liters a day -- enough for plenty of applications. A machine that extracts 5,000 liters a day costs $170,000, and a version that runs entirely on solar power -- no power input needed! -- is $250,000. This is brand new stuff, and very exciting. It's enough to make an oasis in the middle of a desert. The Sahara Forest Project is doing exactly that.
Eventually death is inevitable, but staving it off for as long as possible seems like a good plan. In ancient Rome, the average lifespan was 28. In 1900 in the US, the average lifespan was only 47. By 2010, it was 78. This means that the average lifespan during the 20th century increased by more than a quarter of a year per year!
This didn't happen by magic -- it happened through science. Vaccines, antibiotics, modern agriculture, and many hundreds of thousands of facets of modern medicine were all developed throughout the 20th century. And the process isn't slowing down. The longer we live, the longer we continue to live. Someone born in 1980 may expect to live 70 years, to 2050, but if lifespans continue lengthening at the historic rate, that person's expected lifespan would actually be 100, allowing them to live all the way to 2080!
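That figure can be reproduced with a back-of-envelope fixed-point model (my own sketch, not from any cited source): if life expectancy rises by r years per calendar year, a person with baseline expectancy B keeps partially outrunning the frontier, for a total lifespan T satisfying T = B + rT.

```python
# Lifespan gain per calendar year, from the figures above (47 in 1900, 78 in 2010)
r = (78 - 47) / (2010 - 1900)  # ~0.28 years of lifespan gained per year
B = 70                          # baseline expectancy for someone born in 1980

# Fixed point of T = B + r*T
T = B / (1 - r)
print(round(r, 2), round(T))  # 0.28 97 -- close to the post's figure of 100
```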
Life is a beautiful thing. The human body is just a complex machine. If it can be repaired faster than it breaks down, we could live very long lives -- perhaps hundreds of years, maybe even thousands. There are no fundamental principles of nature preventing this from happening. We are just taught a lot of comforting lies about the metaphysical meaning of death to make it easier to swallow. The zeitgeist gives our current lifespans a level of inherent mystique and meaning they don't actually have.
Our bodies break down and age for seven clearly definable reasons: cancer, mutations in mitochondria, junk inside cells, junk outside cells, cell loss, cells losing the ability to divide, and extracellular crosslinks. The last of these was discovered in the 1970s, and not a single additional source of age-related damage has been identified in all of medicine since then, so it seems likely that this list is comprehensive. If we can "just" solve these problems, then we may find ourselves in a society where people only die from diseases, war, or accidents. If that could be achieved, the average lifespan could be 800-900 or more, depending on the frequency of war and disease.
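The 800-900 figure follows from a simple constant-hazard model (my own illustration): if aging is eliminated and only extrinsic risks remain, with a constant annual probability of death p, lifespans are geometrically distributed with mean 1/p.

```python
# Illustrative annual death risk from accidents, violence, and residual disease.
# The value of p here is an assumption chosen to match the post's range.
p = 0.0012

mean_lifespan = 1 / p
print(round(mean_lifespan))  # ~833 years, within the 800-900 range
```

A lower p (safer world) pushes the mean lifespan up; more war and disease pushes it down, exactly as the post notes.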
"Immortality" is actually not all that rare in nature. A feature in the June 2002 issue of Discover magazine, "Can Turtles Live Forever?", explored the possibility that turtles do not age conventionally, but simply die from accidents and diseases that strike turtles of all ages equally. An influential monograph published in 2008 developed the theory behind this in detail. Caleb Finch, a professor of the neurobiology of aging at USC, has proposed rockfish, turtles, and bristlecone pines as candidate species that do not age.
It could be that we are not far from beginning to develop cures for the major causes of aging. Within a few years, the genomes for all the candidate species exhibiting very long lifespans will be sequenced, and we will gain insight into what gives these species such long lives. There may be certain proteins or metabolic tricks that they utilize to stave off age-related decline. If these are identified, they could lead to drugs or other therapies that radically extend human lifespans. Not all age-related damage needs to be repaired for organisms to live indefinitely -- damage must simply be repaired at a faster rate than it accumulates. Thus, people living in a society where average lifespan is extended by more than one year per year would enjoy indefinite lifespans, even though no "elixir of immortality" or any such thing had been developed. Some of us alive today might live to enjoy such a society.
For more scientific detail on this, see an article I wrote in 2009.
If we had all the space, health, water, and lifespan in the world, what would we be missing? Intelligence. Not just intelligence as in book smarts, but intelligence in the more sublime sense of understanding each other and the world around us on a visceral level. "Compassion" is a sub-category of the kind of intelligence I am talking about.
We tend to think of "intelligence" as running on a scale from village idiot to Einstein, but in reality, all of humanity is just a little dot on a huge scale of intelligence ranging from worms to posthuman superintelligences. The fact that our species found itself at this particular level of intelligence is just a cosmic accident. If our planet were a more dangerous place, humans would have been forced to evolve higher levels of intelligence just to cope with the perils of leaving the forest. Why can we store only 4-9 items in working memory and not 27-30? Because we evolved on an arbitrary planet, and humanity happened to reach an arbitrary level of intelligence, one that was just sufficient to build a civilization.
Why can't we reprogram our own brains? Why didn't we launch the Industrial Revolution 100,000 years ago instead of 350 years ago? Why don't we immediately "get" complex concepts? We contrive mystical-sounding reasons for explaining away our characteristic level of intelligence as a species, and rarely even think about it because everyone in the species has the same limitations.
These limitations need not last forever. Imagine being able to perceive 50-dimensional objects, or colors in the infrared and ultraviolet ranges. Imagine being able to appreciate the subtle connections between millions of different domains of art or science rather than a few dozen. In principle, all of this could be possible. We'd have to augment our brains somehow, possibly with brain-computer interfaces, or maybe through more organic approaches. This is a line of research that is already in progress, and interesting results are being achieved every year.
Although the concept of brain-computer interfaces makes some of us squirm, the brain-computer interfaces of the future would have to be non-invasive and safe to be practical at all. To interface with millions of micron-sized neurons, a system would have to be delicate and sophisticated. It may be possible to coax the natural gene expression networks in the brain to produce more neurons or configure them in better arrangements. What nature gave us is not necessarily the most ideal brain or mind -- just what was practical for it at the time. We should regard the intellect of Homo sapiens as a good first draft -- but improvements on that draft are inevitable.
People alive today are different than the generations that come before us -- we have greater expectations of the world and reality itself. Instead of merely surviving, we strive towards a higher cosmic purpose rooted in science and logic instead of superstition and dogma. Science and technology are giving us the tools to create a paradise on Earth. We can use them for that, or use them to blow each other to smithereens. The choice is ours.
In a major breakthrough for the field of molecular machines, Canadian chemists have created a self-assembling metallo-organic molecular wheel and axle. This is the first time scientists have proved that interlocked molecules can function inside solid materials. The lead author, a graduate student, said:
“Until now, this has only ever been done in solution,” explained Chemistry & Biochemistry PhD student Nick Vukotic, lead author on a front page article recently published in the June issue of the journal Nature Chemistry [abstract]. “We’re the first ones to put this into a solid state material.”
A molecular wheel and axle in a solid state material is proof of concept for simple solid state molecular machines. A wheel can in principle be developed into more sophisticated solid state molecular machines, such as power-transfer rods and other kinetic frameworks or elements in a solid state molecular computer. The predictability of the solid state environment relative to the environment of a solution is crucial for developing predictable molecular machine systems, and makes it easier to apply certain general principles of macroscale engineering to nanoscale systems.
With relatively little progress in molecular machinery over the past decade, this is a welcome advance for nanotech enthusiasts.
Skeptics of molecular manufacturing (MNT) have long argued along these lines:
[...] the examples of biological nanotechnology and the success of work on DNA nanotechnology by Seeman and others tells us nothing about whether MNT is possible, since the operating principles of soft and wet nanotechnology are quite different to the proposals of MNT.
Now, the existence proof of a solid state molecular machine provides new evidence about the relative plausibility of complex molecular machine systems.
Via Foresight Institute.
Here's a writeup.
Embedded below is an interview conducted by Adam A. Ford at The Rational Future. Topics covered included:
- What is the Singularity?
- Is there a substantial chance we will significantly enhance human intelligence by 2050?
- Is there a substantial chance we will create human-level AI before 2050?
- If human-level AI is created, is there a good chance vastly superhuman AI will follow via an "intelligence explosion"?
- Is acceleration of technological trends required for a Singularity? (Moore's Law and hardware trajectories; is AI research progressing faster?)
- What convergent outcomes in the future do you think will increase the likelihood of a Singularity? (e.g. the emergence of markets, the evolution of eyes)
- Does AI need to be conscious or have human-like "intentionality" in order to achieve a Singularity?
- What are the potential benefits and risks of the Singularity?
New paper on superintelligence by Nick Bostrom:
This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.
Last month in New York I had the pleasure of talking personally with the creator of Watson, Dr. David Ferrucci. I found him amiable, and his answers to my questions on Watson were very direct and informative. So, I have nothing against IBM in general. I love IBM's computers. Several of my past desktops and laptops have been IBM machines. The first modern computer I had was an IBM Aptiva.
However, there is a constant thread of articles reporting claims that IBM has "completely simulate(d)" "the brain of a mouse (512 processors), rat (2,048) and cat (24,576)", a thread which was revived in force this last weekend. This is entirely false. IBM has not simulated the brain of a mouse, rat, or cat. Only recently have experiments even been pursued to simulate the 302-neuron nervous system of the roundworm C. elegans, the one animal for which a wiring diagram exists. What IBM has actually made are "mouse-SIZED", "rat-SIZED", and "cat-SIZED" neural simulations, given certain assumptions about the computational power of mammalian brains. The arrangements of the simulated neurons bear little relation to the actual wiring diagrams of neurons in these animals, which are not known. Given the tools we currently have, like ATLUM, it would take tens of thousands of years to determine the full connectomes of mice, rats, or cats.
I can never tell whether it is the reporters who are being ridiculous, or whether IBM is deliberately misleading the public. Either way, I think IBM should issue a press release that clarifies the situation. Directly quoting Scientific American:
IBM describes the work in an intriguing paper (pdf) that compares various animal simulations done by its cognitive computing research group in Almaden, Calif. The group has managed to completely simulate the brain of a mouse (512 processors), rat (2,048) and cat (24,576).
The paper they cite is the same damn paper from 2009, "The Cat is Out of the Bag", which I immediately reacted to negatively within days of its publication. Since then, I've been watching as this false meme, which has yet to be directly repudiated by an IBM representative, makes its way through the media, which doesn't know any better.
Now, IBM is allegedly claiming to have simulated 4.5% of the (processes?) of the human brain, or at least hundreds of media sources are reporting it. All the media sources seem to just be linking the two-year-old paper "The Cat is Out of the Bag", so I'm not sure if there was a recent announcement or it just took the media two years to pick up the story.
Again, it's impossible that IBM could simulate 4.5% of the human brain, because we (human civilization) don't have 4.5% of the wiring diagram of the human brain to use as raw data to build a simulation. We don't even have 0.1% of the wiring diagram of the human brain, I'd estimate, but you'd have to ask a computational neuroscientist (not one from IBM) to get a more informed guess.
We have the wiring diagram of the 302 neurons of the roundworm C. elegans. That's about it.
The vast majority of Reddit commenters are clueless and missing the obvious error. Even this seemingly educated comment misses the point that there is NO WIRING DIAGRAM for the parts of the brain IBM allegedly simulated. Even this "best of class" comment seems to take the reporting at face value, as if 4.5% of the human brain had been simulated, and criticizes neuron models instead of the "elephant in the room" that I've explained.
Reddit commenters fail for being fooled, the media fails for reporting a false story, and IBM fails for not issuing a clarification. In many cases IBM seems to actively encourage the misconception that a full feline connectome has been simulated.
My prediction is that AGI will be invented and we will have a full-blown Singularity before a complete cat connectome (much less human connectome) is created.
This whole issue is important because the public is confused enough about computational neuroscience as it is. I see computational neuroscience as very important, and it matters that the public -- and scientists, who despite their alleged higher level of thinking frequently pull their beliefs from popular articles like everyone else -- know what has and hasn't been accomplished in the field.
For a nice article on connectomics and what has been accomplished so far, see this article from Microsoft Research. It correctly highlights ATLUM as the only technology precise enough to produce slices that can be imaged in sufficient detail to build a connectome. ATLUM, by the way, was invented by a transhumanist, Ken Hayworth. (Why do people say that transhumanists don't contribute to science?)
Here's yet another article.
Although physical enhancement is what most people associate with transhumanism, it's not particularly interesting. A man with tentacles and wings who can fly and breathe underwater is still just some dude. Humans are primitive beings, with conspicuously primitive minds -- we just recently evolved from un-intelligent apes that used the same stone tools for millions of years.
Everything truly exciting about the transhumanist project lies in the mental realm. Only through opening up and intervening in the brain can we really change ourselves and the way the world works. Anything else is just the surface.
What approaches can we take to cognitive enhancement?
First, take brain surgery. It is extremely unlikely that cognitive enhancement will be conducted through conventional brain surgery as practiced today. These procedures are inherently risky and are performed only when necessary, when the expected benefits outweigh the huge cost, substantial risk, and long recovery time of the procedures.
More subtle than brain surgery is optogenetics, regarded by some as the scientific breakthrough of the last decade. Optogenetics allows researchers to precisely control the activation of neurons by introducing light-sensitive genes into animal brain tissue.
Optogenetics is unlikely to be applied to humans before 2030-2040, for two reasons. The first is that it involves the introduction of foreign genes into human brain tissue, and gene therapy is in its infancy -- treatments derived from gene therapy are extremely rare and highly experimental. People have been killed by gene therapy gone awry. When gene therapy research moves in the direction of human enhancement, a massive backlash seems plausible. It may be banned entirely for enhancement purposes.
At the very least, the short-lived nature of gene therapy's effects and the problems with viral vectors ensure that gene therapy will stay experimental until entirely new vectors are developed. Chromallocytes would be the ideal gene delivery vector, but those are quite far off. Is there something between current vectors and chromallocytes that produces safe, predictable gene therapy results? That is a great big question mark. What is needed is not one or two breakthroughs, but a long series of them. I challenge readers to find anyone in biotech who would bet that gene therapy will be made safe, predictable, and approved for human use within 10, 20, or 30 years. Developing new basic capabilities in biotech is a long, drawn-out process.
The second reason optogenetics will not bear fruit for cognitive enhancement before 2030-2040 is that it requires slicing off part of the scalp and mounting fiber optics directly on the skull. This is all well and good for animals, which we torment with abandon, but it seems unlikely to be popular among the Homo sapiens crowd. Mature regenerative medicine would be necessary to heal tissue damage from this procedure.
According to Ray Kurzweil's scenario, "nanobots" will be developed during the late 2020s which will be injected into the human body by the trillions, where they can link up with neurons and augment the brain from the inside.
However, given the near complete lack of progress towards molecular nanotechnology since Eric Drexler wrote Engines of Creation in 1986, I find this hard to believe. Nanobots require nanofactories, nanofactories require assemblers, and assemblers would be highly complex aggregates of millions of molecules that themselves would need to be manufactured to atomic precision. Today, all objects manufactured to molecular precision have negligible complexity. The imaging tools that exist today -- and for the foreseeable future -- are far too imprecise to allow for troubleshooting molecular systems of non-negligible size and complexity that refuse to behave as intended. The more precise the imaging method, the more energy is delivered to the molecular structure, and the more likely it is to be blown into a million little pieces.
It is difficult to overstate how far we are from developing autonomous nanobots able to perform complex tasks in a living human body. There is no reason to expect a smooth path from today's autonomous MEMS (micro-electro-mechanical systems) to the "nanobots" of futurist anticipation. Autonomous MEMS are in their infancy. Assemblers are probably a necessary prerequisite for miniature robotics with the power to enhance human cognition. No one has designed anything close to an assembler, and if progress continues as it has for the last 25 years, it will be many decades before one is developed.
So, that is three technologies that I have argued will not be applied to cognitive enhancement in the foreseeable future -- brain surgery, optogenetics, and nanobots.
To me, transhumanism is a temporary movement -- transitional. Its role is to help individuals and society transition to living in a world where some portion of society technologically transforms their minds and bodies on both incremental and fundamental levels. This might range from getting a Google-connected neural implant to uploading one's consciousness into a virtual world. We transhumanists consider (cautious!) developments along these lines to be a good thing, and feel that the most pressing objections and concerns have been adequately addressed, including:
- What are the reasons to expect all these changes?
- Won't these developments take thousands or millions of years?
- What if it doesn't work?
- Won't it be boring to live forever in a perfect world?
- Will new technologies only benefit the rich and powerful?
- Aren't these future technologies very risky? Could they even cause our extinction?
- If these technologies are so dangerous, should they be banned?
- Shouldn't we concentrate on current problems...
- Will extended life worsen overpopulation problems?
- Will posthumans or superintelligent machines pose a threat to humans who aren't augmented?
- Isn't this tampering with nature?
- Isn't death part of the natural order of things?
The key is to see transhumanism as just a temporary crutch, a tool for humanity to safely make the leap to transhumanity. Transhumanism is really only simplified humanism. Eventually, transhumanists hope to see a world where a wide variety of physical and cognitive modifications are available to everyone at reasonable cost, and their use is responsibly regulated, with freedom broadly prevailing over authoritarianism and control. When and if we arrive at that world in one piece, everyone will become a de facto transhumanist, just as today most people are de facto "industrialists" (they benefit from and contribute to modern industrial society) and de facto "computerists".
It is also possible to imagine someone who doesn't anticipate taking advantage of transhumanist technologies being in favor of "transhumanism" nonetheless, insofar as transhumanists competently and openly discuss the potential upsides and downsides of ambitious technological pathways such as extreme life extension and artificial intelligence, and make progress towards beneficial futures. Since widespread cognitive and physical enhancement will soon affect everyone, including the unmodified, everyone has an obvious stake in the trajectory of enhancement technologies even if they do not personally use them.
Transhumanism can also be viewed as a discussion primarily among those who anticipate taking advantage of enhancement technologies before most others. As such, transhumanism forms a beacon that alerts the rest of society to likely changes and informs society about the kind of people who are most interested in human enhancement. Since certain "transhumanist" technologies, particularly intelligence enhancement, may prove to have decisive power over the course of history in the centuries ahead, it is important to examine the groups pursuing it and their motives.
For instance, DARPA is a hotbed of enhancement research. So, the role of the transhumanist is to alert society to that fact, ask them if they care, and if so, what they think about it. Is it a good thing that the development of human enhancement is being spearheaded by the United States military?
A transhumanist elicits opinions and perspectives on human enhancement from a variety of commentators who might not spontaneously offer them otherwise. This includes critics of enhancement such as The New Atlantis, representing the "Judeo-Christian moral tradition".
Another purpose of the transhumanist is to be a concentrated source of facts and opinions on the concrete details of proposed enhancements, with facts and opinions clearly distinguished from each other. In theory, if the long-term dangers of a particular new technology or enhancement therapy plausibly exceed its benefits, transhumanists are responsible for discouraging its development and instead developing alternative technologies that maximize benefits while minimizing risks. It would be easier for transhumanists than for, say, bio-conservatives to divert funding away from dangerous technologies, because researchers under the influence of the extended transhumanist memeplex are the ones developing the crucial technologies, and bio-conservatives are not.
A transhumanist is not just a blind technological cheerleader, enraptured by the supposed inevitability of a cornucopian future. A transhumanist should acknowledge the hazy and uncertain nature of the future, accepting beliefs only to the degree that the evidence merits, guided not by ideology but by flexible thinking, always welcoming criticism and views contrary to standard orthodoxies.
Say that the mind were non-physical, metaphysical, or whatever. Still, we know that physical brains give rise to minds, so mass-producing physical brains would still allow us to mass-produce non-physical minds. So, pure reductionism is not even necessary to carry the point I was making in the previous post.
The key discovery of human history is that minds are ultimately mechanical and operate according to physical principles, and that there is no fundamental distinction between the bits of organic matter that process thoughts and bits of organic matter elsewhere. This is called reductionism (in the second sense):
Reductionism can mean either (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents. This can be said of objects, phenomena, explanations, theories, and meanings.
This discovery is interesting because it implies that 1) minds, previously thought to be mystical, can in principle be mass-produced in factories, and 2) the human mind is just one possible type of mind and can theoretically be extended or permuted in millions of different ways.
Because of the substantial economic, creative, and moral value of intelligent minds relative to unthinking matter, it seems plausible that minds will be mass-produced when the capability exists to do so. The moment when that becomes possible is the most important moment in the history of the planet.
Since reductionism is true, minds can be described in terms of their non-mental constituent parts. We then see that the current situation, involving a lot of matter -- very little of it intelligent -- is an unstable equilibrium. When minds gain the ability to replicate and extend themselves rapidly, they will do so. It will be far easier to build and enhance minds than to destroy them, and there will be numerous rewards for mindcrafting. Thus we can envision a saturation of local matter with intelligence.
Kurzweil mentions that we will "saturate the whole universe with our intelligence" -- that is the most interesting and important aspect of Singularitarian thinking. In the long term, we should think not of the creation of discrete entities that behave as agents similar to humans, but rather massive legions of spirit-like intelligence saturating all local matter.
This intelligence saturation effect is more important than any other technology discussed in the transhumanist canon -- life extension, nanotechnology, physical enhancement, whatever. When those technologies truly bear fruit, it will be as a side effect of the intelligence explosion. Even if incremental progress is made beforehand, in retrospect it will seem trivial relative to the progress made during the intelligence explosion itself.
I am fascinated by the possibility of using fullerenes to build eternal structures. If not eternal, extremely long-lasting. Fullerenes already exist today. See?
Above are aggregated diamond nanorods (ADNRs). The name "hyperdiamond" recently appeared to describe this material.
ADNRs are created by compressing fullerite, a crystalline form of the fullerene C60 (fullerenes are cage-like molecules made entirely of carbon). The result is the hardest and least compressible material known: its bulk modulus, meaning resistance to compression, is 491 gigapascals (GPa), beating diamond at about 445 GPa. For comparison, the bulk modulus of steel is 160 GPa, glass is 30 GPa, and bone is just 15 GPa.
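A quick side-by-side of those figures (the moduli are the ones quoted above; the ratios are simple arithmetic):

```python
# Bulk moduli in GPa, as quoted in the text above.
bulk_modulus_gpa = {
    "ADNR": 491,
    "diamond": 445,
    "steel": 160,
    "glass": 30,
    "bone": 15,
}

# Express each material's stiffness as a multiple of bone's.
for material, k in bulk_modulus_gpa.items():
    print(f"{material:8s} {k:4d} GPa  ({k / bulk_modulus_gpa['bone']:.1f}x bone)")
```

ADNR comes out roughly 33 times less compressible than bone, and about 10% stiffer than diamond.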
What else? This black stuff:
Look how dark it is. Something made out of that would be hard to see at night. Also, its melting point would be several thousand degrees.
The image above shows one of the longest nanotube forests ever created. The nanotubes are about 8 mm long.
An article I often point people to is "Why We Need Friendly AI", an older (2004) article by Eliezer Yudkowsky on the challenge of Friendly AI:
There are certain important things that evolution created. We don't know that evolution reliably creates these things, but we know that it happened at least once. A sense of fun, the love of beauty, taking joy in helping others, the ability to be swayed by moral argument, the wish to be better people. Call these things humaneness, the parts of ourselves that we treasure -- our ideals, our inclinations to alleviate suffering. If human is what we are, then humane is what we wish we were. Tribalism and hatred, prejudice and revenge, these things are also part of human nature. They are not humane, but they are human. They are a part of me; not by my choice, but by evolution's design, and the heritage of three and a half billion years of lethal combat. Nature, bloody in tooth and claw, inscribed each base of my DNA. That is the tragedy of the human condition, that we are not what we wish we were. Humans were not designed by humans, humans were designed by evolution, which is a physical process devoid of conscience and compassion. And yet we have conscience. We have compassion. How did these things evolve? That's a real question with a real answer, which you can find in the field of evolutionary psychology. But for whatever reason, our humane tendencies are now a part of human nature.
We need to develop our conception of "good" to mean certain cognitive features built by evolution, rather than some metaphysical miasma floating around.