Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

24 Jul 2015

Wireless Optofluidic Systems for Programmable in Vivo Pharmacology and Optogenetics

This is the most important brain-computer interfacing breakthrough in a long time, possibly in several years:

Highlights

  • Neural probes with ultrathin, soft microfluidic channels coupled to micro-ILEDs
  • Optofluidic probes minimize tissue damage and are suitable for chronic implants
  • Wireless in vivo fluid delivery of viruses, peptides, and small-molecule agents
  • Combined wireless optogenetics with pharmacology for neural circuit dissection

Summary

In vivo pharmacology and optogenetics hold tremendous promise for dissection of neural circuits, cellular signaling, and manipulating neurophysiological systems in awake, behaving animals. Existing neural interface technologies, such as metal cannulas connected to external drug supplies for pharmacological infusions and tethered fiber optics for optogenetics, are not ideal for minimally invasive, untethered studies on freely behaving animals. Here, we introduce wireless optofluidic neural probes that combine ultrathin, soft microfluidic drug delivery with cellular-scale inorganic light-emitting diode (micro-ILED) arrays. These probes are orders of magnitude smaller than cannulas and allow wireless, programmed spatiotemporal control of fluid delivery and photostimulation. We demonstrate these devices in freely moving animals to modify gene expression, deliver peptide ligands, and provide concurrent photostimulation with antagonist drug delivery to manipulate mesoaccumbens reward-related behavior. The minimally invasive operation of these probes forecasts utility in other organ systems and species, with potential for broad application in biomedical science, engineering, and medicine.

Graphical Abstract: Wireless Optofluidic Systems for Programmable in Vivo Pharmacology and Optogenetics

Want to see this site active again? Donate here; you can personally make it happen. I bring you the latest news of transhumanist technologies with expert commentary.


You can also contribute via Bitcoin to 1PH4k2QAqZ1zC8BnpZuBuMczQhmYUXz4BV.

Filed under: BCI
4 Jul 2015

Like Our Facebook Page

Filed under: meta
4 Jul 2015

Join Our Mailing List


Join our mailing list for information about upcoming ebooks and compilations!

Transhumanism (abbreviated as H+ or h+) is an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities. Transhumanist thinkers study the potential benefits and dangers of emerging technologies that could overcome fundamental human limitations, as well as the ethics of developing and using such technologies. The most common thesis put forward is that human beings may eventually be able to transform themselves into beings with such greatly expanded abilities as to merit the label posthuman.

The contemporary meaning of the term transhumanism was foreshadowed by one of the first professors of futurology, FM-2030, who taught "new concepts of the human" at The New School in the 1960s, when he began to identify people who adopt technologies, lifestyles and worldviews "transitional" to posthumanity as "transhuman". This hypothesis would lay the intellectual groundwork for the British philosopher Max More to begin articulating the principles of transhumanism as a futurist philosophy in 1990 and organizing in California an intelligentsia that has since grown into the worldwide transhumanist movement.

Filed under: meta
5 Feb 2014

Scientists Image an Entire Roundworm Brain in Realtime


Microscopy is all about tradeoffs between the size of an imaged volume and spatial and temporal resolution. That is, until now. A new microscopy technique invented by researchers at the University of Vienna and MIT allows scientists to comprehensively image the neural firings of a living roundworm brain in realtime, vastly increasing the amount of data we can collect.

This is the first time a microscopy technique has been used to measure neural activity across an entire animal in realtime. The principle behind its operation is similar to how the "bullet time" sequence in The Matrix was filmed, but with all the cameras returning data at the same time and the sample being transparent.

In the filming of The Matrix, a series of cameras arranged around Keanu Reeves captured his movements as he fell dramatically backwards, dodging bullets while the camera angle spun around him. A light field microscope is similar. It is like a normal optical microscope, but it adds an array of microlenses that capture optical data from different angles around the sample in realtime. A powerful computer then uses a sophisticated algorithm to reconstruct a high-resolution 3D model. The light field microscope itself already existed before this work; the breakthrough was the reconstruction algorithm.
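To make the intuition concrete, here is a minimal toy sketch in Python (with NumPy) of the general idea: several 2D views of a transparent volume, each taken along a different direction, are combined computationally into a 3D estimate. This is not the researchers' actual algorithm; the shear "angles", the volume size, and the use of plain back-projection are all illustrative assumptions.

```python
# Toy illustration (not the MIT/Vienna reconstruction algorithm) of the
# light-field idea: several 2D projections of a transparent 3D sample,
# each taken along a different direction, are combined computationally
# into an estimate of the volume. Integer shears stand in for viewing
# angles, and unfiltered back-projection stands in for the real math.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a sparse cloud of bright "neurons" in a 32x32x32 grid.
vol = np.zeros((32, 32, 32))
idx = rng.integers(0, 32, size=(20, 3))
vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0

def project(volume, shear):
    """Sum along z after shifting x by `shear` pixels per z-slice."""
    view = np.zeros(volume.shape[:2])
    for z in range(volume.shape[2]):
        view += np.roll(volume[:, :, z], shift=shear * z, axis=0)
    return view

shears = [-2, -1, 0, 1, 2]                  # five illustrative "viewing angles"
views = [project(vol, s) for s in shears]   # what the microlens views would record

# Back-projection: smear each view back along its own angle and average.
recon = np.zeros_like(vol)
for s, view in zip(shears, views):
    for z in range(vol.shape[2]):
        recon[:, :, z] += np.roll(view, shift=-s * z, axis=0)
recon /= len(shears)

# The true bright voxels should stand out against the smeared background.
print("mean reconstruction at true neuron positions:",
      recon[idx[:, 0], idx[:, 1], idx[:, 2]].mean())
print("mean reconstruction elsewhere:", recon.mean())
```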

Prior to this, 3D imaging approaches sophisticated enough to capture an entire roundworm at once were limited by scanning time, typically to about ten volumes per second. The new approach operates at 50 Hz, or 50 times a second. This is fast enough to pick up the nuances of neural activity in the densely packed roundworm brain, with axial spatial precision of 1.4 microns.

The light field microscope is relatively affordable, and the algorithm will now be used in behavioral studies of roundworms. This will allow researchers to expose the worms to precise stimuli and record their exact neural responses. It brings us closer to creating an exhaustive simulation, or algorithm, representing the roundworm itself. The paper describes the technique as providing "tools for non-invasive interrogation of neuronal circuits with high spatio-temporal resolution."

A longer-range objective will be to create an imaging system with a field of view wide enough to observe the roundworm as it moves freely, rather than confined to one place, as it is now. With the right improvements, the same principle could even be used to observe the neural activity of a population of interacting roundworms. The technique could also be used to observe other small, transparent organisms, such as zebrafish larvae.

A Stanford project previously created a high-resolution light field microscope for imaging the neural activity of an entire small organism, but it did not have a high enough frame rate to observe all the activity in realtime. The online updates for this project ended in 2008. Now, thanks to MIT and University of Vienna researchers, their original vision has been achieved.

Hopefully, these principles will be extended to observe larger and more complicated organisms. Pretty soon we could be looking at the cognitive algorithms of small animals in a much clearer way, and elucidating key details on the fundamentals of cognition. The time of full-brain dynamic circuit-mapping of neural processes is nigh!

Filed under: brain, technology
4 Feb 2014

Sky Whale


A remarkable new aerospace design by Oscar Viñals is what he calls the "greenest aircraft imaginable." The design, called the Sky Whale, has an 88-meter (289 ft) wingspan and room for 755 passengers. It looks like a fusion between a plane and an airship, and is a full three stories tall.

The design makes use of the most modern materials, such as ceramics and carbon fiber composites, and is designed to reduce drag and energy expenditure as much as possible. An average flight from New York to San Francisco burns fuel equivalent to 2 to 3 tons of carbon dioxide per person, about the same footprint as six weeks of normal activity. This makes reducing the environmental impact of flying one of the highest priorities for those looking to lessen their carbon footprint. The Sky Whale may fit the bill.

Everything in this plane has been redesigned from the ground up. The Sky Whale features a double fuselage, covered with small solar cells which contribute energy to the hybrid-electric engines and further lessen the environmental impact. It uses fiber-optic cabling and has self-repairing skin, though the designer hasn't explained how the latter would work.

Engines on the craft can tilt 45 degrees, directing their thrust downward and allowing takeoff on a much shorter runway. This would open up entirely new airports to international travel and leisure trips.

Active sensors cover the plane's structure, which uses an active air flow control system to minimize fuel use and make the plane eight times quieter than current standards. The huge plane has a blended wing body, meaning the fuselage itself is shaped like an airfoil.

The plane is designed with future battery capacity in mind, which will allow a unique hybrid turbo-electric propulsion system. In the unlikely event of an accident, the wings are designed to break off the plane, increasing safety for the passengers.

The Sky Whale isn't the only futuristic airplane design which blows the mind. In 2011, Airbus showed off a transparent plane concept which it thinks can be built by 2050.

Filed under: aerospace
4 Feb 2014

Japan Researchers Make Embryonic Stem Cells Without Embryo in Major Discovery


Researchers in Kobe, Japan and Boston have made the biggest breakthrough in stem cells yet, producing "embryonic-like" stem cells from mice by exposing differentiated cells to the stress of an acid bath. Previous methods of producing embryonic stem cells required complex genetic engineering or tedious cell sorting. This new technique simply involves bathing blood cells in a weakly acidic solution for half an hour.

The result was so surprising that the researcher who discovered it, the young Haruko Obokata at the Riken Institute for Developmental Biology in Kobe, didn't believe it at first. Neither did her colleagues. “I was really surprised the first time I saw [the stem cells]… Everyone said it was an artifact – there were some really hard days,” Dr. Obokata said. The new cells have been dubbed STAP cells by the researchers.

Embryonic stem cells are one of the basic building blocks of the field of regenerative medicine. The cells made by Dr. Obokata were shown to be capable of differentiating into dozens of specialized cell types, from cardiac-muscle cells to nerve cells. Though the results were obtained with mice, experiments with human cells are already underway and may already be successful.

A simple and cheap process to produce embryo-like stem cells from blood cells rather than human embryos sidesteps most of the ethical qualms which made these cells such a focus of controversy in the early 2000s. Prominent Republicans such as John McCain have already come forward in favor of stem cell research.

This stem cell breakthrough is so huge because of its ease. The process is so simple that it can be carried out in a lab without any special knowledge or equipment. It seems likely that the process will be duplicated in DIY garage bio-labs across the country. With the technique becoming so easy, regulation or ethical restriction becomes much more difficult, if not completely impossible.

Experts foresee embryonic stem cells being used to manufacture replacement organs and tissue for use in regenerative medicine therapy. Livers have already been grown from mouse stem cells. Scientists have already made lab-grown tear ducts, windpipes, and arteries. Dr. Obokata said that the finding will open possibilities in "the study of cell senescence [aging] and cancer as well."

Imagine a world where people can be saved from what are currently fatal heart attacks by receiving a transplanted heart grown from their own stem cells. With this game-changing advance, that world might not be far off.

Filed under: biology
21 Jan 2014

Google Introduces New Smart Contact Lens


Is Google working towards a heads-up display built into a contact lens? It sure looks like they're heading in that direction, with a contact lens that measures the blood glucose level in tears using a tiny sensor.

The initial focus is on helping people with diabetes. Diabetics have to measure their blood glucose levels several times a day, which usually involves pricking their finger and drawing blood, a painful routine. By using a contact lens that measures their blood glucose and gives them the heads-up, they can avoid the chore.

Google engineers described the electronics in the contact lens as so small that they look like "bits of glitter," along with an antenna "thinner than a human hair". Though the contact lens is currently just a prototype, the engineers are "exploring integrating tiny LED lights that could light up to indicate that glucose levels have crossed above or below certain thresholds."

The smart lens consists of sensors sandwiched between two soft layers. A pinhole in the lens allows tear fluid to make contact with the sensor. Besides the sensor, the lens contains a capacitor, a controller, and a minuscule RFID chip that draws power beamed from external devices.

Diabetics must measure their blood glucose levels frequently because the levels fluctuate so often. "Glucose levels change frequently with normal activity like exercising or eating or even sweating," say the project co-founders Brian Otis and Babak Parviz. "Sudden spikes or precipitous drops are dangerous and not uncommon, requiring round-the-clock monitoring."

This new contact lens has to overcome novel challenges, such as strict safeguards against the lens overheating or being hackable from the outside. For the 25.8 million Americans living with diabetes, getting accurate readings is a matter of life and death.

Babak Parviz, one of the leaders of the project, was one of the first people in the world to work on smart contact lenses. Parviz was formerly a professor at the University of Washington, where he collaborated with Microsoft Research on a similar lens, which was unveiled in 2011. The project had no follow-up, however.

Various smart contact lenses, such as Sensimed Triggerfish, already exist in Europe, but they are not cleared by the FDA for sales in the United States. A group at Sweden's Malmo University has developed a contact lens that runs on a fuel cell powered by tears. The fuel cell uses a small amount of ascorbate in tears to generate electricity, similar to the way in which a lemon can power a light bulb if you stick wires into it.

In The Age of Spiritual Machines (1999), futurist Ray Kurzweil, who now works at Google, predicted that by the year 2019 we would have smart contact lenses which include retinal displays that project virtual reality images onto the eye.  With Google's latest announcement, it appears that we are moving closer to this future.

 

Filed under: technology
26 Nov 2013

Solving the Deficit Crisis with Life Extension


The United States government is over $17 trillion in debt. That is over $56,641 of debt for every man, woman, and child in the country.

In the past four years alone, debt has skyrocketed from 75% to more than 105% of GDP:

(Chart: US debt as a percentage of GDP, through Q1 2013)

It's not just the size of the debt itself, but the pace at which it is increasing. In fiscal 2013, interest payments on the debt totaled $222.75 billion, or 6% of all government spending. Some money funds have stopped buying securities due to fears of a US default, demonstrating to us that the process of issuing securities for debt cannot continue forever. The more debt our government builds up, the harder it is to keep borrowing.

The source of much of this growing debt is spending on Social Security and Medicare. If we are going to pay for these programs in the long run, we need a new strategy. According to New York Times blogger Nate Silver:

It’s one of the most fundamental political questions of our time: What’s driving the growth in government spending? And it has a relatively straightforward answer: first and foremost, spending on health care through Medicare and Medicaid, and other major social insurance and entitlement programs.

Here's the projected increase in Medicare spending through 2050:

(Chart: projected Medicare spending and deficits through 2050)

Besides the skyrocketing of government debt, we have a more fundamental economic challenge: the number of old people is rapidly growing in relation to the young. In Japan, the average number of children per family is only 1.3 (the replacement rate for zero population growth is 2.1), and one in five Japanese are seniors. By 2050, that is projected to be two in five. That means each retired senior will be supported by roughly one person of working age paying taxes to fund their state benefits. In the United States, Social Security and Medicare benefits per senior are over $25,000 annually. The US, while not in as extreme a situation as Japan, is not far behind, driven by the greying of the Baby Boomers.
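As a rough back-of-the-envelope check on that worker-to-senior ratio, consider the sketch below; the split of the non-senior population between children and working-age adults is an assumption for illustration, not a figure from the sources above.

```python
# Rough illustrative check of the "one worker per retired senior" claim.
# The share of children is an assumed placeholder, not a cited figure.
population = 100_000_000          # any size works; only the ratios matter
senior_share = 2 / 5              # "two in five" projected for Japan by 2050
child_share = 0.15                # assumed share of children and students
working_share = 1 - senior_share - child_share

seniors = population * senior_share
workers = population * working_share
print(f"workers per senior: {workers / seniors:.2f}")   # roughly 1.1

# At ~$25,000 of Social Security and Medicare benefits per senior (the US
# figure cited above), each worker would need to contribute roughly:
benefits_per_senior = 25_000
print(f"annual benefit cost per worker: ${benefits_per_senior * seniors / workers:,.0f}")
```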

This trajectory is not sustainable; the risks of massive structural debt have been calculated by the Congressional Budget Office, among others. Unless something fundamental is changed, interest payments will start making up a dangerous percentage of the federal budget. The government will need to print so much money to pay its bills that runaway inflation will become all but inevitable.

In his new book, The Ageless Generation, Alex Zhavoronkov, Ph.D, the director of the Biogerontology Research Foundation, proposes novel solutions to the crisis -- invest in healthspan-extending therapies and provide online education so seniors can continue to learn new skills and contribute to the economy well past the age of 65.

Some of the points made by Zhavoronkov in the book:

  • Much medical research provides little tangible benefit, and funds should be redirected to efforts that tangibly improve long-term human health, such as regenerative medicine. According to Zhavoronkov, less than 2% of the National Institutes of Health budget over the last 20 years has gone to regenerative medicine -- the most promising field of anti-aging research. Given that the stated mission of the NIH is "to seek fundamental knowledge... to enhance health, lengthen life, and reduce the burdens of illness and disability," this is curious indeed. One study, which took ten years and millions of dollars, examined the long-term effects of stress on elderly women who had undergone hip surgery. The conclusions? "Hip surgery leads to stress; stress leads to other illnesses; these illnesses in turn increase the risks of morbidity and mortality." Was it worth millions of dollars to reach this obvious conclusion? In his book, Zhavoronkov argues that we can "no longer afford to spend lavishly on medical research for the sake of pure research," and should "place a higher priority on research that can potentially have a meaningful impact on overall health or health-care expenses."
  • Regenerative medicine is in a far more advanced state than most people realize, and if we put serious funding towards it, we can expect concrete dividends in terms of improving the health of seniors. To quote directly from the book: "Scientists have increased the life span of C. elegans--a type of worm--by ten times. Fruit flies--another common laboratory test subject--have lived four times longer than normal. Genetic therapies have allowed mice to reach the equivalent age of 160 in human years. This is particularly significant because mice are so genetically similar to humans. Hearts have been grown from a single cell and successfully transplanted into living, breathing animals. Humans have achieved a functional age that is 15 years younger than their biological age. Cancers have been cured in animals that are very similar to humans. The pieces of the technological and medical puzzles to extend longevity, and more to the point, healthy longevity, are now coming together. The remaining pieces, or at least enough pieces to make a dramatic change in the health of seniors, can be found within a decade--if there is sufficient research funding to make it happen."
  • Spending on Social Security and Medicare is out of control, and the projections make it look even worse. There is not enough revenue to tax our way out of this mess. The 2012 Social Security Trustees Report shows a surplus of $69 billion, but Table II.B1 of that report shows $102.7 billion in "income" from "Reimbursements from General Fund of the Treasury." The very same table shows $114.4 billion in bond interest as revenue, but new bonds have to be issued to pay that bond interest. So, contrary to the official report that the Social Security trust fund operated with a surplus of $69 billion in 2012, it actually ran at a deficit of $148.1 billion (see the arithmetic sketch after this list). Medicare is even worse, with real operating losses of $256.7 billion once you subtract out revenue from the General Fund of the Treasury. Meanwhile, deficit spending is $135 billion per month. Even if we doubled effective tax rates for all workers making more than $113,000/year, it would only cover less than half of current deficit spending.
  • The only realistic way to lower Medicare expenditures and keep the economy afloat is to develop therapies that improve human health and to change the culture of retirement. The concept of retirement at age 65 is a relatively recent invention, dating to the late 1950s. Prior to that, retirement was regarded as "being put out to pasture": being excluded from productive society and made useless. "Retirement" was viewed negatively, not positively. Nowadays, due to improving knowledge of the causes of aging and the availability of better medical treatment, people have the choice to remain healthy for much longer than before; the question is whether we choose to exercise it. Some choose to become obese, others choose to continue working well into their golden years. Thanks to online education systems like Coursera, we're entering a world where people can affordably continue learning and applying their job skills even at a later age. Lifelong learning is the way of the future.
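For clarity, here is the trust-fund arithmetic from the Social Security point above, restated as a short sketch using the figures as cited; treating General Fund reimbursements and bond interest as non-revenue is the book's framing, not an independent calculation.

```python
# Restating the trust-fund arithmetic cited above (figures in billions of USD).
official_ss_surplus = 69.0           # 2012 Social Security Trustees Report
general_fund_reimbursements = 102.7  # Table II.B1 "income" from the Treasury
bond_interest = 114.4                # interest paid with newly issued bonds

# Strip out the two items the book argues are not real external revenue.
real_ss_balance = official_ss_surplus - general_fund_reimbursements - bond_interest
print(f"Social Security real operating balance: {real_ss_balance:+.1f} billion")  # -148.1

medicare_real_loss = -256.7          # cited real operating loss for Medicare
combined = real_ss_balance + medicare_real_loss
print(f"Combined real operating balance: {combined:+.1f} billion")
```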

This concludes my summary of Alex Zhavoronkov's arguments in The Ageless Generation. Many economists and public policy wonks are not aware of this third option, beyond draconian tax increases and the total disintegration of the social safety net. Given that serious life extension is within reach, we should not abandon the elderly by funding medical research based on extraneous fads that does not materially contribute to the well-being of real people. By focusing on aging research, namely regenerative medicine, we can extend healthspans and ensure that our citizens lead happy and productive lives well into their 80s and 90s.

More Information

The International Aging Research Portfolio by Alex Zhavoronkov

The Seven Deadly Causes of Aging, Aubrey de Grey's program to halt age-related decline

Regenerative Medicine on Wikipedia

Filed under: life extension
29 Jul 2013

Nano-Harpoons for Silk Brain Interfaces


Silk brain implants, developed by Brian Litt at the University of Pennsylvania in 2010, are in the news again, this time as part of an NIH-funded study where they're being used to stop epilepsy in rats.

What are silk brain implants? They're silk membranes just 2.5 microns thick which support a network of flexible electrodes for neural interfacing. The membranes are designed to dissolve, leaving the electrodes behind.

A couple of technologies have been developed since 2010 which could be used with the silk membranes to make them more useful. The first are flexible microchips, just 30 microns thick, developed in Belgium and announced in October 2012. The second are nano-scale carbon nanotube neural harpoons, a millimeter long and nanometers wide, developed at Duke University and announced in July 2013.

The neural harpoons could hook up to neurons up to a millimeter deep in the brain, while the flexible microchips do localized processing. This approach would provide much clearer signals than electrodes that just sit on the surface of the brain. The harpoons, crafted through ion beam sharpening, are capable of snagging individual neurons and measuring their input. It would take some work to mechanically embed them in the silk brain implant and give them the capability of launching themselves into a specific location in neural tissue.

What sorts of applications could this sort of device be used for? It could teach us about the functions of specific neurons in live brains. That information, in turn, could be used to build interfaces that implant false memories, or real knowledge. There's even the possibility they could eventually be used to record or create dreams.

Before these carbon nanotube harpoons, brain implants were made of glass or silicon, substances which can break or damage tissue. The nanotubes are flexible, and too small to seriously damage tissue. They are fantastic conductors, sending high-fidelity electrical signals from one end to the other.

Experiments found that the nanotube harpoons were less likely to break off in cells than silicon probes, though there is still a risk of breakage. Insulation and the geometry of the device also need to be improved. Nonetheless, it's clear this is a significant step forward for brain-computer interfaces.

Filed under: cybernetics
29 Jul 2013

The Colloidal Silver Crowd ‘Debunks’ Transhumanism

A new criticism of transhumanism recently made the rounds. It's not particularly insightful, but many transhumanists shared it on the social networks.

It turns out that the article is more of a criticism of mind uploading in particular rather than transhumanism in general, though it does attack Ray Kurzweil near the beginning.

The context: NaturalNews.com, where the article was published, is ground zero for health crankery on the Internet. Their Facebook page has 378K likes. Colloidal silver, fluoride paranoia, 'cancer cures', you name it. Visit the website to see what I mean.

Writers at NaturalNews see Kurzweil's vision as directly threatening to their worldview and business. Kurzweil argues we will all become nearly-immortal cyborgs by the late 2040s, through the progress of medicine and bionics. NaturalNews, on the other hand, advocates for taking natural substances to extend life and cure ills. The approaches aren't mutually exclusive, though NaturalNews seems to think they are.

Both sides are mistaken. We will not become immortal cyborgs by the 2040s. Sometime this century it will be possible to build extremely durable bodies that do not age, but 2045 is premature. The line of thinking that foresaw major changes by that time, which was developed on the Extropy mailing list in the late 90s and was directly borrowed by Kurzweil for his books, has since been discredited. Simply put, progress towards molecular assemblers was much slower than we thought it would be. Beyond that, most of Kurzweil's predictions for 2009 were clear misses, and it seems unlikely they will be fulfilled before the early 2020s. His entire timetable is a decade or more early, with predictions for 2029 even more over-optimistic than his predictions for 2019 or 2009.

However, this doesn't mean that the more extreme futures envisioned by transhumanists won't come to pass. It's just extremely evident that these changes won't occur 30 years from now, exponential change or not. This is probably difficult for Kurzweil and other boomer futurists to accept, because it strongly implies they will not personally live to see the Singularity. That only leaves cryonics, which remarkably few boomer transhumanists are signed up for. Many are having psychological difficulty accepting what becomes more obvious every day; for them, it's cryonics or nothing.

The Argument

After introducing Kurzweil and transhumanism and dissing them, the article moves on to the core of its argument, which is directed against mind uploading rather than other features of transhumanism:

Let's examine the claims of the transhumanism cult leaders like Kurzweil. They are saying that by 2045, all the following technology will exist:

Technology #1) A way to "scan" your entire brain and record every neuron and holographic patterning that exists in your brain.

Technology #2) A way to build an equally complex computing system that has equivalent computational capabilities as your brain.

Technology #3) A way to COPY your brain scan into the computing system. This is called "uploading" your brain to the machine.

Once these three technologies exist, we are promised, we can all transfer our minds to computer systems and experience "digital immortality!" But wait a second. Something's already missing here, do you see it? In this plan there is no mechanism to transfer your consciousness to the machine. So even if all three of these technologies are adequately developed (which is possible, by the way), they still don't provide a way to merge your mind with a machine.

If you have #1 and #2, then #3 is trivial. If you can scan the brain in totality and record it as a computer file, it's already copied. Copying it to a computer and running it as a program would be the same as transferring any program of similar size.

The writer is right to be skeptical that Technology #1 will exist before 2045. In all probability, it won't. But it might. This would have to be a highly invasive form of scanning. Basically, a small swarm of microbots in the brain. (When futurists say 'nanobots,' what they actually mean is microbots, meaning robots on the micron scale. The nanoscale is too small to construct very useful robots except for molecular assemblers.) A collection of microbots with the volume of a pill would be enough. They could be flushed from the body after scanning was complete, if desired.
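To get a feel for the scale involved, here is a hedged back-of-the-envelope estimate; the 10-micron robot size and the one-milliliter pill volume are illustrative assumptions, not figures from any study.

```python
# How many micron-scale robots could fit in a pill-sized volume?
# Both sizes below are illustrative assumptions.
microbot_side_m = 10e-6                 # assume a 10-micron cube per robot
pill_volume_m3 = 1e-6                   # assume ~1 mL (one cubic centimeter)

microbot_volume_m3 = microbot_side_m ** 3           # 1e-15 cubic meters
count = pill_volume_m3 / microbot_volume_m3
print(f"microbots per pill-sized volume: {count:.0e}")  # ~1e+09

# For comparison, the brain has on the order of 1e11 neurons, so a swarm
# of this size would average about 100 neurons per robot to survey.
```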

There is no conceivable non-invasive scanning technology that could evaluate the exact positions of single neurons and synapses deep in the brain. Synapses are too small, and the brain is too opaque. Unless there is some huge, unforeseen breakthrough that bends our current understanding of physical law, scanning the brain will require robots to go in there and do the surveying from the inside. These will take a number of decades to develop. Not easy, but the laws of physics clearly don't forbid it.

Compared to Technology #1, Technology #2 is relatively trivial. We have already built supercomputers that exceed the lowball estimates of human brain processing power, and will have computers that exceed the higher realistic estimates within a couple years.

When considering "computers as powerful as the human brain," TV show writers and other armchair sci-fi visionaries feel the need to invent fantastical new forms of computing to "bridge the gap," but this is not necessary. There is nothing magical about computation in biological neurons; in fact, it is well understood. This surprises much of the public, which is about 40 years (or more) behind the state of the art of cognitive science. "The public" includes most sci-fi authors, as well.

Estimates of the computing power of the human brain rely on neuron count, firing speed, and guesses at the information processing per neuron. There are about a hundred billion neurons in the brain, and they fire at most 200 times a second. At any particular time, most neurons in the brain aren't firing at all. Neurons are not mere logic gates; they perform operations such as multiplication. Even being generous, however, it's difficult to assign each neuron more than a few operations per spiking event, and the average is probably far less. While it's widely understood that smaller neural components like dendrites modulate neural processing, no one seriously thinks they perform independent computations.

Say that each neuron fires 200 times every second, and each spiking event represents a single logic operation. Assuming every neuron in the brain is constantly firing and each individual operation is computationally relevant to the overall picture (highly doubtful), the computational capacity of the human brain is about 100 billion times 200, or roughly 20 trillion operations per second. It can't be much more than that, but it may be much less. In comparison, the world's fastest computer operates at 33,860 trillion calculations per second (33.8 petaflops), which is 1,693 times greater.
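Here is the same arithmetic worked through explicitly, using the numbers from the paragraph above:

```python
# Upper-bound estimate of the brain's raw computational throughput,
# using the generous assumptions from the paragraph above.
neurons = 100e9                  # ~100 billion neurons
max_firing_rate_hz = 200         # at most ~200 spikes per second
ops_per_spike = 1                # one logic operation per spiking event

brain_ops_per_sec = neurons * max_firing_rate_hz * ops_per_spike
print(f"brain (upper bound): {brain_ops_per_sec:.1e} ops/sec")   # 2.0e+13, i.e. 20 trillion

supercomputer_ops = 33_860e12    # 33,860 trillion calculations per second, as cited
print(f"supercomputer / brain ratio: {supercomputer_ops / brain_ops_per_sec:,.0f}x")  # 1,693x
```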

IBM researcher Dharmendra Modha pegs the human brain's computational capacity at 36.8 petaflops. Unfortunately, Dr. Modha has no scientific credibility whatsoever, but even assuming he is correct, that puts today's fastest supercomputer at just a few percent short of the magic number.

The point is that we're already basically there. We have computers "as complex" as the human brain. Sure, they consume many orders of magnitude more power than the human brain does, and take up a whole room rather than a skull, but we're getting there. If we had a full readout of a human brain, there are computers that could run it today.

Do our cold, soulless supercomputers have the magic juju needed to simulate the ultra-special organic features of our glorious human brains? Yes. A serial computer, which relies on quickly performing computations in a sequence, can simulate a parallel computer, like the brain, which relies on a massive bank of slower nodes. Conversely, a parallel computer would never be able to simulate a serial computer of equivalent processing power, because its computations are spread out across so many disconnected nodes.

To visualize why, imagine a room full of men with calculators trying to cooperate to keep up with a supercomputer. In serial computing, the operations happen in a sequence; computation A gives result B, which goes to computation C, leading to result D, which goes to computation E, and so on. This happens billions of times a second. The key concept is data dependency. The brain, however, has a hundred billion nodes which each conduct only a couple hundred operations a second. The brain is like the room full of men with calculators. They can perform many computations, but they can't pass results between one another fast enough to keep up with the serial supercomputer. The supercomputer can simulate the men with their calculators, but they cannot simulate it. It's too fast for them, and is much better at handling data dependencies.

Imagine the room full of men with calculators trying to perform computations that involve data dependencies. They might depend on a calculation happening across the room, and need to walk over to get the result, before they can begin their own computation. As a result, most of them would spend their time waiting. This is similar to the way in which most of the neurons in the human brain aren't firing all the time. In fact, when too many neurons fire at once, that's called a seizure. A serial computer, in contrast, can go full bore all the time, because it's designed to.
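A minimal sketch of the serial-simulates-parallel point: a single fast serial loop can time-slice through many slow "neuron" nodes each tick and reproduce their collective update exactly. The toy network and update rule below are illustrative assumptions, not a model of real neurons.

```python
# A fast serial machine emulating a slow, massively parallel one by
# time-slicing: each "tick" it updates every node once, using only the
# previous tick's values, exactly as the parallel machine would.
import numpy as np

rng = np.random.default_rng(1)

n_nodes = 1_000                       # toy stand-in for a sea of slow nodes
weights = rng.normal(0, 0.03, size=(n_nodes, n_nodes))
state = rng.random(n_nodes)

def parallel_step(state):
    """What all nodes, updating simultaneously, would compute in one tick."""
    return np.tanh(weights @ state)

def serial_emulation(state):
    """One CPU visiting every node in sequence, double-buffered."""
    new_state = np.empty_like(state)
    for i in range(len(state)):       # one node at a time
        new_state[i] = np.tanh(weights[i] @ state)
    return new_state

for _ in range(5):
    assert np.allclose(parallel_step(state), serial_emulation(state))
    state = parallel_step(state)

print("serial emulation matches the parallel update at every tick")
```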

The fundamental chemistry and principles of operation of human neurons are indistinguishable from those of roundworm neurons. We just have a lot more of them, in more complex arrangements. So if a mystical new form of computing were necessary to create computers "as complex as" the human brain, the same mystical computing would be necessary to create computers as complex as a roundworm. Speaking of which, there is an international effort to exhaustively simulate every aspect of the roundworm C. elegans. If this effort is successful (and there's every indication it will be), it will be a proof of concept that the human brain can be simulated in a computer. The basic computation is the same; it's just that the human brain is far, far larger.

Consciousness

The standard view of consciousness today is similar to the view of life before the 19th century, or fire in the 17th century: it runs on magic. People don't realize that every phenomenon starts off with a mystical explanation, until it is eventually explained in terms of reductive principles. Every single mysterious aspect of nature has gone through the same cycle.

The modern scientific view of consciousness is based on causal functionalism: that consciousness emerges as a sum of the interactions in the brain. That means a machine with the same computations as your brain would have the same consciousness you do. Alternate explanations do not hold up to Occam's razor; they have to postulate that consciousness is something unique about the proteins or neurons the brain runs on, rather than the computations themselves. These alternate explanations would have to predict that someone with a computer implant to replace a brain module would partially lose their consciousness. Anyone who works with neural prostheses intuitively knows that this idea would be absurd.

If someone with 10% of their brain replaced with a computer that performed the same functions claimed to be conscious, then it's extremely likely that someone with 20%, 30%, 40%, or 50% would claim the same. That goes all the way up to 100%. Philosopher David Chalmers calls this the "fading qualia" thought experiment.

So, there doesn't need to be a magical process to port a human consciousness into a computer. The port could be completely natural, because the mind is a series of computations, not a piece of meat. It just happens to run on a piece of meat. Dualists throughout history actually had a point; although the natural world is a unified reality, the mind is better described as a computation running on the brain than as the brain itself.

Final Words

I understand the incredulity of the Natural News crowd regarding the technology of mind uploading. It does sound fantastic. Its plausibility does need to be corroborated by the exhaustive simulation of simpler organisms. It isn't right around the corner. However, it is a proposed technology that rests on plausible philosophical and technological foundations. There are even groups, like Global Future 2045, who are putting money and scientific expertise towards developing it as quickly as possible.

It's easy to ignore developments that seem more than five years away. But, day by day, the evidence accumulates that the long-term vision of mind uploading is possible, and its benefits -- virtuality, immortality, complete control over self and environment, cognitive upgrades -- are ours to lose.

Filed under: meta
6 Nov 2012

Think Twice: A Response to Kevin Kelly on ‘Thinkism’

In late 2008, tech luminary Kevin Kelly, the founding executive editor of Wired magazine, published a critique of what he calls "thinkism" -- the idea of smarter-than-human Artificial Intelligences with accelerated thinking and acting speeds developing science, technology, civilization, and physical constructs at faster-than-human rates. The argument over "thinkism" is important to answering the question of whether Artificial Intelligence could quickly transform the world once it passes a certain threshold of intelligence, called the "intelligence explosion" scenario.

Kelly begins his blog post by stating that “thinkism doesn’t work", specifically meaning that he doesn't believe that a smarter-than-human Artificial Intelligence could rapidly develop infrastructure to transform the world.  After using the Wikipedia definition of the Singularity, Kelly writes that Vernor Vinge, Ray Kurzweil and others view the Singularity as deriving from smarter-than-human Artificial Intelligences (superintelligences) developing the skills to make themselves smarter, doing so at a rapid rate. Then, “technical problems are quickly solved, so that society’s overall progress makes it impossible for us to imagine what lies beyond the Singularity’s birth”, Kelly says. Specifically, he alludes to superintelligence developing the science to cure the effects of human aging faster than they accumulate, thereby giving us indefinite lifespans. The notion of the Singularity is roughly that the creation of superintelligence could lead to indefinite lifespans and post-scarcity abundance within a matter of years or even months, due to the vastly accelerated science and robotics that superintelligence could develop. Obviously, if this scenario is plausible, then it might be worth devoting more resources to developing human-friendly Artificial Intelligence than we are currently. A number of eminent scientists are beginning to take the scenario seriously, while Kelly stands out as an interesting critic.

Kelly does not dismiss the Singularity concept out of hand, saying "I agree with parts of that. There appears to be nothing in the composition of the universe, or our minds, that would prevent us from making a machine as smart as us, and probably (but not as surely) smarter than us." However, he then rejects the hypothesis, saying, "the major trouble with this scenario is a confusion between intelligence and work. The notion of an instant Singularity rests upon the misguided idea that intelligence alone can solve problems." Kelly quotes the Singularity Institute article, "Why Work Towards the Singularity", arguing it implies an "approach [where] one only has to think about problems smartly enough to solve them." Kelly calls this "thinkism".

Kelly brings up concrete examples, such as curing cancer and prolonging life, stating that these problems cannot be solved by “thinkism.” "No amount of thinkism will discover how the cell ages, or how telomeres fall off", Kelly writes. "No intelligence, no matter how super duper, can figure out how human body works simply by reading all the known scientific literature in the world and then contemplating it." He then highlights the necessity of experimentation in deriving new knowledge and working hypotheses, concluding that, "thinking about the potential data will not yield the correct data. Thinking is only part of science; maybe even a small part."

Part of Kelly's argument rests on the idea that there are fixed-rate external processes, such as the metabolism of a cell, which cannot be sped up to provide more experimental data than they would otherwise. He explains that "there is no doubt that a super AI can accelerate the process of science, as even non-AI computation has already sped it up. But the slow metabolism of a cell (which is what we are trying to augment) cannot be sped up." He also uses physics as an example, saying "If we want to know what happens to subatomic particles, we can't just think about them. We have to build very large, very complex, very tricky physical structures to find out. Even if the smartest physicists were 1,000 smarter than they are now, without a Collider, they will know nothing new." Kelly acknowledges the potential of computer simulations but argues they are still constrained by fixed-rate external processes, noting, "Sure, we can make a computer simulation of an atom or cell (and will someday). We can speed up this simulations many factors, but the testing, vetting and proving of those models also has to take place in calendar time to match the rate of their targets."

Continuing his argument, Kelly writes: "To be useful artificial intelligences have to be embodied in the world, and that world will often set their pace of innovations. Thinkism is not enough. Without conducting experiements, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world's problems. There won't be instant discoveries the minute, hour, day or year a smarter-than-human AI appears. The rate of discovery will hopefully be significantly accelerated. Even better, a super AI will ask questions no human would ask. But, to take one example, it will require many generations of experiments on living organisms, not even to mention humans, before such a difficult achievement as immortality is gained."

Concluding, Kelly writes: "The Singularity is an illusion that will be constantly retreating -- always "near" but never arriving. We'll wonder why it never came after we got AI. Then one day in the future, we'll realize it already happened. The super AI came, and all the things we thought it would bring instantly -- personal nanotechnology, brain upgrades, immortality -- did not come. Instead other benefits accrued, which we did not anticipate, and took long to appreciate. Since we did not see them coming, we look back and say, yes, that was the Singularity."

This fascinating post of Kelly's raises many issues, the two most prominent being:

1) Given sensory data X, how difficult is it for agent Y to come to conclusion Z?
2) Can experimentation be accelerated past the human-familiar rate or not?

These will be addressed below.

Can We Just Think Our Way Through Problems?

There are many interesting examples in human history of situations where people "should" have realized something but didn't. For instance, the ancient Egyptians, Greeks, and Romans had all the necessary technology to manufacture hot air balloons, but apparently never thought of it. It wasn't until 1783 that the first historic hot-air balloon flew. It is possible that ancient civilizations did build hot-air balloons and left no archeological evidence of their remains. One hot air balloonist thinks the Nazca lines were viewed by prehistoric balloonists. My guess would be that the ancients might have been clever enough to manufacture hot air balloons, but probably not. The point is that they could have built them, but didn't.

Inoculation and vaccination are another relevant example. A text from 8th-century BC India included a chapter on smallpox and mentioned methods of inoculating against the disease. Given that the value of inoculation was known in India c. 750 BC, it would seem that the modern age of vaccination should have arrived prior to 1796. Aside from safe water, no other intervention reduces mortality and increases population growth as much as vaccines do. Aren't 2,550 years enough time to go from the basic principle of inoculation to the notion of systematic vaccination? It could be argued that the discovery of the cell (1665) was a limiting factor; if cell theory had been introduced to those early Indian physicians, perhaps they would have been able to develop vaccines and save the world from hundreds of millions of unnecessary deaths.

Lenses, which are no more than precisely curved pieces of glass, are fundamental to scientific instruments such as the microscope and the telescope, and they are at least 2,700 years old; the Nimrud lens, discovered at the Assyrian palace of Nimrud in modern-day Iraq, demonstrates their antiquity. The discoverer of the lens noted that he had seen very small inscriptions on Assyrian artifacts that made him suspect a lens had been used to create them. There are numerous references to and evidence of lenses in antiquity. The Visby lenses, found in an 11th- to 12th-century Viking town, are sophisticated aspheric lenses with an angular resolution of 25–30 µm. Even after lenses became widespread around 1280, it took almost 500 years for microscopes to develop to the point of being able to discover cells. Given that lenses are as old as they are, why did it take so incredibly long for our ancestors to develop them to the point of being able to build microscopes and telescopes?

A final example that I will discuss regards complex gear mechanisms and analog computers in general. The Antikythera mechanism, dated to 100 BC, consists of about 30 precisely interlocked bronze gears designed to display the locations in the sky of the Sun, Moon, and the five planets known at the time. Why wasn't it until more than 1,400 years later that mechanisms of similar complexity were constructed? At the time, Greece was a developed civilization of about 4-5 million people. It could be that a civilization of sufficient size and stability to produce complex gear mechanisms did not come into existence until 1,400 years later. Perhaps a simple lack of ingenuity is to blame. The exact answer is unknown, but we do know that the mechanical basis for constructing bronze gears of similar quality existed for a long time, it just wasn't put into use.

It apparently takes a long time for humans to figure some things out. There are numerous historic examples where all the pieces of a puzzle were on the table, there was just no one who put them together. The perspective of "thinkism" suggests that if the right genius were alive at the right time, he or she would have put the pieces together and given civilization a major push forward. I believe that this is borne out by contrasting the historical record with what we know today.

Value of Information

It takes a certain amount of information to come to certain conclusions. There is a minimum amount of information required to identify an object, plan a winning strategy in a game, model someone's psychology, or design an artifact. The more intelligent or specialized the agent is, the less information it needs to reach the conclusion. Conclusions may be "good enough" rather than perfect, in other words, "ecologically rational".

An example is how good humans are at recognizing faces. The experimental data shows that we are fantastic at this; in one study, half of respondents correctly identified an image as a portrait of Napoleon Bonaparte, even though it was a mere 6×7 pixels.

MIT computational neuroscientist Pawan Sinha found that given 12 by 14 pixels' worth of visual information, his experimental subjects could accurately recognize 75 percent of the face images in a set that mixed faces with other objects. Sinha also programmed a computer to identify face images, with a high success rate. A New York Times article quotes Dr. Sinha: "These turn out to be very simple relationships, things like the eyes are always darker than the forehead, and the mouth is darker than the cheeks,” Dr. Sinha said. “If you put together about 12 of these relationships, you get a template that you can use to locate a face.” There are already algorithms that can identify faces from databases which only include a single picture of an individual.

These results are relevant because they are examples where humans or software programs are able to make correct judgments with extremely small amounts of information, less than we would intuitively think is necessary. The picture of Napoleon above can be specified by about 168 bits. Who would imagine that hundreds of people in an experimental study could uniquely identify a historic individual based on a photo containing only 168 bits of information? It shows that humans have cognitive algorithms that are highly effective and specialized at identifying such information. Perhaps we could make huge scientific breakthroughs if we had different cognitive algorithms specialized at engaging unfamiliar, but highly relevant data sets.
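One way to see where the "about 168 bits" figure could come from: a 6×7 image has 42 pixels, and at an assumed 4 bits of grayscale per pixel (an assumption for illustration, not something stated in the study) that works out to 168 bits.

```python
# How little information a tiny face image actually contains.
width, height = 6, 7                    # the Napoleon image from the study
bits_per_pixel = 4                      # assumed grayscale depth (illustrative)

pixels = width * height                 # 42 pixels
total_bits = pixels * bits_per_pixel
print(f"{pixels} pixels x {bits_per_pixel} bits = {total_bits} bits")   # 168 bits

# For contrast, Sinha's 12 x 14 pixel stimuli at the same assumed depth:
print(f"12 x 14 image: {12 * 14 * bits_per_pixel} bits")                # 672 bits
```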

The same could apply to observations and conclusions of all sorts. The amount of information needed to make breakthroughs in science could be less than we think. We do know that new ways of looking at the world can make a tremendous difference in uncovering true beliefs. A civilization without science might exist for a long time without accumulating significant amounts of objective knowledge about biology or physics. For instance, the Platonic theory of classical elements persisted for thousands of years.

Then, science came along. In the century following the development of the Scientific Method by Francis Bacon in 1620, there was rapid progress in science and technology, fueled by this new worldview. By 1780, the Industrial Revolution was in full swing. If the Scientific Method had been invented and applied in ancient Greece, progress that would have seemed mind-boggling and impossible at the time, like the Industrial Revolution, could have potentially been achieved within a century or two. The Scientific Method increased the marginal usefulness of each new piece of knowledge humanity acquired, giving it a more logical and epistemologically productive framework than was accessible in the pre-scientific haze.

Could there be other organizing principles of effective thinking analogous to the Scientific Method that we're just missing today? It seems hard to rule it out, and quite plausible. The use of Bayesian principles in inference, which has led to breakthroughs in Artificial Intelligence, would be one candidate. Perhaps better thinkers could discover such principles more rapidly than we can, and make fundamental breakthroughs with less information than we would currently anticipate being necessary.

The Essence of Intelligence is Surprise

A key factor defining feats of intelligence or cleverness is surprise. Higher intelligence sees the solution no one else saw, looking past the surface of a problem to find the general principles and features that allow it to be understood and resolved. A classic, if clichéd, example is Albert Einstein deriving the principles of special relativity while working as a patent clerk in Bern, Switzerland. His ideas were considered radically counterintuitive, but proved correct. The concept of the speed of light being constant for all observers regardless of their velocity had no precedent in Newtonian physics or common sense. It took a great mind to think about the universe in a completely new way.

Kelly rejects the notion of superintelligence leading to immortality when he says, "this super-super intelligence would be able to use advanced nanotechnology (which it had invented a few days before) to cure cancer, heart disease, and death itself in the few years before Ray had to die. If you can live long enough to see the Singularity, you'll live forever [...] The major trouble with this scenario is a confusion between intelligence and work." Kelly highlights "immortality" as being very difficult to achieve through intelligence and its fruits alone, but this understanding is relative. Medieval peasants would see rifles, freight trains, and atomic bombs as very difficult to achieve. Stone Age man would see bronze instruments as difficult to achieve, if he could imagine them at all. The impression of difficulty is relative to intelligence and the tools a civilization has. To very intelligent agents, a great deal of tasks might seem easy, including vast categories of tasks that less intelligent agents cannot even comprehend.

Would providing indefinite lifespans (biological immortality) to humans be extremely difficult, even for superintelligences? Instead of saying "yes" based on the evidence of our own imaginations, we must confess that we don't know. This doesn't mean that the probability is 50% -- it means we really don't know. We can come up with a tentative probability, say 10%, and iterate based on evidence that comes in. But to say that it will not happen with high confidence is impossible, because a lesser intelligence cannot place definite limits (outside of, perhaps, the laws of physics) on what a higher intelligence or more advanced civilization can achieve. To say that it will happen with high confidence is also impossible, because we lack the information.

The general point is that one of the hallmarks of great intelligence is surprise. The discovery of gunpowder must have been a surprise. The realization that the earth orbits the Sun and not vice versa was a surprise. The derivation of the laws of motion and their universal applicability was a surprise. The creation of the steam engine led to surprising results. The notion that we evolved from apes surprised and shocked many. The idea that life was not animated by a vital force but in fact operated according to the same rules of chemistry as everything else was certainly surprising. Mere human intelligence has surprised us time and time again with its results -- we should not be surprised to be surprised again by higher forms of intelligence, if and when they are built.

Accelerating Experimentation

One of Kelly's core arguments is that experimentation to derive new knowledge and the "testing, vetting and proving" of computer models will require "calendar time". However, it is possible to imagine ways in which the process of experimentation and empirical verification could be accelerated to faster-than-human-calendar speeds.

To start, consider variance in the performance of human scientists. There are historic examples of times where scientific and technological progress was very rapid. The most recent and perhaps striking example was during World War II. Within six years, the following technologies were invented: radar, jet aircraft, ballistic missiles, nuclear power and weapons, and general-purpose computers. So, despite fixed-rate external processes limiting the rate of experimentation, innovation was temporarily accelerated anyway. Intuitively, the rate of innovation was arguably three to four times greater than in a similar period before the war. Though the exact factor is subjective, few historians would disagree that rapid scientific innovation occurred during WWII.

Why was this? Several factors may be identified: 1) increased military spending on research, 2) more scientists due to better training connected to the war effort, 3) researchers working harder and with more motivation than they otherwise would, 4) second-order effects resulting from larger groups of brilliant people interacting with one another in a supportive environment, as in the Manhattan Project.

An advanced Artificial Intelligence could employ all these strategies to accelerate its own speed of research and development. It could 1) amass a large amount of resources in the form of physical and social capital, and spend them on research, 2) copy itself thousands or millions of times using available computers to ensure there are many researchers, 3) possess perfect patience, perpetual alertness, and accelerated thinking speed to work harder than human researchers can, and 4) benefit from second-order effects by utilizing electronic communication between its constituent researcher-agents. To the extent that accelerated innovation is possible with these strategies, an Artificial Intelligence could exploit them to the fullest degree possible.

Of course, experimentation is necessary to make scientific progress -- many revolutions in science begin with peculiar phenomena that are artificially magnified with the aid of carefully designed experiments. For instance, the double-slit experiment in quantum mechanics highlights the wave-particle duality of light, a phenomenon not typically observed in everyday circumstances. Determining how different chemicals combine to produce reaction products has required millions of experiments, and understanding biology has required many millions more. Only strictly observational facts, such as the cellular structure of life or the surface features of the Moon, can be assessed through direct observation; determining how metabolic processes actually work, or what lies beneath the surface of the Moon, requires experimentation and trial and error.

There are four concrete ways in which experimentation might be accelerated beyond the typical human pace: conducting experiments faster, conducting them more efficiently, conducting them in parallel, and choosing the most informative experiments to begin with. Kelly argues that "the slow metabolism of a cell (which is what we are trying to augment) cannot be sped up". But this is not entirely clear. It should be possible to build chemical networks that simulate cellular processes and operate more quickly than cellular metabolisms do. In addition, it is not clear that a comprehensive understanding of cells would be necessary to achieve biological immortality. Indefinite biological lifespans might be more readily achieved by repairing cellular damage and removing chemical junk faster than they accumulate, rather than by keeping every cell in a state of perpetual youth, which is what Kelly seems to imply is necessary. In fact, it may be possible to develop therapies for repairing the damage of aging with our current biological knowledge. Since we aren't superintelligences, it is impossible to tell. But Kelly errs when he assumes that keeping all cells perpetually young, or understanding them totally, is required for indefinite lifespans. Even small differences in knowledge between humans can make an all-important difference in research targets and agendas; the difference in knowledge between humans and superintelligences will make that difference larger still.

Considering these factors highlights the earlier point that the perceived difficulty of a given advance, like biological immortality, is strongly influenced by the framing of the necessary prerequisites to achieve that advance, and the intelligence doing the evaluation. Kelly's framing of the problem is that massive amounts of biological experimentation would be necessary to derive the knowledge to repair the body faster than it breaks down. This may be the case, but it might not be. A higher intelligence might be able to achieve equivalent insights with ten experiments that a lesser intelligence would require a thousand experiments to uncover.

The rate of useful experimentation by superhuman intelligences will depend on factors such as 1) how much data is needed to make a given advance and 2) whether experiments can be accelerated, simplified, or made massively parallel.

Research in biology, medicine, and chemistry already exploits highly parallel robotic systems for experiments, a field called high-throughput screening (HTS). One paper describes a machine that simultaneously introduces 1,536 compounds to 1,536 assay plates, performing 1,536 chemical experiments at once in a completely automated fashion and determining 1,536 dose-response curves per cycle. Only 23 nanoliters of each compound is transferred. This highly miniaturized, highly parallel, high-density mode of experimentation has only begun to be exploited, thanks to advances in robotics. If such robotic systems could be manufactured cheaply at massive scale, one can imagine warehouses full of machines conducting hundreds of millions of experiments simultaneously.
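To get a rough sense of the numbers, here is a back-of-the-envelope sketch of warehouse-scale screening throughput. The 1,536-experiments-per-cycle figure comes from the paper described above; the cycle time and machine count are purely illustrative assumptions, not figures from any source.

```python
# Back-of-the-envelope throughput for warehouse-scale high-throughput screening.
# 1,536 experiments per cycle is taken from the paper described above; the
# cycle time and number of machines are illustrative assumptions only.

EXPERIMENTS_PER_CYCLE = 1536   # dose-response measurements per machine cycle (from the cited paper)
CYCLE_TIME_SECONDS = 60        # assumed: one fully automated cycle per minute
MACHINES = 100_000             # assumed: a warehouse-scale installation

cycles_per_day = 24 * 60 * 60 / CYCLE_TIME_SECONDS
experiments_per_day = EXPERIMENTS_PER_CYCLE * cycles_per_day * MACHINES

print(f"concurrent experiments: {EXPERIMENTS_PER_CYCLE * MACHINES:,}")   # ~153.6 million
print(f"experiments per day:    {experiments_per_day:,.0f}")             # ~221 billion under these assumptions
```

Under these admittedly generous assumptions, the number of simultaneous experiments lands in the hundreds of millions, the scale gestured at above.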

Another method of accelerating experimentation would be to improve microscale manufacturing and to construct experiments using the minimum possible quantity of matter. For instance, instead of dropping weights off the Leaning Tower of Pisa, construct a microscale vacuum chamber and drop a cell-sized diamond grain in that chamber. Thousands of physics experiments could be conducted in the time it would require to conduct one experiment by the traditional method. With better sensors, you can conduct an experiment on ten cells that with inferior sensors would necessitate a million cells. More fine-grained control of matter can allow an agent to extract much more information from a smaller experiment that costs less and can be run faster and massively parallel. It is conceivable that an advanced Artificial Intelligence could come up with millions of hypotheses and test them all simultaneously in one small building.

Between-Species Comparisons

In his 1993 essay defining the Singularity, Vernor Vinge called the hypothetical post-Singularity world "a regime as radically different from our human past as we humans are from the lower animals". Kelly, meanwhile, said that for artificial intelligences to amass scientific knowledge and make breakthroughs (like biological immortality) would require detailed models, and the "testing, vetting and proving of those models" requires "calendar time". These models will "take years, or months, or at least days, to get results". Since the comparison between species is sometimes used as a model for the plausible difference between humans and superintelligences, let's apply that model to the kind of experimentation Kelly has in mind. Do humans create effects in the world faster than squirrels? Yes. Are humans qualitatively better at working towards biological immortality than squirrels? Yes. Do humans have a fundamentally superior understanding of the universe than squirrels do? It would be safe to say that we do.

The comparison with squirrels sounds absurd because concepts like biological immortality and "understanding the universe" are fuzzy at best from the perspective of a squirrel. Analogously, there may be stages in the comprehension of reality that are fundamentally more advanced than our own and only accessible to higher intelligences. In this way, the "calendar time" of humans would have no more meaning to a superintelligence than "squirrel time" has to human life. It's not only a matter of time -- though higher intelligences can do much more in much less time -- but of the general category of thoughts that can be processed, objectives that can be imagined, and plans that can be achieved. The objectives and methods of a higher intelligence would be on a completely different level from those of a lower intelligence; they differ in kind, not merely in degree.

There are several reasons to think that qualitatively smarter-than-human intelligence, that is, qualitative differences on the order of the difference between humans and squirrels or greater, should be possible. The first reason concerns the speed of human neurons relative to artificial computing machinery. Modern computers perform billions of serial operations per second; human neurons perform only a couple hundred. Since most acts of cognition must complete within about a second to be evolutionarily useful, and must include redundancy and fault tolerance, the brain is constrained to problem solutions involving roughly 100 serial steps or fewer. What about the universe of possible solutions to cognitive tasks that require more than 100 serial steps? If the computer you are using had to implement every meaningful operation in 100 serial steps, the vast majority of algorithms in common use today would have to be thrown out. Surveying the space of possible algorithms, it quickly becomes obvious that constraining a computer to 100 serial steps is an onerous limitation. Expanding this budget by a factor of ten million seems likely to yield significant qualitative improvements in intelligence.
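As a quick sanity check on that arithmetic, here is a minimal sketch of the serial-step budgets involved. The 200 Hz and 2 GHz figures are the order-of-magnitude numbers used in the text; the half-second reaction window is an assumption chosen for illustration.

```python
# Order-of-magnitude comparison of serial-step budgets, following the
# figures in the paragraph above. The reaction window is an assumption.

NEURON_OPS_PER_SEC = 200            # serial operations per second, biological neurons (approximate)
CHIP_OPS_PER_SEC = 2_000_000_000    # serial operations per second, a modern processor (approximate)
REACTION_WINDOW_SEC = 0.5           # assumed time available for an evolutionarily useful response

neuron_budget = NEURON_OPS_PER_SEC * REACTION_WINDOW_SEC   # ~100 serial steps
chip_budget = CHIP_OPS_PER_SEC * REACTION_WINDOW_SEC       # ~1e9 serial steps

print(f"biological serial budget: ~{neuron_budget:.0f} steps")
print(f"digital serial budget:    ~{chip_budget:.0e} steps")
print(f"expansion factor:         ~{chip_budget / neuron_budget:.0e}")   # ~1e7, the factor of ten million
```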

The second reason concerns neurological hardware and software. There are relatively few hardware differences between human and chimpanzee brains. The evidence actually supports the notion that primate brains differ more from non-primate brains than human brains differ from those of other primates, and that the human brain is essentially a primate brain scaled up for a larger body, with an enlarged prefrontal cortex. One quantitative study of human versus chimpanzee brain cells came to this conclusion:

Despite our ongoing efforts to understand biology under the light of evolution, we have often resorted to considering the human brain as an outlier to justify our cognitive abilities, as if evolution applied to all species except humans. Remarkably, all the characteristics that appeared to single out the human brain as extraordinary, a point off the curve, can now, in retrospect, be understood as stemming from comparisons against body size with the underlying assumptions that all brains are uniformly scaled-up or scaled-down versions of each other and that brain size (and, hence, number of neurons) is tightly coupled to body size. Our recently acquired quantitative data on the cellular composition of the human brain and its comparison to other brains, both primate and nonprimate, strongly indicate that we need to rethink the place that the human brain holds in nature and evolution, and to rewrite some basic concepts that are taught in textbooks. The human brain has just the number of neurons and nonneuronal cells that would be expected for a primate brain of its size, with the same distribution of neurons between its cerebral cortex and cerebellum as in other species, despite the relative enlargement of the former; it costs as much energy as would be expected from its number of neurons; and it may have been a change from a raw diet to a cooked diet that afforded us its remarkable number of neurons, possibly responsible for its remarkable cognitive abilities.

In other words, it appears as if our exceptional cognitive abilities are the direct result of having more neurons, rather than of having neurons in different arrangements or relative quantities. If this continues to be confirmed in subsequent analyses, it implies, all else being equal, that scaling up the number of neurons in the human brain could produce intelligence differentials similar to those between humans and chimps. Given the evidence above, this should be our default assumption -- we would need specific reasoning or evidence to conclude otherwise.

A more detailed reason why qualitatively smarter-than-human intelligence seems possible is that the higher intelligence of humans and other primates appears to have something to do with self-awareness and complex self-referential loops in thinking and acting. The evolution of primate general intelligence appears correlated with the evolution of brain structures that control, manipulate, and channel the activity of other brain structures in a contingent way. For instance, a region called the pulvinar was dubbed the brain's "switchboard operator" in a recent study, though there are dozens of brain areas that could be given this description. Of 52 Brodmann areas in the cortex, at least seven are "hub areas" which lie near the top of a self-reflective control hierarchy: areas 8, 9, 10, 11, 12, 25, and 28. Given that these areas clearly play important roles in what we consider higher intelligence, yet evolved relatively recently and remain comparatively undeveloped, it is quite plausible that there is substantial room for improvement here and that qualitative intelligence gains could result.

Imagine a brain whose "hub areas" could completely reprogram other brain modules at a fine-grained level -- the sort of reprogramming and flexibility currently available only in computers. Whereas we can now reprogram only a few percent of the information content of our brains, a mind that could reprogram 100 percent of its own information content would have limitless room for fast, flexible cognitive adaptation. Such a mind could quickly reconfigure itself to suit the task at hand. Biological intelligences can only dream of this kind of adaptiveness and versatility. It would open up a vast new space not only for functional cognition but also for the appreciation of aesthetics and other higher-order mental traits.

Superior Hardware and Software

Say that we could throw open the hood of the brain and enhance it. How would that work?

To understand how "smarter than human intelligence" would work requires a brief overview of how the brain works. The brain is a very complicated machine: it operates entirely according to the laws of physics and includes specific modules adapted to handle different tasks. Consider our capacity for identifying faces; it is clear that our brains have dedicated neural hardware for rapidly identifying human faces. We don't have the same hardware for rapidly identifying lizard faces -- to us, every lizard is just a lizard. To a lizard, different lizard faces might appear highly distinct, but to humans, a species for which there is no adaptive value in differentiating lizard faces, they all look the same.

The paper "Intelligence Explosion: Evidence and Import" by Luke Muehlhauser and Anna Salamon reviews some features of what Eliezer Yudkowsky calls the "AI Advantage" -- inherent advantages that an Artificial Intelligence would have over human thinkers as a natural consequence of its digital properties. Because many of these properties are so key to understanding the "cognitive horsepower" behind claims of "thinkism", I've chosen to excerpt the entire section on "AI Advantages" here, minus references (you can find those in the paper):

Below we list a few AI advantages that may allow AIs to become not only vastly more intelligent than any human, but also more intelligent than all of biological humanity. Many of these are unique to machine intelligence, and that is why we focus on intelligence explosion from AI rather than from biological cognitive enhancement.

Increased computational resources. The human brain uses 85–100 billion neurons. This limit is imposed by evolution-produced constraints on brain volume and metabolism. In contrast, a machine intelligence could use scalable computational resources (imagine a “brain” the size of a warehouse). While algorithms would need to be changed in order to be usefully scaled up, one can perhaps get a rough feel for the potential impact here by noting that humans have about 3.5 times the brain size of chimps, and that brain size and IQ correlate positively in humans, with a correlation coefficient of about 0.35. One study suggested a similar correlation between brain size and cognitive ability in rats and mice.

Communication speed. Axons carry spike signals at 75 meters per second or less. That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly. (Of course, this also depends on the efficiency of the algorithms in use; faster hardware compensates for less efficient software.)

Increased serial depth. Due to neurons’ slow firing speed, the human brain relies on massive parallelization and is incapable of rapidly performing any computation that requires more than about 100 sequential operations. Perhaps there are cognitive tasks that could be performed more efficiently and precisely if the brain’s ability to support parallelizable pattern-matching algorithms were supplemented by support for longer sequential processes. In fact, there are many known algorithms for which the best parallel version uses far more computational resources than the best serial algorithm, due to the overhead of parallelization.

Duplicability. Our research colleague Steve Rayhawk likes to describe AI as “instant intelligence; just add hardware!” What Rayhawk means is that, while it will require extensive research to design the first AI, creating additional AIs is just a matter of copying software. The population of digital minds can thus expand to fill the available hardware base, perhaps rapidly surpassing the population of biological minds. Duplicability also allows the AI population to rapidly become dominated by newly built AIs, with new skills. Since an AI’s skills are stored digitally, its exact current state can be copied, including memories and acquired skills—similar to how a “system state” can be copied by hardware emulation programs or system backup programs. A human who undergoes education increases only his or her own performance, but an AI that becomes 10% better at earning money (per dollar of rentable hardware) than other AIs can be used to replace the others across the hardware base—making each copy 10% more efficient.

Editability. Digitality opens up more parameters for controlled variation than is possible with humans. We can put humans through job-training programs, but we can’t perform precise, replicable neurosurgeries on them. Digital workers would be more editable than human workers are. Consider first the possibilities from whole brain emulation. We know that transcranial magnetic stimulation (TMS) applied to one part of the prefrontal cortex can improve working memory. Since TMS works by temporarily decreasing or increasing the excitability of populations of neurons, it seems plausible that decreasing or increasing the “excitability” parameter of certain populations of (virtual) neurons in a digital mind would improve performance. We could also experimentally modify dozens of other whole brain emulation parameters, such as simulated glucose levels, undifferentiated (virtual) stem cells grafted onto particular brain modules such as the motor cortex, and rapid connections across different parts of the brain. Secondly, a modular, transparent AI could be even more directly editable than a whole brain emulation—possibly via its source code. (Of course, such possibilities raise ethical concerns.)

Goal coordination. Let us call a set of AI copies or near-copies a “copy clan.” Given shared goals, a copy clan would not face certain goal coordination problems that limit human effectiveness. A human cannot use a hundredfold salary increase to purchase a hundredfold increase in productive hours per day. But a copy clan, if its tasks are parallelizable, could do just that. Any gains made by such a copy clan, or by a human or human organization controlling that clan, could potentially be invested in further AI development, allowing initial advantages to compound.

Improved rationality. Some economists model humans as Homo economicus: self-interested rational agents who do what they believe will maximize the fulfillment of their goals. On the basis of behavioral studies, though, Schneider (2010) points out that we are more akin to Homer Simpson: we are irrational beings that lack consistent, stable goals. But imagine if you were an instance of Homo economicus. You could stay on a diet, spend the optimal amount of time learning which activities will achieve your goals, and then follow through on an optimal plan, no matter how tedious it was to execute. Machine intelligences of many types could be written to be vastly more rational than humans, and thereby accrue the benefits of rational thought and action. The rational agent model (using Bayesian probability theory and expected utility theory) is a mature paradigm in current AI design.

It seems likely to me that Kevin Kelly does not really understand the AI advantages of increased computational resources, communication speed, increased serial depth, duplicability, editability, goal coordination, and improved rationality, and how these abilities could be used to accelerate, miniaturize, parallelize, and prioritize experimentation to such a degree that the "calendar time" limitation could be surpassed. The calendar of a powerful AI superintelligence might be measured in microseconds rather than months. Different categories of beings have different calendars to which they are most accustomed. In the time it takes for a single human neuron to fire, a superintelligence might have decades of subjective time to contemplate the mysteries of the universe.

Nos es a Polim

Part of the initial insight that prompted the perspective Kelly calls "thinkism" was that the brain is a machine whose crucial algorithms can be ported to a different substrate -- namely a computer -- and run faster. The brain works through algorithms, that is, systematic procedures. For example, take the visual cortex, the part of the brain that processes what you see. This region is relatively well understood. The first layers capture surface features such as lines, darkness, and light. Deeper layers make out shapes, then motion, then specifics such as which face belongs to which person. The processing gets so specific that scientists have measured individual neurons that respond to particular celebrities, such as Bill Clinton or Marilyn Monroe.

The algorithms that underlie our processing of visual information are understood on a basic level, and it is only a matter of time until the other cognitive algorithms are understood as well. When they are, they will be implemented on computers and sped up by a factor of thousands or millions. Human neurons fire about 200 times per second; computer chips cycle about 2,000,000,000 times per second.

What would it be like to be a mind running at ten million times human speed? If your mind were that fast, events on the outside would seem glacially slow. A stretch of subjective time equal to everything from the founding of Rome to the present day could be experienced in about two hours. The span from the emergence of Homo sapiens to the present day could be experienced in a week. The span from the end of the dinosaurs to the present day could be experienced in roughly 2,400 days -- about six and a half years. Imagine how quickly a mind could accrue profound wisdom running at such an accelerated speed; the "wisdom" of a 90-year-old would seem childlike by comparison.
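The arithmetic behind these figures is simple enough to spell out. A minimal sketch, assuming a flat ten-million-fold speedup and rounded historical spans:

```python
# Subjective-versus-external time at a ten-million-fold speedup.
# Historical spans below are rounded approximations.

SPEEDUP = 10_000_000   # subjective seconds experienced per external second

def external_time(subjective_years: float) -> str:
    """Wall-clock time needed to experience `subjective_years` of thought."""
    years = subjective_years / SPEEDUP
    if years < 1 / 365:
        return f"{years * 365.25 * 24:.1f} hours"
    if years < 1:
        return f"{years * 365.25:.1f} days"
    return f"{years:.1f} years"

print(external_time(2_770))         # founding of Rome to today: ~2.4 hours
print(external_time(200_000))       # emergence of Homo sapiens: ~7.3 days
print(external_time(65_000_000))    # end of the dinosaurs: ~6.5 years (~2,400 days)
```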

To visualize concretely the kind of arrangement in which these minds could exist, imagine a computer a couple hundred feet across, made of dense nanomachinery, situated at the bottom of the ocean. Such a computer would have far more computing power than the entire planet possesses today, much as a modern smartphone has more computing power than the entire world did in 1960. Within this computer could exist virtual worlds practically without end, their combined volume far exceeding that of the solar system, or perhaps even the galaxy.

In his post, Kelly seems to acknowledge that minds could be vastly accelerated and magnified in this way: remarkably, he just doesn't think that this would translate to increased wisdom, performance, ability, or insight significantly beyond the human level. To me, at first impression, the notion that a ten million times speedup would have a negligible effect on scientific innovation or progress seems absurd. It appears obvious that it would have a world-transforming impact. Let's look at the argument more closely.

The disagreement between Singularitarians such as Vinge and Kurzweil and skeptics such as Kelly seems to be about what sorts of information-acquisition and generation procedures can be imported into this vastly accelerated world and which cannot. In his hard sci-fi book Diaspora, author Greg Egan calls the worlds of these enormously accelerated minds "polises", which make up the vast bulk of humanity in 2975. Vinge and Kurzweil see the process of knowledge acquisition and creation as being something that can in principle be sped up, brought "within the purview of the polis", whereas Kelly does not.

Above, I argued that the benefits of experimentation can be accelerated by running experiments faster, parallelizing them, using less matter, and choosing the right experiments. But what about the less controversial flow of information from world to polis? To build the polis in the first place, you would have to be able to emulate -- not just simulate -- the human mind in detail, that is, copy all of its relevant properties. Since the human brain is among the most complex objects in the known universe, this implies that a vast variety of less complex objects could be scanned and input into the polis in a similar fashion. Trees, for instance, could be mass-input into the virtual environment of the polis while consuming thousands or millions of times less computing power than the sentient inhabitants. It goes without saying that inanimate background features such as landscapes could be input into the polis with a bare minimum of difficulty.

Once a process can be simulated with a reasonable amount of computing power, it can be brought into the polis and run at a tens-of-millions-fold speedup -- Newtonian physics, for instance. Today, we use huge computers to perform molecular dynamics simulations on aggregates of a few hundred atoms, simulating a few microseconds of their activity. With futuristic nanocomputers built by superintelligent Artificial Intelligences, macro-scale systems could be simulated for hours of activity at a very affordable cost in computing power. Such simulations would allow these intelligences to extract predictive regularities -- "rules of thumb" -- that let them avoid simulating those systems in such excruciating detail in the future. Instead of requiring full-resolution molecular dynamics simulations to extrapolate the behavior of large systems, they might distill a set of several thousand generalities that allow those systems to be predicted and understood with a high degree of confidence. This has essentially been the process of science for hundreds of years, except that the "simulations" have been direct observations. With enough computing power, fast simulations can be "similar enough" to real-life situations that genuine wisdom and insight can be derived from them.
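As a toy illustration of what "extracting predictive regularities" could look like in practice, consider replacing an expensive simulation with a cheap fitted surrogate. Everything here is a stand-in: the "expensive" function is not any real physics code, and the polynomial fit merely illustrates the distill-then-reuse pattern.

```python
# Toy surrogate-modeling sketch: run a costly model at a few points, distill
# the results into a cheap "rule of thumb", then answer new queries with the
# rule of thumb instead of re-running the full model.

import numpy as np

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Stand-in for a slow, high-fidelity simulation (e.g., molecular dynamics)."""
    return np.sin(x) + 0.1 * x**2

# Sample the costly model sparsely.
sample_x = np.linspace(0.0, 5.0, 20)
sample_y = expensive_simulation(sample_x)

# Distill a cheap surrogate from the samples (here, a low-order polynomial fit).
rule_of_thumb = np.polynomial.Polynomial.fit(sample_x, sample_y, deg=5)

# New queries are now answered without re-running the expensive model.
queries = np.array([1.3, 2.7, 4.1])
print("surrogate prediction:", rule_of_thumb(queries))
print("full simulation:     ", expensive_simulation(queries))
```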

Though real, physical experimentation will be needed to verify the performance of models, those facets of the models that are verified will be quickly internalized by the polis, allowing it to simulate real-world phenomena at millions of times real-world speed. Once a facet of a real-world system is internalized, understanding it becomes a matter of routine, just as designing a massive bridge has become a matter of routine today -- a matter of running calculations based on the known laws of physics. Though from our current perspective the complexities of biology seem intimidating, the capability of superintelligences to conduct millions of experiments in parallel and to internalize knowledge once it is acquired will dissolve these challenges, much as our recent ancestors dissolved the challenge of precision engineering.

Summary

I have only scratched the surface of the reasons why innovation and progress by superintelligences will predictably outpace the "calendar time" to which humanity has grown so accustomed. Just as humans routinely perform cognitive feats that bewilder the brightest squirrel or meadow vole, superintelligent scientists and engineers will leave human scientists and engineers in the dust, as if all our prior accomplishments were scarcely worth mentioning. It may be psychologically challenging to come to terms with such a possibility, but it would really just be the latest in an ongoing trend of human vanity being upset by the realities of a godless cosmos.

The Singularity is something that our generation needs to worry about -- in fact, it may be the most important task we face. If we are going to create higher intelligence, we want it on our side. The benefits of success would be beyond our capacity to imagine, likely including the end of scarcity, war, disease, and suffering of all kinds, and the opening of a whole new cognitive and experiential universe. The challenge is an intimidating one, but one that our best will rise to meet.

Filed under: singularity 41 Comments
5Sep/1251

Comprehensive Copying Not Required for Uploading

Recently, there was some confusion on the part of biologist P.Z. Myers regarding the Whole Brain Emulation Roadmap report by Anders Sandberg and Nick Bostrom of the Future of Humanity Institute.

The confusion arose when Prof. Myers made incorrect assumptions about the 130-page roadmap from reading a 2-page blog post by Chris Hallquist. Hallquist wrote:

The version of the uploading idea: take a preserved dead brain, slice it into very thin slices, scan the slices, and build a computer simulation of the entire brain.

If this process manages to give you a sufficiently accurate simulation

Prof. Myers objected vociferously, writing, "It won’t. It can’t.", subsequently launching into a reasonable attack against the notion of scanning a living human brain at nanoscale resolution with current fixation technology. The confusion is that Prof. Myers is criticizing a highly specific idea, the notion of exhaustively simulating every axon and dendrite in a live brain, as if that were the only proposal or even the central proposal forwarded by Sandberg and Bostrom. In fact, on page 13 of the report, the authors present a table that includes 11 progressively more detailed "levels of emulation", ranging from simulating the brain using high-level representational "computational modules" to simulating the quantum behavior of individual molecules. In his post, Myers writes as if the 5th level of detail, simulating all axons and dendrites, is the only path to whole brain emulation (WBE) proposed in the report (it isn't), and also as if the authors are proposing that WBE of the human brain is possible with present-day fixation techniques (they aren't).

In fact, the report presents Whole Brain Emulation as a technological goal with a wide range of possible routes to its achievement. The narrow method that Myers criticizes is only one approach among many, and not one that I would think is particularly likely to work. In the comments section, Myers concurs that another approach to WBE could work perfectly well:

This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.

An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.

But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule.

But, the report does not mandate that a "brute force dismantling and reconstruction of every cell in the brain" is the only way forward for uploading. This makes it look as if Myers did not read the report, even though he claims, "I read the paper".

Slicing and scanning a brain will be necessary but by no means sufficient to create a high-detail Whole Brain Emulation. Surely, it is difficult to imagine how the salient features of a brain could be captured without scanning it in some way.

What Myers seems to be objecting to is a dogmatically reductionist, "brain in, emulation out" direct-scanning approach that is not actually advocated by the authors of the report. The report is non-dogmatic, stating that a two-phase approach to WBE is required, where "The first phase consists of developing the basic capabilities and settling key research questions that determine the feasibility, required level of detail and optimal techniques. This phase mainly involves partial scans, simulations and integration of the research modalities." In this first phase, there is ample room for figuring out what the tissue actually does. That data can then be used to simplify the scanning and representation process. The required level of understanding versus blind scan-and-simulate is up for debate, but few would claim that our current level of neuroscientific understanding suffices.

Describing the difficulties of comprehensive scanning, Myers writes:

And that’s another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue?

Measuring the epigenetic state of every nucleus is not likely to be required to create convincing, useful, and self-aware Whole Brain Emulations. No neuroscientist familiar with the idea has ever claimed this. The report does not claim this, either. Myers seems to be inferring this claim himself through his interpretation of Hallquist's brusque 2-sentence summary of the 130-page report. Hallquist's sentences need not be interpreted this way -- "slicing and scanning" the brain could be done simply to map neural network patterns rather than to capture the epigenetic state of every nucleus.

Next, Myers objects to the idea that brain emulations could operate at faster-than-human speeds. He responds to a passage in "Intelligence Explosion: Evidence and Import", another paper cited in the Hallquist post which claims, "Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly." To this, Myers says:

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed… how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.

At first read, this objection almost makes it seem as if Prof. Myers does not understand that software can be run faster if it is running on a faster computer. Reading the post carefully, that does not appear to be what he actually means, but since the implication is there, the point is worth addressing directly.

Software is, at bottom, a pattern of electrical signals passing through logic gates; it is agnostic to the processing speed of the underlying computer. The pattern is the same whether the clock speed of the processor is 2 kHz or 2 GHz. When software is ported from a 2 kHz computer to a 2 GHz computer, it does not stand up and object to this "tweaking of the clock speed". No waving of hands is required. The software may well be unable to detect that the substrate has changed, and even if it can detect the change, the change will have no impact on its functioning unless the programmers specifically write code that makes it react.

Changing the speed at which software runs is perfectly permissible: if the hardware can support the change, pressing a button is all it takes to speed the software up. This is a simple point.
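A minimal sketch of the point, with an artificial throttle standing in for slower hardware (the update rule here is arbitrary, not a model of anything):

```python
# A deterministic program produces the same result whether the host runs it
# quickly or slowly; only the wall-clock time changes. The throttle below
# stands in for slower hardware.

import time

def run(steps: int, throttle_seconds: float = 0.0) -> int:
    state = 1
    start = time.perf_counter()
    for _ in range(steps):
        state = (state * 31 + 7) % 1_000_003   # the "software": the same rule every step
        if throttle_seconds:
            time.sleep(throttle_seconds)       # pretend the hardware is slower
    print(f"{steps} steps in {time.perf_counter() - start:.3f} s -> state {state}")
    return state

fast = run(10_000)                          # unthrottled "fast hardware"
slow = run(10_000, throttle_seconds=1e-4)   # throttled "slow hardware"
assert fast == slow                         # identical result; only elapsed time differs
```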

The crux of Myers' objection seems to actually be about the interaction of the simulation with the environment. This objection makes much more sense. In the comments, Carl Shulman responds to Myers' objection:

This seems to assume, contrary to the authors, running a brain model at increased speeds while connected to real-time inputs. For a brain model connected to inputs from a virtual environment, the model and the environment can be sped up by the same factor: running the exact same programs (brain model and environment) on a faster (serial speed) computer gets the same results faster. While real-time interaction with the outside would not be practicable at such speedup, the accelerated models could still exchange text, audio, and video files (and view them at high speed-up) with slower minds.

Here, there seems to be a simple misunderstanding on Myers' part, where he is assuming that Whole Brain Emulations would have to be directly connected to real-world environments rather than virtual environments. The report (and years of informal discussion on WBE among scientists) more or less assumes that interaction with the virtual environment would be the primary stage in which the WBE would operate, with sensory information from an (optional) real-world body layered onto the VR environment as an addendum. As the report describes, "The environment simulator maintains a model of the surrounding environment, responding to actions from the body model and sending back simulated sensory information. This is also the most convenient point of interaction with the outside world. External information can be projected into the environment model, virtual objects with real world affordances can be used to trigger suitable interaction etc."

It is unlikely that an arbitrary WBE would run at a speed that lines up precisely with the roughly 200 Hz firing rate of human neurons, the rate at which we think. More realistically, an emulation is likely to be much slower or much faster than the characteristic human rate, which occupies only a tiny sliver of the wide expanse of possible mind-speeds. It would be far more reasonable -- and simply easier -- to run the WBE in a virtual environment whose speed is matched to its thinking speed. Otherwise, the WBE would perceive the world around it running at either a glacial or a hyper-accelerated pace, and would have a difficult time making sense of either.
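To make Shulman's point concrete, here is a minimal sketch of a mind model and its virtual environment advancing on a shared simulated clock. The class and method names are illustrative, not taken from the WBE report:

```python
# If the emulated mind and its virtual environment advance together on a
# shared simulated clock, host speed changes how long a simulated tick takes
# in wall-clock terms, but not what happens inside the simulation.
# All names here are illustrative.

class VirtualWorld:
    def __init__(self) -> None:
        self.tick = 0
    def step(self) -> str:
        self.tick += 1
        return f"sensory frame {self.tick}"

class EmulatedMind:
    def __init__(self) -> None:
        self.log = []
    def step(self, percept: str) -> None:
        self.log.append(f"processed {percept}")

def run(ticks: int) -> list:
    world, mind = VirtualWorld(), EmulatedMind()
    for _ in range(ticks):
        mind.step(world.step())   # mind and world advance one simulated tick together
    return mind.log

# The trace is identical regardless of host speed; a faster computer simply
# delivers the same simulated ticks in less wall-clock time.
assert run(1_000) == run(1_000)
```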

Since the speed of the environment can be smoothly scaled with the speed of the WBE, the problems that Myers cites with respect to "turn[ing] it up to 11" can be duly avoided. If the mind is turned up to 11, which is perfectly possible given adequate computational resources, then the virtual environment can be turned up to 11 as well. After all, the computational resources required to simulate a detailed virtual environment would pale in comparison to those required to simulate the mind itself. Thus, the mind can be turned up to 11, 12, 13, 14, or far beyond with the push of a button, to whatever level the computing hardware can support. Given the historic progress of computing hardware, this may well eventually be thousands or even millions of times the human rate of thinking. Considering minds that think and innovate a million times faster than us might be somewhat intimidating, but there it is, a direct result of the many intriguing and counterintuitive consequences of physicalism.