This is the most important brain-computer interfacing breakthrough in a long time, possibly in several years:
- Neural probes with ultrathin, soft microfluidic channels coupled to micro-ILEDs
- Optofluidic probes minimize tissue damage and are suitable for chronic implants
- Wireless in vivo fluid delivery of viruses, peptides, and small-molecule agents
- Combined wireless optogenetics with pharmacology for neural circuit dissection
In vivo pharmacology and optogenetics hold tremendous promise for dissection of neural circuits, cellular signaling, and manipulating neurophysiological systems in awake, behaving animals. Existing neural interface technologies, such as metal cannulas connected to external drug supplies for pharmacological infusions and tethered fiber optics for optogenetics, are not ideal for minimally invasive, untethered studies on freely behaving animals. Here, we introduce wireless optofluidic neural probes that combine ultrathin, soft microfluidic drug delivery with cellular-scale inorganic light-emitting diode (micro-ILED) arrays. These probes are orders of magnitude smaller than cannulas and allow wireless, programmed spatiotemporal control of fluid delivery and photostimulation. We demonstrate these devices in freely moving animals to modify gene expression, deliver peptide ligands, and provide concurrent photostimulation with antagonist drug delivery to manipulate mesoaccumbens reward-related behavior. The minimally invasive operation of these probes forecasts utility in other organ systems and species, with potential for broad application in biomedical science, engineering, and medicine.
Although physical enhancement is what most people associate with transhumanism, it's not particularly interesting. A man with tentacles and wings who can fly and breathe underwater is still just some dude. Humans are primitive beings, with conspicuously primitive minds -- we only recently evolved from unintelligent apes that used the same stone tools for millions of years.
Everything truly exciting about the transhumanist project lies in the mental realm. Only through opening up and intervening in the brain can we really change ourselves and the way the world works. Anything else is just the surface.
What approaches can we take to cognitive enhancement?
First, take brain surgery. It is extremely unlikely that cognitive enhancement will be conducted through conventional brain surgery as practiced today. These procedures are inherently risky and are performed only when necessary -- when the expected benefit outweighs the huge cost, substantial risk, and long recovery time.
More subtle than brain surgery is optogenetics, regarded by some as the scientific breakthrough of the last decade. Optogenetics allows researchers to control the precise activation of neurons through the introduction of light-sensitive genes to animal brain tissue.
Optogenetics is unlikely to be applied to humans before 2030-2040, for two reasons. The first is that it involves the introduction of foreign genes into human brain tissue, and gene therapy is in its infancy -- treatments derived from gene therapy are extremely rare and highly experimental. People have been killed by gene therapy gone awry. When gene therapy research moves in the direction of human enhancement, a massive backlash seems plausible. It may be banned entirely for enhancement purposes.
At the very least, the short-lived nature of gene therapy and problems with viral vectors ensure that gene therapy will stay experimental until entirely new vectors are developed. Chromallocytes are the ideal gene delivery vector, but those are quite far off. Is there something between current vectors and chromallocytes that produces safe, predictable gene therapy results? That is a great big question mark. What is needed is not one or two breakthroughs, but a long series of many breakthroughs. I challenge readers to find anyone in biotech who would bet that gene therapy will be made safe, predictable, and approved for use in humans within 10 years, 20 years, or 30. Developing new basic capabilities in biotech is a long, drawn out process.
The second reason optogenetics will not bear fruit for cognitive enhancement before 2030-2040 is that it requires slicing off part of the scalp and mounting fiber optics directly on the skull. This is all well and good for animals, which we torment with abandon, but it seems unlikely to be popular among the Homo sapiens crowd. Mature regenerative medicine would be necessary to heal tissue damage from this procedure.
According to Ray Kurzweil's scenario, "nanobots" will be developed during the late 2020s which will be injected into the human body by the trillions, where they can link up with neurons and augment the brain from the inside.
However, given the near complete lack of progress towards molecular nanotechnology since Eric Drexler wrote Engines of Creation in 1986, I find this hard to believe. Nanobots require nanofactories, nanofactories require assemblers, and assemblers would be highly complex aggregates of millions of molecules that themselves would need to be manufactured to atomic precision. Today, all objects manufactured to molecular precision have negligible complexity. The imaging tools that exist today -- and for the foreseeable future -- are far too imprecise to allow for troubleshooting molecular systems of non-negligible size and complexity that refuse to behave as intended. The more precise the imaging method, the more energy is delivered to the molecular structure, and the more likely it is to be blown into a million little pieces.
It is difficult to overstate how far we are from developing autonomous nanobots with the ability to perform complex tasks in a living human body. There is no reason to expect a smooth path from today's autonomous MEMS (micro-electro-mechanical systems) to the "nanobots" of futurist anticipation. Autonomous MEMS are in their infancy. Assemblers are probably a necessary prerequisite to miniature robotics with the power to enhance human cognition. No one has designed anything close to an assembler, and if progress continues as it has for the last 25 years, it will be many decades before one is developed.
So, those are three technologies that I have argued will not be applied to cognitive enhancement in the foreseeable future -- brain surgery, optogenetics, and nanobots.
Another university is opening a BCI lab: the University of Washington. It makes sense, given its proximity to the Allen Institute for Brain Science, among other reasons. Did I mention that Christof Koch, the new Chief Science Officer of the Allen Institute, will be speaking at Singularity Summit?
Here's an excerpt of the news release:
The National Science Foundation today announced an $18.5 million grant to establish an Engineering Research Center for Sensorimotor Neural Engineering based at the University of Washington.
"The center will work on robotic devices that interact with, assist and understand the nervous system," said director Yoky Matsuoka, a UW associate professor of computer science and engineering. "It will combine advances in robotics, neuroscience, electromechanical devices and computer science to restore or augment the body's ability for sensation and movement."
The text is pretty generic boilerplate; it's the action that is important. We will likely have to wait a year or more before any interesting breakthroughs from this lab hit the news.
From Science Daily: A new retinal prosthetic creates an image (middle) that more accurately reconstructs a baby's face (left) than the standard approach (right).
Researchers have developed an artificial retina that has the capacity to reproduce normal vision in mice. While other prosthetic strategies mainly increase the number of electrodes in an eye to capture more information, this study concentrated on incorporating the eye's neural "code" that converts pictures into signals the brain can understand.
Degenerative diseases of the retina -- nerve cells in the eye that send visual information to the brain -- have caused more than 25 million people worldwide to become partially or totally blind. Although medicine may slow degeneration, there is no known cure. Existing retinal prosthetic devices restore partial vision; however, the sight is limited. Efforts to improve the devices have so far largely focused on increasing the number of cells that are reactivated in the damaged retina.
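The "neural code" approach can be illustrated with a toy linear-nonlinear-Poisson (LNP) encoder, a standard textbook simplification of how retinal ganglion cells turn images into spike trains. Everything below -- the filter values, firing-rate scaling, and function name -- is a hypothetical sketch for illustration, not the researchers' actual encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(image, filters, dt=0.01):
    """Toy LNP model: convert an image patch into per-cell spike
    counts, mimicking the retina's image-to-spikes transform."""
    # Linear stage: each model ganglion cell projects the image
    # onto its receptive-field filter.
    drive = filters @ image.ravel()
    # Nonlinear stage: rectify and scale to firing rates (Hz).
    rates = np.maximum(drive, 0.0) * 50.0
    # Poisson stage: draw random spike counts for one time bin.
    return rng.poisson(rates * dt)

# Toy setup: 4 model cells, each with a random filter over an
# 8x8 (64-pixel) image patch.
filters = rng.standard_normal((4, 64)) * 0.1
image = rng.random(64)
spikes = encode_image(image, filters)
print(spikes.shape)  # one spike count per model cell
```

The reported device is similar in spirit: instead of stimulating raw pixel intensities, it first passes the image through a fitted encoder of this general kind, so the brain receives spike patterns closer to what a healthy retina would produce.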
This is a major BCI advance. Prior visual reconstruction implants had a much lower resolution. Within a couple decades it could become possible to use implants like this to generate and share complex virtual realities just "beamed" into one another's heads. Ramez Naam's More Than Human describes a similar technology and examines how it could be used to enhance human collaboration and the creative process.
If you can reconstruct a real scene and beam it to the brain, then you can also produce fake scenes if you have the right programming.
Raising giant insects to unravel ancient oxygen
The electronics for smart implants
SENS Foundation post on how resveratrol does not extend lifespan
Brian Wang reports on Zyvex progress in nanotechnology
How 3-D printing is transforming the toy industry
"Skin printer" could help heal battlefield wounds
Self-assembly revolutionizes metamaterial manufacture
Transgenic worms make tough fibers
Magnetic test reveals hyperactive brain network responsible for involuntary flashbacks
Controlling individual cortical nerve cells by human thought
Learning the truth not effective in battling rumors about NYC mosque, study finds
Fingers detect typos even when conscious brain doesn't
'Wireless' humans could form backbone of new mobile networks
Optical technique reveals unexpected complexity in mammalian olfactory coding
Carbon nanotube thermopower achieves specific power over seven times higher than lithium batteries
George Dvorsky: Why life extensionists need to be concerned about neurological diseases
ASIM Experts Series: Brain-Machine Interfacing: Current Work and Future Directions, by Max Hodak, October 17, 2010
"ASIM" stands for Advancing Substrate Independent Minds, the field previously known as mind uploading, though ASIM can be construed as broader. ASIM is the focus of Carboncopies, a new non-profit founded by Suzanne Gildert (now at D-Wave) and Randal Koene (Halcyon Molecular). Randal and I work at the same company so I get to see him in the lunch room now.
Brain-machine interfacing: current work and future directions
Max Hodak - http://younoodle.com/people/max_hodak
Abstract: Fluid, two-way brain-machine interfacing represents one of the greatest challenges of modern bioengineering. It offers the potential to restore movement and speech to the locked-in, and ultimately allow us as humans to expand far beyond the biological limits we're encased in now. But, there's a long road ahead. Today, noninvasive BMIs are largely useless as practical devices and invasive BMIs are critically limited, though progress is being made every day. Microwire array recording is used all over the world to decode motor intent out of cortex to drive robotic actuators and software controls. Electrical intracortical microstimulation is used to "write" information to the brain, and optogenetic methods promise to make that easier and safer. Monkey models can perform tasks from controlling a walking robot to feeding themselves with a 7-DOF robotic arm. Before we'll be able to make the jump to humans, biocompatibility of electrodes and limited channel counts are significant hurdles that will need to be overcome. These technologies are still in their infancy, but they're a huge opportunity in science for those motivated to help bring them through to maturity.
Max Hodak is a student of Miguel Nicolelis, the well-known BMI engineer.
Apparently it's on its way. That should increase popular familiarity with brain-computer interfacing.
MIT Professor Ed Boyden Describes a Revolutionary New Brain-Computer Interfacing Method (Video from SS09)
MIT’s Ed Boyden at Singularity Summit 2009 — Synthetic Neurobiology: Optically Engineering the Brain to Augment Its Function
Here's another interesting talk, this one by rising MIT star Ed Boyden on directly interfacing with the brain via optical signals.
See Wired's coverage.
See my post from last year on a "dream machine".
See also Ramez Naam's More Than Human, several chapters of which are devoted to a hypothetical brain implant that allows people to share their imaginations quickly and easily.
The vast majority of all thought is wasted because we forget what we were thinking. There is no record unless we write it all down.
Some form of electronic telepathy already exists, but it is crude. Ambient Corp's neckband lets you speak without opening your mouth. The system only knows 150 words.
In the longer term, it may be possible to use a similar technology to make a constant transcript of thoughts in realtime. This article from PopSci mentions:
Neuroscientists are already able to read some basic thoughts, like whether an individual test subject is looking at a picture of a cat or an image with a specific left or right orientation. They can even read pictures that you're simply imagining in your mind's eye. Even leaders in the field are shocked by how far we've come in our ability to peer into people's minds.
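This kind of mind reading is usually framed as a classification problem over brain-activity patterns: record activity while the subject views each stimulus class, then decode held-out trials. Here is a minimal sketch on synthetic data -- the voxel count, noise level, class labels, and the nearest-centroid decoder are all assumptions for illustration, not the methods of the studies mentioned above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 200 voxels, two stimulus classes ("cat" vs.
# "grating"), each with a fixed mean activity pattern plus noise.
n_voxels, n_trials = 200, 50
pattern = {label: rng.standard_normal(n_voxels) for label in ("cat", "grating")}

def trials(label):
    """Simulate noisy single-trial voxel patterns for one class."""
    return pattern[label] + 0.8 * rng.standard_normal((n_trials, n_voxels))

# "Training": average the training trials into one centroid per class.
train = {label: trials(label) for label in pattern}
centroids = {label: X.mean(axis=0) for label, X in train.items()}

def decode(trial):
    # Nearest-centroid decoding: pick the class whose mean pattern
    # is closest to the observed activity.
    return min(centroids, key=lambda label: np.linalg.norm(trial - centroids[label]))

# Fresh held-out trials should mostly decode correctly.
test_set = trials("cat")
accuracy = np.mean([decode(t) == "cat" for t in test_set])
print(accuracy)
```

Real decoding studies use the same basic recipe with fMRI or electrode data and better classifiers; the point is that "reading a thought" here means distinguishing a small, pre-trained set of activity patterns, not open-ended telepathy.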
Did you know that we can already read basic thoughts? The PopSci article is optimistic about timeframes -- it sort of has to be, because it is a magazine made for entertainment. (And generally untrustworthy, like its cousin New Scientist.) Mind reading technology may be somewhat far off (or possibly not), but it certainly has interesting implications. I am curious about combining mind-reading technology with augmented reality to open up exciting new forms of collaboration and gaming. There could be major breakthroughs in that area within a decade, if we are lucky.
Check out this interesting comparison of consumer brain-computer interfaces on Wikipedia.
Emotiv is apparently coming out with their EPOC in Q4 2009. The page says:
Due to the complex detection algorithms involved, there is a slight lag in detecting thoughts, making them more suitable for use in games like Harry Potter than FPS games.
Obviously faster computers would help here. People ask what economic motivation the average person would have for wanting faster computers. This could become one of them.
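One concrete source of that lag is buffering: a window-based detection algorithm cannot respond until a full window of samples has arrived, and then it still has to run the classifier. A back-of-envelope sketch, where the window size, sample rate, and compute time are all made-up numbers for illustration:

```python
def detection_latency_ms(window_samples, sample_rate_hz, compute_ms):
    """Minimum lag between a mental command and its detection:
    time to fill the sample buffer plus classification compute."""
    buffer_ms = window_samples / sample_rate_hz * 1000.0
    return buffer_ms + compute_ms

# Hypothetical numbers: a 256-sample window at 128 Hz plus 50 ms of
# classification compute gives a lag of about two seconds.
print(detection_latency_ms(256, 128, 50))  # 2050.0
```

Note that faster computers shrink only the compute term; the buffering term is set by how much signal the detection algorithm needs, so cutting the lag further also means better algorithms that work on shorter windows.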