Microscopy is all about tradeoffs between the size of an imaged volume and spatial and temporal resolution. That is, until now. A new microscopy technique invented by researchers at the University of Vienna and MIT allows scientists to comprehensively image the neural firings of a living roundworm brain in realtime, vastly increasing the amount of data we can collect.
This is the first time a microscopy technique has been used to measure neural activity in an entire animal in realtime. The principle behind its operation is similar to how the "bullet time" sequence in The Matrix was filmed, but with all the cameras returning data at the same time and the sample being transparent.
In the filming of The Matrix, a series of cameras arrayed around Keanu Reeves captured his movements as he fell dramatically backwards, dodging bullets while the camera angle spun around him. A light field microscope is similar: it resembles a normal optical microscope, but it uses an array of microlenses that capture optical data from different angles around the sample in realtime. A powerful computer then uses a sophisticated algorithm to reconstruct a high-resolution 3D model. The light field microscope itself already existed; the breakthrough is the reconstruction algorithm.
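The idea of recovering depth from many simultaneous viewpoints can be illustrated with a toy sketch. To be clear, this is not the team's algorithm: their method solves a full 3D deconvolution problem, while the code below shows only the simplest "shift-and-sum" synthetic-aperture refocusing, where each sub-lens view is shifted in proportion to its angular offset and a candidate depth so that sources at that depth line up. All names and parameters here are illustrative.

```python
import numpy as np

def refocus(views, offsets, depth):
    """Shift-and-sum refocusing: align all angular views on one depth plane.

    views   -- list of 2D arrays, one image per microlens viewpoint
    offsets -- matching list of (dy, dx) angular offsets for each view
    depth   -- candidate depth plane (displacement scales linearly with it)
    """
    acc = np.zeros_like(views[0], dtype=float)
    for img, (dy, dx) in zip(views, offsets):
        # A source at this depth appears displaced by depth*(dy, dx) in
        # this view, so shift the view back by that amount to align it.
        acc += np.roll(img, (-depth * dy, -depth * dx), axis=(0, 1))
    return acc / len(views)

# Synthetic scene: a single point source at depth 2, seen from a 3x3
# grid of viewpoints (each view displaces the point by depth*offset).
offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
views = []
for dy, dx in offsets:
    img = np.zeros((32, 32))
    img[16 + 2 * dy, 16 + 2 * dx] = 1.0
    views.append(img)

# Sweep candidate depths: the refocused image is sharpest (brightest
# peak) at the true depth, which is how a 3D stack can be recovered.
peaks = [refocus(views, offsets, z).max() for z in range(5)]
best_depth = int(np.argmax(peaks))  # 2, the depth of the point source
```

Sweeping the candidate depth over a whole range yields a focal stack, one slice per depth, which is the basic sense in which a single light-field snapshot encodes a 3D volume.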
Prior to this, 3D imaging approaches were limited by scan speed: microscopes sophisticated enough to capture an entire roundworm at once could typically scan only about ten times a second. The new approach operates at 50 Hz, or 50 times a second, fast enough to pick up the nuances of neural activity in the densely packed roundworm brain, with axial spatial precision down to 1.4 microns.
The light field microscope is relatively affordable, and the algorithm will now be used in behavioral studies of roundworms. This will allow researchers to expose the worms to precise stimuli and record their exact neural responses. It brings us closer to creating an exhaustive simulation, or algorithm, representing the roundworm itself. The paper describes the method as providing "tools for non-invasive interrogation of neuronal circuits with high spatio-temporal resolution."
A longer-range objective is to create an imaging system with such a wide field of view that it can observe the roundworm in a free-moving environment, rather than immobilized, as it is now. With the right improvements, the same principle could even be used to observe the neural activity of a population of interacting roundworms. The technique could also be used to observe other small, transparent organisms, such as zebrafish larvae.
A Stanford project previously created a high-resolution light field microscope for imaging the neural activity of an entire small organism, but it did not have a high enough frame rate to observe all the activity in realtime. The online updates for this project ended in 2008. Now, thanks to MIT and University of Vienna researchers, their original vision has been achieved.
Hopefully, these principles will be extended to observe larger and more complicated organisms. Pretty soon we could be looking at the cognitive algorithms of small animals in a much clearer way, and elucidating key details on the fundamentals of cognition. The time of full-brain dynamic circuit-mapping of neural processes is nigh!
Is Google working towards a heads-up display built into a contact lens? It sure looks like they're heading in that direction, with a contact lens that measures blood glucose levels in tears using a tiny sensor.
The initial focus is on helping people with diabetes. Diabetics have to measure their blood glucose levels several times a day, which usually involves pricking their finger and drawing blood, a painful routine. By using a contact lens that measures their blood glucose and gives them the heads-up, they can avoid the chore.
Google engineers described the electronics in the contact lens as so small that they look like "bits of glitter," along with an antenna "thinner than a human hair". Though the contact lens is currently just a prototype, the engineers are "exploring integrating tiny LED lights that could light up to indicate that glucose levels have crossed above or below certain thresholds."
The smart lens consists of sensors sandwiched between two soft layers. A pinhole in the lens allows tear fluid to make contact with the sensor. Besides the sensor, the lens contains a capacitor, a controller, and a minuscule RFID chip that receives power beamed from external devices.
Diabetics must measure their blood glucose levels frequently because they fluctuate so often. "Glucose levels change frequently with normal activity like exercising or eating or even sweating," say the project co-founders Brian Otis and Babak Parviz. "Sudden spikes or precipitous drops are dangerous and not uncommon, requiring round-the-clock monitoring."
This new contact lens has to meet novel requirements, such as strict safeguards against the lens overheating or being hacked from the outside. For the 25.8 million Americans living with diabetes, getting accurate readings is a matter of life and death.
Babak Parviz, one of the leaders of the project, was one of the first people in the world to work on smart contact lenses. Parviz was formerly a professor at the University of Washington, where he collaborated with Microsoft Research on a similar lens, which was unveiled in 2011. The project had no follow-up, however.
Various smart contact lenses, such as Sensimed Triggerfish, already exist in Europe, but they are not cleared by the FDA for sales in the United States. A group at Sweden's Malmo University has developed a contact lens that runs on a fuel cell powered by tears. The fuel cell uses a small amount of ascorbate in tears to generate electricity, similar to the way in which a lemon can power a light bulb if you stick wires into it.
In The Age of Spiritual Machines (1999), futurist Ray Kurzweil, who now works at Google, predicted that by the year 2019 we would have smart contact lenses which include retinal displays that project virtual reality images onto the eye. With Google's latest announcement, it appears that we are moving closer to this future.
Although physical enhancement is what most people associate with transhumanism, it's not particularly interesting. A man with tentacles and wings who can fly and breathe underwater is still just some dude. Humans are primitive beings, with conspicuously primitive minds -- we just recently evolved from un-intelligent apes that used the same stone tools for millions of years.
Everything truly exciting about the transhumanist project lies in the mental realm. Only through opening up and intervening in the brain can we really change ourselves and the way the world works. Anything else is just the surface.
What approaches can we take to cognitive enhancement?
First, take brain surgery. It is extremely unlikely that cognitive enhancement will be conducted through conventional brain surgery as practiced today. These procedures are inherently risky, and they are only performed when medically necessary, when the expected benefits outweigh the huge cost, substantial risk, and long recovery time.
More subtle than brain surgery is optogenetics, regarded by some as the scientific breakthrough of the last decade. Optogenetics allows researchers to control the precise activation of neurons through the introduction of light-sensitive genes to animal brain tissue.
Optogenetics is unlikely to be applied to humans before 2030-2040, for two reasons. The first is that it involves the introduction of foreign genes into human brain tissue, and gene therapy is in its infancy -- treatments derived from gene therapy are extremely rare and highly experimental. People have been killed by gene therapy gone awry. When gene therapy research moves in the direction of human enhancement, a massive backlash seems plausible. It may be banned entirely for enhancement purposes.
At the very least, the short-lived nature of gene therapy and problems with viral vectors ensure that gene therapy will stay experimental until entirely new vectors are developed. Chromallocytes are the ideal gene delivery vector, but those are quite far off. Is there something between current vectors and chromallocytes that produces safe, predictable gene therapy results? That is a great big question mark. What is needed is not one or two breakthroughs, but a long series of many breakthroughs. I challenge readers to find anyone in biotech who would bet that gene therapy will be made safe, predictable, and approved for use in humans within 10 years, 20 years, or 30. Developing new basic capabilities in biotech is a long, drawn out process.
The second reason optogenetics will not bear fruit for cognitive enhancement before 2030-2040 is that it requires slicing off part of the scalp and mounting fiber optics directly on the skull. This is all well and good for animals, which we torment with abandon, but it seems unlikely to be popular among the Homo sapiens crowd. Mature regenerative medicine would be necessary to heal tissue damage from this procedure.
According to Ray Kurzweil's scenario, "nanobots" will be developed during the late 2020s which will be injected into the human body by the trillions, where they can link up with neurons and augment the brain from the inside.
However, given the near complete lack of progress towards molecular nanotechnology since Eric Drexler wrote Engines of Creation in 1986, I find this hard to believe. Nanobots require nanofactories, nanofactories require assemblers, and assemblers would be highly complex aggregates of millions of molecules that themselves would need to be manufactured to atomic precision. Today, all objects manufactured to molecular precision have negligible complexity. The imaging tools that exist today -- and for the foreseeable future -- are far too imprecise to allow for troubleshooting molecular systems of non-negligible size and complexity that refuse to behave as intended. The more precise the imaging method, the more energy is delivered to the molecular structure, and the more likely it is to be blown into a million little pieces.
It is difficult to overstate how far we are from developing autonomous nanobots with the ability to perform complex tasks in a living human body. There is no reason to expect a smooth path from today's autonomous MEMS (micro-electro-mechanical systems) to the "nanobots" of futurist anticipation. Autonomous MEMS are still in their infancy. Assemblers are probably a necessary prerequisite to miniature robotics with the power to enhance human cognition. No one has designed anything close to an assembler, and if progress continues as it has for the last 25 years, it will be many decades before one is developed.
So, that is three technologies that I have argued will not be applied to cognitive enhancement in the foreseeable future -- brain surgery, optogenetics, and nanobots.
Marijuana is just the beginning. Soon, systems like this will be able to help people identify other plants, products, textures, landscapes, sounds, and locations.
Ever wonder what type of marijuana you have? There's an app for that, and it's called StrainBrain, the newest creation from the Medical Cannabis Network (MCN). At StrainBrain.com, medical marijuana patients can upload pictures of their cannabis, and the web application will use a proprietary software system (similar to facial recognition technology) to automatically identify the strain and its medical uses, show locations where the strain can be purchased legally, and suggest similar strains. This is the first time facial recognition technology has been applied to the cannabis industry, making StrainBrain the most sophisticated marijuana review site ever created.
And yeah, this is not referring to artificially intelligent marijuana...
I find such statements inspiring whether they are meant seriously or not, and whether they come true or not.
Elon Musk shows that you can be rich and spend a lot of money without increasing net existential risk. Not increasing it, not decreasing it, just... risk-free behavior.
Space travel does not significantly lower the probability of existential risk because the majority of the probability mass is occupied by human-indifferent superintelligence, which can casually reach into space if it wants to. Also, self-sufficient space colonies are very far off. You need something miles across at a cost of tens of millions of dollars given current technology.
Another point I've made in the past is that as everyone becomes uploads and accelerates their thinking speeds, space will begin to seem very far away. Right now, Luna is 3-4 days away. To beings whose brains are made up of molecular computers with 100 GHz switching speeds, Luna is about 3,000,000,000 days away. That's about eight million years. An eight million year trip to go to an empty wasteland without any art, culture, or much Kolmogorov complexity to speak of beyond geological and mineral patterns?
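The arithmetic behind that figure, assuming (as the argument implicitly does) a roughly 100 Hz switching rate for biological neurons as the baseline:

```python
BIO_HZ = 100         # rough biological neuron switching rate (assumption)
FAST_HZ = 100e9      # 100 GHz molecular computing, per the scenario above
speedup = FAST_HZ / BIO_HZ               # a billion-fold subjective speedup

trip_days = 3                            # Earth-to-Luna transit, wall clock
subjective_days = trip_days * speedup    # 3,000,000,000 subjective days
subjective_years = subjective_days / 365.25
print(f"{subjective_years / 1e6:.1f} million subjective years")
```

A 3-day transit times a billion-fold speedup gives three billion subjective days, which works out to roughly eight million subjective years, matching the figure in the text.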
The near-term future of humanity is to convert the Earth into a "computronium globe" with a web of trillions of simulated worlds within it. In several subjective millennia, we may consume the Moon, but it will be subjective millions of years beyond that until we colonize Mars. In many billions of years, we may be fortunate enough to consume the Sun.
From Science Daily:
Scientists have claimed one of the milestones in the drive for sustainable energy -- development of the first practical artificial leaf. Speaking in Anaheim, California at the 241st National Meeting of the American Chemical Society, they described an advanced solar cell the size of a poker card that mimics the process, called photosynthesis, that green plants use to convert sunlight and water into energy.
"A practical artificial leaf has been one of the Holy Grails of science for decades," said Daniel Nocera, Ph.D., who led the research team. "We believe we have done it. The artificial leaf shows particular promise as an inexpensive source of electricity for homes of the poor in developing countries. Our goal is to make each home its own power station," he said. "One can envision villages in India and Africa not long from now purchasing an affordable basic power system based on this technology."
Wired reports that the leaf is ten times more efficient than a real leaf.
I am quoted in the current featured article in the online edition of The Week, about thorium nuclear power:
Why are fans so excited about it?
Thorium-fueled reactors are supposed to be much safer than uranium-powered ones, use far less material (1 metric ton of thorium gets as much bang as 200 metric tons of uranium, or 3.5 million metric tons of coal), produce waste that is toxic for a shorter period of time (300 years vs. uranium's tens of thousands of years), and are hard to weaponize. In fact, thorium can even feed off of toxic plutonium waste to produce energy. And because the biggest cost in nuclear power is safety, and thorium reactors can't melt down, argues Michael Anissimov in Accelerating Future, they will eventually be much cheaper, too.
Thorium addresses the biggest safety concerns: proliferation and meltdown, which would make the plants much less attractive as terrorist targets as well.
Here's a quote from a NASA paper, "High Efficiency Nuclear Power Plants Using Liquid Fluoride Thorium Reactor Technology":
As a result fission fragment waste products are reduced by a commensurate amount, and their radioactivity would decay to background levels in less than 300 years, as contrasted to over 10,000 years for currently used reactors, thus obviating the need for long term storage, such as at Yucca Mountain. The thermal spectrum LFTR concept is inherently safe, with a negative temperature coefficient of reactivity, thus making a "core meltdown" due to loss of coolant impossible. Since the fuel is a pumped liquid solution of LiF-BeF2-UF4, refueling can be accomplished without reactor shutdown. The fissile fuel can also be made "proliferation resistant" by permitting it to be contaminated (denatured) with small amounts of U232 to increase its dose rate which would greatly reduce its unshielded exposure time and greatly increase detectability.
With thorium ores, such as monazite, being four times more abundant in the earth's crust than uranium ores, over 60 percent of the world's resources are located in the following democratically governed countries: Australia (18 percent), United States (16 percent), India (13 percent), Brazil (9 percent), and Norway (5 percent). Thus future global energy demands could be met by these thorium sources for several tens of millennia.
China and India are investing heavily in thorium. The USA is not, because Americans are irrationally afraid of the word "nuclear". Great job, America.
In a recent post I made on "Anonymous", commenter "mightygoose" said:
i would agree with matt, having delved into various IRC channels and metaphorically walked among anonymous, i would say that they are fully aware that they have no head, no leadership, and while you can lambast their efforts as temporary nuisance, couldnt the same be said for any form of protest (UK students for example) and the effective running of government.
They are dependent on tools and infrastructure provided by a small, elite group. If it weren't for this infrastructure, 99% of them wouldn't even have a clue about how to even launch a DDoS attack.
A week ago in the Financial Times:
However, a senior US member of Anonymous, using the online nickname Owen and evidently living in New York, appears to be one of those targeted in recent legal investigations, according to online communications uncovered by a private security researcher.
A co-founder of Anonymous, who uses the nickname Q after the character in James Bond, has been seeking replacements for Owen and others who have had to curtail activities, said researcher Aaron Barr, head of security services firm HBGary Federal.
Mr Barr said Q and other key figures lived in California and that the hierarchy was fairly clear, with other senior members in the UK, Germany, Netherlands, Italy and Australia.
Of a few hundred participants in operations, only about 30 are steadily active, with 10 people who "are the most senior and co-ordinate and manage most of the decisions", Mr Barr told the Financial Times. That team works together in private internet relay chat sessions, through e-mail and in Facebook groups. Mr Barr said he had collected information on the core leaders, including many of their real names, and that they could be arrested if law enforcement had the same data.
Many other investigators have also been monitoring the public internet chats of Anonymous, and agree that a few seasoned veterans of the group appear to be steering much of its actions.
Yes... just like I already said in December. There may be many participants in Anonymous that would like to believe that they have no leadership, no head, but the fact is that any sustained and effective effort of any kind requires leadership.
It's funny how some people like to portray Anonymous as some all-wise decentralized collective, but like I said, if /b/ were shut down, they would all scatter like a bunch of ants. Anonymous has the weakness that it isn't unified by any coherent philosophy. This is not any kind of intellectual group. In contrast, groups like Transhumanism, Bayesianism, and Atheism are bound together by central figures, ideas, texts, and physical meetings.
The Brain Systems, Connections, Associations, and Network Relationships project (a phrase with more words than strictly necessary in order to bootstrap a good acronym) assumes that somewhere in all the chaos and noise of the more than 20 million papers on PubMed, there must be some order and rationality.
To that end, we have created a dictionary of hundreds of brain region names, cognitive and behavioral functions, and diseases (and their synonyms!) to find how often any two phrases co-occur in the scientific literature. We assume that the more often two terms occur together (at the exclusion of those words by themselves, without each other), the more likely they are to be associated.
Are there problems with this assumption? Yes, but we think you'll like the results anyway. Obviously the database is limited to the words and phrases with which we have populated it. We also assume that when words co-occur in a paper, the relationship is a positive one (i.e., brain areas A and B are connected, as opposed to not connected). Luckily, there is a positive publication bias in the peer-reviewed biomedical sciences that we can leverage to our benefit (hooray, biases!). Furthermore, we cannot disambiguate English homographs; thus, a search for the phrase "rhythm" (to ascertain the brain regions associated with musical rhythm) gives the strongest association with the suprachiasmatic nucleus (that is, circadian rhythms!).
Despite these limitations, we believe we have created a powerful visualization tool that will speed research and education, and hopefully allow for the discovery of new, previously unforeseen connections between brain, behavior, and disease.
Carl Zimmer wrote this: "Can You Live Forever? Maybe Not -- But You Can Have Fun Trying". This is a very positive, yet slightly skeptical look at the Singularity movement. This article is a follow-up to Zimmer's earlier article in Playboy, which came out this January. This year, there have been articles on the Singularity Summit and Singularity Institute in Playboy, GQ, the UK Independent, and Scientific American. Here's a funny bit from the current article:
After the meeting I decided to visit researchers working on the type of technology that people such as Kurzweil consider the steppingstones to the Singularity. Not one of them takes Kurzweil's own vision of the future seriously. We will not have some sort of cybernetic immortality in the next few decades. The human brain is far too mysterious and computers far too crude for such a union anytime soon, if ever. In fact, some scientists regard all this talk of the Singularity as a reckless promise of false hope to the afflicted.
But when I asked these skeptics about the future, even their most conservative visions were unsettling: a future in which people boost their brains with enhancing drugs, for example, or have sophisticated computers implanted in their skulls for life. While we may never be able to upload our minds into a computer, we may still be able to build computers based on the layout of the human brain. I can report I have not drunk the Singularity Kool-Aid, but I have taken a sip.
Taking a sip is a subset of drinking.