The Future of Technology

swan_aisociety_banner.png

We usually think about growth in linear and exponential models… but the biggest impacts come from discontinuous change, and history is a chain of discontinuities. At the Artificial Intelligence and Society event hosted by Santa Clara University and the Singularity Institute for Artificial Intelligence, Melanie Swan presented on potentially disruptive technologies for the 21st century: synthetic biology, metaverse technologies, robotics, fabbing, quantum computing, intelligence augmentation, personalized medicine, artificial intelligence, molecular nanotechnology, affordable space launch and anti-aging therapies. A multidisciplinary introduction to thinking critically about the ramifications of accelerating technological change, the presentation is one of several open source projects available on her personal website.

The following transcript from Melanie Swan’s Artificial Intelligence and Society presentation “The Future of Technology” has been edited and approved by the speaker. Audio is also available.

The Future of Technology

future_of_technology_1.png

I’m excited to begin this talk about the future of technology and the Singularity with you.

future_of_technology_2.png

Four points in summary. First, when we think about growth and change we usually use linear and exponential models, but the biggest changes are discontinuous. History is a chain of discontinuities. Second, technology is no longer a specific domain: it has become pervasive in all areas of life. Third, the computational capability needed for advanced technology will continue to be realized in hardware, but has some challenges in software. Fourth, in the next fifty years we will continue to see linear, exponential and discontinuous growth, and probably at least one revolution.

future_of_technology_3.png

Looking more specifically at the paradigms of growth and change, first is linear: things that change a little bit each period. Population, GDP, lifespan… this is linear. Second is exponential: things that can double quickly. Bacteria in a petri dish, water lilies covering a pond, technology and communications, CPU speed, bandwidth, memory, processors, social websites, YouTube, Facebook application installs… this is exponential. Third is discontinuous. This is when things are going along… and then change dramatically. The car, the plane, radio, wars, radar, nuclear weapons, satellites, computers, the internet and globalization were all discontinuities. Things were completely different afterward. The surprise is not the technology itself, but the impact it has: all of these items were predictable ahead of time, but not the impact they had. This is a lot like the way aspects of biotechnology, nanotechnology and artificial intelligence are out in society now but have not yet triggered a substantial impact.
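To make the first two paradigms concrete, here is a minimal sketch, in Python with illustrative starting values (not figures from the talk), of how quickly linear and exponential growth diverge; discontinuities, by contrast, do not follow any formula.

```python
# A toy comparison of linear vs. exponential growth over ten periods.
# Starting value, increment, and doubling time are illustrative only.

def linear(start, increment, periods):
    """Add a fixed amount each period (population, lifespan)."""
    return [start + increment * t for t in range(periods + 1)]

def exponential(start, doubling_time, periods):
    """Double every `doubling_time` periods (CPU speed, bandwidth)."""
    return [start * 2 ** (t / doubling_time) for t in range(periods + 1)]

for t, (lin, exp) in enumerate(zip(linear(100, 10, 10),
                                   exponential(100, 2, 10))):
    print(f"period {t:2d}: linear {lin:4.0f}   exponential {exp:7.0f}")
# After ten periods the linear series has doubled (100 -> 200),
# while the exponential series has grown 32-fold (100 -> 3200).
```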

Discontinuities by their nature are nearly impossible to predict, but there are clues. One can assess the possibility for rapid transition time or doubling capability. One can look at cascading technology advances in related areas. For example, advances in disc drive and battery technology allowed the realization of the MP3 player. Advances in natural language understanding could quickly trigger a series of innovations in artificial intelligence.

future_of_technology_4.png

So where will future discontinuities be coming from? What might be the next internet? This is an amazing time right now. Here are eleven different technologies which could have a revolutionary impact: fabbing, synthetic biology, metaverse technologies, robotics, quantum computing, intelligence augmentation, personalized medicine, artificial intelligence, molecular nanotechnology, affordable space launch, and anti-aging therapies. Perhaps even more importantly, whatever happens next will shift how subsequent innovations are perceived and realized. For example, immersive virtual reality may change the requirements for molecular nanotechnology. Biotechnology innovations may change how artificial general intelligence is developed.

future_of_technology_5.png

It is necessary to understand computation, as it underlies the realization of most other technological advances. This is the usual Ray Kurzweil exponential growth technology curve, charting, from 1900 to the present, the calculations per second per thousand dollars delivered by different computing devices. A lot of growth has come from the integrated circuit, but that is presumably not sustainable forever. What could be next?

Several ideas have been proposed and are being worked on in the lab. For example: new materials that could extend traditional semiconductor construction, 3D circuits to expand dimensionality and capacity, quantum computing, which could work in specific situations, molecular electronics as a finer fabrication of circuits, optical computing using light instead of electrical signals, and DNA computing, massively parallel techniques and other structures from biology.

future_of_technology_6.png

Zooming in more closely from the last slide, this chart shows major processor announcements from 1970 to the present, measured by transistors per chip. All recent announcements are still firmly on the Moore’s Law curve, doubling in capacity every 18 to 24 months.

future_of_technology_7.png

Here is how a chip works. The smallest element is the transistor, which basically controls the passage of electrical current as it flows from the source to the drain, regulated by the gate. When it is open, it is a one. When it is closed, it is a zero. This is how ones and zeros are generated in a computer. When a document is opened, thousands of zero and one commands are issued to the computer to make it happen.

The standard transistor on the left had worked fine through several cycles of the technology, but to get down to the 45 nanometer chip, Intel had to make more dramatic changes than at any time in the last 40 years. It did so with new materials: a different insulator made of hafnium, a high-k material, and a metal gate, to control the current leakage that occurs in the chip.

future_of_technology_8.png

There is an industry roadmap for the next nodes. The 32 nanometer chip is forecast to arrive in 2009, with 1.9 billion transistors. The roadmap will then move to the 22, 16, and 11 nanometer nodes. At 10 nanometers, molecular computing will be needed: the specific placement of atoms to create the circuits, due to the small size. Roadmaps are a useful technique in technology development: the participants in an industry come together and agree on what the big outstanding problems are, and what some possible solutions and timeframes for them might be. Roadmapping is used in semiconductors, storage, nanotechnology, and metaverse technologies. There should be one for artificial intelligence.

future_of_technology_9.png

Software is less tangible and harder to measure. Some people estimate software progress to be doubling every six to ten years, versus every one to two years for hardware. Wirth’s Law suggests software gets slower faster than hardware gets faster. There have been failures in the largest and most complex software projects attempted, notably updates to the CIA and FAA systems in the last decade. Ada Lovelace is credited as being the first programmer; she constructed an algorithm in 1842. The industry has grown since then and is expected to reach 19 million programmers in 2010, which is not many people compared to other industries.
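To see what those doubling estimates imply, here is a small sketch of how far hardware and software progress diverge over time; the midpoint doubling times (1.5 years for hardware, 8 years for software) are my choice within the ranges cited above, not figures from the talk.

```python
# Progress multiple after `years`, given a doubling time in years.
def growth(years, doubling_time):
    return 2 ** (years / doubling_time)

# Midpoints of the estimates above: hardware ~1.5 yrs, software ~8 yrs.
for years in (5, 10, 20):
    hw = growth(years, 1.5)
    sw = growth(years, 8.0)
    print(f"{years:2d} years: hardware {hw:8.0f}x   "
          f"software {sw:4.1f}x   gap {hw / sw:7.0f}x")
# Over 20 years hardware improves ~10,000x while software roughly
# quintuples -- one way to build intuition for Wirth's Law.
```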

Some possible improvements are a greater extension of open source ecologies and software, more interoperability testing (such as the company SpikeSource offers), more standards and reusable modules (there is a lot of recoding that goes on), extending Web 2.0 software into the enterprise, and finally software to write software. People are not very good at writing software.

Applying computation to the long-term future of intelligence, there are two main contenders: the computer and the brain. The fastest supercomputer is IBM’s Blue Gene/L, which is located at the Lawrence Livermore Lab. It can process over 596 trillion instructions per second and has a 74 terabyte memory. Humans, on the other hand, are estimated to process 20,000 trillion instructions per second on average and have a thousand terabyte memory.

So the biggest supercomputer is still only about 3%, or roughly a thirtieth, of the size of a human brain. However, at current growth rates, raw processing power would be equal in 2010. Furthermore, a human does not use 100% of the brain to do any one task, so it is estimated that only a hundredth or a thousandth of human capability would be necessary to model intelligence. So the hardware exists, and it is a software problem.
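As a back-of-the-envelope check on that crossover claim, here is a sketch that computes the parity year from the figures above. The throughput numbers are from the talk; the doubling period is my assumption, and the answer is sensitive to it: annual doubling puts parity around 2012, while a doubling time of roughly seven months reproduces the 2010 figure.

```python
import math

SUPERCOMPUTER_TIPS = 596   # IBM Blue Gene/L (trillion instructions/sec)
BRAIN_TIPS = 20_000        # estimated human brain equivalent, per the talk

def parity_year(start_year=2007, doubling_years=1.0):
    """Year when machine throughput matches the brain estimate, assuming
    capacity doubles every `doubling_years` years (an assumed rate)."""
    doublings = math.log2(BRAIN_TIPS / SUPERCOMPUTER_TIPS)
    return start_year + doublings * doubling_years

print(f"ratio today:      {SUPERCOMPUTER_TIPS / BRAIN_TIPS:.1%}")   # ~3.0%
print(f"doublings needed: {math.log2(BRAIN_TIPS / SUPERCOMPUTER_TIPS):.1f}")
print(f"parity, annual doubling:  {parity_year():.0f}")             # ~2012
print(f"parity, 7-month doubling: {parity_year(doubling_years=7/12):.0f}")
```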

future_of_technology_10.png

We have unlimited knowledge of machines because we built them. We have no idea how to build a human other than the traditional way. Machines double in capacity every year or two. Humans, on the other hand, take 10,000 years for evolutionary adaptations. Machines have linear von Neumann architectures, while humans are massively parallel. Humans understand fuzzy logic; for example, we are still defining what a “planet” is. But machines require rigid specificity from the outset. Machines are still only good at specific tasks like chess, checkers, and seismic data analysis, whereas humans are general purpose problem solvers. However, machines are easy to back up, as compared with the more delicate human nucleotide chassis.

In the future, there will probably be an overlap of the two. Humans may modify biological properties, bring electronics on board, and choose more capacious hardware for their mindfile software. Machines will start to become more proficient in general purpose problem solving, and perhaps choose embodied forms for themselves.

What is the status of intelligent machines? We have been working on this problem for over 50 years and the central challenge is still the same: to create something that can perceive its environment, acquire and represent knowledge, and act. There are many approaches. Several of them represent a trade-off between the top-down method and the bottom-up method. The top-down method specifies information efficiently in concepts, but falters in new situations. The bottom-up method specifies all possible situations, but becomes computationally intractable.

There are other methods, such as evolutionary algorithms, which generate many possible solutions to a problem and evaluate which are the most fit. There are learning algorithms, such as those used by Google Translate and Google Spelling Correction, where the program does not have to be particularly clever, just able to search a large corpus of data quickly. There are mechanistic brute force methods, like the IBM Blue Brain supercomputer simulation of one cortical column of a rat. There are also many other approaches and hybrids of these methods.
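To illustrate the evolutionary approach described above, here is a minimal sketch, not from the talk, that evolves a string of characters toward a target phrase by repeatedly mutating the fittest candidate and keeping the best offspring.

```python
import random

TARGET = "INTELLIGENT MACHINE"   # an arbitrary illustrative goal
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    """Count the positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Randomly replace each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=100, generations=1000):
    # Start from a random population of candidate solutions.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for gen in range(generations):
        best = max(population, key=fitness)       # select the fittest
        if best == TARGET:
            return gen, best
        # Next generation: mutated copies of the fittest candidate.
        population = [mutate(best) for _ in range(pop_size)]
    return generations, best

generation, result = evolve()
print(f"Reached '{result}' after {generation} generations")
```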

Current funding for narrow AI, specific problem solving, comes mainly from DARPA and corporate research budgets. Strong AI, for general purpose problem solving, is the focus of many innovative startups, and there is quite a debate on issues such as how to create moral or Friendly AI.

In the near term, applications are forecast to revolve around pattern recognition: next generation speech recognition systems, and eventually a LUI, a language user interface for talking to computers. Other applications are improved facial recognition, such as distinguishing between a person holding a gift and one holding a crowbar. There may also be intelligent transportation: truly smart cars that can accept radar input and take action, as well as smart buildings that respond to changes in the environment.

future_of_technology_11.png

Intelligent machines can take myriad forms. They can be robotic, like the LawnBott, PackBot, Kismet, and DARPA Grand Challenge winners. They can be distributed as a sensor network across a corporate or university campus with centrally-located processing. They can be virtual, such as non-player characters (NPCs) in videogames and chatbots in virtual worlds. Or, they can be purely digital, not physically embodied at all.

future_of_technology_12.png

Nanotechnology is the convergence of all science. Being able to see at the molecular level has opened up a lot of new areas. A molecular nanotechnology revolution could have a larger impact than the industrial revolution. Molecular nanotechnology is not just working at the scale of nanometers, as in semiconductors and biology, but rather the specific placing of atoms in three dimensions in precise ways to build integral structures from the bottom up.

The scale is very small. A human hair is 80,000 nanometers wide. The limit of human vision is 10,000 nanometers. A virus is fifty nanometers, DNA is two nanometers wide, and an atom is a tenth of a nanometer. Working at this scale is very difficult. Imagine a plate of honey-covered peas on a kitchen table, while you are upstairs with a fishing pole, trying to poke through a two-inch hole in the floor and move the peas on the plate into a 3D structure. So far, microscopes are among the best tools for viewing and moving atoms.

This diagram is of an STM, a scanning tunneling microscope, which works by tracing a needle over a sample, using electrical charges to determine what types of atoms and properties are below it, and then creating a visual image. There are other microscopy tools used for manipulating atoms, but what is really needed for mass-scale manipulation and construction are molecular mills and molecular motors as machine components. Ultimately, we would like to do a lot of things, including having a home appliance, a molecular synthesizer as shown here on a counter-top, supplied by electricity, water and element canisters, that would make items on demand: food and clothing, from personally designed items or from designs found on the web.

Here is what the home appliance molecular synthesizer looks like right now. The fabber. The do-it-yourself movement has drawn lots of attention and interest. MIT has eight fab labs in developing markets worldwide, where they make physical objects from open source designs found on the internet.

future_of_technology_13.png

There are also other local community fabs like the TechShop in Menlo Park. Personal 3D printing is growing, such as the RepRap model here on the far right, where plastic cords are melted into the shape of objects, or the more sophisticated Cornell Fab@Home fabber, which works like an ink jet printer with one or two syringes, using silicone rubber or other materials to print custom objects including chocolate bars. The Fab@Home fabber can be constructed from the detailed specs on Cornell’s website, or bought pre-assembled for $3600. Another example is Evil Mad Scientist Labs, which prints in sugar. Personal manufacturing platforms are also developing. Ponoko is a design sharing community, and Fabjectory prints physical objects from digital ones, including avatars and Google SketchUp models.

future_of_technology_14.png

Biotechnology has also experienced a lot of progress, especially now that it is seen as an information science. The Human Genome Project was completed in 2003, sequencing the three billion nucleotide base pairs that comprise the human genome. The human genome takes up about three gigs of storage on a hard drive.
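The three-gigabyte figure follows from storing one base per byte, one character for each A, C, G, or T; the quick arithmetic below is my illustration, not from the talk. Since there are only four bases, two bits per base would suffice, packing the genome into under a gigabyte.

```python
BASE_PAIRS = 3_000_000_000   # human genome length, from the talk

# One character per base (A, C, G, T), one byte each:
naive_bytes = BASE_PAIRS
print(f"1 byte per base: {naive_bytes / 1e9:.1f} GB")    # 3.0 GB

# Four bases fit in one byte at 2 bits each:
packed_bytes = BASE_PAIRS // 4
print(f"2 bits per base: {packed_bytes / 1e9:.2f} GB")   # 0.75 GB
```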

The human genome has 20,000 to 25,000 genes. Once the genome is read, or “sequenced,” the next step is working with it, which entails making millions of copies to try different things. The copying, or “synthesizing,” can be done in a couple of different ways: first, enzymatically in the lab; second, artificially, using machines like this DNA synthesizer, which is causing a small revolution in synthetic biology, such as the big announcement last week from Craig Venter’s lab regarding synthesizing the full genome of a bacterium. The key metrics, DNA sequencing and DNA synthesizing, are represented in this chart in Carlson curves, which are the analog to Moore’s Law, and are growing faster than Moore’s Law. (Moore’s Law is the longer line on the left.)

Genetically, humans are 99.9% the same. The differences are called SNPs. As of November 2007, thousand-dollar personal genome kits can be purchased from 23andMe and deCODEme, which will search not just for one SNP, but for up to a million known SNPs and whether an individual is at higher risk for 18 diseases such as cancer, diabetes and Parkinson’s. An interesting question is, are we ready for this? How actionable is the information? Should it be discussed with one’s family first?

In addition to genomics, there are 200 other “-omics” sciences, covering every area of biology from the genotype to the phenotype. Some are much more complex. Proteomics, for example, the study of proteins, looks at how the 25,000 genes in the body code for 500,000 different proteins. Proteins change cell-to-cell as well as biochemically over time, as opposed to the DNA, which mainly stays static.

Thinking longer-term, once all biological processes can be understood and managed, there is no reason why a human could not live indefinitely. Whether one should is a different question.

future_of_technology_15.png

Aubrey de Grey is the leader in thinking about aging as a pathology that should be cured. He is a British biomedical gerontologist who was at Cambridge University for a number of years and now heads the Methuselah Foundation. His SENS program, Strategies for Engineered Negligible Senescence, is outlined in his new book Ending Aging, and he has distilled aging into seven primary problems: mutations in the cell nucleus and mitochondria, junk that builds up inside and outside cells, cell loss and cell death, and extracellular crosslinks, which occur when cells stick together. He has also identified some solutions to these challenges. First is suppressing the mutations that occur. Second is bioremediation, sending in microbes that essentially eat the junk in the cells and between the cells. Finally, there is strengthening the elasticity of the cells.

Without these or other advances, human lifespan is on a slow linear growth path, with less improvement expected in the next fifty years than in the last fifty years. One’s “real age” can be calculated based on lifestyle and background from these links.

future_of_technology_16.png

Moving from the physical to the virtual, the tremendous increase in data, bandwidth and graphics processing has given rise to an insatiable demand for metaverse technologies: streaming video, data visualization, simulation and 3D data display. The biggest category of metaverse technologies is reality capture, globally through geospatial technologies, and personally through lifelogging. In fact, geoscience is the third fastest-growing job category this decade.

The next area is augmented reality, like this naval display in the middle, overlaying information onto physical reality. Some people want to see mapping information. Other people want to see a giant squid sitting on the Transamerica Building… you can do whatever you like.

There is blended reality, like this video conference on the far right, where people are simultaneously in a physical and a virtual environment. There is alternate reality, which uses physical reality as a platform for something else, usually a game. For example, there is an alternate reality game coming out soon based on Google’s Android platform called WiFi Army, which is a mix of first-person shooter and spy game, all realized on GPS-enabled mobile devices. Immersive virtual worlds like Second Life are the next generation of the internet, its natural evolution as a communications, commerce and information platform.

Some of the most interesting applications of virtual worlds are using them to create intelligent entities and interact with data. IBM’s virtual network operations center, for example, streams in information about how its data centers are doing worldwide, so operators can tell at a glance what the situation is. There are other real-time data applications, such as air traffic (here LAX is pictured) and 3D stock charts. The next generation of virtual reality may include things like mapping facial expressions onto avatars, and may involve other senses: smell, taste and touch. For example, Wild Divine is a videogame which lets players move up levels if they are relaxed enough, as measured by a biofeedback device clipped onto the hand. This is a picture of canisters in an aromatic display that NTT is testing.

future_of_technology_17.png

Affordable space launch is important for settling the solar system, and also for science advances, such as investigating the possibility of space-based solar power. Right now it costs $20,000 to launch a pound of material to orbit, and five times as much, $100,000, to launch a pound of material to the moon. It needs to be closer to $500 per pound for launch to orbit for widespread feasibility. Governments are working on this through their regular missions, including new space-faring countries China and India.

Commercial spaceport development is also underway. Virgin Galactic will be launching its suborbital space flights at Spaceport America in New Mexico next year. The company unveiled their new spaceship design last week. Commercial players are also helping to reduce the cost of space launch, notably SpaceX, funded by PayPal co-founder Elon Musk, who estimates he can ultimately decrease the cost of space launch by a factor of ten. His Falcon rocket on the far right here is scheduled for five launches in 2008.

Another concept which is technically feasible is the space elevator, which could lift material to orbit for $100 per pound. A climber would ascend a 100,000 kilometer tether anchored to the Earth on one end and to a counterweight in space on the other. The current status of the project is annual competitions for climbers and carbon nanotube tethers, with a $2 million prize purse. In September 2008, climbers must ascend a full kilometer at a rate of two to five meters per second, roughly three to eight minutes of climbing.

Prizes continue to be a stimulatory device for technology development, particularly in aerospace, both publicly through NASA’s Centennial Challenges, as well as privately through the X Prize Foundation. The $30 million Google Lunar X Prize was announced in September 2007 for the first team to land on the moon, rove for 500 meters, and send back mooncasts of the experience.

future_of_technology_18.png


There are a variety of potential revolutionary technologies: fabbing, synthetic biology, metaverse technologies, robotics, quantum computing, intelligence augmentation, personalized medicine, artificial intelligence, molecular nanotechnology, affordable space launch and anti-aging therapies. Whatever comes next will influence how subsequent innovations are perceived and realized.

future_of_technology_19.png

In summary, we usually think about growth in the form of linear and exponential models, but the biggest impacts come from discontinuous change, and history is a chain of discontinuities. Second, technology is no longer domain specific: it is pervasive. Third, the computational capability needed to realize technological advances looks set to continue on the hardware side, but faces challenges in software. Fourth, in the next fifty years we will continue to see linear, exponential and discontinuous growth, and probably at least one revolutionary change.

This presentation and all my work is open source and available on my website. Thank you.

future_of_technology_20.png

swan_bio.png
