Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.


Atomistic Small Bearing: Dynamics Performed in NanoEngineer-1, Visualized in Blender

From Machine Phase. This is a movie of the atomistic bearing described by Eric Drexler in Nanosystems. Remember to read Drexler's article or watch this video to understand why "the rotation-induced speed of the shaft surface is substantially lower than the (apparent) vibrational speeds of the atoms". The thermal vibrations in the bearing actually take place much faster than the shaft motion; what you see in the video is perhaps 1/1000 of the actual thermal vibration motion. Because these sorts of videos have a limited frame rate, we get a "strobe light effect" in which we see the vibration only selectively. If the video portrayed the thermal vibration on a timescale where you could actually follow each part of the action, the shaft surface would be moving at a glacial pace. The upshot is that the friction and heating in this device would not be nearly as high as it appears at a casual glance.
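The strobe-light effect described above is ordinary temporal aliasing: a motion sampled far below its true frequency appears on screen at only the small "folded" remainder frequency. A minimal sketch with hypothetical numbers (real bond vibrations are on the order of 10^13 Hz; the frame rate and offset here are illustrative):

```python
# Temporal aliasing: a periodic motion sampled at a finite frame rate
# appears to move at the frequency "folded" into the Nyquist band,
# not at its true frequency.
def aliased_frequency(true_hz: float, frame_rate_hz: float) -> float:
    """Apparent frequency after sampling, folded into [0, frame_rate/2]."""
    remainder = true_hz % frame_rate_hz   # cycles left over per frame interval
    return min(remainder, frame_rate_hz - remainder)

# A ~1e13 Hz vibration rendered at 30 frames per second: the on-screen
# motion crawls at a few Hz, while the real atoms vibrate ~10^12x faster.
print(aliased_frequency(1e13 + 7.0, 30.0))
```

This is why the rendered vibrations look slow and the shaft looks comparatively fast: the frame rate undersamples the thermal motion by twelve orders of magnitude.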

In another post, Tom Moore points out new software by Miron Cuperman which partially automates the process of using csv data from NanoEngineer-1 to render animations in Blender.


Transhumanism Has Already Won

It's 2010, and transhumanism has already won. Billions of people around the world would love to upgrade their bodies, extend their youth, and amplify their powers of perception, thought, and action with the assistance of safe and tested technologies. The urge to be something more, to go beyond, is the norm rather than the exception.

At their base, the world's two largest religions -- Christianity and Islam -- are transhumanistic. After all, they promise transcendence of death and the concerns of the flesh, and an upgrade to that archetypal transhuman -- the angel. The angel will probably be our preliminary model as we seek to expand our capacities and enjoyment of the world through technological self-modification. Then, even angels will get bored of being angels, and expand outwards in a million new directions, resulting in an explosion of species never before seen -- exceeding in magnitude and variation even the Cambrian Explosion of 530 million years ago.

Humanity, as it stands today, is a seed, a bridge. We will plant flowers and trees across the universe. All we have to do is survive our embryonic stage, stay in control of our own destiny, and expand outwards in every direction at the speed of light. Ray Kurzweil makes this point in The Singularity is Near, a book that was #1 in the Science & Technology section on Amazon and on the NYT bestsellers list for a reason.

The mainstream has embraced transhumanism. A movie about using a brain-computer interface to become what is essentially a transhuman being, Avatar, is the highest-grossing box office hit of all time, pulling in $2.7 billion. This movie was made with hard-core science fiction enthusiasts in mind. About them, James Cameron said, "If I can just get 'em in the damn theater, the film will act on them in the way it's supposed to, in terms of taking them on an amazing journey and giving them this rich emotional experience." A solid SL2 film, becoming the world's #1 film of all time? It would be hard for the world to give transhumanism a firmer endorsement than that.

Everything is Not Alright

I am tremendously sympathetic to transhumanism's critics and detractors, more so than most transhumanists I have met. Many transhumanists don't seem to understand that when you step outside of the confines of 3.5 billion years of natural evolution in a few short decades of intense technological progress, there are risks. Risks like new viruses and new weapons, to say the least. Even today, global security is entirely dependent on a few very knowledgeable scientists keeping their mouths shut. They start talking to the wrong people, and suddenly cities aren't such safe places to live anymore.

People are basically nice when they're well-fed, and damn evil when they're hungry. Deprive the world's cities of the millions of truckloads of fresh vegetables and meat that arrive every day, and suddenly things will get all nasty. The technologies that transhumanists talk about messing with -- biotechnology, nanotechnology, artificial intelligence -- will force societies to radically restructure or die. I've talked to dozens of selfish transhumanists whose response to this is basically, "well, too bad for them!" No. Too bad for you, because they'll gladly drag you down with them.

Even technologies readily available today, but rarely used -- such as the direct electrical stimulation of the pain and pleasure centers of the human brain -- could become fearsome new plagues on humanity in the hands of the wrong political or religious fanatics. The Western world today has a sort of fantasy of invulnerability, like a teenager taking his dad's NSX for a joyride. Americans, especially, are high and drunk on our country's prominence in the world. What could possibly go wrong? Radical Islam hates us, and all it could achieve was bringing down the World Trade Center.

Imagine a hundred or so tribes in an area covering many hundreds of square miles, quarreling with sticks and stones for hundreds if not thousands of years. Suddenly, they get rifles. In many places around the world, this has already resulted in genocide. That situation is what will happen to humanity as a whole throughout the next fifty years. The Western world is so impressed with its own accomplishments over the past centuries that we don't realize that there is much, much more to come.

Like it or not, the bedrock of any society rests on security. Legal and financial power are trivialities in comparison to the military power and security that makes the legal and economic machinery possible. History is strewn with thousands of examples where legal and financial "realities" collapsed like a cloud of dust when the security fundamentals were threatened. There's nothing that will make people stay inside like a bombing or pandemic.

They Like Us?

Mainstream culture around the world has already embraced transhumanism and transhumanist ideals. The question is not whether humanity will move towards a transhumanist future (it will), but how that power is channeled. It's not hard to convince people to become stronger and healthier if it truly is within their grasp. What we need to worry about is massive power in the hands of individuals with selfish or truly alien and abstract morals.

Good and evil are ideas. Any goal system is ultimately arbitrary. As long as someone can protect themselves from injury or attack, they can do practically anything. This gives us unlimited freedom, but also unlimited peril. Given the ability to modify their own motivations and emotions, selfish people will have the option of becoming even more selfish. Conversely, the altruistic might amplify their compassion.

The Singularity Institute's strategy for dealing with this challenge is the creation of a recursively self-improving moral agent -- safe or "friendly" Artificial Intelligence. This ambition might turn out to be too much -- it may be that programming a computer with the subtleties of human-friendly morality is too great a challenge. But, I think we should try. We should try because the first being to cross the line into true transhumanity will probably be an Artificial Intelligence, and we might as well do something to ensure that it is friendly.

Now, it may be that enhanced humans cross the line into transhumanity first. If you're thinking about that route, consider what it means. Extensive animal testing and risky surgery. Brain implants, which would likely be necessary to achieve the kind of transhumanity that matters, are essentially carefully tuned rocks that we are inserting into proteinaceous tissue. The human skull is a pretty cramped space -- there is not a lot of room for additional machinery. To really get a lot out of it, you'd need to push the brain aside or use a fluid-filled cavity, which might not play well with the body. It is highly questionable whether we can get an I/O of sufficient bandwidth relying on electronic devices that just sit on top of the surface of the brain. My guess is that you'd need a lot of extremely thin, deep electrodes going to precise neural areas to get the necessary I/O. This may be harder or easier than it looks, but I certainly wouldn't want to be anywhere near the first to try it.

If Artificial Intelligence and molecular nanotechnology are not available to meet humanity's thirst for ascension, people will turn to other routes. Crude surgery and the like. That's what I'm afraid of -- a botched entrance into transhumanity. An entrance where the soldiers, fighters, gangsters, and porn stars lead the way. This is already happening now, and it isn't all good. When magnified aggression and machismo lead the way into the future, the future becomes uncertain. We've seen this story many times before.

Magnify the Good in Us

To survive the future in one piece, humanity has to take those qualities that are the best in us -- love, compassion, and altruism -- and give them as much muscle as we can. A distributed approach will not work, because historically a few agents grab power and use it as they see fit. Even in democratic societies, this equation isn't much different, it's just that the few that get power are those with the most votes. Instead of denying the inevitable concentration of power, we have to do what we can to ensure that that power is used wisely.

Maybe it's impossible to keep checks on the powerful. If so, we are still at their mercy, and nudging them to do the right thing is better than having no influence whatsoever. At some point during the next century, the most powerful being will be a transhuman, sculpted by whatever process eventually succeeds in producing a true transhuman. We had better hope that process is a safe and sane one.

When people write an article about a problem, it's usually because they have a ready-made answer they want to sell you. But sometimes the universe just gives us a problem and it has no special obligation to give us an answer. Transhumanity is like that. Whatever answer we come up with may be a little messy, but we have to come up with something, because otherwise the future will play out according to considerations besides global security and harmony. Power asymmetry is not an optional part of the future -- it is a mandatory one. There will be entities way more powerful than human. Where will they be born? How will they be made? These questions are not entirely hypothetical -- the seeds of their creation are among us now. We have to decide how we want to give birth to the next influx of intelligent species on Earth. The birth of transhumanity will mean the death of humanity, if we are not careful.


Nine Reasons I’m Interested in Survivalism

I've become more interested in survivalism over the past few months, for a number of reasons.

1) Survivalism describes a "back to the basics" approach to survival and living that helps strip away (or at least make optional) the consumerism and other trivialities that tend to preoccupy the minds of modern city dwellers, which is refreshing. It's also intellectually fulfilling: a vast domain of knowledge with practical application. And because it's a field where a smart person can get up to speed quickly, it offers the satisfaction of developing novel ideas that few if any people have thought of before and which can help others.

2) In today's uncertain times, survivalism is especially appealing to generations growing up in periods of economic and geopolitical turmoil. Survivalism doesn't have to be an all-out lifestyle change -- even something as simple as growing food in your own backyard to supplement purchases from the supermarket can lead to better nutrition, less expense, better tasting food, and the satisfaction of producing something with your own two hands.

3) Before you can run, you have to learn to walk. Before you can live to 100, you have to live to 50. Before you can live to 200, you have to live to 100, and so on. Too much discussion of life extension seems to focus on supplements and exercise (not to say these aren't important, just that these points are practically common knowledge in many parts of the US and Europe), and not on the less-trendy-among-the-SWPL-crowd topics such as pursuing scientifically legitimate anti-aging technologies (though some transhumanists are doing a good job on this, and the mainstream is quickly following), medical knowledge (if you injure yourself and can't easily get to a hospital, what do you do?), and self-defense (if you are attacked by a mugger in the dark, how should you react?). With our handy-dandy friend, Bayesianism, we can make a decent shot at quantifying these risks and allocating our attention accordingly.
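As a toy illustration of the Bayesian risk-weighting mentioned above, attention can be allocated in proportion to expected loss (probability times impact). Every number below is an illustrative guess, not a real estimate:

```python
# Rank hazards by expected loss = P(event) * impact. All figures are
# made-up placeholders; the point is the procedure, not the numbers.
risks = {
    "mugging":        (0.010, 5.0),   # (annual probability, relative impact)
    "serious injury": (0.020, 20.0),
    "house fire":     (0.002, 50.0),
}

# Sort hazard names from highest to lowest expected loss.
ranked = sorted(risks, key=lambda name: risks[name][0] * risks[name][1],
                reverse=True)
for name in ranked:
    p, impact = risks[name]
    print(f"{name}: expected loss {p * impact:.2f}")
```

In reality the probability estimates themselves carry uncertainty and should be updated as evidence arrives, but the ranking procedure stays the same.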

4) Of course, survivalist knowledge and skills can vastly magnify your chances of surviving a major disaster, up to and including nuclear war or worse, while also increasing your chances of being able to save dozens or even hundreds of other lives in such a scenario. Even a straightforward fact -- that in a grid-down scenario, millions of gallons of water can be recovered from underground water pipes in undulating terrain by opening fire hydrants at the lowest available elevation -- could save hundreds of lives in a disaster. During the exodus from Hurricane Katrina, thousands of thirsty people walked right by fire hydrants. There are many thousands of similar examples of people suffering for lack of simple knowledge as soon as they are transplanted outside of their zone of familiarity.

5) Acquiring survivalist knowledge can help us take action today to prepare our cities, countries, and planet for resilience under any disaster. If every person in the US spent $50 today, they could acquire enough food and water to keep them in good health for a month even if distribution were halted. Even such a basic measure would increase societal resilience far out of proportion to its cost. Not only resilience in the face of catastrophic disasters like EMP attack or pandemic flu, but even simple downturns like a recession. For someone who lives paycheck to paycheck, which is most Americans, setting aside a small reserve for a rainy day can make a huge difference. For practically all of human civilization up until very recently, this was considered common sense, but with the great economic boom of the last 60 or so years, many people have become complacent. This fits in with the logic of someone who wants to live a very long time -- take history seriously and you can get the next best thing to having hundreds of years worth of wisdom. This view of life stands in stark contrast to the thrill-seeking risk-taker who regularly speeds, drives buzzed, and generally regards his life as disposable -- something to be used up and then thrown away.
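The $50-per-person figure above is easy to sanity-check with staple foods. The prices and calorie figures below are assumed round numbers for illustration, not quotes:

```python
# Back-of-envelope check of "one month of food and water for ~$50".
days = 30
calories_per_day = 2000
water_liters_per_day = 4            # drinking plus minimal cooking/hygiene

rice_calories_per_kg = 3600         # dry white rice, approximate
rice_price_per_kg = 1.00            # assumed bulk price, USD
water_price_per_liter = 0.10        # assumed; stored tap water is far cheaper

rice_kg = days * calories_per_day / rice_calories_per_kg
total = rice_kg * rice_price_per_kg + days * water_liters_per_day * water_price_per_liter
print(f"{rice_kg:.1f} kg of rice, roughly ${total:.2f} total")
```

Rice alone is not a complete diet, of course; the point is only that the order of magnitude of the claim checks out with room to spare.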

6) Another reason for valuing survivalism, and this is somewhat related to #1, is that it builds a greater understanding of how people live in poor countries. Surviving with basic tools is daily reality for billions of people; the poor make up most of the population of the world. To see how high technology can really help people and change the world, don't just look at the latest Apple products. ("Think Different", y'know?) Everybody and their brother is excited by the sexiness and sleekness of expensive new gadgets, and some of these toys may indeed be the wave of the future, but a lot of it seems to be merely technophilic masturbation. What is more exciting to me are technological upgrades to long-standing human needs, like acquiring water or generating basic alternative power. I am far more interested in a machine that takes water directly from air than in the latest $4,299 monitor. When I say "artificial general intelligence could have a massive beneficial impact on the world", I'm generally thinking of its potential contributions to the former type of technology, not the latter.

7) Survivalism embodies a back-to-the-earth attitude that fosters good health (by helping us see the value of exercise) and better treatment of the environment.

8) At some point over the next 10-20 years, probably prior to the Singularity, I foresee the possibility of increasing decentralization of technological society due to wonderful inventions like better fab labs, alternative energy systems, personal security systems, and much more. The more technology improves, the less it makes us dependent on sometimes hectic agglomerations of technology like cities. Already, technology is making it easier for the middle-class to live well while camping, backpacking, or visiting a country home, so these activities do not have to be synonymous with low-tech. With a hand-crank radio and personal solar cell, I can charge my cell phone or even a laptop or Game Boy in the middle of nowhere. This opens up the possibility of entirely new communities -- savvy, educated city folk capable of living off the grid in picturesque areas without going back to the Stone Age or dragging along a massive, gas-guzzling RV. I think that FM-2030, an early transhumanist I admire, would really appreciate this. For someone like me, who loves both high technology and the ruggedness of nature (and is interested in seeing their eventual fusion), this is like having my cake and eating it too.

9) Somewhat more obscurely, learning more about the totality of the "tech tree" of modern society, rather than just a few pieces of it, could help me better argue the ways in which a "human-equivalent AI" could rapidly bootstrap its own infrastructure and practically become an autonomous civilization of its own, with great potential for helping or hurting humanity, including humans that live in the middle of isolated forests. Technology-soaked urban humans find it difficult to fathom the idea of a human-equivalent AI surviving and thriving outside of a regulated lab or mainframe environment, but the knowledge I've acquired over the last few years makes it seem more plausible -- and I'm not even an AI, which would have a much greater-than-human memory and focus even if its general intelligence were "merely human-equivalent".

Filed under: articles

Humanity+ UK 2010: “Unprecedented Gathering in London”

Here is the press release from April 15th helping to build buzz for the Humanity+ UK 2010 conference in London, which begins tomorrow and runs over the weekend. There will also be a live feed of the conference for those who can't be there in person.

For immediate release:
Unprecedented gathering of futurist and transhumanist thinkers in London

Humanity+ movement comes of age
Record turnout expected for Humanity+ UK2010 conference on 24th April

The UK chapter of Humanity+, an organisation dedicated to promoting understanding, interest and participation in fields of emerging innovation that can radically benefit the human condition, announced today that registrations are on track for record attendance at the Humanity+ UK2010 conference taking place in Conway Hall, Holborn, London, on April 24th.

"Approaching 200 attendees are expected to take part in a full day of thought-provoking lectures, discussions, Q&A, and breakouts, led by a line-up of world class futurist speakers," said David Wood, H+UK meetings secretary. "Participants have registered from as far afield as Poland, Sweden, Croatia, Portugal, Germany, Belgium, Holland, Ireland, and the USA. The Humanity+ movement, previously known as the World Transhumanist Association, is coming of age."

"Transhumanism is both a reason-based philosophy and a cultural movement that affirms the possibility and desirability of fundamentally improving the human condition by means of science and technology," said Max More, founder in 1988 of the Extropy Institute, a think tank and ideas market for the future of social change. "Transhumanists seek the continuation and acceleration of the evolution of intelligent life beyond its current human form and limitations by means of science and technology, guided by life-promoting principles and values." Max More introduced the term "transhumanism" in its modern sense in his 1990 essay "Transhumanism: Toward a Futurist Philosophy".

"At Humanity+ UK2010, I plan to talk about suffering and how to get rid of it," commented David Pearce, co-founder in 1998 of the World Transhumanist Association (WTA). "I predict we will abolish suffering throughout the living world. Our descendants will be animated by gradients of genetically reprogrammed well-being that are orders of magnitude richer than today's peak experiences."

"There's growing worldwide interest to debate and understand the dramatic human implications of emerging technologies such as nanotechnology, synthetic biology, and artificial general intelligence," commented David Orban, recently elected as Chair of the international Humanity+ organisation. "I'm eagerly looking forward to Humanity+ UK2010, where I will be speaking about the Internet of Things, and the Singularity University." David Orban is also Advisor to Singularity University.

In addition to Max More, David Orban, and David Pearce, other keynote speakers at the event include WTA co-founder Professor Nick Bostrom of Oxford University, pioneering anti-aging human rejuvenation researcher Aubrey de Grey, human enhancement theorist and media designer Natasha Vita-More, interdisciplinary researcher and science fiction author Rachel Armstrong, cognitive scientist Amon Twyman from University College London, and Anders Sandberg, who is James Martin Research Fellow at the Future of Humanity Institute at Oxford University. Full details of the event are available at

Note: Journalists interested to interview any of the keynote speakers, or who wish to receive a free press pass to the event, should contact the H+UK organisation via


Humanity+ @ Harvard — The Rise Of The Citizen Scientist

Humanity+, the worldwide association of transhumanists, is putting on a conference at Harvard on June 12-13. Tickets are available now. The theme is "rise of the citizen scientist". Here is the blurb:

The summer 2010 "Humanity+ @ Harvard -- The Rise Of The Citizen Scientist" conference is being held on June 12-13 at Harvard University's prestigious Science Hall, bringing the event to the East Coast after the inaugural conference in Los Angeles in December 2009. Ray Kurzweil -- futurist, inventor, and author of the NYT bestselling book "The Singularity Is Near" -- will be the keynote speaker of the conference. Full information is at

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, "Hype and anti-hype in academic biogerontology research: a call to action", will analyze the interplay of over-pessimistic and over-optimistic positions with regard to research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is "The Rise Of The Citizen Scientist", as illustrated by Alex Lightman, Executive Director of Humanity+, in his talk:

"Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and "peak everything". Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories."

Humanity+ Summit @ Harvard is an unmissable event for everyone interested in the evolution of the rapidly changing human condition and the impact of accelerating technological change on the daily lives of individuals and on our society as a whole. Tickets start at only $150, with a 50% discount for students registering with the coupon STUDENTDISCOUNT (valid student ID required at the time of admission).

With over 40 speakers and 50 sessions in two jam-packed days, attendees and speakers will have many opportunities to interact and discuss, complementing the conference with the necessary networking component.

Other speakers already listed on the H+ Summit program page include:

* David Orban, Chairman of Humanity+: "Intelligence Augmentation, Decision Power, And The Emerging Data Sphere"
* Heather Knight, CTO of Humanity+: "Why Robots Need to Spend More Time in the Limelight"
* Andrew Hessel, Co-Chair at Singularity University: "Altered Carbon: The Emerging Biological Diamond Age"
* M. A. Greenstein, Art Center College of Design: "Sparking our Neural Humanity with Neurotech!"
* Michael Smolens, CEO of dotSUB: "Removing language as a barrier to cross cultural communication"

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate and provoke, in moving ahead our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard -- The Rise Of The Citizen Scientist
June 12-13, Harvard University
Cambridge, MA

When you register, please use the URL for
tracking purposes.


Dispelling Stupid Myths About Nuclear War

In response to discussion in the comments section on my recent post on nuclear war, Dave said:

Really, I mean, honestly, no one is surviving a nuclear war.

This is absolute nonsense. To quote the very first paragraph of Nuclear War Survival Skills, a civil defense manual based on in-depth research at the Oak Ridge National Laboratory:

An all-out nuclear war between Russia and the United States would be the worst catastrophe in history, a tragedy so huge it is difficult to comprehend. Even so, it would be far from the end of human life on earth. The dangers from nuclear weapons have been distorted and exaggerated, for varied reasons. These exaggerations have become demoralizing myths, believed by millions of Americans.

Here's another good quote:

Only a very small fraction of Hiroshima and Nagasaki citizens who survived radiation doses -- some of which were nearly fatal -- have suffered serious delayed effects. The reader should realize that to do essential work after a massive nuclear attack, many survivors must be willing to receive much larger radiation doses than are normally permissible. Otherwise, too many workers would stay inside shelter too much of the time, and work that would be vital to national recovery could not be done. For example, if the great majority of truckers were so fearful of receiving even non-incapacitating radiation doses that they would refuse to transport food, additional millions would die from starvation alone.

The whole first chapter of the book is filled with refutations of popular myths about nuclear war. When you know the science, these myths seem extremely stupid. Yet millions of people believe them.

Here is one possible fallout distribution pattern, from FEMA:

Notice that the distribution would go to the east, because the prevailing winds come from the west. That spells good news for people out west. We also notice that there are wide swaths on the map that would simply be free of fallout, including perhaps 95% of the area of the western United States.

Continents are big, big places. It is doubtful that we yet have weapons that can threaten life across their entire areas. (We may get them soon, though.)
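One concrete piece of the relevant science: fallout dose rates decline steeply with time. Civil defense references, including Nuclear War Survival Skills, use the "7:10 rule of thumb" -- every sevenfold increase in time since detonation cuts the dose rate roughly tenfold -- which corresponds to a t^-1.2 decay. A sketch (the starting dose rate is a hypothetical figure):

```python
# The t^-1.2 approximation behind the "7:10 rule": each 7x increase in
# time after detonation reduces the fallout dose rate by about 10x.
def dose_rate(r1_per_hr: float, hours: float) -> float:
    """Approximate dose rate at `hours` post-detonation, given the H+1 rate."""
    return r1_per_hr * hours ** -1.2

r1 = 1000.0  # hypothetical dose rate at H+1 hour, in R/hr
for t in (1, 7, 49, 343):
    print(f"H+{t:>3} hr: {dose_rate(r1, t):7.1f} R/hr")
```

Two weeks after detonation, under this approximation, the dose rate has fallen by roughly three orders of magnitude, which is why sheltering through the first days matters so much.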

For more information on nuclear war, Notre Dame has an Open Courseware page with lectures from Professor Grant Matthews.

Filed under: nuclear, risks

Interviews with Academics in Robot Ethics

Over at the Moral Machines blog, Colin Allen lists three recent interviews by Gerhard Dabringer on the topic of robot ethics. One of the interviews is with Jurgen Altmann, whom I admire greatly for his academic work on preventive arms control. His book Military Nanotechnology is my favorite book on molecular nanotechnology policy, and I hope that its recommendations will be adopted. A small preview is online, but you'll have to shell out $128 if you want a hard copy. Anyway, here are the interviews:

George Bekey: Professor Emeritus of Computer Science, Electrical Engineering and Biomedical Engineering at the University of Southern California and Adjunct Professor of Biomedical Engineering and Special Consultant to the Dean of the College of Engineering at the California Polytechnic State University. He is well known for his book Autonomous Robots (2005) and is Co-author of the study "Autonomous Military Robotics: Risk, Ethics and Design" (2008).

Jurgen Altmann: University of Dortmund, a founding member of the International Committee for Robot Arms Control. Since 2003 he has been a deputy speaker of the Committee on Physics and Disarmament of the Deutsche Physikalische Gesellschaft (DPG, the society of physicists in Germany), and he currently directs the project on “Unmanned Armed Systems - Trends, Dangers and Preventive Arms Control” at the Chair of Experimentelle Physik III at Technische Universität Dortmund.

John Sullins: Assistant Professor of Philosophy at Sonoma State University. His specializations are philosophy of technology, philosophical issues of artificial intelligence/robotics, cognitive science, philosophy of science, engineering ethics, and computer ethics.

Filed under: risks, robotics

Chalmers: “The argument for a singularity is one that we should take seriously”

Here is a quote from the Chalmers paper that I linked yesterday:

One might think that the singularity would be of great interest to academic philosophers, cognitive scientists, and artificial intelligence researchers. In practice, this has not been the case. Good was an eminent academic, but his article was largely unappreciated at the time. The subsequent discussion of the singularity has largely taken place in nonacademic circles, including Internet forums, popular media and books, and workshops organized by the independent Singularity Institute. Perhaps the highly speculative flavor of the singularity idea has been responsible for academic resistance to the idea.

I think this resistance is a shame, as the singularity idea is clearly an important one. The argument for a singularity is one that we should take seriously. And the questions surrounding the singularity are of enormous practical and philosophical concern.

Practically: If there is a singularity, it will be one of the most important events in the history of the planet. An intelligence explosion has enormous potential benefits: a cure for all known diseases, an end to poverty, extraordinary scientific advances, and much more. It also has enormous potential dangers: an end to the human race, an arms race of warring machines, the power to destroy the planet. So if there is even a small chance that there will be a singularity, we would do well to think about what forms it might take and whether there is anything we can do to influence the outcomes in a positive direction.

Good, practical advice for everyone living in the 21st century!

Filed under: singularity

Forbes “Life in 2020” Articles

Forbes has recently published a package of articles predicting life in the year 2020, and their social media wing emailed me asking that I publish the links, so here they are! 2010 is a good year to make predictions for 2015 and 2020. If you want to be a futurist in 2015 or 2020, start now with some predictions! Anyway, here is the blurb and the links:

You will be healthier. Your technology will be more human. You will fight to keep your job. You will walk to work. There will be nowhere to hide. Your life is about to change.

Transportation in 2020
In 10 years, your commute will be short, cheap and, dare we say, fun.

The Classroom In 2020
The next decade will bring an end to school as we know it.

Your Choice In 2020
How big computing will make every action a transaction.

Your Computer In 2020
Traditional computers are disappearing; human beings themselves are becoming information augmented.

Your Home In 2020
Goodbye, McMansions. In 2020 we'll build for the triple bottom line: people, planet as well as profit.

Your Job In 2020
In 2020 you will fight to keep your job.

Your Diet In 2020
In 2020 you will finally start taking care of yourself.

Your Health In 2020
Passive patients will become empowered participants.

Making Whuffie
Social networks change the way we look at the world and introduce new economic incentives.

Filed under: futurism

David Chalmers on the Singularity and Intelligence Explosion

Recently, David Chalmers announced that he was posting a new paper based on his Singularity Summit 2009 talk: "The Singularity: A Philosophical Analysis". In his announcement, Chalmers notes, "I'm still an amateur on these topics and any feedback would be appreciated." You can also watch a video of Chalmers' Summit talk.

Filed under: singularity

Audio of My Foresight 2010 Talk: “Don’t Fear the Singularity, But Be Careful”

Here is the audio extracted from the video of my talk; thanks to Franz Fuchs for the extraction.

Filed under: friendly ai