Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

2 June 2011

Foresight @ Google: 25th Anniversary & Reunion Weekend

Interested in emerging technologies?
Fascinated by the potential in transformative nanotech?
Come explore the future with...

FORESIGHT@GOOGLE
25th Anniversary Conference Celebration and Reunion Weekend
Google HQ in Mountain View, CA
June 25-26 2011

A rockstar lineup of keynote speakers includes:

• JIM VON EHR - Founder/President of Zyvex,
the world's first successful molecular nanotech company
• BARNEY PELL, PhD - Cofounder/CTO of Moon Express, competing for the Google Lunar X PRIZE

With speakers and panelists including:
• WILLIAM ANDREGG - Founder/CEO of Halcyon Molecular
• MIKE GARNER, PhD - Chair of ITRS Emerging Research Materials
• MIKE NELSON - CTO of NanoInk
• LUKE NOSEK - Co-founder of PayPal, Founders Fund Partner
• PAUL SAFFO, PhD - Wired, NYT-published strategist & forecaster
• SIR FRASER STODDART, PhD - Knighted for creation of molecular "switches" and a new field of nanochemistry
• THOMAS THEIS, PhD - IBM's Director of Physical Sciences

For the full speaker roster, as well as information on our exclusive 25th Anniversary Banquet, see our conference website:

http://www.foresight.org/reunion

Space is limited!

For $50 off, register now with the special discount code just for AF readers: ACCELERATING

I hope to see you all there!

31 January 2011

My Upcoming Talk in Texas: Anthropomorphism and Moral Realism in Advanced Artificial Intelligence

I was recently informed that my abstract was accepted for presentation at the Society for Philosophy and Technology conference in Denton, TX, this upcoming May 26-29. You may have heard of their journal, Techné. Register now for the exciting chance to see me onstage, talking AI and philosophy. If you would volunteer to film me, that would make me even more excited, and it would be valuable to our most noble cause.

Here's the abstract:

Anthropomorphism and Moral Realism in Advanced Artificial Intelligence
Michael Anissimov
Singularity Institute for Artificial Intelligence

Humanity has attributed human-like qualities to simple automatons since the time of the Greeks. This highlights our tendency to anthropomorphize (Yudkowsky 2008). Today, many computer users anthropomorphize software programs. Human psychology is extremely complex, and most of the simplest everyday tasks have yet to be replicated by a computer or robot (Pinker 1997). As robotics and Artificial Intelligence (AI) become a larger and more important part of civilization, we have to ensure that robots are capable of making complex, unsupervised decisions in ways we would broadly consider beneficial or common-sensical. Moral realism, the idea that moral statements can be objectively true or false, may cause developers in AI and robotics to underestimate the effort required to meet this goal. Moral realism is a false but widely held belief (Greene 2002). A common notion in discussions of advanced AI is that once an AI acquires sufficient intelligence, it will inherently know how to do the right thing morally. This assumption may derail attempts to develop human-friendly goal systems in AI by making such efforts seem unnecessary.

Although rogue AI is a staple of science fiction, many scientists and AI researchers take the risk seriously (Bostrom 2002; Rees 2003; Kurzweil 2005; Bostrom 2006; Omohundro 2008; Yudkowsky 2008). Arguments have been made that superintelligent AI -- an intellect much smarter than the best human brains in practically every field -- could be created as early as the 2030s (Bostrom 1998; Kurzweil 2005). Superintelligent AI could copy itself, potentially accelerate its thinking and action speeds to superhuman levels, and rapidly self-modify to increase its own intelligence and power further (Good 1965; Yudkowsky 2008). A strong argument can be made that superintelligent machines will eventually become a dominant force on Earth. An "intelligence explosion" could result from communities or individual artificial intelligences rapidly self-improving and acquiring resources.

Most AI rebellion in fiction is highly anthropomorphic -- AIs feeling resentment towards their creators. More realistically, advanced AIs might pursue resources as instrumental objectives in pursuit of a wide range of possible goals, so effectively that humans could be deprived of the space or matter we need to live (Omohundro 2008). In this manner, human extinction could come about through the indifference of more powerful beings rather than outright malevolence. A central question is, "how can we design a self-improving AI that remains friendly to humans even if it eventually becomes superintelligent and gains access to its own source code?" This challenge has been addressed in a variety of works over the last decade (Yudkowsky 2001; Bostrom 2003; Hall 2007; Wallach 2008) but is still very much an open problem.

A technically detailed answer to the question, "how can we create a human-friendly superintelligence?" is an interdisciplinary task, bringing together philosophy, cognitive science, and computer science. Building a background requires analyzing human motivational structure, including human-universal behaviors (Brown 1991), and uncovering the hidden complexity of human desires and motivations (Pinker 1997) rather than viewing Homo sapiens as a blank slate onto which culture is imprinted (Pinker 2003). Building artificial intelligences by copying human motivational structures may be undesirable because human motivations, given the capabilities of superintelligence and open-ended self-modification, could be dangerous. Such AIs might "wirehead" themselves by stimulating their own pleasure centers at the expense of constructive or beneficent activities in the external world. Experimental evidence of the consequences of direct stimulation of the human pleasure center is very limited, but we have anecdotal evidence in the form of drug addiction.

Since artificial intelligence will eventually exceed human capabilities, it is crucial that the challenge of creating a stable human-friendly motivational structure in AI is solved before the technology reaches a threshold level of sophistication. Even if advanced AI is not created for hundreds of years, many fruitful philosophical questions are raised by the possibility (Chalmers 2010).

References

Bostrom, N. (2002). "Existential Risks: Analyzing Human Extinction Scenarios". Journal of Evolution and Technology, 9(1).

Bostrom, N. (2003). "Ethical Issues in Advanced Artificial Intelligence". Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence.

Bostrom, N. (2006). "How long before superintelligence?". Linguistic and Philosophical Investigations 5 (1): 11–30.

Brown, D. (1991). Human Universals. McGraw Hill.

Chalmers, D. (2010). "The Singularity: A Philosophical Analysis". Journal of Consciousness Studies, 17(9-10), 7-65.

Good, I. J. (1965). "Speculations Concerning the First Ultraintelligent Machine", Advances in Computers, vol 6, Franz L. Alt and Morris Rubinoff, eds, pp 31-88, Academic Press.

Greene, J. (2002). The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it. Doctoral Dissertation for the Department of Philosophy, Princeton University, June 2002.

Hall, J.S. (2007). Beyond AI: Creating the Conscience of the Machine. Amherst: Prometheus Books.

Omohundro, S. (2008). "The Basic AI Drives". Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, edited by P. Wang, B. Goertzel, and S. Franklin, February 2008, IOS Press.

Pinker, S. (1997). How the Mind Works. Penguin Books.

Pinker, S. (2003). The Blank Slate: the Modern Denial of Human Nature. Penguin Books.

Rees, M. (2003). Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future in This Century - On Earth and Beyond. Basic Books.

Wallach, W. & Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.

Yudkowsky, E. (2001). Creating Friendly AI. Publication of the Singularity Institute for Artificial Intelligence.

Yudkowsky, E. (2008). "Artificial Intelligence as a positive and negative factor in global risk". In N. Bostrom and M. Cirkovic (Eds.), Global Catastrophic Risks (pp. 308-343). Oxford University Press.

8 December 2010

Global Catastrophic Risks in the Spotlight at Society for Risk Analysis Conference

Today I attended the global catastrophic risk sessions at the Society for Risk Analysis annual meeting in Salt Lake City, and was very pleased by the attendance at these two sessions. Two former Presidents of the society attended, and one, Jonathan Weiner, gave a compelling talk that reminded me very much of Eliezer Yudkowsky's "Cognitive biases potentially affecting judgment of global risks". Jonathan called for more attention to global catastrophic risks, including global financial crises, and pointed out specific biases that prevent people from giving due attention to these risks. The whole experience gave me the strong impression that the risk analysis mainstream is very much interested in global catastrophic risks. Congratulations to Seth Baum for spearheading this effort.

Robin Hanson gave a fascinating talk on refuge entry futures. Basically, the idea is that you could potentially judge the probability of catastrophic risks better than the status quo by seeing how many people would be willing to buy tickets to enter secure refuges in case of a disaster or some triggering event.

My talk, which I gave yesterday at a session on nanotechnology risk assessment and perception (everyone besides me was focused primarily on nanoparticles), was titled "Public Scholarship and Global Catastrophic Risk". Nothing in it will be new to readers of this blog; the points are all relatively straightforward:

1) showed the catastrophic risks table from Bostrom (2008)
2) gave a few examples
3) global catastrophic risks (GCRs) outclass all other risks in terms of importance
4) books to read: Global Catastrophic Risks, Military Nanotechnology, The Singularity is Near
5) pointed out Bill Joy's influential 2000 article, "Why the future doesn't need us"
6) said that we focus on GRAIN: genetics, robotics, AI, nanotechnology
7) groups working on GCRs: Singularity Institute, Future of Humanity Institute, Presidential Commission for the Study of Bioethical Issues (synthetic biology risk), SENS Foundation (aging is considered a GCR according to Bostrom), Center for Responsible Nanotechnology (CRN), Lifeboat Foundation
8) Quick summary of CRN work; pointed out that more than half of ordinary people associate nanotech with Drexlerian nanotech (Ann Bostrom gave evidence of this, from a mall-intercept study, in a talk that came before mine)
9) tried to make it clear to the audience that most risk analysts in nanotechnology today have failed to focus on the most important risks; if it weren't for CRN, there wouldn't even have been a scientific rebuttal of grey goo
10) most risk analysts probably aren't even clear on why grey goo is implausible; they just dismiss it out of hand without good reasons or understanding
11) public scholarship: bringing academic work to the public
12) summarized Singularity Institute activities to raise awareness of GCRs: Visiting Fellows Program, Singularity Summit, workshops, blogs, papers, and contributions to edited volumes
13) showed pics of Visiting Fellows Program and Singularity Summit 2010
14) showed the San Jose Mercury News article on Thiel's Audacious Optimism dinner to illustrate the enthusiasm of some philanthropists for this area
15) summarized our media exposure since I became media director: a lot, including GQ, New York Times, Popular Mechanics, Popular Science, Playboy (Carl Zimmer), etc.
16) interdisciplinary effort: biology, decision theory, computer science, risk analysis, physics, philosophy, nanotechnology
17) suggested some websites to visit, intelligence.org and the like
18) wrapped it up.

The meeting was productive enough that I'll likely attend next year. Thanks to everyone I met for their stimulating conversations.

Filed under: events, risks
7 December 2010

Silicon Valley Billionaire Backs Futuristic Philanthropy

Here's the article from yesterday's San Jose Mercury News:

Silicon Valley billionaire Peter Thiel worries that people aren't thinking big enough about the future.

So he's convening an unusual philanthropic summit Tuesday night, where he'll introduce other wealthy tech figures to nonprofit groups exploring such futuristic -- some might say "far out" -- ideas as artificial intelligence, the use of "rejuvenation biotechnologies" to extend human life and the creation of free-floating communities on the high seas.

"We're living in a world where people are incredibly biased toward the incremental," said Thiel, explaining that he wants to challenge his peers to pursue more "radical breakthroughs" in their philanthropy, by supporting nonprofit exploration of technological innovations that carry at least the promise of major advances for the human condition.

"Obviously there are a lot of questions about the impact of these things," he added. "If you have radical life extension, that could obviously lead to repercussions for society. But I think that's a problem we want to have."

The 43-year-old financier and philanthropist, who made a fortune as co-founder of PayPal and an early backer of Facebook, will make his pitch to more than 200 well-heeled entrepreneurs and techies during an invitation-only dinner at the Palace of Fine Arts in San Francisco.

I'm missing this event because I'm attending the Society for Risk Analysis annual conference in SLC, where I just gave a talk. I wish the best to all my colleagues attending the event, however. Here's another Thiel quote I liked:

"One of the things that's gone strangely wrong in the United States is that the future is not really being thought about as a major idea anymore," he added.

Simple but true. I wasn't alive in the '50s or '60s so I don't know exactly what it was like, but from what I've read, people cared a lot more about the future then. From the '70s onward, the emphasis seems to have shifted toward the past.

22 November 2010

500 Pictures of Singularity Summit 2010 Available

Photo by A. Jolly 2010.

My Flickr account contains almost 500 photos of Singularity Summit 2010, more than you could ever want. I mentioned this before in the Singularity Institute newsletter but not here. A special thanks to our photographers, A. Jolly and Anthony Scatchell. Please get in touch with me if you are interested in volunteering for photography next year.

Steven Mann is so cool!

The videos are currently being edited; they'll be completed over the next few weeks. Sorry for the delay: one of our initial editors backed out of the project. Watch the Vimeo channel for updates. I'll announce it officially on the SIAI blog when some go online.

Total attendance at Singularity Summit 2010 was approximately 620.

For any new readers: the Singularity Summit is put on by the Singularity Institute, which I work for. I co-organize Singularity Summit, assisting our President, Michael Vassar. Everyone at the Singularity Institute cooperates to make the Singularity Summit happen. The Singularity Summit MC is Sean McCabe, previously a close assistant to James Randi.

15 October 2010

Humanity+ @ Caltech to be Held at Beckman Institute in Los Angeles, December 4-5

Here's the website. Humanity+ @ Caltech is hosted by the California Institute of Technology and ab|inventio, the invention factory behind QLess, Whozat, SocialDiligence and MyNew.TV.

The speakers list is a mix of the usual suspects and some new names. The usual suspects include Randal Koene, Suzanne Gildert, Michael Vassar, Max More, Natasha Vita-More, Bryan Bishop, Patri Friedman, Ben Goertzel, and Gregory Benford. If you were following my tweets from this weekend, you'll recall that at the Life Extension Conference in Burlingame, Benford announced StemCell100(tm), a product of LifeCode, a spinoff company of Genescient.

The conference is partially being organized by my friend Tom McCabe, who was recently voted onto the Board of Directors of Humanity+. Please let Tom know (his email is at his website) if you want to help sponsor the event!

13 October 2010

Society for Risk Analysis Annual Meeting Presentation

This is just a reminder that I will be presenting at the Society for Risk Analysis annual meeting in Salt Lake City on December 5-8. The meeting is open to anyone interested in risk analysis. Registration is $500. Robin Hanson and Seth Baum will be there as well. My presentation will be part of the "Assessment, Communication and Perception of Nanotechnology" track. The full session list is here. Seth will be chairing the "Methodologies for Global Catastrophic Risk Assessment" track, where Robin will be giving his talk.

Here's my abstract:

T3-F.4 14:30 Public Scholarship For Global Catastrophic Risks. Anissimov M*; Singularity Institute

Abstract: Global catastrophic risks (GCRs) are risks that threaten civilization on a global scale, including nuclear war, ecological collapse, pandemics, and poorly understood risks from emerging technologies such as nanotechnology and artificial intelligence. Public perception of GCRs is important because these risks and responses to them are often driven by public activities or by the public policies of democracies. However, much of the public perception is based on science fiction books and films, which unfortunately often lack scientific accuracy. This presentation describes an effort to improve public perceptions of GCR through public scholarship. Public scholarship is the process of bringing academic and other scholarship into the public sphere, often to inform democratic processes. The effort described here works on all GCRs and focuses on emerging technologies such as biotechnology and nanotechnology. The effort involves innovative use of blogs, social networking sites, and other new media platforms. This effort has already resulted in, among other things, a visible online community of thousands following the science around GCRs, and plans to further move discussion of scholarly GCR literature into the mainstream media. It is believed that public scholarship efforts like these can play important roles in societal responses to GCRs.

Here's Professor Hanson's abstract:

W3-A.3 14:10 Catastrophic Risk Forecasts From Refuge Entry Futures. Hanson RD*; George Mason University

Abstract: Speculative markets have demonstrated powerful abilities to forecast future events, which has inspired a new field of prediction markets to explore such possibilities. Can such power be harnessed to forecast global catastrophic risk? One problem is that such mechanisms offer weaker incentives to forecast distant future events, yet we want forecasts about distant future catastrophes. But this is a generic problem with all ways to forecast the distant future; it is not specific to this mechanism. Bets also have a problem forecasting the end of the world, as no one is left afterward to collect on bets. So to let speculators advise us about the world's end, we might have them trade an asset available now that remains valuable as close as possible to an end. Imagine a refuge with a good chance of surviving a wide range of disasters. It might be hidden deep in a mine, stocked with years of food and power, and continuously populated with thirty experts and thirty amateurs. Locked down against pandemics, it is opened every month for supplies and new residents. A refuge ticket gives you the right to use an amateur refuge slot for a given time period. To exercise a ticket, you show up at its entrance at the assigned time. Refuge tickets could be auctioned years in advance, broken into conditional parts, and traded in subsidized markets. For example, one might buy a refuge ticket valid on a certain date only in the event that the USA and Russia had just broken off diplomatic relations, or in the event a city somewhere is nuked. The price of such refuge tickets would rise with the chance of such events. By trading such tickets conditional on a policy that might mitigate a crisis, such as a treaty, prices could reflect conditional chances of such events.
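To make the pricing intuition concrete, here is a toy sketch in Python -- my own illustration, not part of Hanson's proposal -- of how ticket prices might be read as risk forecasts. It assumes risk-neutral traders, a refuge slot worth a fixed amount only if disaster actually strikes, and conditional tickets that are refunded if their trigger event never occurs; all names and numbers are hypothetical.

# Toy illustration of reading refuge-ticket prices as risk forecasts.
# Assumptions (mine, not Hanson's): risk-neutral traders, a slot is worth a
# fixed amount V only if a disaster occurs during the ticket period, and a
# conditional ticket is refunded if its triggering condition never happens.

def implied_disaster_probability(ticket_price: float, slot_value_if_disaster: float) -> float:
    """Unconditional ticket: price ~= P(disaster) * V, so P(disaster) ~= price / V."""
    return ticket_price / slot_value_if_disaster

def implied_risk_multiplier(conditional_price: float, unconditional_price: float) -> float:
    """Ratio of a conditional ticket price (called off if the trigger never occurs)
    to the unconditional price ~= P(disaster | trigger) / P(disaster)."""
    return conditional_price / unconditional_price

if __name__ == "__main__":
    V = 100_000.0           # hypothetical value of a refuge slot if disaster strikes
    p_uncond = 500.0        # market price of an unconditional ticket
    p_given_nuke = 9_000.0  # price of a ticket valid only if a city has been nuked

    print(f"Implied P(disaster)            ~ {implied_disaster_probability(p_uncond, V):.3%}")
    print(f"Risk multiplier given a nuking ~ {implied_risk_multiplier(p_given_nuke, p_uncond):.1f}x")

Under these (strong) assumptions, a conditional ticket trading at eighteen times the unconditional price would suggest the market sees the disaster as roughly eighteen times more likely once the trigger event has occurred.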

Filed under: events, risks
12 October 2010

ASIM Experts Series: Brain-Machine Interfacing: Current Work and Future Directions, by Max Hodak, October 17, 2010

"ASIM" stands for Advancing Substrate Independent Minds, the field previously known as mind uploading, though ASIM can be construed as broader. ASIM is the focus of Carboncopies, a new non-profit founded by Suzanne Gildert (now at D-Wave) and Randal Koene (Halcyon Molecular). Randal and I work at the same company so I get to see him in the lunch room now.

The presentation, to be held in Teleplace this upcoming Sunday (email Giulio Prisco for directions on how to log in), has the following abstract:

Brain-machine interfacing: current work and future directions
Max Hodak - http://younoodle.com/people/max_hodak

Abstract: Fluid, two-way brain-machine interfacing represents one of the greatest challenges of modern bioengineering. It offers the potential to restore movement and speech to the locked-in, and ultimately allow us as humans to expand far beyond the biological limits we're encased in now. But, there's a long road ahead. Today, noninvasive BMIs are largely useless as practical devices and invasive BMIs are critically limited, though progress is being made every day. Microwire array recording is used all over the world to decode motor intent out of cortex to drive robotic actuators and software controls. Electrical intracortical microstimulation is used to "write" information to the brain, and optogenetic methods promise to make that easier and safer. Monkey models can perform tasks from controlling a walking robot to feeding themselves with a 7-DOF robotic arm. Before we can make the jump to humans, biocompatibility of electrodes and limited channel counts are significant hurdles that will need to be crossed. These technologies are still in their infancy, but they're a huge opportunity in science for those motivated to help bring them through to maturity.

Max Hodak is a student of Miguel Nicolelis, the well-known BMI engineer.
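The abstract above mentions decoding motor intent out of cortex from microwire array recordings to drive robotic actuators. For a rough sense of what the simplest version of such decoding looks like, here is a minimal Python/NumPy sketch -- my own illustration with synthetic data, not the pipeline used by Hodak or the Nicolelis lab -- that fits a ridge-regularized linear map from binned firing rates to two-dimensional intended velocity.

# Minimal sketch of linear motor-intent decoding on synthetic data.
# Illustrative only: unit counts, tuning model, and noise levels are invented.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_units = 2000, 96                   # e.g. ~100 recorded units, one row per time bin
true_tuning = rng.normal(size=(n_units, 2))     # each unit's (hidden) directional tuning weights

velocity = rng.normal(size=(n_samples, 2))      # "intended" 2-D hand velocity
rates = np.clip(velocity @ true_tuning.T
                + rng.normal(scale=2.0, size=(n_samples, n_units)), 0, None)  # noisy, non-negative firing rates

# Fit a ridge-regularized linear decoder: velocity ~ rates @ W
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_units), rates.T @ velocity)

decoded = rates @ W
corr = [np.corrcoef(velocity[:, d], decoded[:, d])[0, 1] for d in range(2)]
print(f"decoded vs. intended velocity correlation: x={corr[0]:.2f}, y={corr[1]:.2f}")

Real systems add many layers on top of this (spike sorting, adaptive refitting, closed-loop control), but the core idea of mapping population firing rates to intended kinematics is the same.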

Filed under: BCI, events
4 July 2010

Wendell Wallach to Give Keynote on AI Morality at WFS Meeting

Wendell Wallach will be giving the keynote talk at the plenary session of the World Future Society Conference in Boston on July 8th. The title of the talk will be "Navigating the Future: Moral Machines, Techno Humans, and the Singularity." Other speakers at WorldFuture 2010: Sustainable Futures, Strategies, and Technologies will be Ray Kurzweil, Dennis Bushnell, and Harvey Cox.

Wallach will also be making a splash in an upcoming issue of Ethics and Information Technology dedicated to "Robot Ethics and Human Ethics." At the Moral Machines blog, Wendell offers the first two paragraphs of his editorial, and some additional information about the issue:

It has already become something of a mantra among machine ethicists that one benefit of their research is that it can help us better understand ethics in the case of human beings. Sometimes this expression appears as an afterthought, looking as if authors say it merely to justify the field, but this is not the case. At bottom is what we must know about ethics in general to build machines that operate within normative parameters. Fuzzy intuitions will not do where the specifics of engineering and computational clarity are required. So, machine ethicists are forced head on to engage in moral philosophy. Their effort, of course, hangs on a careful analysis of ethical theories, the role of affect in making moral decisions, relationships between agents and patients, and so forth, including the specifics of any concrete case. But there is more here to the human story.

Successfully building a moral machine, however we might do so, is no proof of how human beings behave ethically. At best, a working machine could stand as an existence proof of one way humans could go about things. But in a very real and salient sense, research in machine morality provides a test bed for theories and assumptions that human beings (including ethicists) often make about moral behavior. If these cannot be translated into specifications and implemented over time in a working machine, then we have strong reason to believe that they are false or, in more pragmatic terms, unworkable. In other words, robot ethics forces us to consider human moral behavior on the basis of what is actually implementable in practice. It is a perspective that has been absent from moral philosophy since its inception.

"Robot Minds and Human Ethics: The Need for a Comprehensive Model of Moral Decision Making"
Wendell Wallach

"Moral Appearances: Emotions, Robots and Human Morality"
Mark Coeckelbergh

"Robot Rights? Toward a Social-Relational Justification of Moral Consideration"
Mark Coeckelbergh

"RoboWarfare: Can Robots Be More Ethical than Humans on the Battlefield"
John Sullins

"The Cubical Warrior: The Marionette of Digitized Warfare"
Lamber Royakkers

"Robot Caregivers: Harbingers of Expanded Freedom for All"
Yvette Pearson and Jason Borenstein

"Implications and Consequences of Robots with Biological Brains"
Kevin Warwick

"Designing a Machine for Learning and the Ethics of Robotics: the N-Reasons Platform"
Peter Danielson

Book Reviews of Wallach and Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford, 2009.
Anthony F. Beavers
Vincent Wiegel
Jeff Buechner

Bravo! Wallach provocatively goes after the heart of the moral issue. Moral philosophy needs machine ethics to test its descriptive theories of human morality and morality in general. Philosophy without engineering and the scientific method is fatally limited.

This has an impact on morality that concerns all human beings. For millennia we have understood morality and ethics through introspection, contemplation, and meditation -- but all of these avenues are ultimately limited without cognitive experiments to back them up, which requires AI. Because we lacked the technology to conduct these experiments throughout history, a demand arose for objective moral codes, often backed by a claimed divine authority. The problem is that all of these "objective moral codes" are based on language, which is fuzzy and can be interpreted in many different ways. The morals and laws of the future will be based on finer-grain physical descriptions and game theory, not abstract words. We cannot perfectly articulate our own moralities because neuroscience has yet to progress to the point where we can describe our moral behavior more deterministically, in terms of neural activation patterns or maybe something even more fundamental.

Critics might say, "it is your need to formalize ethics as a code that makes you all so uncool". Well, too bad. The foremost motivation here is knowledge; secondarily, there is the issue that if we don't formalize an ethics, someone else will formalize it for us and put it into a powerful artificial intelligence that we can't control. We cannot avoid formalizing ethics for machines, and thereby making provocative and potentially controversial statements about human morality in general, because artificial intelligence's long-term growth is unstoppable, barring some civilization-wide catastrophe. Humanity needs to come to terms with the fact that we will not be the most powerful beings on the planet forever, and we need to engineer a responsible transition instead of being in denial about it.

Promoting machine ethics as a field is challenging because much of the bedrock of shared cultural intuition regarding morality says that morality is something that can be felt, not analyzed. But cognitive psychologists show every day that morality can indeed be analyzed and experimented with, often with surprising results. When will the rest of humanity catch up with them, and adopt a scientific view of morality, rather than clinging to an obsolete mystical view?

Filed under: AI, events, friendly ai
4 July 2010

SENS Foundation Los Angeles Chapter, First Meeting

From Maria Entraigues. Here is the event page.

On behalf of SENS Foundation I am writing to you to invite you to join Dr Aubrey de Grey for our first SENSF L.A. Chapter meeting to be held on Friday, July 9th, 2010, at the Westwood Brewing Company (1097 Glendon Avenue, Los Angeles, CA 90024-2907) from 5pm until Aubrey has had enough beer :-)

This will be an informal gathering to create a local initiative to promote the Foundation's interests and mission.

The idea of forming a SENSF L.A. Chapter, which is planned to have monthly meetings, is to create a network of enthusiasts, field professionals, potential donors, sponsors, collaborators, students, etc. Also to promote educational efforts in the area, and to reach out to the Hollywood community and gain their support.

Please RSVP.
We hope you will come and join us!

Cheers!
Maria Entraigues
SENSF Volunteer Coordinator
maria.entraigues@sens.org

18 June 2010

A Few Items

There's an ongoing uploading debate in the comments with Aleksei Riikonen, Mark Gubrud, Giulio Prisco, myself, and others. The topic of uploading is the gift that keeps on giving -- the dead horse that can sustain an unlimited beating.

There is a new open letter on brain preservation -- sign the petition! Also, there will be workshops on uploading after the Singularity Summit 2010 this August in San Francisco. A big congrats to Randal Koene, Ken Hayworth, Suzanne Gildert, Anders Sandberg, and everyone else taking the initiative to move forward on this.

One last thing: ghost hunting equipment. Harness the power of ghosts, take over the world.

20 May 2010

2010 H+ Summit @ Harvard

Remember, the 2010 H+ Summit is coming up on June 12-13... here is a blurb.

The 2010 H+ Summit: Rise of the Citizen Scientist (hplussummit.com) is an important 2-day conference that imagines the role of technology in developing the future, with consideration of various emerging technologies and transhumanist ideas.

This innovative summit, to be hosted on June 12-13th, 2010 at Harvard University Science Center by the Harvard College Future Society, with assistance from Humanity+, will feature over 60 incredible speakers, including futurist Ray Kurzweil, inventor Stephen Wolfram, and scientist Aubrey De Grey among many others.

Topics considered will include Human Enhancement, Artificial (General) Intelligence, Longevity, Whole Brain Emulation ("Mind Uploading"), Technology and Democracy, Bioethics, Science Fiction and Science, and Neuroscience among many others.

To learn more and register, visit hplussummit.com. Registration fees are structured to reward early adopters.