Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

30Sep/09

Giulio Prisco: “I am a Singularitian who does not believe in the Singularity”

See Giulio Prisco's general response to my and Jamais' recent writings here. Here is an excerpt:

As I say above I think politics is important, and I agree with Jamais Cascio: it is important to talk about the truly important issues surrounding the possibility of a Singularity: political power, social responsibility, and the role of human agency. Too bad Jamais describes his forthcoming talk in New York as counter-programming for the Singularity Summit, happening that same weekend, with the alternative title If I Can't Dance, I Don't Want to be Part of Your Singularity. This is very similar to the title of the article If I Can't Dance, I Don't Want to Be Part of Your Revolution! by Athena Andreadis, a very mistaken bioluddite apology for our current Human1.0 condition against unPC Singularitian imagination. This article is one of many recent articles dedicated to bashing Singularitians, Ray Kurzweil and transhumanist imagination in the name of the dullest left-feminist-flavored political correctness. I think I will skip Jamais' talk (too bad, because he is a brilliant thinker and speaker). See also Michael Anissimov's Response to Jamais Cascio.

This is sad. Jamais has named his talk after a post on uploading that assumes uploading would mean leaving behind any kind of body, and that Singularitarians are soulless nerd zombies. (Update: Jamais wrote to Giulio saying that this is wrong, and that he named his talk after an Emma Goldman quote. Unfortunate that he emailed Giulio but not me! I therefore retract the preceding sentence and leave it there only as evidence of my misinterpretation.) The name is especially silly given that I privately named my first few years of Singularity activism and planning "Waltz Towards the Singularity". If you think it's silly to name a few years of activism something so flowery, well... maybe you are just a zombie.

The problem with choosing or not choosing to be a part of our "revolution" is that, for better or for worse, there probably is no choice. When superintelligence is created, it will impact everyone on Earth, whether we like it or not, just like the rise of Homo sapiens impacted every species on Earth. Don't blame us -- blame God, or blame the laws of physics for setting the universe up in such a way that smarter-than-human, faster-than-human thinkers are physically possible. Of course, there's always a chance that smarter-than-human intelligence is impossible (there's more of that dogmatic Singularitarian absolutist certainty!), but given that tens of thousands of extremely smart people believe that the brain could theoretically be improved (and you'll see a lot more on that topic if you attend the Singularity Summit this weekend), the possibility ought to at least be seriously entertained.

Filed under: singularity
29Sep/09

Response to Jamais Cascio on “The Singularity and Society”

In Fast Company, Jamais Cascio writes:

Despite the presence of the Singularity concept within various (largely online) sub-cultures, it remains on the edges of common discussion. That's hardly a surprise; the Singularity concept doesn't sit well with most people's visions of what tomorrow will hold (it's the classic "the future is weirder than I expect" scenario). Moreover, many of the loudest voices discussing the topic do so in a manner that's uncomfortably messianic. Assertions of certainty, claims of inevitability, and the dismissal of the notion that humankind has any choice in the matter--all for something that cannot be proven, and is built upon a nest of assumption--do tend to drive away people who might otherwise find the idea intriguing.

I find many of the above claims to be false or at least not easily justified.

First, everything is built upon a web of assumptions, and nothing can be proven absolutely. I'm tired of hearing this line of argument as a generalized rebuttal. Everything is probabilistic, nothing is absolute, and one person's "assumption" is another person's "well-supported hypothesis". Statements like that beg the question -- that is, they get all their argumentative power from assuming the conclusion. ("Their deluded actions are based on a nest of assumptions that cannot be proven -- therefore, their actions are deluded.") Clearly, people perform significant acts all the time for important reasons that cannot be "proven". There are two kinds of debaters: those who throw around the word "assumption" as if it were a powerful weapon unto itself, and those who use the word only as a quick place-marker before actually explaining what the assumption is and basing their argument on its questionable content. The former talk as if our entire lives were not saturated with actions based on assumptions. The point is not whether something is an assumption, but how well-supported that assumption is; the bare word "assumption" is so vague as to often be meaningless.

Next, who are the loudest voices discussing the Singularity concept? My first guess would be that he is referring to Ray Kurzweil and the Singularity Institute (SIAI). Yet later in his essay he draws a distinction between the two that makes it likely he is referring to those who believe in the Singularity Institute's mission, which mostly trails back to me personally, Michael Anissimov, because I am the loudest writer/talker for SIAI's view of the Singularity online. (Other people certainly promote SIAI's cause, maybe even more effectively than I do, but they do so via channels other than public blogging.) I am the primary blogger here, and recently became the primary blogger at the SIAI and Singularity Summit blogs, so presumably I am at least one of the "loudest voices" out there. Eliezer Yudkowsky -- though he has been at this longer than I have, has higher prestige, and gives talks at bigger and better venues -- spends most of his public writing time on rationality and decision theory, while I explicitly mention the Singularity and discuss reactions to it every week if not every day. Is there anyone else on the entire Internet who blogs every few days on the Singularity/intelligence explosion? I really wish there were, but I cannot think of anyone offhand. So when I read articles like this, I sometimes read them as addressing me almost directly.

Assuming again that I and other supporters of the Singularity Institute constitute the "loudest voices" in favor of working towards a friendly Singularity, let me address Jamais' accusations. He asserts that we speak of the Singularity in a way that is "uncomfortably messianic". I think this characterization is unfair. My guess is that it stems from an unspoken social contract: if you think you have identified a high-leverage action point for improving the future of humanity (like the intelligence explosion), you must be very humble about it unless you have tremendous social status, like Al Gore. Because the most prominent supporters of SIAI tend to have moderate rather than tremendously high social status, and because we claim to have identified a concentrated and controversial source of expected utility, we have "overstepped our bounds". When we intrude into the memetic territory of adjacent competing movements, the natural response is to tear us down at any cost. The alternative, ignoring us, has begun to carry relatively high costs due to our rising prominence.

There are a lot of causes with passionate supporters: causes for political freedom, for protecting the Earth, for clearing landmines, for clothing the homeless. Most of these causes are inherently small-scale and humble. The Singularity is also a cause with passionate supporters, but there is a difference. Singularity advocates believe that a beneficial intelligence explosion could produce a greater benefit for humanity than all prior technological achievements combined. The method would be harnessing the power of substrate-independent, endlessly copyable, recursively self-improving greater-than-human intelligence. Everything about this reeks of hubris, and if there is indeed such a huge opportunity, it is sufficiently large that it apparently dwarfs the importance of most if not all other causes. As Dr. Nick Bostrom writes in Ethical Issues in Advanced Artificial Intelligence:

It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.

Because of the immense potential benefits that superintelligence could confer on us, devotees of other causes who come into contact with the memetic space surrounding the Singularity's supporters are forced to pick one of three options: 1) join the cause, 2) ignore it, or 3) tear it down as aggressively as possible. Option 2 used to be viable, but is less so now that our arguments are gaining major traction. The intelligence explosion is not a cause that asks to be followed gently. By its very nature, it is a hypothetical event with high stakes. If it is possible at all, it will be huge. Even if the probability of a Singularity occurring at all were viewed as very low, most expected utility calculations would still grant it an overarching importance, due to the expected magnitude (good or bad) of its impact.
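
To make the expected-value point concrete, here is a toy calculation. Every number in it is an illustrative assumption of mine, not an SIAI estimate:

    # Toy expected-value comparison; all numbers are illustrative assumptions.
    p_singularity = 0.05          # assumed (low) probability the event happens at all
    people_affected = 7e9         # everyone on Earth, if it does happen
    typical_cause_benefit = 1e6   # assumed benefit of a typical large cause, in people helped

    expected_impact = p_singularity * people_affected
    print(f"expected people affected: {expected_impact:,.0f}")       # 350,000,000
    print(f"typical cause benefit:    {typical_cause_benefit:,.0f}") # 1,000,000

Under these toy numbers, a 5% chance of an event that touches everyone outweighs the certain benefit of the typical cause by a factor of 350. That arithmetic, not certainty about the event, is what drives the prioritization.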

Thus, accusations of cults, messianism, obsession, and the like. These accusations are produced not by the behavior of the followers themselves, but by the magnitude of the hypothetical event itself. There is a cultural heuristic that says any prospective cause with sufficiently large stakes must be a lie, because charlatans throughout history have used such made-up causes for personal gain. L. Ron Hubbard is the most prominent recent example. There is also a history of such causes being considered impolite to talk about, even when they are considered vaguely legitimate, because they cause discomfort: the challenge of nuclear disarmament, for instance. (Obama is the first President in my lifetime to really take the issue seriously, and I applaud him for that.) The only reason the cause of fighting global warming has achieved such prominence is that it syncs very well with the entire Gaianistic memetic supercomplex, which is a confusing mix of rational and irrational causes and tendencies.

Could the intelligence explosion be a made-up cause? Suppose it is. If so, why does the Singularity movement lack all the features of historical cults built around made-up causes? Why bother with all the items on Steven Smithee's list: commitment to rationality, naturalism, uncertainty, emphasis on technological action, the outcome being contingent on human action, no in-group perks, no religious trappings like rituals, worship, or holy writings, no revenge fantasies, and no anthropomorphism of nature? Only in the past few years has there even been any substantial physical contact between the devotees of the cause. Before that, the whole thing was on the Internet.

In his essay, Jamais refers to "Assertions of certainty, claims of inevitability, and the dismissal of the notion that humankind has any choice in the matter" on the part of Singularity advocates. These accusations are absolutely false, and only gain superficial legitimacy from the magnitude of the claims of potential benefit and harm from superintelligence. There is a cultural heuristic that goes: "If person X claims that cause Y could have a massive potential benefit or harm, then they must be making assertions of certainty, claims of inevitability, and dismissal of human choice." Basically, "large magnitude claims = probably crazy". Simple as that. The reason this cultural heuristic exists can be summed up in one word: religion. Religions throughout history have operated in this manner. Because the Singularity movement makes such large claims about the potential upsides and downsides, there exists no set of behaviors we could practice which would let us avoid accusations of messianism or cultism. The only way to avoid such accusations would be to abandon the cause entirely.

The accusations of certainty are remarkable in light of our widely-acknowledged preoccupation with probabilistic thinking and uncertainty. The Singularity Institute is composed of dedicated Bayesians, who can't even be forced into assigning a 100% probability to anything -- even the Sun rising tomorrow -- because 0 and 1 are not probabilities. Furthermore, a significant portion of SIAI's most dedicated support base spent an entire summer last year building a probabilistic model of future possibilities that accommodates arbitrary degrees of uncertainty. This work was recently presented at the European Conference on Computing and Philosophy. Béla Nagy, who works at the Santa Fe Institute, mentioned our research in his recent interview with David Orban, because he has been working on something similar. An alpha version of our model is online at www.theuncertainfuture.com. Uncertainty is such a prominent feature of what we did that the project has the word "uncertain" in its very title! If we were excessively certain and believed in inevitabilities, why would we have gone to all the trouble of spending over $10,000 and an entire summer of five people's time developing a model whose whole purpose is to accommodate uncertainty in futurism? It doesn't make any sense.
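
As an aside, the slogan "0 and 1 are not probabilities" has a precise form. A sketch of the standard argument, stated in log-odds (my formulation, not a quote from SIAI material):

    \mathrm{logit}(p) = \log\frac{p}{1-p}, \qquad \lim_{p \to 0^{+}} \mathrm{logit}(p) = -\infty, \qquad \lim_{p \to 1^{-}} \mathrm{logit}(p) = +\infty

Bayesian updating adds the finite log-likelihood-ratio of each observation to the agent's log-odds, so no finite stream of evidence can carry a probability all the way to exactly 0 or 1; certainty would require infinitely strong evidence.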

To the contrary, it is Jamais' habit -- an old one in futurism -- of creating vivid futuristic scenarios that encourages excessive certainty about future possibilities. As discussed in "Cognitive Biases Potentially Affecting Judgment of Global Risks", as well as hundreds of mainstream papers on heuristics and biases, detailed scenarios invite the conjunction fallacy:

According to probability theory, adding additional detail onto a story must render the story less probable. It is less probable that Linda is a feminist bank teller than that she is a bank teller, since all feminist bank tellers are necessarily bank tellers. Yet human psychology seems to follow the rule that adding an additional detail can make the story more plausible.

The "Linda" name is in reference to a classic experiment in the field of heuristics and biases where respondents hear a story about "Linda" that vaguely makes her out to be a feminist, and they subsequently estimate a higher probability that she is a feminist bank teller rather than a bank teller, which is mathematically impossible. A subset of a category cannot be larger than the category in which it is located. Obviously. Still, practically anyone can be fooled into making this mistake all the time.

Scenario building exacerbates bias by taking what might be a vaguely plausible core and adding details to make it more fun and believable. But those details are nothing more than fiction, and they mislead us into overestimating the probability of certain scenarios based on how interesting they sound. Our new approach to futurism involves building a probabilistic model with a series of individually tunable nodes that are linked to each other in mathematically simple and transparent ways. For instance, one node in our model represents the computational capacity of the human brain, while another represents the speed of a computer you can buy for $1000. Instead of inputting a precise value for either of these, you input a probability distribution, which makes a lot more sense for variables whose precise value is uncertain, i.e., practically all of them.
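
To make this concrete, here is a minimal Monte Carlo sketch of two linked nodes in that style. The distribution parameters below are my illustrative assumptions, not the actual values used at theuncertainfuture.com:

    import math, random, statistics

    # Minimal Monte Carlo sketch of two linked "nodes" with uncertain inputs.
    N = 10_000
    years = []
    for _ in range(N):
        # Node 1: brain capacity in FLOPS; exponent drawn uniformly from 15 to 19.
        brain_flops = 10 ** random.uniform(15, 19)
        # Node 2: a $1000 computer, ~1e11 FLOPS in 2009, doubling every 1-3 years.
        base_flops = 1e11
        doubling_time = random.uniform(1.0, 3.0)
        # Derived node: solve base_flops * 2^(t / doubling_time) = brain_flops for t.
        t = doubling_time * math.log2(brain_flops / base_flops)
        years.append(2009 + t)

    years.sort()
    print("median crossover year:", round(statistics.median(years)))
    print("80% interval:", round(years[N // 10]), "to", round(years[9 * N // 10]))

Because the inputs are distributions rather than point estimates, the output is itself a spread of crossover years rather than a single confidently asserted date.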

Back to Jamais' accusations. He wrote that we "(dismiss) the notion that humankind has any choice in the matter". This is also completely false. Humanity certainly does have a choice in how the intelligence explosion goes -- I seem to repeat this to interlocutors practically every single day, and it never sinks in. In fact, humanity may have a choice over whether an intelligence explosion happens at all. Ban AI research, and we would choose (at least for a while) not to have one. If we choose to build a recursively self-improving AI with the morality of a thermostat, that AI puts us all at risk. If we choose to investigate the underlying structure of human moral decision-making and apply those insights to AI design, we might actually not kill ourselves with AI. These are all choices. Jamais' preferred choice seems to be the most useless one -- "we should choose to mostly ignore the possibility of an intelligence explosion, because it has such low probability that we should do nothing in terms of research or software engineering to prepare for it". He seems to imply that social and political progress will magically confer useful structure on the goal system of the first Artificial Intelligence to surpass human intelligence, or that the process will be so gradual that we can mostly ignore the implications ahead of time. Perhaps I'm being overly harsh here, but that is the impression I've gained from reading his articles.

Even if I assigned a relatively low probability to the possibility of an intelligence explosion, its expected impact is still so high that I would wholeheartedly encourage research into instilling benevolence into AI goal systems. Couldn't hurt, right?

The vast majority of people seem to think that AI will be inevitably bad, inevitably good, or inevitably neutral. Our group is one of the very few making the seemingly obvious statement that whether it is bad or good may have something to do with its initial goal system and motivations. Duh, right? Well, not exactly, because moral realism has led most people to believe that morality is an inherent feature of the universe that every agent spontaneously stumbles upon, and that any sufficiently intelligent agent will therefore come to the same conclusions they have. This is a diabolical convergence between moral realism and the blank slate fallacy.

In his article, Jamais mentions that there are several definitions of the Singularity. The first he mentions is "where technologically driven changes have hit so hard and so fast that people on the near side of the Singularity wouldn't be able to understand the lives of people living on the far side of one". If this is a definition of the Singularity, I have to ask -- where did it come from? Anyway, if that's what he considers the Singularity, then why not just focus on that definition alone? Why are we fighting over the "Singularity" idea if what we are focusing on are entirely different concepts?

Shortly after, Jamais introduces the real definition of the Singularity, Vernor Vinge's, which defines the Singularity as the event horizon of understanding created by a fundamentally greater-than-human intelligence. I've spent a lot of effort trying to keep this as the definition of the Singularity in blogosphere discussion, and I think I've had some degree of success.

When Jamais says that he thinks the Singularity should involve more discussion of politics, or culture, and so on, is he talking about the intelligence-explosion Singularity or the lots-of-technology Singularity? As with so many people who casually talk about the Singularity, it is almost impossible to tell. In the summary of his upcoming talk in New York, he mentions the intelligence explosion, so when I read that I assumed that was the Singularity he was talking about, but in his article he reveals that he prefers the technological-acceleration version. The degree to which politics and culture would be involved in "the Singularity" obviously varies greatly depending on which Singularity we are talking about. The problem is that Jamais refuses to pick one, instead fusing the parts of the meme he likes into an entirely new one of his own design. This is like a biologist redefining "evolution" to highlight only those aspects of the process he feels most suited to comment on. It's a free country, so anyone can talk about whatever they want, but if you mean to have a real discussion with other people and spend hours writing articles for that very purpose, why not use a standard definition?

Jamais says that an argument could be made for the printing press being a "slow-motion Singularity". This definition is far removed from any prior definition of the Singularity I have ever heard. By his definition, clothing is a slow-motion Singularity, because it would be hard for naked hominids to imagine a society where everyone is clothed and clothes are mandatory. For that matter, it would be hard for hominids to comprehend a world that routinely adds and subtracts numbers greater than 10. When one frivolously dilutes a concept to discuss it at one's own convenience, one is depositing waste into the epistemological commons. The Singularity meme now resembles a tattered blanket, with so many pulled-out threads that little of its original substance remains. Just because a thread is loose, is that a justification to pull it out as far as one desires? For those who are uncomfortable with the intelligence explosion idea because it reflects unfavorably on their life goals and worldview, I suppose it is natural to take advantage of popular unfamiliarity with the Singularity idea and twist it into whatever is most suitable for the argument of the day.

Starting to wrap up here (thanks for reading this far!), Jamais mentions:

Yet few of the discussions about the Singularity -- pro or con -- move beyond the technology. Can machines think? Will IA (intelligence augmentation) beat AI (artificial intelligence)? How many teraflops does a brain run? There's too little discussion of how the social, cultural and political choices we make would shape the onset or even the possibility of a Singularity.

Yes, Jamais, that's because the technological facts matter. If superintelligence can kill us all when it doesn't explicitly value us -- which is what many careful, non-messianic, non-sub-cultural thinkers like Stephen Omohundro, Stephen Hawking, and Nick Bostrom believe -- then the technological facts about whatever might produce superintelligence matter a whole lot. The reason Jamais is uncomfortable with technological discussion is that he is a profoundly untechnical person. His book on geoengineering, "Hacking the Earth", does not contain a single equation. In fact, it contains barely any numbers tied to objective physical quantities at all: just years, sums of money, a few scattered mentions of the area of the Siberian permafrost and of how many billions of tons of CO2 we are pumping into the atmosphere, one percentage for the oxidation of atmospheric methane, and some imprecise handwaving about "100 billion tons of methane". The book does not reference a single scientific journal, and the majority of its references are to his own blog posts. I'm not saying that it's a bad book (it's not), just that it is not strong on the numerical/factual side. It makes a lot of social/political arguments, which I think are completely fine, but only when combined with arguments about scientific facts.
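
As an example of the kind of back-of-the-envelope numeracy at issue, take the "how many teraflops does a brain run?" question from the quote above. A sketch of the classic estimate, where both inputs are rough and contested literature figures:

    # Classic back-of-the-envelope brain capacity estimate.
    # Both inputs are rough, contested literature figures.
    synapses = 1e14          # roughly 10^14 synapses in the human brain
    ops_per_synapse = 100    # assumed operations per synapse per second
    print(f"~{synapses * ops_per_synapse:.0e} ops/sec")   # ~1e+16, about 10 petaflops

The point is not that this particular number is right -- estimates vary by orders of magnitude -- but that you cannot even argue about it without engaging the technical literature.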

For example, in his book and in blog posts, Jamais constantly twists the science about oceanic thermal inertia to make his message -- that we should consider geoengineering the planet -- more palatable. (I think we should geoengineer the planet, but that is beside the point.) In his book, Jamais states that "The slow pace at which the planet's temperature adjusts to perturbations, or Earth's thermal inertia, means that we're only now seeing the temperature results from environmental changes from twenty or thirty years ago." As far as I can tell, this number was pulled out of nowhere. The original studies in Science, which Jamais is presumably referring to (I see them linked from his former website, WorldChanging), refer to a century or more. A century is not the same thing as 20-30 years. One is short enough to appeal to people's preference for doing only things that will have an impact within their lifetimes; the other is not. Jamais chooses a number he knows will not be large enough to discourage people from taking action now to fight global warming.

(Personally, I think we ought to take action now to prevent global warming regardless, but I know very well that the current scientific data points to thermal inertia of a century or more, not 20-30 years, so if runaway climate change is destined to happen in 20-30 years, we are absolutely screwed, and no amount of reduction in carbon emissions will do a damn thing.)
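
To see why the timescale matters so much, consider a toy first-order lag model of surface temperature responding to a step change in forcing. This is a gross simplification of real ocean heat uptake, and the time constants are assumptions chosen for illustration:

    import math

    # Toy first-order lag: dT/dt = (T_eq - T) / tau, with a step forcing at t = 0.
    def fraction_realized(t_years, tau_years):
        # Fraction of the equilibrium warming realized t years after the step.
        return 1 - math.exp(-t_years / tau_years)

    for tau in (25, 100):   # assumed response timescales, in years
        print(f"tau = {tau:>3} yr: {fraction_realized(30, tau):.0%} realized "
              f"after 30 yr, {fraction_realized(100, tau):.0%} after 100 yr")

With a 25-year time constant, roughly 70% of the eventual warming shows up within 30 years; with a century-scale time constant, only about a quarter does. Which regime we are in completely changes what near-term emissions cuts can and cannot accomplish.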

So, in conclusion:

1) Where do Singularity advocates express positions of certainty, inevitability, and/or the inability of humans to affect how it goes down? Name sources. One can find claims of inevitability in even the most mundane places, such as the claim that world population will inevitably surpass 10 billion given continued population growth. In the same conditional sense, students of the Singularity believe that intelligence will "inevitably" be produced artificially and extended beyond the human level if A) scientific and technological progress continues, and B) intelligence is a physical process. Nothing is believed without conditions.

2) If you're going to talk about why culture and politics are relevant to the Singularity, it's important to mention which Singularity you mean, and if you just mean "substantial technological progress that makes things look weird", then why not say that? New technologies have been making the future look weird to the past for decades, and nothing about that is new.

3) If the intelligence explosion idea makes you uncomfortable, maybe it's because you've barely been exposed to the underlying technical arguments, and have only seen their social effects on people in the blogosphere or at the Singularity Summit. That is like staring at the shadow on the wall in Plato's cave. Why criticize a shadow when the object casting it -- the arguments -- is anything but inaccessible? Try starting here.

4) If you believe the intelligence explosion is implausible, say so. But in what way do advocates of the intelligence explosion behave inappropriately, bearing in mind that they believe in the plausibility of the event? If you personally believed that a superintelligence could mean the death or salvation of humanity, and that its initial source code as a seed AI determined which outcome was produced, how would you behave? Can you show that you are at least superficially familiar with the arguments for why the initial motivations matter, and for why a superintelligence could become very powerful very quickly? If you lack such familiarity, then you are in effect postulating that the epistemological heft of these arguments comes from subcultural dynamics rather than logical reasoning. If you've never looked at the logical reasoning, then of course your best model will describe the phenomenon in terms of subcultural dynamics.

Filed under: singularity
28Sep/09

More Transhumanism on Television


(Embedded CBS News video.)

Isn't this show relatively new? It seems like both CBS and MSNBC have started new news shows on futurism and transhumanism.

28Sep/09

Aubrey on MSNBC Today

(Embedded MSNBC video.)

28Sep/09

Ray Kurzweil in The Independent

Another Kurzweil article, this time in The Independent, titled "By 2040 you will be able to upload your brain..." Make of it what you will. SIAI is mentioned, so that's good.

Just so reporters know (I've been getting a few questions lately), the Singularity Institute was formed by AI researcher Eliezer Yudkowsky and Internet entrepreneurs Brian and Sabine Atkins in 2000, not by Ray Kurzweil. Ray is a Director of the Singularity Institute who has done a lot to inform the world about the promise and peril of the Singularity.

Filed under: SIAI, singularity
28Sep/09

NYT: Quest for a Long Life Gains Scientific Respect

Thank you for covering this, Nicholas W! Opener:

BOSTON -- Who would have thought it? The quest for eternal life, or at least prolonged youthfulness, has now migrated from the outer fringes of alternative medicine to the halls of Harvard Medical School.

Bwa ha ha ha ha ha! They thought we were mad! We'll show them who's mad!

They forgot to mention the most important stepping stone between alternative medicine and Harvard Medical School -- Aubrey de Grey and his supporters. Without him, this never would have happened so soon.

Filed under: life extension
28Sep/09

Stephen Wolfram Will Attend Singularity Summit 2009

Stephen Wolfram, the British polymath known as the creator of Mathematica and Wolfram Alpha and the author of A New Kind of Science, will be attending the Singularity Summit 2009 next weekend. Here is his bio. He will engage in a "Conversation on the Singularity" with Gregory Benford at 3 PM on Saturday, just before David Chalmers' talk.

Filed under: events, SIAI
27Sep/09

Harvard, MIT AI Research Groups

Harvard has a nice, concise list of topics discussed at their AI research group. Seven faculty are listed on the people page.

For a similar group at MIT, see the MIT Computer Science and AI Lab.

Filed under: AI
27Sep/09

10th Woodstock Film Festival Focusing on Transhumanism

From The New York Times:

The 10th Woodstock Film Festival will focus on the future and Transhumanism, a movement that would build a bridge from technology to the human condition.

Here is the full article. Ray and Martine will be on a panel. Here's the segment about it:

A highlight will be a panel discussion, “Redesigning Humanity — The New Frontier,” featuring scientists and ethicists. One panelist, Raymond Kurzweil, is an author and trailblazer in the field of artificial intelligence; another, Martine Rothblatt, began the first satellite radio company and is active in bioethics, gender freedom and antiracism causes.

Congrats to Ray and Martine. I hear the movie they've been working on, The Singularity is Near, will be out soon -- maybe it will premiere there? Don't quote me on that, I am just speculating.

Lots of articles on transhumanist topics have been coming out in the NYT lately. What the Times needs is a good overview article on transhumanism, its position on (bio)political issues, and some of its major figures. That would help bring the disparate threads together.

Filed under: transhumanism
27Sep/09

Thiel Foundation Website Online

Check out the website for the brand-new Thiel Foundation.

The spotlight effort right now is the Oslo Freedom Forum, which looks interesting. Here's a quote from its 33-year-old founder, Thor Halvorssen:

"We all should want freedom of speech, freedom of association, freedom from torture, freedom to travel, due process and freedom to keep what belongs to you." Unfortunately, he explains, "the human-rights establishment at the United Nations is limited to pretty words because so many member countries kill or imprison or torture their opponents."

Ambient pressure like this can help encourage the UN to better advance human rights.

The projects of the Thiel Foundation can be broken down into three general areas: anti-violence, freedom, and science and technology. Anti-violence projects include Imitatio and the Oslo Freedom Forum. Freedom projects include the Committee to Protect Journalists, the Human Rights Foundation, and the Seasteading Institute. The science and technology projects include funding for Cynthia Kenyon (who studies the biology of aging), Aubrey de Grey (SENS Foundation), and the Singularity Institute for Artificial Intelligence. Looks like a pretty well-rounded philanthropy portfolio to me.

Filed under: random
27Sep/09

h+ Magazine Hits Newsstands Tomorrow!

From the WTA-talk list:

Hi Fellow Transhumanists,

I just wanted to let you know that the Fall issue of h+ Magazine is appearing on newsstands now. Its official release date is the 28th, but I was just at Barnes & Noble stores in Palisades, NY and Paramus, NJ, and they had it out already! All 720 Barnes & Noble stores, as well as many Borders, Books-A-Million, and about 550 college bookstores, will carry it.

It's crucial that the few copies each store is carrying sell out, so that stores agree to carry the magazine permanently. Please buy one if you can, and tweet your friends to buy them in obscure locations too!

Thanks so much for helping us spread the word and carry the h+ meme into homes and dorms around the country! If we do well in the U.S., we should be able to get a foreign distributor to carry us in Europe too.

Best regards,
James Clement
Publisher, h+ Magazine

Filed under: transhumanism
25Sep/09

10 Reasons

My "10 Reasons" document from August 2004 is getting some great play on StumbleUpon and other venues. 5,000 visits so far this month. Check out my "10 Reasons to Develop Safe Artificial Intelligence":

1. Because human cultures aren't exotic enough.
2. Because intelligence should be fluid, not rigid.
3. Because we need someone to help us organize the data we're drowning in.
4. Because aliens aren't showing up, we should make our own.
5. Because a virtual world would be a cool place to grow up in.
6. Because we need new perspectives and thinkers.
7. Because it would be interesting to engineer new emotions.
8. Because sci-fi stereotypes need to be shattered.
9. Because humans are often biased away from the common good.
10. Because AI is coming whether we like it or not, so it might as well be safe.

It is ironic that Jamais Cascio has accused Singularitarians like me of not being interested in culture. It's not a matter of dancing; it's a matter of survival. If we do not program the first recursively self-improving seed AI appropriately, we will all perish. And death is so final.

Filed under: transhumanism