Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

22 Apr 2009

The Archaism of the Old Rich

To amuse myself with some light reading after the obscenely lengthy Golden Bough, I'm going through Class: A Guide Through the American Status System by Paul Fussell, a really amusing book. Here's a recent review by The Atlantic.

Reading a passage in the book confirms a feeling I've had all along: the upper classes (and would-be upper classes) have a distinct antipathy for thinking about the future. Here's the passage:

We've already seen that organic materials like wool and wood outrank man-made, like nylon and Formica, and in that superiority lurks the principle of archaism as well, nylon and Formica being nothing if not up-to-date. There seems a general agreement, even if often unconscious, that archaism confers class. Thus the middle class's choice of "colonial" or "Cape Cod" houses. Thus one reason Britain and Europe still, to Americans, have class. Thus one reason why inheritance and "old money" are such important class principles. Thus the practice among top-out-of-sight and upper classes of costuming their servants in some archaic livery, even such survivals as the white apron on the maid or, on the butler, a striped vest. It's a way of implying that the money goes back a considerable time, and that one retains the preferences and habits one learned very long ago.

What Veblen specified as the leisure class's "veneration of the archaic" shows itself everywhere: in the popularity among the upper-middle class of attending opera and classical ballet; of sending its issue to single-sex prep schools, because more unregenerate and old-style than coed ones; of traveling to view antiquities in Europe and the Middle East; of studying the "humanities" instead of, say, electrical engineering, since the humanities involve the past and studying them usually results in elegiac emotions. Even the study of law has about it this attractive aura of archaism: there's all that dog Latin, and the "cases" must all be rooted in the past. Classy people never deal with the future. That's for vulgarians like traffic engineers, planners, and inventors. Speaking of the sophisticated TV viewer's love of old black-and-white films, British critic Peter Conrad comments, "Style for us is whatever's perished, outmoded, lost." Since the upper orders possess archaism as their very own class principle -- even their devotion to old clothes signals their retrograde sentiment -- what can the lower orders do but fly to the new, not just to sparkling new garments but to cameras and electronic apparatus and stereo sets and trick watches and electric kitchens and video games?

Uh-oh, looks like I'd better throw away all my video games!

In San Francisco, where the archaism of the wealthy set collides with the futurism of young startup mavens on a daily basis, these observations couldn't be more useful or enlightening.

Usually, ignoring the future isn't that huge of a deal, but humanity is at a special point in history: accelerating technological change is giving us tools with powers far beyond what we would have anticipated, putting us in great danger until we can enhance our intelligence and compassion along with our technology.

Filed under: futurism 3 Comments
21 Apr 2009

Nuclear Weapon UAVs

It isn't mentioned often, but there is another dimension to the nuclear threat that could become real within 10-20 years -- miniaturization of nuclear weapons continuing to the point where a nuclear weapon consists of several UAVs that converge on a location, assemble into a complete bomb, and detonate. Redundancy could be used to mitigate the risk of any one UAV getting shot down.
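To put a rough number on the redundancy point, here is a minimal sketch using a generic binomial reliability model -- the survival probability, the number of vehicles, and the number of components required are all made-up figures for illustration, not anything from the post:

```python
from math import comb

def prob_at_least_k_survive(n: int, k: int, p: float) -> float:
    """Probability that at least k of n independent components survive,
    assuming each survives with probability p (simple binomial model)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical numbers, for illustration only: each vehicle independently
# survives with probability 0.9, and 4 components are needed for assembly.
p = 0.9
print(prob_at_least_k_survive(4, 4, p))  # no spares:   ~0.656
print(prob_at_least_k_survive(6, 4, p))  # two spares:  ~0.984
print(prob_at_least_k_survive(8, 4, p))  # four spares: ~0.9996
```

On these assumed numbers, a handful of spare vehicles is enough to make the loss of any single one nearly irrelevant.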

There are numerous strategic/military advantages which give this weapon a high probability of eventual development. Obviously, you would avoid using a missile, which shows up pretty definitively on a radar screen. For a first strike, this is tremendously important. Another advantage could be self-detonation in the event of discovery, something difficult to implement with conventional missiles.

Update: this technology would have a significant advantage over using UAVs alone because the warhead that could fit on a single UAV would have to be very small, and would have frustratingly low yield. A warhead built from converging components could have arbitrary yield, while retaining the stealth benefits of UAVs.

Filed under: nuclear 37 Comments
21 Apr 2009

How to Sign Up for Cryonics

So easy... just sign up for a quote at Rudi Hoffman's website. Rudi handles more than 90% of the life-insurance-for-cryonics market. For most people, monthly payments for cryonics-dedicated life insurance policies are very cheap. "Less than the cost of an ice cream cone a day", as someone recently put it in an article on cryonics in the Daily Mail.

Update: Rudi is only authorized to sell life insurance in the USA, but you can get similarly low prices around the world.

I also realized that there is an amusing double meaning on the home page: "You will enjoy a sense of clarity and accomplishment as we comfortably help you crystallize and move towards your goals and dreams." (Emphasis added.) Comfortably help us crystallize, huh? :)

Filed under: cryonics 29 Comments
17 Apr 2009

Aubrey de Grey on the Immortality Institute’s Sunday Evening Update

The Immortality Institute (ImmInst) is a grassroots life extension advocacy organization that I co-founded in 2002 with Bruce Klein and Susan Fonseca-Klein. On Sunday, the Executive Director of ImmInst, Justin Loew, will interview the SENS Foundation's Aubrey de Grey on his weekly live update. The show will include a live video feed of Loew as he speaks to Aubrey via audio.

Loew says:

One of the first topics I will want to delve into is the recent restructuring of the Methuselah Foundation - split into 2 entities.

As always, whether research or outreach related, please list questions for Aubrey here in the forum so we can compile a list for the show.

If you're interested in asking Aubrey a question, register for the ImmInst forums and post your question in that thread.

Also, Loew says:

I will want to get a sense of how things have progressed over the last few years. Aubrey's ideas have been around a while and MF has grown quite a bit. What have been and continue to be the biggest stumbling blocks to achieving indefinite life extension? What is the biggest success thus far?

Should be informative for those interested in new developments at the Methuselah and SENS Foundations.

Filed under: life extension 2 Comments
17 Apr 2009

Accelerating Future on Facebook

Follow this blog on Facebook using the NetworkedBlogs app. Thanks for reading!

Filed under: meta No Comments
16 Apr 2009

50 Years of Stupid Grammar Advice

Continuing with the theme Michael Vassar mentioned in our interview -- that "collective wisdom" is really wrong about a whole heck of a lot, and that we should doubt the basic sanity of the world -- Robin Hanson links to an article in The Chronicle of Higher Education, "50 Years of Stupid Grammar Advice", that completely trashes The Elements of Style by Strunk and White, long considered the Bible of writing and grammar. Every serious writer is supposed to have it.

It opens thus:

April 16 is the 50th anniversary of the publication of a little book that is loved and admired throughout American academe. Celebrations, readings, and toasts are being held, and a commemorative edition has been released.

I won't be celebrating.

The Elements of Style does not deserve the enormous esteem in which it is held by American college graduates. Its advice ranges from limp platitudes to inconsistent nonsense. Its enormous influence has not improved American students' grasp of English grammar; it has significantly degraded it.

The author, Geoffrey K. Pullum, is head of linguistics and English language at the University of Edinburgh. The entire article is great, and it makes me completely question the advice I've received from senior writers over the last few years. Let me skip to the last paragraph for the conclusion:

So I won't be spending the month of April toasting 50 years of the overopinionated and underinformed little book that put so many people in this unhappy state of grammatical angst. I've spent too much of my scholarly life studying English grammar in a serious way. English syntax is a deep and interesting subject. It is much too important to be reduced to a bunch of trivial don't-do-this prescriptions by a pair of idiosyncratic bumblers who can't even tell when they've broken their own misbegotten rules.

How could tens of thousands of English teachers have missed all these obvious-in-retrospect arguments over the last 50 years?

Filed under: rationality 11 Comments
16 Apr 2009

Eurekalert: How to deflect asteroids and save the Earth

Here's a nicely worded press release that touts research into asteroid deflection:

You may want to thank David French in advance. Because, in the event that a comet or asteroid comes hurtling toward Earth, he may be the guy responsible for saving the entire planet.

French, a doctoral candidate in aerospace engineering at North Carolina State University, has determined a way to effectively divert asteroids and other threatening objects from impacting Earth by attaching a long tether and ballast to the incoming object. By attaching the ballast, French explains, "you change the object's center of mass, effectively changing the object's orbit and allowing it to pass by the Earth, rather than impacting it."

Sound far-fetched? NASA's Near Earth Object Program has identified more than 1,000 "potentially hazardous asteroids" and they are finding more all the time. "While none of these objects is currently projected to hit Earth in the near future, slight changes in the orbits of these bodies, which could be caused by the gravitational pull of other objects, push from the solar wind, or some other effect could cause an intersection," French explains.

So French, and NC State Associate Professor of Mechanical and Aerospace Engineering Andre Mazzoleni, studied whether an asteroid-tether-ballast system could effectively alter the motion of an asteroid to ensure it missed hitting Earth. The answer? Yes.

"It's hard to imagine the scale of both the problem and the potential solutions," French says. "The Earth has been hit by objects from space many times before, so we know how bad the effects could be. For example, about 65 million years ago, a very large asteroid is thought to have hit the Earth in the southern Gulf of Mexico, wiping out the dinosaurs, and, in 1907, a very small airburst of a comet over Siberia flattened a forest over an area equal in size to New York City. The scale of our solution is similarly hard to imagine.

"Using a tether somewhere between 1,000 kilometers (roughly the distance from Raleigh to Miami) to 100,000 kilometers (you could wrap this around the Earth two and a half times) to divert an asteroid sounds extreme. But compare it to other schemes," French says, "They are all pretty far out. Other schemes include: a call for painting the asteroids in order to alter how light may influence their orbit; a plan that would guide a second asteroid into the threatening one; and of course, there are nukes. Nuclear weapons are an intriguing possibility, but have considerable political and technical obstacles. Would the rest of the world trust us to nuke an asteroid? Would we trust anyone else? And would the asteroid break into multiple asteroids, giving us more problems to solve?"

The asteroid risk is a great one to get people acquainted with the concept of catastrophic risk in general because it is statistically pinned down very well. However, according to some calculations, the risk of a civilization-ending asteroid hitting Earth in the next 100 years is only 1/5,000, leading to a 1/500,000 annual probability. Say we give a 1/500 annual probability estimate of the end of civilization due to nuclear war. (Seems like quite the underestimate.) According to standard cost-benefit analysis, we should assign roughly 1,000 times more importance to the task of minimizing the chance of catastrophic nuclear war than to deflecting asteroids.
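For concreteness, here's the arithmetic behind that comparison as a minimal sketch; the probabilities are just the rough figures from the paragraph above, not independent estimates:

```python
# Back-of-the-envelope restatement of the figures quoted above.
p_asteroid_per_century = 1 / 5000                    # civilization-ending impact risk per 100 years
p_asteroid_per_year = p_asteroid_per_century / 100   # ~1/500,000 per year
p_nuclear_per_year = 1 / 500                         # assumed annual risk of civilization-ending nuclear war

print(p_asteroid_per_year)                        # 2e-06, i.e. about 1 in 500,000
print(p_nuclear_per_year / p_asteroid_per_year)   # 1000.0 -- nuclear risk ~1,000x larger per year
```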

We may see some common miscalculations on this score, as asteroids are new and exciting and nuclear war is the same boring old risk that has been around for over half a century.

Filed under: risks, space 51 Comments
16 Apr 2009

Molecular Manufacturing on Fox News

Michio Kaku, who qualifies as a superlative futurist if there ever was one (he happily discusses technologies like time machines that most transhumanists consider implausible), recently went on Fox News to talk about molecular manufacturing and the "Second Industrial Revolution" (though by some accounts there has already been a Second Industrial Revolution). Note how even the Wikipedia page for "Second Industrial Revolution" mentions molecular manufacturing:

At the start of the 21st century the term "second industrial revolution" has been used to describe the anticipated effects of hypothetical molecular nanotechnology systems upon society. In this more recent scenario, the nanofactory would render the majority of today's modern manufacturing processes obsolete, transforming all facets of the modern economy.

Here is the quote from the interview where Kaku mentions MM:

It could create a second industrial revolution. The first industrial revolution was based on mass production of large machines. The second industrial revolution could be molecular manufacture. We're talking about a new way of manufacturing almost everything. Instead of having robots that are gigantic and clumsy, you now have molecular robots, because what does a virus do? A virus cuts and splices and dices other molecules. So why not use that molecular ability to create a whole plethora of things for the computer age and the electric age? And so this could remove many bottlenecks in our manufacturing industry.

The idea of MM is flopping around in the mainstream, just like it used to before the National Nanotechnology Initiative started labeling any nanoscale research "nanotechnology".

The concept of molecular manufacturing suffers slightly from very often being introduced as part of a suite of possible future technologies. This is what books like Nanosystems are for -- just MM, nothing else, a purely physics-based analysis. That allows you to estimate the long-term plausibility of the technology on its own terms, even if your estimate is guaranteed to be off.

H/t to Chris Phoenix at CRN.

Filed under: nanotechnology 6 Comments
15 Apr 2009

Interview with Singularity Institute President Michael Vassar

Michael Vassar was recently appointed as President of the Singularity Institute for Artificial Intelligence (SIAI), an organization devoted to advocacy and research for safe advanced AI. On a recent visit to the Bay Area from New York, Michael sat down with me in my San Francisco apartment to talk about the Singularity and the future of SIAI.

Accelerating Future: What is the Singularity Institute for?

Michael Vassar: Sooner or later, if humanity survives long enough, someone will create human-level artificial intelligence. After that, the future will depend on exactly what kind of AI was created, with what exact long-term goals. The Singularity Institute's aim is to ensure that the first artificial intelligences powerful enough to matter will steer the future in good directions and not bad ones. Put more technically, the Singularity Institute exists to promote the development of a precise and rigorous mathematical theory of goal systems -- a theory well enough founded that we can make something smarter and more powerful than we are while still knowing it will create good outcomes. This requires extending current theoretical computer science to include rigorous models of reflectivity, and extending current cognitive science to include rigorous models of what outcomes humans consider "good".

AF: Who are the primary employees of SIAI and what do they do?

Vassar: The main employees are Eliezer Yudkowsky, our founding AGI Research Fellow, Anna Salamon and Steve Rayhawk (two more recently recruited AGI researchers), myself, and our administrator Alicia Isaac.

The AGI researchers do mathematical work on AGI and on AGI Friendliness, look at potential recruits, and do side projects such as running www.lesswrong.com and building software for singularity timeline modeling and for singularity educational outreach. I network with donors and potential donors while explaining our organization and organizing the Singularity Summit. Along with some volunteers and affiliates I am also putting together an essay contest in order to attempt to elicit creative ideas and analysis regarding the technological opportunities and dangers humankind faces. I think this is important, as there may be any number of potential global catastrophic risks which are only understood in tiny communities and which deserve wider attention.

AF: Can you tell us the story of how you first found out about SIAI's mission, and why you think it matters?

Vassar: Well, there are really two questions here. The first is how I concluded that SIAI could have an impact. The second is how I concluded that its goal was important.

Regarding the first question, the major influence was my progressive discovery of the inadequacy of the deliberative and decision-making organs of modern society. I saw fundamentalism. I saw the War on Drugs. I saw failure to adequately secure nuclear materials in Eastern Europe and failure to build adequate levees in the world's richest nation. Eventually I integrated all of these facts into my world-view rather than leaving them as dangling exceptions to an unchallenged assumption that the collective behavior of the world around me was basically sane. It became clear that if a technological singularity this century was pretty likely, I should still expect that by default no one with any serious power would react rationally to the possibility until much too late. Warren Buffett, Sam Nunn, and Ted Turner get kudos for being an exception with their Nuclear Threat Initiative, but they are an exception that proves the rule. Yes, the powers that be could really be collectively stupid enough to hear about the singularity, acknowledge it in the occasional speech, but generally ignore it and allow it to happen in whatever manner is the easy default, even if that default is human extinction. They collectively mess up much easier issues all the time.

Speaking with a very wide variety of people, I also discovered that a singularity in the 21st century didn't actually violate common sense about the 21st century. This is because there is no common sense regarding the future, just cliches taken from science fiction. If you ask most smart educated people to describe the world in 20, 50, or 100 years you will get basically the same random mix of sci-fi cliches. Their answer will actually only address a question about a generalized fictional "future" without regard for chronological distance. If you ask them about their own life 20 or 50 years from now you don't elicit any sci-fi schemas. They answer as if in 50 years they will be living in the current year.

My other reason for thinking that SIAI can matter is that there seems to me to be precedent. My read of scientific history, especially of medical history, strongly suggests that small groups of thinkers operating outside of the mainstream but in contact with it really can reach correct conclusions that clash with the mainstream. When they do, they can either fail, like Ignaz Semmelweis, or succeed, like Florence Nightingale, in winning over the mainstream decades earlier than would otherwise occur. Success seems to depend significantly on not believing that in order to be rational one must pretend that everyone else is rational.

Regarding the second question, honestly... it's obvious. The first story about robots was one about them destroying humanity, and with the exception of Asimov, so were most of the others. The first written story we know about was Gilgamesh, about man's quest for immortality. The second, Eden, also. The very act of writing stories at all is in a sense a more easily achievable attempt at a sort of limited immortality. I think that human survival and flourishing in the abstract matter because I'm human, and to be human implies having preferences from which it can be inferred that human survival and flourishing matter.

AF: When will the Singularity happen?

Vassar: Hopefully, as soon as it can happen safely. More probably, before then, in which case not merely humans but humanity itself will perish. I think we almost definitely have a couple more decades. If I could choose, I'd say millennia and hope conventional life extension works out well enough to save most existing lives, but sadly I can't. Thirty or forty years would be enough time for humankind to get its act together if a few hundred capable people made a serious effort starting today. A few dozen capable people already have.

AF: Why can't we just take for granted that the Singularity will go well, due to a gradual merger of humans and machines?

Vassar: I'm not very confident that even our development so far can fairly be said to have gone well from the perspective of past humans. We may be satisfied with what we are, but much of what they valued -- the thrill of violent triumph over their enemies, for instance, or in most cases even that of hunting -- no longer appeals to us. Will love, excitement, curiosity and the other things we value be likewise lost? Must we accept that? Humans may in time merge with machines, but that leaves a great deal unsaid. Will they merge as cells merge into a body, a concerted organizational whole each of whose parts retains the complexity of its ancestors and then some, or in another manner? Cows and chickens frequently merge with contemporary humans, and while this benefits their genes it's not very satisfactory for them.

Today, humans are better than computers at many things, and computers are better than us at others. In such a situation man and machine are complements and cooperation is mutually beneficial. They have not yet filled our ecological niche or economic role. Once computers can do everything that humans can do, they will, by default, fill our role. This need not be disastrous. The automobile has not led to the extinction of the horse. It has filled the horse's economic role but not its companionate role. Humans care about horses for their own sake, protect them, feed them, maintain their health and join their hooves with metal to build upon their natural propensities. If computers value humans for our own sakes, our future can be a far greater improvement over our past than is the life of a well-cared-for domestic horse over dry, fly-bitten savannas filled with implacable predators. If they don't... well, you could merge the CPU of your 286 with contemporary machines, but why would you ever bother to?

AF: What is your favorite technology invented in the last decade and why?

Vassar: Hmm. I haven't used many technologies invented in the last decade, though all of the high-tech stuff I use today is a lot better than the versions that existed a decade ago. Of the technologies that became widespread in the last decade, Google search obviously wins. Among those that became ubiquitous, it's the cell phone. Cellular telephony was the most rapidly transformative innovation ever once it hit the mainstream, but the S-curve for its adoption meant that it was possible to see it coming decades in advance. In the next decade, I expect e-paper and RFID to be big, especially the former, but today I only use the latter occasionally and the former almost never. There are always new medical and energy technologies in the works, but we don't notice the former unless we are sick or the latter at all. I'm fairly hopeful about regenerative medicine in the next decade based on work over the last decade, and I'd be very surprised if the recent downward trend in heart disease didn't continue. There are a couple of promising approaches to cancer and Alzheimer's that could create similar trends starting in the next decade, but it will probably take a lot longer before they become ubiquitous.

AF: When will SIAI start its AI project?

Vassar: AGI is basic science, not R&D. SIAI already has, as mentioned above, three researchers working full time on math and philosophy problems relevant to AGI. In the summers we train undergraduate students in some of what we and others have learned, and by Fall 2010 if not earlier we hope to be funding graduate students to work on AGI research projects. The intention is to fund research along lines that will contribute to analytically comprehensible and thus potentially safe AGI. Fortunately, such research is also more mathematically elegant and intellectually engaging than much that goes on in AI, so we believe that we will have an advantage in attracting the best graduate students to such work once it is funded. Continuing our educational theme with a larger audience, the blog Overcoming Bias and its successor Less Wrong are largely an attempt to enable in humans the AI epistemology that Eliezer Yudkowsky developed in the process of developing a generalized understanding of intelligence.

AF: What do you think of the idea of using online virtual worlds as a place to raise and develop AIs?

Vassar: If we create AIs with human-like cognitive architectures, they will definitely need human-like sensory environments containing virtual bodies complex enough to promote cognitive development. This isn't a very scientifically novel idea, but no one has really made either avatars or virtual worlds with close to the required complexity and it's a very big job. Fortunately, since it's not dependent on any exotic mathematical insights, a large community of volunteers can contribute to our work on this. Even if AIs don't end up using human-like cognitive architectures such worlds may be useful for them and to us.

AF: What do you think is the relative difficulty of substantially enhancing human intelligence via brain-computer interfaces or biotech approaches vs. creating AGI?

Vassar: Biotech approaches to increasing human intelligence seem to be a sure thing in a sense that AGI is not, but the time-frame, expense, and delay of such an approach mean that it probably remains decades away. The world community is also likely to be much more sensitive to ethical issues raised by biotech than even by much more serious ethical issues if the relevant technology is computational. For instance, a lab that tested whether drugs, genetic modifications or infusions of stem cells could be used to increase the intelligence of chimpanzees to human levels would be subject to severe ethical criticism but a project that used evolutionary algorithms to try to evolve a human level AI from a chimpanzee level AI would be much less criticized, even if the latter involved creating and killing billions of simulated organisms (killing billions to produce a tiny change being what natural selection does).

Given SIAI's limited resources relative to national scientific research institutions, we intend to leave biotech and neurotech approaches to others, except possibly in filling the role of an occasional outsider critic of ethically troubling research. When building software to model likely technological dynamics, however, and thus to better predict time to singularity, feedback from biotech and neurotech will be treated as a major reason to expect technological acceleration, especially when one looks beyond the next couple of decades.

AF: What plans do you have for SIAI over the next couple years?

Vassar: For the last few years SIAI has been heavily focused on developing rationality training materials and an online rationalist community. In the next few months we should see if the community can survive on its own. Some of the materials will be made into a book, which will hopefully contribute to the training of young thinkers. Developing better employee recruitment and training techniques will be a related focus of effort more directly applicable to our goal of creating Friendly AI.

I want to institute a general change in SIAI's direction over the next year. It is my intention to bring to the forefront a number of technology related ethical concerns that go beyond SIAI's traditional focus on unfriendly AI. Another important change is that I expect to use our community to launch a large number of small to mid-sized science projects, build futurism and catastrophic risk analysis tools and collaborate more closely with academia.

Other near term efforts will relate to expanding awareness of the Singularity, existential risk and rationality on the East Coast and in Europe, and increasing the scale of the annual Singularity Summit.

AF: Why should someone regard SIAI as a serious contender in AGI?

Vassar: The single biggest reason is that so few people are even working towards AGI. Of those who are, most are cranks of one sort or another. Among the remainder, there is a noticeable but gradual ongoing shift in the direction of provability, mathematical rigor, transparency, clear designer epistemology and the like, for instance in the work of Marcus Hutter and Shane Legg. To the extent that SIAI research and education efforts contribute to rigorous assurance of safety in the first powerful AGIs, that is a victory as great as the creation of AGI by our own researchers.

A secondary reason is that we can do better than academia at making effective use of extremely intelligent nonconformists, the category of person from which almost all really radical innovations emerge. The average level of ability among our researchers may not be higher than that among professors at the best research universities, but their focus is. Within academia, junior faculty must divide their time between teaching, bureaucratic committee work or grant writing, and 'safe' research that has a high probability of contributing to a tenure case. Focus on a high risk, pathbreaking research agenda must often wait until after tenure, but psychological research (see, e.g. Richard Nisbett's recent book) indicates that fluid intelligence, the ability to solve novel problems and acquire new skills, peaks in the late 20s and declines thereafter.

Filed under: SIAI, singularity 37 Comments
13 Apr 2009

Wikipedia on Me

While reviewing the Lifeboat Foundation page on Wikipedia, I noticed that someone put up a slightly shoddy Wikipedia article on me recently that has this flattering opener:

A well known and often quoted transhumanist, singularitarian and moderately extropian blogger regularly publishing his views and insights on the blog Accelerating Future. His blog has recently become more visited than several major blogs casting Michael headfirst into transhumanist celebrity status, and his posts are now widely regarded as canon for the movement.

Makes me sound alright, but it's slightly silly. "Headfirst into transhumanist celebrity status" especially causes snickering, and I'll address that below.

Clarification: I don't self-identify as "extropian", even though I have many extropian friends and think that Max More and Natasha Vita-More are great and fun people to be around. I think "transhumanism" and "singularitarian" are obscure enough self-labels as it is. If you give yourself too many niche labels, it's like jumping up and down and saying, "legitimate publications, please never do an article on me!" Still, I found the Extropian Principles to be an inspiring document when I read it, and my first exposure to movement transhumanism (in 2001) was through Extropy.org, though I quickly found a bunch of other sites.

Back to the Wikipedia article: it's tagged as lacking references from reliable third-party publications. Theoretically, I guess it might be possible to drag together an article based on the only two mainstream publications that have referenced me or my blog -- Psychology Today and Attack of the Show -- but that isn't much. It's probably best to wait until more third-party publications decide to do write-ups on Accelerating Future. Otherwise it will just piss off the Wikipedia editors, who will fight to prevent there being an article on me even when and if I do gain a higher profile.

"Transhumanist celebrity", that's a cute line, and I appreciate everyone who reads this blog, but a "transhumanist celebrity" is practically a nobody in the wider world. References like this only reinforce the notion that transhumanism is a marginal navel-gazing subculture with little thought of the mainstream. Instead of marginalizing ourselves, we have to engage in mainstream policy discussions and philosophical discourses, and face the truth that we are a small movement with very limited resources. Otherwise, transhumanism is no different than thousands of other minor echo chamber-like philosophical schools.

Also, the info in that Wikipedia article is generally a bit old. My up-to-date bio is here.

Another point: punditry is well and good, but useless if it has no impact on research that can actually help people. This is why the most successful branches of transhumanism are working on focused projects and do not actually market themselves as "transhumanist" at all. There is definitely crossover between punditry and helpful research, however, so it's not all clear-cut. Many of the heroes of the transhumanist movement are not visible pundits but researchers in AI, nanotechnology, and life extension who are actually doing the hard work of experimentation and engineering. My role is primarily that of a philosophical synthesizer, technology reporter, analyst, and communicator. I exist mostly to broadcast the work of useful researchers (who often lack the skills to communicate to a wide audience) to the public, in the hope of inspiring interest that provides them with funding. They are the real heroes: talented scientists, mathematicians, entrepreneurs, technology angels, and inventors.

Filed under: me, transhumanism 20 Comments
7 Apr 2009

AGI Discussion

Discussion of AGI ethics at Convergence 2008. From left going clockwise, dunno, Anna Salamon, Peter Voss, Matt Bamberger, dunno, Steve Rayhawk, dunno, dunno, me, Brad Templeton.

Filed under: AI, images 47 Comments
7 Apr 2009

Wikipedia’s Friendly AI Entry is Actually Good

At some point, someone competent updated the Friendly AI page on Wikipedia and now it serves as a great summary of what this is all about:

Many experts have argued that AI systems with goals that are not perfectly identical to or very closely aligned with our own are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. Decades ago, Ryszard Michalski, one of the pioneers of Machine Learning, taught his Ph.D. students that any truly alien mind, to include machine minds, was unknowable and therefore dangerous. More recently, Eliezer Yudkowsky has called for the creation of “Friendly AI” to mitigate the existential threat of hostile intelligences. Stephen Omohundro argues that all advanced AI systems will, unless explicitly counteracted, exhibit a number of basic drives/tendencies/desires because of the intrinsic nature of goal-driven systems and that these drives will, “without special precautions”, cause the AI to act in ways that range from the disobedient to the dangerously unethical.

According to the proponents of Friendliness, the goals of future AIs will be more arbitrary and alien than commonly depicted in science fiction and earlier futurist speculation, in which AIs are often anthropomorphised and assumed to share universal human modes of thought. Because AI is not guaranteed to see the "obvious" aspects of morality and sensibility that most humans see so effortlessly, the theory goes, AIs with intelligences or at least physical capabilities greater than our own may concern themselves with endeavours that humans would see as pointless or even laughably bizarre. One example Yudkowsky provides is that of an AI initially designed to solve the Riemann hypothesis, which, upon being upgraded or upgrading itself with superhuman intelligence, tries to develop molecular nanotechnology because it wants to convert all matter in the Solar System into computing material to solve the problem, killing the humans who asked the question. For humans, this would seem ridiculously absurd, but as Friendliness theory stresses, this is only because we evolved to have certain instinctive sensibilities which an artificial intelligence, not sharing our evolutionary history, may not necessarily comprehend unless we design it to.

Meanwhile, most contemporary futurists and many readers of this blog see AI as likely to share universal human modes of thought, because that's all they've seen in science fiction. Their conception of alien minds is based on fantasy rather than cognitive science. They either dismiss the possibility of AI because they like to view general intelligence as mystical or implausibly complex (Hofstadter), or they think everything will work itself out because "optimism" is the best policy for domains in which we lack understanding (Kurzweil).

Filed under: friendly ai 6 Comments