The Archaism of the Old Rich

To amuse myself with some light reading after the obscenely lengthy Golden Bough, I’m going through Class: A Guide Through the American Status System by Paul Fussell, a really amusing book. Here’s a recent review by The Atlantic.

A passage in the book confirms something I’ve suspected all along: the upper classes (and would-be upper classes) have a distinct antipathy toward thinking about the future. Here’s the passage:

We’ve already seen that organic materials like wool and wood outrank man-made, like nylon and Formica, and in that superiority lurks the principle of archaism as well, nylon and Formica being nothing if not up-to-date. There seems a general agreement, even if often unconscious, that archaism confers class. Thus the middle class’s choice of “colonial” or “Cape Cod” houses. Thus one reason Britain and Europe still, to Americans, have class. Thus one reason why inheritance and “old money” are such important class principles. Thus the practice among top-out-of-sight and upper classes of costuming their servants in some archaic livery, even such survivals …

Nuclear Weapon UAVs

It isn’t mentioned often, but there is another dimension to the nuclear threat that could become real within 10 to 20 years: the continued miniaturization of nuclear weapons to the point where a weapon consists of several UAVs that converge on a location, assemble into a complete bomb, and detonate. Redundancy could ameliorate the risk of any single UAV being shot down; a toy calculation below shows how quickly the odds improve.
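
To put a number on the redundancy point (every figure here is hypothetical, purely for illustration): if each component UAV is intercepted independently with some fixed probability, the chance that enough components arrive to assemble a working device is a binomial tail sum.

```python
from math import comb

def p_enough_arrive(n: int, k: int, p_intercept: float) -> float:
    """Probability that at least k of n components survive,
    assuming each is intercepted independently with p_intercept."""
    p_survive = 1.0 - p_intercept
    return sum(
        comb(n, i) * p_survive**i * p_intercept**(n - i)
        for i in range(k, n + 1)
    )

# Hypothetical: 8 components launched, any 6 suffice to assemble,
# and each one has a 30% chance of being intercepted.
print(f"{p_enough_arrive(8, 6, 0.3):.3f}")  # ~0.552
```

Spares shift the odds quickly: under the same assumptions, launching 10 components instead of 8 raises the figure to about 0.85.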

There are numerous strategic and military advantages that make the eventual development of such a weapon likely. Most obviously, it avoids using a missile, which shows up quite definitively on a radar screen; for a first strike, this is tremendously important. Another advantage could be self-detonation in the event of discovery, something difficult to implement with conventional missiles.

Update: this approach would have a significant advantage over delivery by a single UAV, because a warhead small enough to fit on one UAV would have frustratingly low yield. A warhead assembled from converging components could have arbitrary yield while retaining the stealth benefits of UAVs.

How to Sign Up for Cryonics

So easy… just sign up for a quote at Rudi Hoffman’s website. Rudi handles more than 90% of the market for cryonics-dedicated life insurance. For most people, monthly payments on these policies are very cheap: “less than the cost of an ice cream cone a day,” as someone recently put it in a Daily Mail article on cryonics.

Update: Rudi is only authorized to sell life insurance in the USA, but similarly low prices are available around the world.

I also realized that there is an amusing double meaning on the home page: “You will enjoy a sense of clarity and accomplishment as we comfortably help you crystallize and move towards your goals and dreams.” (Emphasis added.) Comfortably help us crystallize, huh? :)

Aubrey de Grey on the Immortality Institute’s Sunday Evening Update

The Immortality Institute (ImmInst) is a grassroots life extension advocacy organization that I co-founded in 2002 with Bruce Klein and Susan Fonseca-Klein. On Sunday, ImmInst’s Executive Director, Justin Loew, will interview the SENS Foundation’s Aubrey de Grey on Loew’s weekly live update. The show will include a live video feed of Loew as he speaks with Aubrey via audio.

Loew says:

One of the first topics I will want to delve into is the recent restructuring of the Methuselah Foundation – split into 2 entities.

As always, whether research or outreach related, please list questions for Aubrey here in the forum so we can compile a list for the show.

If you’re interested in asking Aubrey a question, register for the ImmInst forums and post your question in that thread.

Also, Loew says:

I will want to get a sense of how things have progressed over the last few years. Aubrey’s ideas have been around a while and MF has grown quite a bit. What have been and continue to be …

50 Years of Stupid Grammar Advice

Continuing with a theme from my interview with Michael Vassar, that “collective wisdom” is really wrong about a whole heck of a lot and that we should doubt the basic sanity of the world: Robin Hanson links to an article in The Chronicle of Higher Education, “50 Years of Stupid Grammar Advice”, which completely trashes The Elements of Style by Strunk and White. That little book has long been considered the Bible of writing and grammar; every serious writer is supposed to have it.

It opens thus:

April 16 is the 50th anniversary of the publication of a little book that is loved and admired throughout American academe. Celebrations, readings, and toasts are being held, and a commemorative edition has been released. I won’t be celebrating.

The Elements of Style does not deserve the enormous esteem in which it is held by American college graduates. Its advice ranges from limp platitudes to inconsistent nonsense. Its enormous influence has not improved American students’ grasp of English grammar; it has significantly degraded it.

The author, Geoffrey K. Pullum, is head …

EurekAlert: How to Deflect Asteroids and Save the Earth

Here’s a nicely worded press release that touts research into asteroid deflection:

You may want to thank David French in advance. Because, in the event that a comet or asteroid comes hurtling toward Earth, he may be the guy responsible for saving the entire planet.

French, a doctoral candidate in aerospace engineering at North Carolina State University, has determined a way to effectively divert asteroids and other threatening objects from impacting Earth by attaching a long tether and ballast to the incoming object. By attaching the ballast, French explains, “you change the object’s center of mass, effectively changing the object’s orbit and allowing it to pass by the Earth, rather than impacting it.”

Sound far-fetched? NASA’s Near Earth Object Program has identified more than 1,000 “potentially hazardous asteroids” and they are finding more all the time. “While none of these objects is currently projected to hit Earth in the near future, slight changes in the orbits of these bodies, which could be caused by the gravitational pull of other objects, push from the solar wind, or some other …
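
For a rough sense of why a slow-acting nudge can work at all, here is a back-of-the-envelope sketch (the numbers are illustrative and not drawn from French’s research): to first order, a small velocity change applied years before the encounter shifts the arrival point by roughly the velocity change times the lead time, and orbital mechanics generally amplifies along-track changes beyond this.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
EARTH_RADIUS_KM = 6371.0

def rough_miss_distance_km(delta_v_cm_s: float, lead_time_years: float) -> float:
    """First-order estimate: displacement ~ delta-v * lead time.
    Ignores orbital dynamics, which tend to amplify the effect."""
    delta_v_km_s = delta_v_cm_s * 1e-5  # cm/s -> km/s
    return delta_v_km_s * lead_time_years * SECONDS_PER_YEAR

# Illustrative: a 1 cm/s nudge applied 20 years before a projected impact.
d = rough_miss_distance_km(1.0, 20)
print(f"~{d:,.0f} km, about {d / EARTH_RADIUS_KM:.1f} Earth radii")  # ~6,312 km
```

Even on this crude accounting, a centimeter per second applied two decades out moves the arrival point by about one Earth radius, which is why gentle methods like a tether and ballast are worth studying.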

Molecular Manufacturing on Fox News

Michio Kaku, who qualifies as a superlative futurist if there ever was one (he discusses technologies, like time machines, that most transhumanists consider implausible), recently went on Fox News to talk about molecular manufacturing and the “Second Industrial Revolution” (though by some accounts there has already been a Second Industrial Revolution). Note that even the Wikipedia page for “Second Industrial Revolution” mentions molecular manufacturing:

At the start of the 21st century the term “second industrial revolution” has been used to describe the anticipated effects of hypothetical molecular nanotechnology systems upon society. In this more recent scenario, the nanofactory would render the majority of today’s modern manufacturing processes obsolete, transforming all facets of the modern economy.

Here is the quote from the interview where Kaku mentions molecular manufacturing:

It could create a second industrial revolution. The first industrial revolution was based on mass production of large machines. The second industrial revolution could be molecular manufacture. We’re talking about a new way of manufacturing almost everything. Instead of having robots that are gigantic and clumsy, you now have …

Interview with Singularity Institute President Michael Vassar

Michael Vassar was recently appointed as President of the Singularity Institute for Artificial Intelligence (SIAI), an organization devoted to advocacy and research for safe advanced AI. On a recent visit to the Bay Area from New York, Michael sat down with me in my San Francisco apartment to talk about the Singularity and the future of SIAI.

Accelerating Future: What is the Singularity Institute for?

Michael Vassar: Sooner or later, if humanity survives long enough, someone will create human-level artificial intelligence. After that, the future will depend on exactly what kind of AI was created, with what exact long-term goals. The Singularity Institute’s aim is to ensure that the first artificial intelligences powerful enough to matter will steer the future in good directions and not bad ones. Put more technically, the Singularity Institute exists to promote the development of a precise and rigorous mathematical theory of goal systems — a theory well enough founded that we can make something smarter and more powerful than we are while still knowing it will create good outcomes. This …

Wikipedia on Me

While reviewing the Lifeboat Foundation page on Wikipedia, I noticed that someone recently put up a slightly shoddy Wikipedia article on me, with this flattering opener:

A well known and often quoted transhumanist, singularitarian and moderately extropian blogger regularly publishing his views and insights on the blog Accelerating Future. His blog has recently become more visited than several major blogs casting Michael headfirst into transhumanist celebrity status, and his posts are now widely regarded as canon for the movement.

Makes me sound alright, but it’s slightly silly. “Headfirst into transhumanist celebrity status” especially causes snickering, and I’ll address that below.

Clarification: I don’t self-identify as “extropian”, even though I have many extropian friends and think that Max More and Natasha Vita-More are great and fun people to be around. I think “transhumanist” and “singularitarian” are obscure enough self-labels as it is. If you give yourself too many niche labels, it’s like jumping up and down and saying, “legitimate publications, please never do an article on me!” Still, I found the Extropian Principles to be an …

Wikipedia’s Friendly AI Entry is Actually Good

At some point, someone competent updated the Friendly AI page on Wikipedia, and it now serves as a great summary of what this is all about:

Many experts have argued that AI systems with goals that are not perfectly identical to or very closely aligned with our own are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. Decades ago, Ryszard Michalski, one of the pioneers of Machine Learning, taught his Ph.D. students that any truly alien mind, to include machine minds, was unknowable and therefore dangerous. More recently, Eliezer Yudkowsky has called for the creation of “Friendly AI” to mitigate the existential threat of hostile intelligences. Stephen Omohundro argues that all advanced AI systems will, unless explicitly counteracted, exhibit a number of basic drives/tendencies/desires because of the intrinsic nature of goal-driven systems and that these drives will, “without special precautions”, cause the AI to act in ways that range from the disobedient to the dangerously unethical.

According to the proponents of Friendliness, the goals of future AIs will be more arbitrary and alien …
