Accelerating Future

Transhumanism, AI, nanotech, the Singularity, and extinction risk.

14Jan/10

Foresight 2010: the Synergy of Molecular Manufacturing and AGI

I will be speaking at Foresight 2010 this weekend in Palo Alto. My presentation, "Don't Fear the Singularity, but Be Careful: Friendly AI Design", will be both exciting and awesome. You can register here.

If you can't make it out to Palo Alto, the whole thing will be streamed live by TechZulu, which did the same for the recent H+ Summit. Here is their Alexa data for reference.

Hod Lipson of "computer program self-discovers the laws of physics" fame will be there, along with familiar faces and names such as Rob Freitas, Ralph Merkle, Robin Hanson, Paul Saffo, David Friedman, Brian Wang, and Monica Anderson. Salim Ismail, Executive Director of Singularity University, will speak late on Sunday.

Filed under: meta 9 Comments
9Nov/09

Commenting Matters

Did you know that David Chalmers commented on my recent critique of him? True fact. Also see my recent comment to Vladimir Nesov on why psychedelics can be useful for philosophy, though Vlad wasn't convinced in the end, and I will be thinking about his points.

You know, anyone can comment on these posts, but I recently made it a bit more difficult because I thought I was getting too many dumb, poorly-thought-out comments. Comments reflect on the blog as a whole. It used to be that all the comments here were good; then the blog got more popular, and bad comments started sneaking in. Once people start making short, snide, and stupid comments, it's a runaway effect where everyone feels they should make them, and the whole comments section goes to hell, like it has on every mainstream site.

I want to tell everyone how to register for the site so you can comment. Just visit this link and you can create a registration specifically for this site, or log in with your OpenID. Then you can comment. However, I reserve the right to delete your comments and/or ban you if I think they contribute to the entropic degradation of the comments section. Cheers, and happy complex thinking!

Filed under: meta 62 Comments
28Jun/09

Guest Post at George’s on Gaianism

I wrote a post at George Dvorsky's called "Dismiss Gaianism", where I cantankerously dismiss naive environmentalism and go on a bit about what I think would actually help the environment. I had an excuse to post an image of the Mana Tree, which is always awesome. I am in a rainforest obsession phase.

Filed under: meta No Comments
9Jun/09

Guest Blogging at Sentient Developments

I am guest blogging over at George's place. I will contribute 5 posts over the course of this month.

I started off with a post on animal welfare, and will probably devote a couple more posts to the topic over the next few weeks. I would write more about it here, but the near-universal disregard for animal welfare and hyper-masculine meat enthusiasm is so appalling that my faith in humanity's essential goodness is severely shaken when I read comments on it.

Filed under: meta Comments Off
18May/09

Accelerating Future on Twitter, Blog Posts by Category

I've been on Twitter for a while; you may have noticed the link in my blogroll. Also, my blog posts by category are now available, in two formats:

http://acceleratingfuture.com/michael/afblogindexbycategory.html
http://acceleratingfuture.com/michael/afblogindexbycategoryv2.html

Thanks again to Peer for setting these up.

Filed under: meta 1 Comment
13May/09

Bookstore Open

I created an Amazon bookstore for this blog, which has some of my favorite books and a few I haven't read but have heard good things about and will read soon. (Like the Hume.)

As far as books you might not have, Moral Machines is really good, as is House of Cards, which gives psychology and psychotherapy a spanking. Robyn Dawes is one of my favorite thinkers in the field of heuristics and biases. I will never forget how impressed I was when I read his paper about how simple mathematical models outperformed doctors at diagnosing a variety of medical problems. In the experiments, doctors were consistently overconfident and overweighted certain variables even when they knew very well that those variables were not that relevant. It also changed my opinion about the feasibility of AI -- most people are not aware of this literature and would probably assume that a multi-exabyte human would routinely outperform a kilobyte-sized model, though that isn't the case in many domains.
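To make the comparison concrete, here is a minimal sketch, with entirely hypothetical data and cue names, of the kind of "improper" unit-weight linear model Dawes wrote about: standardize each cue, give each one an equal weight of +1 or -1 according to its known direction, and sum.

    # A minimal sketch of a Dawes-style "improper" unit-weight linear model.
    # Hypothetical data: two diagnostic cues measured for five patients.

    def zscore(values):
        mean = sum(values) / len(values)
        sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
        return [(v - mean) / sd for v in values]

    def unit_weight_scores(cue_columns, directions):
        """Standardize each cue, apply a +1/-1 weight, and sum per case."""
        standardized = [zscore(col) for col in cue_columns]
        n_cases = len(cue_columns[0])
        return [sum(d * col[i] for d, col in zip(directions, standardized))
                for i in range(n_cases)]

    cue_a = [1.0, 3.2, 2.1, 4.0, 0.5]  # higher value = more indicative
    cue_b = [7.0, 2.0, 5.5, 1.0, 8.0]  # higher value = less indicative
    print(unit_weight_scores([cue_a, cue_b], [+1, -1]))
    # Rank cases by this sum. No fitted weights at all -- yet models this
    # crude matched or beat clinical judgment in the studies Dawes reviewed.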

In my opinion, the field of heuristics and biases is so relevant to AI research that I find it difficult to take seriously any AI researcher who isn't at least somewhat familiar with it. Also in the field of heuristics and biases, I recommend Simple Heuristics That Make Us Smart, and it's interesting how the presentation of heuristics here differs from the traditional pessimism about humans in much of that literature. Instead of emphasizing where heuristics go wrong, Gigerenzer emphasizes the success of simple models. But why do we consistently use kilobyte strategies in our exabyte brains?
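For a flavor of what Gigerenzer means by a successful simple model, here is a minimal sketch of his "take the best" heuristic, with invented cues and cities: to judge which of two options scores higher on some criterion, check cues in order of validity and let the first cue that discriminates decide, ignoring everything else.

    # A minimal sketch of Gigerenzer's "take the best" heuristic.
    # Cues and cities below are hypothetical, for illustration only.

    def take_the_best(option_a, option_b, cues_by_validity):
        """Return which option the heuristic guesses scores higher."""
        for cue in cues_by_validity:       # most valid cue checked first
            a, b = option_a[cue], option_b[cue]
            if a and not b:
                return "A"
            if b and not a:
                return "B"
        return "guess at random"           # no cue discriminates

    cues = ["has_major_airport", "is_state_capital", "has_university"]
    city_a = {"has_major_airport": True,  "is_state_capital": False, "has_university": True}
    city_b = {"has_major_airport": False, "is_state_capital": True,  "has_university": True}
    print(take_the_best(city_a, city_b, cues))  # "A": the airport cue decides

Despite throwing away most of the available information, heuristics like this performed surprisingly well in Gigerenzer's comparisons, which is exactly the puzzle raised above.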

I put Enough and Beyond Therapy on there because they're the gold standard in transhumanist criticisms. A one-two punch, from the left and right, if you will. I think most people will find that the transhumanist philosophy still stands very resolutely and easily after these token assaults.

I put The Golden Age on there because it's my favorite sci-fi book. I read it the summer before my first transhumanist conference (Transvision 2003), where I gave my first talk (on the Singularity, of course -- feel free to give a shout-out if you were in the audience) and met many people that I still associate with, like Aubrey, Eliezer, James H., Michael Vassar, et al., so some sentimentality is associated with it in my mind.

Filed under: meta 10 Comments
9May/09

Accelerating Future Master List

Here is a list of every blog post I've made since this blog was founded. Here's another version with different formatting.

Thanks to Peer Infinity for compiling the list.

Update: by category.

Filed under: meta 5 Comments
17Apr/09

Accelerating Future on Facebook

Follow this blog on Facebook using the NetworkedBlogs app. Thanks for reading!

Filed under: meta No Comments
14Jan/09

The Accelerating Future Family of Sites

Did you know? Accelerating Future is not just this blog where I rant about futuristic topics, it is a domain... a domain of several interesting blogs and sites. Blogs written by my friends Tom, Steven, and Jeriaska. Also, there's the Accelerating Future People Database, put together by Jeriaska, and a small database of papers by the intellectual powerhouse known as Michael Vassar. Other interesting things are in the works, as always, and if you want to accelerate their fruition, don't hesitate to donate by clicking the little bit of text under where it says "support" in the sidebar.

In particular, in recent months we've seen a lot of postings by Jeriaska at the Future Current blog, including transcripts of many talks at the Global Catastrophic Risks Conference, AGI-08, Aging 2008, you name it. On the sidebar there are also links to videos of all these events. I can say with some authority that the significance of these gatherings to the future of humanity probably exceeds that of the Academy Awards, or even the MTV Music Awards. The Future Current blog was linked by Bruce Sterling over at WIRED the other day -- congrats!

Filed under: meta 3 Comments
12Jun/08

Bloggingheads.tv Interview — Horgan and Yudkowsky

On Saturday, Eliezer Yudkowsky, Research Fellow at the Singularity Institute for Artificial Intelligence (SIAI), talked to John Horgan, science writer and author of works like Rational Mysticism and a recent piece in the IEEE Spectrum critical of near-term AI. The video discussion took place on Bloggingheads.tv, a video site co-founded by Robert Wright, author of Nonzero and The Moral Animal.

Some of the interview is funny and light-hearted. But overall, I thought this one had major problems. They talk past each other, and invest insufficient effort in directly addressing each other's concerns.

Horgan thinks that those working towards human-equivalent AI are loonies and essentially religious, and Yudkowsky goes off on tangents and rationality sermons far more frequently than is appropriate. On the SIAI blog entry regarding the interview, Horgan says, in reference to the possibility of talking with other people from the organization, "I’m sure we can have a more coherent, constructive conversation than the one between me and Eliezer". Translation: the interview was incoherent and unconstructive.

Summary of first twenty minutes:

0:00 - 1:00 Introduction
1:00 - 3:00 Eliezer's childhood
4:00 - 6:00 How was he exposed to the Singularity idea?
6:00 - 7:00 Is the Singularity something that will happen or should happen?
7:00 - 9:00 Eliezer's life history in the teenage years and early 20s
9:00 - 11:00 What did Eliezer teach himself to become an AI researcher?
11:00 - 15:00 How was SIAI founded?
15:00 - 18:00 Which vision of the Singularity is SIAI associated with?
18:00 - 20:00 Yudkowsky discusses Kurzweil and his conception of the Singularity.

The trainwreck begins with the way Eliezer describes his childhood. When asked if he had an interest in science and philosophy, he says "I was a bit too bright as a kid. Fairly well-known syndrome. Most of my intelligent conversations were with books because the adults weren't interested in talking to me and the kids couldn't keep up." At this point, the empathy with 95% of the audience is immediately severed. Even though I went through a similar experience, and many intelligent people have, it's memetic suicide to call attention to it, because it sounds like bragging.

Maybe Eliezer underestimates the sensitivity of human culture to bragging. The reason bragging is so despised is that it's often highly correlated with overconfidence, disregard for others, and other negative personality characteristics. Now, I don't mean to say that Eliezer is overconfident or has a disregard for others. But he should be smart enough to realize that most people are totally insecure and hate to hear other people say anything that remotely sounds like bragging. In a typical conversation, you're maybe allowed to brag about one thing for 3-5 seconds, and that's it. Otherwise it sets off alarm bells that say the other person is a jerk, whether they really are or not. That is social reality.

In response to Horgan's question about his childhood interest in science, Eliezer also says, "Interest in science somehow doesn't sound extreme enough". This is funny and I can identify as well! More light-hearted and interesting stuff about Eliezer's childhood follows this for a few minutes.

Then, Eliezer explains the concept of a Vingean Singularity. Horgan doesn't seem to get it. When confronted with the idea and asked to describe how he reacted to it, Eliezer says "it just seemed so obviously correct". This is another example of Eliezer being excessively honest instead of formulating a response in a way that would maintain empathy with his interviewer and the audience, and establish stepping stones for future understanding. You thought it was obviously correct right away -- great! These guys don't, and they just feel alienated when you tell them you suddenly saw it as so obviously correct. It reinforces the "elitist egghead" stereotype that we have every reason to avoid.

Next, when asked if he thinks the singularity is inevitable, Eliezer says he initially ignored the possibility of x-risk getting in the way, then eventually started taking it into account. Still, this makes it look like he considers the singularity entirely inevitable if humanity doesn't wipe itself out, and the casual, matter-of-fact way he says it continues to widen the communication gap between him and Horgan, who is obviously not so sure.

Later, Horgan struggles to pronounce "singularitarian". Sing-ul-ar-it-ar-ian. If you can say the syllables one at a time, you can say them all at once! I realize the word is difficult, and empathize with Horgan. I prefer the term "intelligence enhancement advocate" myself. I sometimes worry that critics of intelligence enhancement advocacy like to latch onto the oddness of the word "singularitarian" and use it as a tool to show how those enthusiastic about the near-term future of AI are dyed-in-the-wool batshit crazy. I don't think that's what Horgan is doing here, but I can only imagine he would be tempted.

Next, Eliezer says the human brain has a messed-up architecture. This is true ("haphazard" or "suboptimal", which he uses later, are better terms, less value-laden), but the matter-of-fact way he presents it is extremely distracting, unsubtle, and jarring to the average listener. It damages his credibility. He talks as if, once you study enough cognitive science, it immediately becomes clear that the brain is "messed up", but guess what -- there are cognitive scientists out there who know plenty about the brain and still treat it as an act of God, an elegant machine that was purposefully designed.

For info on how the human brain has major problems, see Kluge: The Haphazard Construction of the Human Mind by Gary Marcus. Eliezer could do himself a huge favor if he pointed to well-established sources when making his more controversial-sounding claims. Otherwise, the audience gets suspicious that he is a crackpot with wild ideas. Now, it so happens that the notion that the human mind has a haphazard construction is gaining wide currency among cognitive scientists, but your typical Internet intellectual may not know this. In fact, they might get pissed off if you present it in a totally non-subtle way, as Eliezer does in every interview; the strength of his phrasing is very distracting, both to the interviewer and the audience.

For an example of how the human brain is suboptimal, Eliezer points to the fact that neurons are way slower than transistors. But wait -- this is a bad example, because many people are doubtful that minds can be made out of silicon, even in principle. Far better examples come from the heuristics and biases literature, which demonstrates systematic flaws in human reasoning without invoking arguments over the plausibility of arranging transistors into minds. I thought that was what he would use to give examples, and was disappointed he used the controversial transistor reference.
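To illustrate the kind of example I mean, here is a minimal sketch of base-rate neglect, one of the classic heuristics-and-biases results: given a rare condition and a decent test, people (famously including physicians in the classic studies) vastly overestimate the probability of disease given a positive result, because they ignore the base rate. The numbers below are hypothetical but typical of how the problem is posed.

    # Base-rate neglect, worked through with Bayes' theorem.
    # Hypothetical numbers, typical of how the classic problem is posed.

    prior = 0.001          # 1 in 1,000 people have the disease
    sensitivity = 0.99     # P(positive | disease)
    false_positive = 0.05  # P(positive | no disease)

    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive

    print(f"P(disease | positive) = {posterior:.3f}")
    # ~0.019 -- about 2%, while intuition (and many doctors in the
    # classic studies) says something closer to 99%.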

Next, he talks about how SIAI was founded and the progression of his attitude towards the problem of AGI. This is interesting stuff if you haven't heard it all before.

Horgan plugs the IEEE special issue on the Singularity that I've been responding to. He says some of the articles are very positive, and others, like his own, are critical. He says he likes the "who's who in the singularity" chart. As far as I can tell, the vast majority of articles are negative. An article about how some cognitive scientist is creating a model of the brain, written by an IEET intern, is not a "positive article". This is fluff, used because they either couldn't find or didn't want to include a genuinely pro-Singularity article. Next time, invite me to contribute.

Next question: which vision of the Singularity is SIAI associated with? Good answers by Yudkowsky. The paper he's thinking of is "Speculations Concerning the First Ultraintelligent Machine". Apparently it isn't online. I thought I had a copy and uploaded it somewhere to this domain, but can't find it. Oh well.

Horgan brings up how Kurzweil links together the "singularity" with immortality. Yudkowsky responds well again: Kurzweil over-relies on Moore's law graphs, in which computational improvement doesn't even speed up when the smarter-than-human intelligence barrier is broken, and he treats a million times human computing power as equivalent to a million times human intelligence.

Horgan points out that Kurzweil is vague about how a Singularity transition would happen in his vision of it. Yudkowsky uses his usual talking points, emphasizing intelligence (cognitive skills, for those of you who equate intelligence with book smarts) as a critical quantity in the coming transition.

Later on, Horgan expresses skepticism about AI based on the failed promises of the past. He is answered with more tangents on rationality that don't address his central concerns in a straightforward way. Horgan's general argument is this: they promised us AI in the 60s, they didn't deliver it, therefore it won't happen in the foreseeable future.

I'm not going to summarize the rest point-by-point, as it was frustrating enough watching it the first time. In any case, if you have an hour to spend, check out the video.

Filed under: meta 50 Comments
15May/08

San Francisco Transhumanists?

Are there any transhumanists living in San Francisco who would be willing to open up their home for a meeting of the Bay Area Transhumanist Association? Our usual gracious host is on an extended vacation. I asked the list, but no one stepped forward. We'd prefer a house or a large flat, attendance would be in the 30-50 range. My Sunset apartment is a little small for this number.

Thank you for your support. Many interesting things are in the works with transhumanists.

Email me. (Click my image on the left sidebar for email.) We can always meet ahead of time for coffee to get acquainted.

Filed under: meta 1 Comment
7May/08

Accelerating Future on Attack of the Show

Andres Colon of Thoughtware.tv just wrote to inform me that this blog was covered on Tele-Vision, specifically G4.TV's Attack of the Show. I didn't catch it, so use this thread as a place to post your information or reactions if you did.

First TV coverage ever! Yay!

Here it is:

Brief mention, but still. TeeVee! Can you spot the images from my blog integrated into the starting collage? Can you? My images are there! You see them!

I love how it plays the rock-y music as it zooms in on my boring-ass header. (I kinda like it that way, to scare away people just looking for entertainment.) "Intellectual" site -- hm, that's interesting. Aerogel is not a metal, lol. It's practically the physical opposite of a metal -- setting records for low density and its insulation ability. Maybe he meant material.

The coolest thing in the clip is the collage at the beginning where it shows the mecha I posted. Did anyone else just laugh at this? TV is so funny, the way the announcer enunciates things he says in that standard television manner, with all the pseudo-relevant stock footage running in the background, zooming in-and-out effects, blah blah blah. (I get zero channels on my television, its only purpose is for playing the occasional RPG or fighting game.)

No mention of transhumanism. :( Oh well, my long-running strategy of posting interesting futurist material to lure people to read about H+ has already worked well, anyway.

~~~

Your blog has appeared on G4.TV.

+300,000 event points
+5000 traffic
+1000 geek cred
+10 incoming links
2 levels up

You received a clip on Thoughtware.tv
Michael learned Mock Television
*commence repetitive rejoice sequence*

~~~

Now might be a good time to link my popular posts from 2008 thus far:

10 Futuristic Materials
Five Ways You Can Help Transhumanism Now
Five Futuristic Forms of Air Travel
Feasibility Arguments for Molecular Nanotechnology
A Challenger Appears!
Interview with Future Blogger
Transhumanist Blogs
Negative Utilitarianism
Seven Influential Transhumanists
Is Star Trek a Fascist Society?
Brain-Computer Interfaces for Manipulating Dreams
Boston Dynamics Big Dog
Gimme Some of That Flying Car Shit
Top 10 Excuses for Dying
Cognitive Enhancement Strategies
Vatican Takes Official Anti-Transhumanist Stance
Response to Amor Mundi on Transhumanism
Nuclear Terrorism
High Cost of Force Protection
Annalee Newitz's Vitriolic Anti-Transhumanism
Taking Global Risk and WWIII Seriously
Human Arrogance
Look to Inner, not Outer Space
The Religion of Science
Temperature Engineering
Stephen Omohundro's "Basic AI Drives"
The Danger of Powerful Computing

Filed under: meta 12 Comments