On Saturday, Eliezer Yudkowsky, Research Fellow at the Singularity Institute for Artificial Intelligence (SIAI), talked to John Horgan, a science writer, author of Rational Mysticism, and author of a recent piece in IEEE Spectrum critical of near-term AI. The video discussion took place on Bloggingheads.tv, a video site co-founded by Robert Wright, author of Nonzero and The Moral Animal.
Some of the interview is funny and light-hearted. But overall, I thought this one had major problems. They talk past each other, and invest insufficient effort in directly addressing each other’s concerns.
Horgan thinks that those working towards human-equivalent AI are loonies and essentially religious, and Yudkowsky goes off on tangents and rationality sermons far more frequently than is appropriate. On the SIAI blog entry regarding the interview, Horgan says, in reference to the possibility of talking with other people from the organization, “I’m sure we can have a more coherent, constructive conversation than the one between me and Eliezer”. Translation: the interview was incoherent and unconstructive.
Summary of first twenty minutes:
0:00 – 1:00 Introduction
1:00 – 3:00 Eliezer’s childhood
4:00 – 6:00 How was he exposed to the Singularity idea?
6:00 – 7:00 Is the Singularity something that will happen or should happen?
7:00 – 9:00 Eliezer’s life history in the teenage years and early 20s
9:00 – 11:00 What did Eliezer teach himself to become an AI researcher?
11:00 – 15:00 How was SIAI founded?
15:00 – 18:00 Which vision of the Singularity is SIAI associated with?
18:00 – 20:00 Yudkowsky discusses Kurzweil and his conception of the Singularity.
The trainwreck begins with the way Eliezer describes his childhood. When asked if he had an interest in science and philosophy, he says, “I was a bit too bright as a kid. Fairly well-known syndrome. Most of my intelligent conversations were with books because the adults weren’t interested in talking to me and the kids couldn’t keep up.” At this point, the empathy with 95% of the audience is immediately severed. Even though I went through a similar experience, as many intelligent people have, it’s memetic suicide to call attention to it, because it sounds like bragging.
Maybe Eliezer underestimates the sensitivity of human culture to bragging. The reason bragging is so despised is that it’s often highly correlated with overconfidence, disregard for others, and other negative personality traits. Now, I don’t mean to say that Eliezer is overconfident or disregards others. But he should be smart enough to realize that most people are deeply insecure and hate to hear anything that remotely sounds like bragging. In a typical conversation, you’re maybe allowed to brag about one thing for 3-5 seconds, and that’s it. Otherwise it sets off alarm bells that say the other person is a jerk, whether they really are or not. That is social reality.
In response to Horgan’s question about his childhood interest in science, Eliezer also says, “Interest in science somehow doesn’t sound extreme enough”. This is funny, and I can identify with it as well! More light-hearted and interesting material about Eliezer’s childhood follows for a few minutes.
Then, Eliezer explains the concept of a Vingean Singularity. Horgan doesn’t seem to get it. When confronted with the idea and asked to describe how he reacted to it, Eliezer says “it just seemed so obviously correct”. This is another example of Eliezer being excessively honest in his response instead of formulating a response in a way that would maintain empathy with his interviewer and the audience, and establish stepping stones for future understanding. You thought it was obviously correct right away — great! These guys don’t, and they just feel alienated when you tell them that you suddenly saw it as so obviously correct. It reinforces the “elitist egghead” stereotype that we have every reason to avoid.
Next, when asked whether he thinks the Singularity is inevitable, Eliezer describes how he initially ignored the possibility of existential risk getting in the way, then eventually started taking it into account. Still, this makes it look like he considers the Singularity entirely inevitable so long as humanity doesn’t wipe itself out, and the casual, matter-of-fact way he says it continues to widen the communication gap between him and Horgan, who is obviously not so sure.
Later, Horgan struggles to pronounce “singularitarian”. Sing-ul-ar-it-ar-ian. If you can say the syllables one at a time, you can say them all at once! I realize the word is difficult, and I empathize with Horgan; I prefer the term “intelligence enhancement advocate” myself. I sometimes worry that critics of intelligence enhancement advocacy like to latch onto the oddness of the word “singularitarian” and use it as a tool to show that those enthusiastic about the near-term future of AI are dyed-in-the-wool batshit crazy. I don’t think that’s what Horgan is doing here, but I can only imagine he would be tempted.
Next, Eliezer says the human brain has a messed-up architecture. This is true, though “haphazard” or “suboptimal” (a term he uses later) would be better, less value-laden words, but the matter-of-fact way he presents it is extremely distracting, unsubtle, and jarring to the average listener. It damages his credibility. He talks as if, once you study enough cognitive science, it immediately becomes clear that the brain is “messed up”. But guess what: there are cognitive scientists out there who know plenty about the brain and still treat it as an act of God, an elegant machine that was purposefully designed.
For info on how the human brain has major problems, see Kluge: The Haphazard Construction of the Human Mind by Gary Marcus. Eliezer could do himself a huge favor by pointing to well-established sources when making his more controversial-sounding claims. Otherwise, the audience starts to suspect that he is a crackpot with wild ideas. Now, it so happens that the notion that the human mind has a haphazard construction is gaining wide currency among cognitive scientists, but your typical Internet intellectual may not know this. In fact, they might get pissed off if you present it in a totally non-subtle way, as Eliezer does in every interview, and the force with which he puts it is distracting to both the interviewer and the audience.
For an example of how the human brain is suboptimal, Eliezer points to the fact that neurons are far slower than transistors. But wait: this is a bad example, because many people doubt that minds can be made out of silicon, even in principle. Far better examples come from the heuristics and biases literature, which documents systematic flaws in human reasoning without invoking arguments over the plausibility of arranging transistors into minds. I expected him to draw his examples from there, and was disappointed he reached for the controversial transistor comparison.
Next, he talks about how SIAI was founded and the progression of his attitude towards the problem of AGI. This is interesting stuff if you haven’t heard it all before.
Horgan plugs the IEEE special issue on the Singularity that I’ve been responding to. He says some of the articles are very positive, and others, like his own, are critical. He says he likes the “who’s who in the singularity” chart. As far as I can tell, the vast majority of articles are negative. An article about how some cognitive scientist is creating a model of the brain, written by an IEET intern, is not a “positive article”. This is fluff, used because they either couldn’t find or didn’t want to include a genuinely pro-Singularity article. Next time, invite me to contribute.
Next question: which vision of the Singularity is SIAI associated with? Good answers by Yudkowsky. The paper he’s thinking of is I. J. Good’s “Speculations Concerning the First Ultraintelligent Machine”. Apparently it isn’t online. I thought I had a copy and uploaded it somewhere to this domain, but I can’t find it. Oh well.
Horgan brings up how Kurzweil links the “Singularity” with immortality. Yudkowsky responds well again: Kurzweil over-relies on Moore’s law graphs. In Kurzweil’s model, the rate of computational improvement doesn’t even speed up once the smarter-than-human intelligence barrier is broken, and he treats a million times human computing power as equivalent to a million times human intelligence.
Horgan points out that Kurzweil is vague about how a Singularity transition would happen in his vision of it. Yudkowsky uses his usual talking points, emphasizing intelligence (cognitive skills, for those of you who equate intelligence with book smarts) as a critical quantity in the coming transition.
Later on, Horgan expresses skepticism about AI based on its failed promises in the past. He is answered with more tangents on rationality that don’t address his central concern in a straightforward way. Horgan’s general argument is this: they promised us AI in the 60s, they didn’t deliver, therefore it won’t happen in the foreseeable future.
I’m not going to summarize the rest point-by-point, as it was frustrating enough watching it the first time. In any case, if you have an hour to spend, check out the video.