Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

12 Jun 2008: Interview — Horgan and Yudkowsky

On Saturday, Eliezer Yudkowsky, Research Fellow at the Singularity Institute for Artificial Intelligence (SIAI), talked to John Horgan, science writer, author of Rational Mysticism, and author of a recent IEEE Spectrum piece critical of near-term AI. The video discussion took place on Bloggingheads.tv, a video site co-founded by Robert Wright, author of Nonzero and The Moral Animal.

Some of the interview is funny and light-hearted. But overall, I thought this one had major problems. They talk past each other, and invest insufficient effort in directly addressing each other's concerns.

Horgan thinks that those working towards human-equivalent AI are loonies and essentially religious, and Yudkowsky goes off on tangents and rationality sermons far more frequently than is appropriate. On the SIAI blog entry regarding the interview, Horgan says, in reference to the possibility of talking with other people from the organization, "I’m sure we can have a more coherent, constructive conversation than the one between me and Eliezer". Translation: the interview was incoherent and unconstructive.

Summary of first twenty minutes:

0:00 - 1:00 Introduction
1:00 - 3:00 Eliezer's childhood
4:00 - 6:00 How was he exposed to the Singularity idea?
6:00 - 7:00 Is the Singularity something that will happen or should happen?
7:00 - 9:00 Eliezer's life history in the teenage years and early 20s
9:00 - 11:00 What did Eliezer teach himself to become an AI researcher?
11:00 - 15:00 How was SIAI founded?
15:00 - 18:00 Which vision of the Singularity is SIAI associated with?
18:00 - 20:00 Yudkowsky discusses Kurzweil and his conception of the Singularity.

The trainwreck begins with the way Eliezer describes his childhood. Asked whether he had an interest in science and philosophy as a kid, he says, "I was a bit too bright as a kid. Fairly well-known syndrome. Most of my intelligent conversations were with books because the adults weren't interested in talking to me and the kids couldn't keep up." At this point, empathy with 95% of the audience is immediately severed. Even though I went through a similar experience, as have many intelligent people, it's memetic suicide to call attention to it, because it sounds like bragging.

Maybe Eliezer underestimates the sensitivity of human culture to bragging. The reason bragging is so despised is that it's highly correlated with overconfidence, disregard for others, and other negative personality characteristics. Now, I don't mean to say that Eliezer is overconfident or has a disregard for others. But he should be smart enough to realize that most people are deeply insecure and hate to hear anything that remotely sounds like bragging. In a typical conversation, you're maybe allowed to brag about one thing for 3-5 seconds, and that's it. Anything more sets off alarm bells that say the other person is a jerk, whether they really are or not. That is social reality.

In response to Horgan's question about his childhood interest in science, Eliezer also says, "Interest in science somehow doesn't sound extreme enough". This is funny and I can identify as well! More light-hearted and interesting stuff about Eliezer's childhood follows this for a few minutes.

Then, Eliezer explains the concept of a Vingean Singularity. Horgan doesn't seem to get it. When asked how he reacted to the idea when he first encountered it, Eliezer says "it just seemed so obviously correct". This is another example of Eliezer being excessively honest instead of formulating a response in a way that would maintain empathy with his interviewer and the audience, and establish stepping stones for future understanding. You thought it was obviously correct right away -- great! These guys don't, and they just feel alienated when you tell them you immediately saw it as obviously correct. It reinforces the "elitist egghead" stereotype that we have every reason to avoid.

Next, when asked whether he thinks the Singularity is inevitable, Eliezer explains how he initially ignored the possibility of x-risk getting in the way, then eventually started taking it into account. Still, this makes it look like he considers the Singularity entirely inevitable as long as humanity doesn't wipe itself out, and the casual, matter-of-fact way he says it continues to widen the communication gap between him and Horgan, who is obviously not so sure.

Later, Horgan struggles to pronounce "singularitarian". Sing-ul-ar-it-ar-ian. If you can say them one at a time, you can say them all at once! I realize the word is difficult, and empathize with Horgan. I prefer the term "intelligence enhancement advocate" myself. I sometimes worry that critics of intelligence enhancement advocacy like to latch on to the oddness of the word "singularitarian" and use it as a tool to show how those enthusiastic about the near-term future of AI are dyed-in-the-wool batshit crazy. I don't think that's what Horgan is doing here, but I can only imagine he would be tempted.

Next, Eliezer says the human brain has a messed up architecture. This is true ("haphazard" or "suboptimal" (which he uses later) are better terms, less value-laden), but the matter-of-fact way he presents it is extremely distracting, unsubtle, and jarring to the average listener. It damages his credibility. He talks as if, once you study enough cognitive science, it immediately becomes clear that the brain is "messed up", but guess what -- there are cognitive scientists out there who know plenty about the brain and still treat it as an act of God, an elegant machine that was purposefully designed.

For info on how the human brain has major problems, see Kluge: The Haphazard Construction of the Human Mind by Gary Marcus. Eliezer could do himself a huge favor by pointing to well-established sources when making his more controversial-sounding claims. Otherwise, the audience suspects he is a crackpot with wild ideas. Now, it so happens that the notion that the human mind has a haphazard construction is gaining wide currency among cognitive scientists, but your typical Internet intellectual may not know this. In fact, they might get pissed off if you present it in a totally non-subtle way, as Eliezer does in every interview; the strength of his phrasing is distracting to both the interviewer and the audience.

As an example of how the human brain is suboptimal, Eliezer points to the fact that neurons are far slower than transistors. But wait -- this is a bad example, because many people doubt that minds can be made out of silicon, even in principle. Far better examples come from the heuristics and biases literature, which documents systematic flaws in human reasoning without invoking arguments over the plausibility of arranging transistors into minds. I thought that was what he would use for examples, and was disappointed he went with the contentious transistor comparison.

Next, he talks about how SIAI was founded and the progression of his attitude towards the problem of AGI. This is interesting stuff if you haven't heard it all before.

Horgan plugs the IEEE special issue on the Singularity that I've been responding to. He says some of the articles are very positive, and others, like his own, are critical. He says he likes the "who's who in the singularity" chart. As far as I can tell, the vast majority of articles are negative. An article about how some cognitive scientist is creating a model of the brain, written by an IEET intern, is not a "positive article". This is fluff, used because they either couldn't find or didn't want to include a genuinely pro-Singularity article. Next time, invite me to contribute.

Next question: which vision of the Singularity is SIAI associated with? Good answers by Yudkowsky. The paper he's thinking of is "Speculations Concerning the First Ultraintelligent Machine". Apparently it isn't online. I thought I had a copy and uploaded it somewhere to this domain, but can't find it. Oh well.

Horgan brings up how Kurzweil links the Singularity with immortality. Yudkowsky responds well again: Kurzweil over-relies on Moore's law graphs. In Kurzweil's model, computational improvement doesn't even speed up once the smarter-than-human intelligence barrier is broken, and he treats a million times human computing power as equivalent to a million times human intelligence.

Horgan points out that Kurzweil is vague about how a Singularity transition would happen in his vision of it. Yudkowsky uses his usual talking points, emphasizing intelligence (cognitive skills, for those of you who equate intelligence with book smarts) as the critical quantity in the coming transition.

Later on, Horgan expresses skepticism about AI based on claims of the past. He is answered with more tangents on rationality that don't address his central concerns in a straightforward way. Horgan's general argument is this: they promised us AI in the 60s, they didn't give it to us, therefore, it won't happen in the foreseeable future.

I'm not going to summarize the rest point-by-point, as it was frustrating enough watching it the first time. In any case, if you have an hour to spend, check out the video.

Filed under: meta
Comments (50) Trackbacks (3)
  1. ” many people are doubtful that minds can be made out of neurons, even in principle.”

    Didn’t you mean “…made out of silicon…”?

  2. First Michael, I GREATLY appreciate your recent posts. HUGE improvement in the blog when you write these longer articles. Anyway, on to the meat.

    I think that the reason for hostility towards bragging is to dampen the amount of false and cheap signaling, allowing bragging to function as a form of valid costly signaling instead. OTOH, there’s also simple jealousy; envy is very evolutionarily grounded. And there’s the fact that braggarts ARE believed by some people, which can allow them to gain dangerous power. Some great leaders may have been modest, but most are extreme braggarts, and great leaders are unsafe people to be around unless kept on a VERY short leash.

  3. “Horgan’s general argument is this: they promised us AI in the 60s, they didn’t give it to us, therefore, it won’t happen in the foreseeable future.”

    As long as y’all continue to mischaracterize the argument in this way you will make no headway against skeptics.

    The point is: all the wild guesses made so far about the imminence of AI have been laughably wrong. That doesn’t imply that new proclamations are necessarily wrong, but it is not unreasonable to ask for a little evidence that they aren’t just more wild guesses. Looking at the supposed arguments we see computations for “human equivalent processing power” that use something like 100 multiplies per second as the equivalent of a synapse. This is a wild and (on the face of it) stupid guess. We see assertions that we should be able to beat the brain in efficiency, with no evidence at all, not even an explanation of what the brain even does. We see stories about rapid “recursive self improvement” of almost limitless potential without even the beginnings of a theory for how this fanciful notion could actually be done in practice.

    When asked to provide some evidence for why we should pay attention to these memeplexes that people are pushing in an effort to make themselves famous, all we get in interviews is evasive tangents and insistences that the burden of proof is on the skeptics. Given that, there is no reason to suppose that the new claims have any more substance than the old ones.

    If you actually want to make a difference with most people, y’all will face this simple obvious objection head on and address it. Or just keep misinterpreting it and rolling your eyes… whichever you think will be most helpful.

  4. Yup, that’s Eli in a nutshell. It matters almost not at all how right he might be on any issue — and he’s right on a lot of them; his condescending, narcissistic, self-absorbed, self-righteous, ego-maniacal presentation will be off-putting to almost everyone with whom he’s attempting to communicate. His experiences growing up don’t excuse it; a lot of us had similar experiences.

    Somebody buy the boy a copy of Dale Carnegie, please!

  5. > Somebody buy the boy a copy of Dale Carnegie, please!

    Save your money — he won’t read it.

    Everybody **else** can read up on the condition or, for an insider’s view, google up some of Sam Vaknin’s writing on the subject.

  6. ” many people are doubtful that minds can be made out of neurons, even in principle.”

    Didn’t you mean “…made out of silicon…”?

    I think he’s talking about all the dualists out there who believe that we have both a brain and “a soul”, whatever that’s supposed to be.

  7. “he accidentally says that neurons fire at 20 Hz rather than 200 Hz”

    Neurons don’t actually fire at any one specific rate; most of the time they are not firing at all. When they do, their maximum is 200 Hz, but they can’t keep that up for long, and a more realistic sustained maximum is 40 Hz; perhaps 20 Hz is an average.

  8. I actually meant “out of silicon”, but MGR’s point is valid as well!

    JB, you may be a bit more sensitive than most due to personal clashes with Eliezer in the past.

    Bambi, estimates of human brain processing power in the range of 10^15 to 10^18 ops/sec are quite reasonable and supported by many neuroscientists.

    Skeptics overestimate exactly how many proclamations about the imminence of AI have been made, and on which timescales. For every person proclaiming the imminence of AI, there are 10 other people saying that progress is being made but we aren’t there yet.

    Among computer scientists, cognitive scientists, and other people who look at the problem of AI, they tend to cluster into two groups: those who think AI is probably 30-50 years away, and those who believe it’s “centuries” away. From my discussions with both groups, it seems to me that the latter group has an overwhelming tendency towards an unscientific and worshipful conception of the human mind more akin to vitalism than anything else. Those that view the human mind as a mechanism, albeit a complex one, tend to think that we’ll unravel the mechanisms of intelligence sometime around the middle of this century.

    “All we get in interviews are evasive tangents” — which other interviews have you seen, bambi?

    Anyway, the number of people that consider AI possible within 50 or so years is large and growing. Maybe as large as it needs to be.

  9. “The field of artificial intelligence is a joke now”

    Horgan Vs Yudkowsky: fight!

    You’re right that they are talking across each other. Yudkowsky is obviously not particularly adept at discussion of general topics in a public forum, especially personal stuff like childhood memories. A better interviewing approach probably would have been to come up with some quite specific narrowly focussed questions.

  10. “Kurzweil over-relies on Moore’s law graphs.”

    I think that this is a bit unfair to Kurzweil. He says explicitly that Moore’s law is just one of the many manifestations of the law of accelerating returns. As for the law of accelerating returns, if you object to it you should explain why.

    As for the interview in general, I thought it was very irresponsible of Horgan to not have even read any of Eli’s papers before Eli went on. It seemed like Horgan just wanted a sacrificial lamb. Eli was simply trying to avoid having his positions be reduced to outlandish sound bites.

  11. Public relations is a skill, and not everybody has command of it to the same level. It has more to do with practice than anything else (if he expects to do more of those, maybe he should join a Toastmasters club).

    Eliezer would probably have done much better if he had kept in front of him a piece of paper that read: “Remember, you are not talking to that guy only, lots of others are listening in.”

    He would also benefit from having more ready “cached” answers. If he’s going to be a spokesman for the SIAI, he should have convincing short answers immediately pop to mind for all the most frequent questions, instead of having a whole complex paper that he wrote pop up and having to say “go read my paper” because he can’t possibly distill it on the fly.

    Still, Eliezer rocks and I care a lot more about his other skills than his PR-fu. It would just be the cherry on top if he went all Steve Jobs.

  12. Michael, thanks! Unfortunately, it takes a lot of time to write these.

    Yes, it seems like Horgan was sort of just trying to out Eli as some kind of nutcase. The tone with which he said “you didn’t even go to college” seems to indicate this. On the plus side, it looks like Horgan will be showing up at the next Singularity Summit, so obviously he cares about getting more exposure, even if he is extremely doubtful about the whole thing.

    Regarding Eliezer, I am one of Eli’s greatest fans, and his writings and my conversations with him have changed my thinking in numerous deep ways, many of them completely unrelated to futurism or AI. So if I have a major problem with his presentation, I can imagine that others who don’t respect him from the get-go will be even less charitable.

    Obviously, Horgan is off-base when he says the field of AI is a joke now. Doesn’t he read the daily press releases?

    JF, Eliezer does not have narcissistic personality disorder. Given that he has devoted his life towards pursuing what he sees as the ultimate source of utilitarian value (however much you may disagree with his reasoning), it is disingenuous to argue that he lacks empathy. He could be making a lot more money for himself if he went into the commercial realm (and has the contacts to get some of the Bay Area’s best jobs), but he remains in a non-profit. As well, I am doubtful about the scientific validity of NPD as a diagnosis. More than half the human population might qualify for it.

  13. “estimates of human brain processing power in the range of 10^15 and 10^18 ops/sec are quite reasonable and supported by many neuroscientists”

    That would be a good thing to use as part of a singularitarian response to skeptics, if true. Can you point to some of the many neuroscientists saying this? Kurzweil’s book points to a small amount of research but the numbers are interpretations and extrapolations of his own, not statements by respected neuroscientists as far as I can tell.

    From your comment, if you toss out all the pessimists, the collective wisdom of actual experts in the field is that AI is 30 to 50 years away. I’ll trust that you are correct about that.

  14. There are neuroscientists who support the 10^15 – 10^18 ops/sec estimate (Lloyd Watts and Anders Sandberg leap to mind, but I’ll try to compile a longer list), but many are hesitant to make any estimate whatsoever. Instead, their belief that some of the salient features of the human mind are computable with present-day computers is displayed through their research choices. For instance, Tom Griffiths, director of the Computational Cognitive Science Lab at the University of California, Berkeley:

    Regarding the recent completion of the Roadrunner supercomputer, “Alan Dix, professor of computing at Lancaster University, said that by rough calculations, Roadrunner was possibly only five to 50 times less powerful than the human brain. “Wait another three to five years and it will be there,” he said.” This is explicit.

    Google “computational cognitive psychology” and you’ll see there are hundreds of respected researchers who obviously disagree with those who argue that we’re orders of magnitude short of the computing power to simulate cognition.
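    For what it’s worth, the arithmetic behind these order-of-magnitude estimates is easy to reproduce. Here is a minimal back-of-envelope sketch, using common round figures for neuron count, synapses per neuron, and signaling rate (these are illustrative assumptions, not numbers attributed to any particular neuroscientist):

```python
# Back-of-envelope estimate of brain "ops/sec" (illustrative round numbers only)
neurons = 1e11             # ~10^11 neurons in the human brain (common estimate)
synapses_per_neuron = 1e4  # ~10^4 synapses per neuron (common estimate)
signal_rate_hz = 1e2       # ~100 Hz effective signaling rate (generous assumption)

# Treat one synaptic event as one "op" -- itself a contested modeling choice
ops_per_sec = neurons * synapses_per_neuron * signal_rate_hz
print(f"{ops_per_sec:.0e} ops/sec")  # prints 1e+17 ops/sec
```

    Counting one synaptic event as one “op” is exactly the modeling choice bambi objects to above, which is why serious estimates span the whole 10^15 to 10^18 range rather than converging on a single number.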

  15. Next, Eliezer says the human brain has a messed up architecture. This is true (”haphazard” or “suboptimal” (which he uses later) are better terms, less value-laden), but the matter-of-fact way he presents it is extremely distracting, unsubtle, and jarring to the average listener.

    Case in point.

    We see assertions that we should be able to beat the brain in efficiency, with no evidence at all, not even an explanation of what the brain even does.

    This is the default position, no? Evolution is stupid.

    We see stories about rapid “recursive self improvement” of almost limitless potential without even the beginnings of a theory for how this fanciful notion could actually be done in practice.

    Um, LOGI? GISAI?

    Anyway, you don’t have to believe AI is probably close to be a Singularitarian, just that recursive self-improvement is worth serious thought.

    Kurzweil refers to Watts in his book, but the brain-equivalent extrapolation computation is Kurzweil’s; I’m not able to find anything about Watts himself making a brain-equivalent ops/sec estimate. Since he’s a transhumanist luminary, I’m not surprised about Sandberg, but I wonder if he has written anything about this estimate. The closest I’ve found so far is somebody writing about his Transvision 2007 talk, where “2020 to 2060” is the time range mentioned.

    Dix is not a neuroscientist.

    It would be better to not put words in people’s mouths when using them as support for a specific claim such as a numeric computing estimate.

  17. You know, I feel Eliezer Yudkowsky is being a bit too cocky here. Actually, not only just here, he generally goes around with an “I’m so smart-you’re so stupid” attitude. Now that I think about it, exactly why is Eliezer considered to be so clever and great??? What has this guy ever really done to deserve such lavish praise? Yes he can talk about all these things in a sensible manner, but has he ever produced any real scientifically sound RESULTS in his field of research??? If he really was that clever, he ought to be spending his time on doing things instead of giving patronizing interviews like this one.

  18. Read Eliezer. If you don’t come away with the sense that this is one super smart guy, you aren’t smart enough to read him anyway.

    Whether smart people call themselves smart or not, I don’t care. All I care about is that they’re smart.
    There’s no need to say it explicitly though; I know that if they’re not smart, I won’t be listening to them.

    Eliezer isn’t wrong to say he’s smart. He’s just stating a fact. If you can’t handle the truth, go handle your lies. Honesty in communication is something we as a species have to develop. We should not play games of deception and half-truthfulness to gain some petty perceived benefit. We’re so used to not saying or hearing what we know is true that speaking the obvious truth often seems like a revolutionary act. Political correctness is a prime example of this tendency taken to the extreme.

    Eliezer has no need to edit himself for the couldn’t-care-less-about-muh-brain masses or even the people in the middle rungs of the ladder.

    He’s elite for the elite.

    Editing would be a waste of time and resources. This is trying to enforce the human concept of “how to behave yourself in the public so as to maximize your popularity and/or the spread of your memeplex of choice” in the wrong place. Is he running for office? Is he trying to win a popularity contest here? No.

    The people, the public, who don’t understand people of the caliber of Eliezer, change the channel anyway. They’re not concerned, neither should Eliezer. If you’re worried that Eliezer gives SIAI or Transhumanism an elitist face, I’d say it works the other way too: if it were not for the exact comments and views Eliezer puts forth, the sheer elitism of it, I wouldn’t be interested. Point me to another Eliezer and I’ll be twice as happy.

    Why do we criticize elitism as if it’s bad? The efforts of the intellectual elite make our lives better every day. (The money/political elite is another thing.) Civilization is the result of the intellectual elite (+ manual labor of the non-elite, soon to be replaced with robotics).

    “Eliezer being this and Eliezer being that…”

    Those people aren’t much past SL0 while Eliezer has his feet solidly planted at SL4.

    Trying to edit yourself to the expectations of the SL0 public, with their huge biases and irrationalities – the human mental crud – is a waste of resources and, in fact, dishonest.

    This isn’t politics. This isn’t pandering to the lowest common denominator. This isn’t your neighborhood gossip.

    This is high power brain action. Don’t criticize it when it’s right and speaking the truth. Criticize it when it’s wrong and lying.

    Please don’t tell the Eliezers of the world how to edit themselves according to your human concepts. Humanity needs its Eliezers unadulterated and unedited to advance.

    To the detractors: thank you for not understanding. Now please leave Eliezer alone and go back to your oh-so-wonderful SL0 human world with its inconsequential goals and worries.

  19. Michael, what are your intentions here?

    Is this criticism *for* Eli?
    Criticism as a lesson to transhumanists?

  20. In this world it’s considered quite normal and healthy to be stupid. Being smart and knowing it is apparently wrong, however being beautiful and knowing it and flaunting it is apparently admirable. If Eliezer is guilty of something, it’s stating the obvious.

  21. The point here is criticism for Eliezer, a lesson for transhumanists, and a way of telling people that the views expressed by Eli could be argued in a more superficially socially appealing manner.

    Yes, Eliezer is smart and he knows it. To those who already respect him, it doesn’t bother us, but to those running across him for the first time, it can be a major turn-off.

  22. All I’m asking is a simple question: On what basis are all of you calling him smart, or for that matter, what makes Eliezer think he has the right to go around calling himself smart or smarter than others? Now don’t tell me it’s because he says all these things about this and that; I want to know what accomplishments this guy has to show for his supposedly great intellect. I think this is one area where the H+ community needs a major reality check. I had already anticipated the kind of responses I’d get for posting that -- just the kind you’d get from fanatics of any creed. Shock Level 4, huh? Yeah, right!

  23. “All I’m asking is a simple question: On what basis are all of you calling him smart”

    Go read a few dozen of those and then make up your own mind (which is what those of us who think he’s smart did).

  24. Yudkowsky is a hoot. Singularity / fringe-AI discourse would be a lot more boring without him. True, he hasn’t accomplished much in the sense of solving the difficult problems he claims to be working on (friendliness, reflection, etc), but maybe he will someday.

    His writings tend to make me think seriously about my own assumptions, something people (me included) rarely actually do because it is so much more work than following familiar thought paths.

    From a sufficient distance, I much prefer a large distinctive personality to an indistinct limp one. The Yudkowsky circus is fun to watch.

    On many points I think he is spectacularly wrong, but so what? On many others he seems right — and best of all there are many points about which no easy conclusions can be drawn.

    What more do you want from one guy?

  25. what makes Eliezer think he has the right to go around calling himself smart or smarter than others?

    Thinking of the issue this way, while natural, is horribly unproductive. Calling yourself smart isn’t a reward that needs to be earned, it’s an empirical claim you can evaluate like any other.

    Now don’t tell me its because he says all these things about this and that, I want to know what accomplishments does this guy have to show for his supposedly great intellect?

    Writing is an accomplishment. Why are Einstein, Feynman, etc. considered intelligent, if not for what they wrote? (Not that I mean to claim Eliezer is an Einstein or a Feynman, of course.)

    Michael, would you mind approving my comment at #16?

  26. Blog posts? You’ve got to be freakin’ kidding me! Is that it? A bunch of blog posts, some mailing list responses, and some “papers” on AI? Oh great, I’m convinced! How could I have ever doubted his brilliance! Don’t you guys get it? Do you really think I’m trying to make the case that Eliezer is stupid? Of course not. It’s clear that he is a sentient, thinking, feeling human being like all of us. But just because he wrote some blog posts and rambles on about topics most people aren’t familiar with, suddenly he becomes a genius? I could make the case that all his blogging is just mental masturbation. There are millions of bloggers banging out post after post about all kinds of nonsense. I asked: what did he DO?

    Believe me, I very much sympathize with transhumanist values, but I would expect a degree of humility from someone like Eliezer. Don’t forget, we’re all cut from the same cloth. Going around calling other people stupid just looks childish, and this is not the first time I’ve seen him do it. Anyway, I don’t want to keep harping on the same point. Since someone mentioned Feynman and Einstein, I’ll end by posting some famous quotes:

    “I think I can safely say that nobody understands quantum mechanics.” – Richard Feynman

    “How can he possibly be humble? He hasn’t done anything yet.” – Albert Einstein

    “One of the symptoms of an approaching nervous breakdown is the belief that one’s work is terribly important.” – Bertrand Russell

  27. hearing Yudkowsky speak on a few occasions left me with one conclusion…he should be left in the lab.

  28. Ramble: To speak or write at length and with many digressions.

    Some blog posts? Those are some blog posts, indeed.

    “Don’t forget, we’re all cut from the same cloth.”

    I keep on forgetting it all the time. All. Cut. From the. Same. Cloth. Yep. No variance in cloth quality. None. I think I got it now.

    “I don’t have any opinions anymore. All I know is that no one is better than anyone else, and everyone is the best at everything.”
    -Principal Skinner

  29. Because there are millions of bloggers, find another Eliezer, one that writes his kind of nonsense. Just one. Should be easy.

  30. It seems you don’t consider thinking as doing. What does doing mean to you? Is it making things you can see, touch and interact with? What would you need to see him do to accept he’s really really smart if not a genius? Don’t you see any [future] practical value in what he’s thinking about? Already he’s managed to make rather large changes to an important part of the physical world: the brain.

  31. “hearing Yudkowsky speak on a few occasions left me with one conclusion…he should be left in the lab.”

    Left me with one conclusion… he should have a daily radio show.

    That demotivator is hilarious. Elite job.

  32. Interesting fact: Eliezer is probably smarter than anyone posting in this thread.

    What did he do? How about realizing that Vingean superintelligence is a goal we can actually work towards in the real world, or founding the Singularity Institute, or writing an explanation of Bayes’ theorem that is the third search result for the term, or being important enough to get an interview with him featured on the front page of the SF Chronicle’s business section, or doing pioneering theorizing on the application of Bayes’ theorem to normative cognition, or helping revitalize worldwide interest in AGI, or…

    If it weren’t for Eliezer, this blog you’re posting in might not even exist.

  33. As a woman, I can tell you his theory on mammograms is just wrong! If he were talking about medical researchers, that’s one thing, but he used the term doctor. Mammograms are one of many diagnostic tools used to find breast cancer. First is the physical examination of the breast to find any lumps, then the mammogram; if you have a lump that does not show on the mammogram (maybe the breast is fibrous), then there is the ultrasound, and if that is inconclusive, you get a biopsy.

  34. “Interesting fact: Eliezer is probably smarter than anyone posting in this thread.”

    “Probably?” The probability is a statement about your own mind, not a fact about Eliezer. You should actually read his stuff I think.

  35. Nick Tarleton:

    > This is the default position, no?

    No. One of the problems Singularitarians have in communicating with skeptics is that they seem unbelievably credulous, because their “default positions” are often seen as stupid fantasies.

    But still, I should have been clearer in communicating that intelligent people object to these ideas for a reason, not just because they are myopic morons. It may be reasonable to speculate that we could beat evolution by a small amount someday. But the claim that is supposed to make us pay attention is that we will beat evolution by A LOT, and SOON. That is what people want evidence for. Conflating a theoretical small advantage with an imminent large advantage is exactly the loopy logic that makes skeptics laugh.

    > Um, LOGI? GISAI?

    Yes, there are some hand-wavy, vague tomes like these, but they are not even close to respectable theories; they are just science fiction without characters. Good grief, even Yudkowsky (the author of LOGI) says LOGI is not a viable path to AGI.

    > Anyway, you don’t have to believe AI is
    > probably close to be a Singularitarian,
    > just that recursive self-improvement is
    > worth serious thought.

    Sure, you can just give up and say “ok so there is little or no evidence but it would be a big deal if it were true wouldn’t it?” Gonna have to do better than that to get attention from busy people working on actually real things.

  36. truyhynesslover:

    If you’re talking about what I think you’re talking about, you have missed the point of the article. Eliezer is in no way making an argument about mammograms here. He is demonstrating Bayesian mathematics and using mammograms as an example. You may have a valid argument about the accuracy of his numbers, but BASED on those numbers, he is demonstrating a highly effective method of determining probability.
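[Editor’s note: the mammogram example being argued about can be sketched in a few lines. The figures below (1% prevalence, 80% sensitivity, 9.6% false-positive rate) are the illustrative numbers commonly used in Yudkowsky’s Bayes essay, not medical data, and the point is the method, not the inputs.]

```python
# Bayes' theorem with the essay's illustrative mammogram numbers
# (didactic figures only, not medical advice):
prior = 0.01        # P(cancer): 1% of women in the example population
sensitivity = 0.80  # P(positive test | cancer)
false_pos = 0.096   # P(positive test | no cancer)

# Total probability of a positive test, then the posterior:
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(cancer | positive) = {posterior:.3f}")  # about 0.078, i.e. ~7.8%
```

The counterintuitive result (a positive test still means cancer is unlikely) is exactly what the essay uses the example to demonstrate.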

  37. We beat evolution hands down with all kinds of mechanical devices. Sure, they don’t do some nice things like repair themselves, but for many of our purposes they’re vastly superior to anything living. This strongly suggests a general prior expectation that we can massively beat the human brain along some dimensions, creating something very useful and powerful, even if we have more difficulty with other dimensions. GISAI 1.1.1: The AI Advantage lists more specific reasons.

    LOGI and GISAI (and CFAI) are, as you said, “the beginnings of a theory” (moving the goalposts much?). There is actual substantial reasoning there, if you look well enough.

  38. Engineering beats evolution at extremely narrowly-measured precisely-defined tasks. The day you convince people that “general intelligence” is one of these you will have achieved something. Not being able to tell the difference between general intelligence and transportation, and then abusing reasonable generalization to invent a “prior” is also not going to convince anybody.

    As to moving the goalposts, granted I should have said “beginnings of a theory of interest to anybody except fanboys”. These “theories” don’t predict anything, are not anything near formal or even vaguely implementable, and connect to no other works (which explains why they are ignored by all but the in-group). Even their author says not to bother with them, and has abandoned work on them. I mean, is anybody on the entire planet pursuing these many-years-old “theories”?

    Now I don’t care what you guys believe; I thought this stuff might have something to it myself and really wanted to find some substance; I am not attacking anybody’s motives or psychology; I would very much like to see progress instead of PR. But I am really beginning to agree with the critics (that is, they are winning the argument in my opinion, and will continue to do so until some specific progress can be demonstrated).

    The catastrophic bloggingheads interview doesn’t help either. SIAI should really just pretend that never happened.

  39. Thanks, I believe I have seen all of those videos and have read almost everything on the AI portion of Singularity studies. I’m working through Pearl’s _Causality_ at the moment on Yudkowsky’s recommendation. Mastering that will take some time but if and when he produces something technical and concrete I’d like to be prepared enough to understand it.

    My disappointment comes from replies to critics that appear evasive and misleading, and also it seems that Singularitarians for some reason continually miss the basic points of the more thoughtful criticisms, which are pretty straightforward — instead preferring to hear and respond to caricatures.

    I’ll stop polluting your blog with such commentary though; it probably would have been enough to simply say “please try to see the points the critics are making and address them in ways they will accept”.

    I guess my current thinking is that there is no satisfying these critiques and maybe there cannot be until more progress is made, which could take one month or fifty years, and even that upper bound is more a function of my hopeful optimistic nature than hard-nosed ratiocination.

  40. Were my responses to Zorpette and Jones not addressing their concerns directly? I do all this work and try so hard and I feel like you’re ignoring it. Yudkowsky isn’t being evasive on purpose, it’s just that he likes to set out about an hour of background logic before answering any question.

  41. I do not feel knowledgeable enough to have a sufficiently grounded opinion on the feasibility or timeframe for molecular nanotechnology, so I cannot judge your or Jones’ comments on that subject. I am curious about the accuracy and efficiency of molecular simulation software models because that impacts the most radical hard takeoff scenarios, but that is irrelevant to our discussion here.

    Regarding Zorpette… the psychological attack on Singularitarians and their motives is uninteresting to me. It seems needlessly mean-spirited, which is why I gave up reading Carrico despite his sometimes insightful writing. Effectively dealing with the surprisingly crude insults is important I guess.

    For the rest of that introduction, most of it was spent introducing the authors. Of what little remains after that, I did not think your reply was evasive but it did manage to ignore the important points: complaints about gaping take-my-word-for-it extrapolations, specious reasoning, flawed grasp of neuroscience, physiology, and philosophy; unrealistically short timeframes.

    Then later, again the focus on time and urgency, its relationship to the way science and technology actually progress in the real world, and the complexity of the brain (which implies difficulties in testing models, measuring internal parameters, and simply managing such large theoretical structures). He went out of his way to grant the possibility of the enterprise, so its practicality in the near future is the issue.

    Speaking of which, your pointer did lead me to a transcript of a Sandberg talk which I had missed previously. In it he says it seems “pretty likely” that 2060 is enough time for computational requirements of brain emulation. Using other math of his (2033 as a date for 10^15 flops for $1M and 5.3 years per order of magnitude increase), that corresponds to 10^20 flops for brain emulation, although he says it could be less or possibly more.
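[Editor’s note: the commenter’s arithmetic on the Sandberg figures checks out. A minimal sketch, assuming the quoted inputs (10^15 flops per $1M in 2033, one order of magnitude every 5.3 years) are an accurate reading of the talk:]

```python
# Extrapolating the quoted Sandberg figures from 2033 to 2060.
# All inputs are the commenter's reading of the talk, not exact values.
flops_2033 = 1e15       # flops per $1M in 2033 (quoted)
years_per_oom = 5.3     # years per order-of-magnitude increase (quoted)

orders_gained = (2060 - 2033) / years_per_oom  # about 5.1 orders of magnitude
flops_2060 = flops_2033 * 10 ** orders_gained

print(f"flops per $1M in 2060: {flops_2060:.1e}")  # roughly 1e20
```

So “pretty likely by 2060” does correspond to roughly 10^20 flops per $1M, as the comment says.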

  42. I am not a fan of Eliezer and probably never will be, but I do believe that those who criticize him criticize his performance rather than his intellect. As an individual of average mind, I have read almost all the literature on SL4 as well as Overcoming Bias, and I do believe that his writings have changed my way of thinking on certain subjects and have helped me understand things from a different level. My only suggestion would be that SIAI look into finding a spokesperson. It’s not easy being in the public eye, and if the message you want to get through is very important, you find the best means to present it properly; clearly Eliezer will not be a well-liked public figure. There’s nothing wrong with being the man behind the ideas as opposed to the one presenting them.

  43. I just posted an episode of the C-Realm Podcast that features a conversation I recorded with John Horgan just a few days after his “trainwreck” with Eliezer Yudkowsky. In the interview we switch conversational tracks a lot, but we return to the topic of Singularitarian thinking on many occasions, and John shares his reflections on how things went in his talk with Eliezer.

  44. On the ONE hand: Eliezer, PLEASE try not to come across as so condescending. I say this for strictly pragmatic-strategic reasons, OK, kiddo? Especially when dealing with non-transhumanists, non-Singularitarians, and, in general, the *hoi polloi*. We all know you’re a Singularitarian-Savant (as it were…), and we appreciate you for it!! But *please* TRY (again, if only for pragmatic-strategic reasons if nothing else…) to be a bit more cognitive-intellectually, well, for want of a better word, *gentler*, or perhaps a bit more generous and less assuming (i.e., assuming that you can go even moderately high-level, much less full-throttle, and expect someone like Horgan [who’s a good enough chap, but not an AI guy!] to keep up with you).

    SL4 and others have defended your “style”, but I respectfully—along with Michael A.—disagree. You don’t have to water stuff down (well, hopefully, not *much*, anyway)—just think, as a protocol, “present this in a user-friendly way”…

    Now, before y’all jump me, as it were: On the **OTHER** hand: I must admit I’m *appalled* that Horgan (who is, again, a reasonably good science journalist, as the species goes…) seems not to have read (or, hell, even perused/skimmed) **any** of your stuff, Eliezer—and, with all due respect to Horgan, THAT’S PRETTY FRICKIN’ **PISS-POOR**. [And, Eliezer, if at all possible, please try to make sure that any future would-be interviewers indeed ARE “up” at least somewhat on “where you’re coming from” in terms of some of the basic (“intro” and “intermediate”) stuff at your SI site.] But for Horgan *not* to have taken this responsibility on himself—again, that’s just piss-frickin’-poor journalism. And really unworthy of him, since much of his other stuff is, as the genre goes, not too damn bad.

    Anyway, my johnny-come-lately $.02 ;)

  45. Nyu asked what Eliezer’s actual accomplishments are.

    I think of him as having done three things:

    (1) He gave a name to the subject of self-enhancing AI – “seed AI” – and wrote a paper describing schematically how to accomplish it.

    (2) He identified the problem of “Friendly AI” and gave it that name. I actually think that’s his biggest accomplishment to date; his gift to the English language and to everyone’s understanding of the future. Whether you agree with his prescriptions or not, we now have a name for the problem and its solution.

    (3) He proposed a solution to the Friendliness problem, namely Coherent Extrapolated Volition, which rests on (my paraphrase) constructing an ideal moral agent, where ‘ideal’ is defined by reference to the human utility function (or rather, whatever the actual counterpart to ‘utility function’ in human cognitive architecture is).

    He also makes numerous important strategic observations regarding the Singularity, for example, that it’s best to plan on getting it right the first time because you may not get a second chance, and that the ideal would be to know what you’re doing, and to initiate a Friendly Singularity knowing why the outcome will be Friendly (and knowing how it is that you know, etc).

    Such observations can be important correctives for people who haven’t realized them on their own. The first point matters for people who don’t quite get the all-or-nothing quality of a Singularity; the second point matters for people who think that the Singularity has to be a gamble.

    Eliezer’s career has an unusual form because he has specialized from the beginning in something that has not yet happened, and which by the time it happens is beyond human control. In a way, I think he is best understood as a philosopher of AI. He set out planning to do lots of coding, but as things stand it’s SIAI’s Director of Research, Ben Goertzel, who has coding projects going (e.g. OpenCog). Eliezer is doing theory.

  46. Very well-said, Mitchell. And we can only hope that Ben is still committed to his 10-yr (as of ’06) time-frame. Maybe we’ll have the Singularity just in time for the 2016 election-cycle (yippee kai-yay…)

  47. MCP2012 Says:
    [And, Eliezer, if at all possible, please try to make sure that any future would-be interviewers indeed ARE “up” at least somewhat on “where you’re coming from” in terms of some of the basic (“intro” and “intermediate”) stuff at your SI site.]

    Apologies; I forgot to mention that I’ve seen Eliezer within the context of the Singularity Summit and found his performance very much up to par. My opinion was based solely on his performances before the general public.

  48. “(1) He gave a name to the subject of self-enhancing AI – “seed AI” – and wrote a paper describing schematically how to accomplish it”

    Giving a name to an existing concept is a trivial accomplishment. The value of the schematic depends on how specific and distant from the trivial it is – does this paper go beyond a representation of recursive improvement that could be easily produced by many people?

    “(2) He identified the problem of “Friendly AI” and gave it that name.”

    He did not identify the problem; others covered the same ground much earlier, and actually went much farther than Eliezer has done. Again, giving a name to something is not a particularly impressive accomplishment, especially when you cannot clearly establish what that thing is.

    “Whether you agree with his prescriptions or not, we now have a name for the problem and its solution.”

    I can make up names for a problem and its solution right now. Here: xenthisism and multiblem scorbitol. But if I can’t tell you clearly what those things are, or even provide suggestions as to how you might find out, all I’ve done is made up two phrases.

    “(3) He proposed a solution to the Friendliness problem, namely Coherent Extrapolated Volition”

    No, CEV is the hypothesis that a solution can be found by certain undefined methods applied to the population of human moral beliefs. It is not clear what methods would be used, or whether any methods can produce a solution from human moral beliefs. It’s not even clear precisely what the problem is – no defined explanation has ever been given.
