Analysis of Massimo Pigliucci’s Critique of David Chalmers’ Talk on the Singularity

To follow up on the previous post: I think the critique by Massimo Pigliucci (a philosopher at the City University of New York) of David Chalmers’ Singularity talk does make some good points, but I found his ad hominem arguments so repulsive that it was difficult to bring myself to read past the beginning. I would have the same reaction to a pro-Singularity piece that opened with the same level of ad hominem. (Recall that when I went after Jacob Albert and Maxwell Barbakow for their ignorant article on the Singularity Summit, I focused on their admission that they understood none of the talks, using that as a negative indicator of their intelligence and knowledge, not on insulting their haircuts.) If anything, put the ad hominem arguments at the end, so that they don’t bias readers before they’ve seen the real objections.

Pigliucci is convinced that Chalmers is a dualist, which is not exactly true — Chalmers proposes taking consciousness to be fundamental in the way physics takes spacetime and matter to be fundamental, rather than positing a separate Cartesian substance. I used to be on Dennett’s side of the argument and believed there was no hard problem to speak of, but I was eventually moved to somewhere in between Chalmers and Dennett: I really do believe there is an interesting hard problem to be solved, but I doubt that solving it will require new laws of physics or new ontological primitives. I understand why people are skeptical of the relevance of Chalmers’ theories of consciousness, but the ideas are quite subtle, and it took me two or three reads of his landmark paper before I even started to pick up on the concept he was trying to convey. It may be that Pigliucci does understand Chalmers’ ideas and considers them useless anyway.

Moving on to the actual critique: Pigliucci accuses Chalmers of arguing that because computers are getting faster, we can extrapolate that AI will eventually happen. I vaguely agree with Chalmers on that one, though the extrapolation is quite fuzzy. Since brains are machines that operate according to as-yet-unknown principles but known basic laws (physics and chemistry), faster computers would surely facilitate their emulation, or at the very least the instantiation of their basic operating principles in another substrate. I’m not sure why this is controversial, unless people conceive of the brain as containing some magic sauce that cannot be emulated on another finite-state machine.
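
To see why raw speed is at least relevant, consider the usual back-of-envelope arithmetic. The numbers below are round illustrative figures of my own choosing, not settled neuroscience; published estimates (in the whole-brain-emulation literature, for example) span several orders of magnitude depending on the level of detail emulated.

```python
import math

# Rough, illustrative figures only -- not settled neuroscience. Published
# estimates span several orders of magnitude depending on the level of
# detail emulated.
NEURONS = 1e11              # ~10^11 neurons in a human brain
SYNAPSES_PER_NEURON = 1e4   # ~10^3 to 10^4 synapses per neuron
UPDATE_HZ = 100             # assume each synapse updated ~100 times/second
FLOPS_PER_UPDATE = 10       # assume ~10 floating-point ops per update

required = NEURONS * SYNAPSES_PER_NEURON * UPDATE_HZ * FLOPS_PER_UPDATE
print(f"Rough requirement: {required:.0e} FLOPS")  # ~1e+18 FLOPS

# If effective compute per dollar doubles every ~1.5 years (a Moore's-law-
# style assumption, not a guarantee), a factor-of-a-million shortfall
# closes in about 20 doublings, i.e. roughly 30 years.
print(f"Years to close a 1e6x gap: {math.log2(1e6) * 1.5:.0f}")
```

None of this shows that speed alone produces AI (the operating principles still have to be discovered), but it does show why hardware trends bear on the question.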

Even if we don’t yet understand intelligence, as Pigliucci points out, that doesn’t mean it will remain unknown indefinitely. Chalmers even says in his talk that he thinks solving AI could take hundreds of years. My view is that anyone who confidently says AI will very likely not be possible in the next 500 years is being overconfident, and probably engaging in mystical mind-worship: a sentimental desire to preserve the mystery of the mind. Given the scientific knowledge we’ve gained over the last 500 years (practically all of it), it is far-fetched to say confidently that intelligence will elude reverse-engineering over the next 500. If biology can be reverse-engineered on many levels, so can intelligence.

Pigliucci then points out that Chalmers is lax with his definitions of the terms “AI”, “AI+”, and “AI++”, and I agree; he could use at least a couple more slides to pin those terms down. Pigliucci also argues that because Chalmers is making an unusual claim, the burden of proof is on him. I agree with that too. Chalmers is approaching as philosophy an issue that really could use detailed scientific arguments to back it up. On the other hand, within groups where those arguments are already accepted (like the Singularity Summit audience), philosophy can proceed. Some philosophizing has to rest on scientifically argued foundations that are not shared among all thinkers. Isn’t it interesting how interdependent philosophy and science are, and how each withers without the other?
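
For what it’s worth, the definitions can be tightened without much machinery. The gloss below is mine, not Chalmers’ wording, though it tracks the structure of his talk:

```latex
% A hedged gloss (mine, not Chalmers' exact wording). Let g(S) be some
% self-amplifying capability parameter of a system S, and g_H the human
% level of that parameter.
\begin{align*}
\text{AI:}   \quad & g(S) \approx g_H && \text{(human-level)}\\
\text{AI+:}  \quad & g(S) > g_H       && \text{(greater than human-level)}\\
\text{AI++:} \quad & g(S) \gg g_H     && \text{(far greater; superintelligence)}
\end{align*}
% The explosion claim is then a recurrence: if each system can design a
% successor with a strictly higher value of g,
\[
  g_{n+1} = g_n + \delta(g_n), \qquad \delta(g_n) > 0 \text{ and increasing},
\]
% then the sequence (g_n) grows without bound, absent defeaters.
```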

I disagree with Pigliucci that the “absent defeaters” points are not meaningful. Chalmers is obviously arguing that something extraordinary would need to happen for his outlined scenario not to occur, and that business as usual over the longer term involves AI++ rather than its absence. “Defeaters” include things like thermonuclear war, runaway global warming, and so on, which Chalmers did concretely point out in his talk (at least in the Singularity Summit version). Pigliucci says, “But if that is the case, and if we are not provided with a classification and analysis of such defeaters, then the entire argument amounts to “X is true (unless something proves X not to be true).” Not that impressive.” Maybe Chalmers should have spent more time describing the defeaters, but I don’t think all arguments of the form “X is true (unless something proves X not to be true)” are meaningless. For instance, in physics, falling objects accelerate at 9.8 m/s² unless there is air friction, unless they get hit by another object in mid-fall, unless they spontaneously explode, and so on, and the basic law still has meaning, because it applies often enough to be useful.
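
To make the analogy concrete, here is a toy sketch (my illustration, not anything from the talk). The ideal law stays predictive even when a “defeater” like air drag modifies the outcome:

```python
import math

G = 9.8      # gravitational acceleration, m/s^2
DRAG = 0.1   # made-up linear drag coefficient, 1/s

def fall_velocity(t: float, drag: float = 0.0) -> float:
    """Downward velocity after t seconds of free fall, with optional drag."""
    if drag == 0.0:
        return G * t                          # the ideal law: v = g*t
    # With linear drag, velocity approaches a terminal value of g/drag.
    return (G / drag) * (1.0 - math.exp(-drag * t))

for t in (1.0, 5.0, 10.0):
    print(f"t={t:>4}s  ideal={fall_velocity(t):6.1f} m/s  "
          f"with drag={fall_velocity(t, DRAG):6.1f} m/s")
```

The defeater changes the numbers, not the usefulness of the underlying law; Chalmers’ “absent defeaters” clause works the same way.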

I agree with Tim Tyler in the comments that defining intelligence is not the huge issue Pigliucci makes it out to be. I think g is a good enough approximate definition (is Pigliucci familiar with the literature on g, such as Gottfredson?), and demanding unreasonably detailed definitions of intelligence, even though everyone has a perfectly good intuitive sense of what it means, seems like a way of discouraging any intelligent conversation on the topic whatsoever. For anyone who wants a better definition, I strongly recommend Shane Legg’s PhD thesis Machine Superintelligence, whose first part gives a formal definition along with a good survey of past attempts at one. I doubt many will read it, though, because people like it when intelligence is mysterious. Mysterious things seem cooler.
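
For reference, the definition Legg arrives at (with Marcus Hutter) looks roughly like this; I am paraphrasing from memory, so consult the thesis for the exact statement:

```latex
% Legg and Hutter's "universal intelligence" of an agent \pi: expected
% performance across all computable environments, weighted by simplicity.
\[
  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
\]
% E is the set of computable (reward-bounded) environments, K(\mu) is the
% Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected
% total reward that agent \pi achieves in \mu.
```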

Pigliucci then says that AI has barely made any progress over the last few decades because human intelligence is “non-algorithmic”. You mean it doesn’t follow any procedure for turning data into knowledge and outputs? I don’t see how that could be the case. Many features of human intelligence have already been duplicated in AIs, but as soon as something is duplicated (like master-level chess), it suddenly loses its status as an indicator of intelligence. By moving the goalposts like this, AI can keep “failing” right up until the day before the Singularity. Even a Turing Test-passing AI would not be considered intelligent by many people; I’m sure they would find some obscure reason to discount it.

Pigliucci continues:

After the deployment of the above mentioned highly questionable “argument,” things just got bizarre in Chalmers’ talk. He rapidly proceeded to tell us that A++ will happen by simulated evolution in a virtual environment — thereby making a blurred and confused mix out of different notions such as natural selection, artificial selection, physical evolution and virtual evolution.

I agree… sort of. When I was sitting in the audience at the Singularity Summit and Chalmers started to talk about virtual evolution, I immediately suspected that Chalmers had not studied Darwinian population genetics, and was using the word “evolution” in the hand-wavy layman’s sense rather than the strict biological one. If I recall correctly, someone (I think it was Eliezer) got up at the end of Chalmers’ talk and pointed out that creating intelligence via evolution could require a practically unimaginable amount of computing power, on the order of simulating the entire history of the Earth. Yet I don’t understand why Pigliucci believes such a thing is impossible in principle: if evolution could create intelligence out of real atoms on Earth, then simulated evolution could (eventually, given enough computing power) create intelligence out of simulated atoms. The required computing power could be prohibitively massive, but arguing that reality cannot be simulated precisely enough to reproduce phenomenon X just means either that we don’t yet know enough about the phenomenon to simulate it, or that we lack the computing power, not that it is impossible in principle. Science will eventually uncover the underlying rules of everything whose rules it is theoretically possible to uncover (excluding, say, causally disconnected universes), and that includes intelligence, creativity, imagination, humor, dreaming, and the rest.
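
It is also worth noting that “simulated evolution” need not mean re-running Earth’s history atom by atom; in the evolutionary-computation sense it is mundane and runs on today’s hardware. Here is a minimal sketch of my own, purely illustrative, that evolves bit strings toward a toy fitness peak:

```python
# Minimal genetic algorithm: "simulated evolution" in the ordinary
# evolutionary-computation sense. Purely illustrative; evolving anything
# brain-like would of course require unimaginably more compute.
import random

TARGET = [1] * 20                    # toy fitness peak: the all-ones string
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.02

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"Optimum reached at generation {gen}")
        break
    parents = population[: POP_SIZE // 2]          # truncation selection
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
```

Note that this is artificial selection (the programmer supplies the fitness function), which is exactly the distinction Chalmers draws in his comment below.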

Pigliucci then remarks:

Which naturally raised the question of how do we control the Singularity and stop “them” from pushing us into extinction. Chalmers’ preferred solution is either to prevent the “leaking” of AI++ into our world, or to select for moral values during the (virtual) evolutionary process. Silly me, I thought that the easiest way to stop the threat of AI++ would be to simply unplug the machines running the alleged virtual world and be done with them. (Incidentally, what does it mean for a virtual intelligence to exist? How does it “leak” into our world? Like a Star Trek hologram gone nuts?)

The burden really is on Chalmers here to explain himself. “Leaking out” would consist of an AI building real-world robotics or recruiting servants to act as its eyes, ears, arms, and legs. Pigliucci probably thinks of the virtual and physical worlds as quite distinct, whereas someone of my generation, who grew up witnessing the intimate connection between the real world and the Wired, views them more as overlapping magisteria. Still, I can understand the skepticism about the “leaking out” point, and it requires more explanation. Massimo, one reason unplugging would not be so simple is that an AI would probably exist as an entity distributed across many information networks; that is my opinion, though, not Chalmers’. From Chalmers’ point of view, I think the concern is that the AI might simply deceive its programmers into believing it was friendly, which is why long-term evaluations in virtual worlds would be necessary. Unplugging would not be simple because a deceived operator would see no reason to pull the plug.

Pigliucci says:

Then the level of unsubstantiated absurdity escalated even faster: perhaps we are in fact one example of virtual intelligence, said Chalmers, and our Creator may be getting ready to turn us off because we may be about to leak out into his/her/its world. But if not, then we might want to think about how to integrate ourselves into AI++, which naturally could be done by “uploading” our neural structure (Chalmers’ recommendation is one neuron at a time) into the virtual intelligence — again, whatever that might mean.

Massimo, he is referring to the simulation argument and to the Moravec transfer. The simulation argument can be explored at simulation-argument.com, and the Moravec transfer is summarized at the Mind Uploading home page. I know these are unusual concepts that should not be invoked so cavalierly, but you might consider reserving judgment a little longer, until you have read the academic papers on these ideas. Mind uploading, a.k.a. whole brain emulation, has been analyzed in detail in a report from the Future of Humanity Institute at Oxford University.

Pigliucci starts to wrap up:

Finally, Chalmers — evidently troubled by his own mortality (well, who isn’t?) — expressed the hope that A++ will have the technology (and interest, I assume) to reverse engineer his brain, perhaps out of a collection of scans, books, and videos of him, and bring him back to life. You see, he doesn’t think he will live long enough to actually see the Singularity happen. And that’s the only part of the talk on which we actually agreed.

Yes, it makes sense that we’d look to the possibility of smarter-than-human intelligences to help us solve the engineering problem of aging. Since human biochemistry is non-magical (just like the brain — surprise!), it is only a matter of time before we figure out how to repair metabolic damage faster than it accumulates. I’m quite skeptical that Chalmers could be genuinely revived from his books and talks, though perhaps an interesting simulacrum could be fashioned. While we’re at it, we can bring back Abe Lincoln and his iconic stovepipe hat.

Pigliucci’s conclusion:

The reason I went on for so long about Chalmers’ abysmal performance is because this is precisely the sort of thing that gives philosophy a bad name. It is nice to see philosophers taking a serious interest in science and bringing their discipline’s tools and perspectives to the high table of important social debates about the future of technology. But the attempt becomes a not particularly funny joke when a well known philosopher starts out by deploying a really bad argument and ends up sounding more cuckoo than trekkie fans at their annual convention. Now, if you will excuse me I’ll go back to the next episode of Battlestar Galactica, where you can find all the basic ideas discussed by Chalmers presented in an immensely more entertaining manner than his talk.

I disagree that the topics investigated by Chalmers — human-level artificial intelligence, artificial superintelligence, safety issues around AI, methods of creating AI, the simulation argument, whole brain emulation, and the like — are intellectually disreputable. In fact, hundreds of academics have published very interesting books and papers on these important topics. Still, I think Chalmers could have explained himself better, and he assumed too much esoteric knowledge on the part of his audience. A talk suited to the Singularity Summit should not be so casually repeated to other groups. It’s his career, though, and if he wants to take risks like that, he may have to pay the price: criticism from folks like Pigliucci, some of whose gripes may be legitimate. I also think Pigliucci probably speaks for many others in his critiques, which is a big part of why they’re worth taking apart and analyzing.

Comments

  1. I don’t respond to rants if I can help it, but I’ll respond to some of the reasonable remarks above. Anyone who’s interested might also check out the video of my talk at the Summit [http://www.vimeo.com/7320820] to see for themselves.

    As I said in my talk, nothing essential depends on defining “intelligence”, “AI+”, and so on. All one needs is a correlated self-amplifying parameter: a parameter g that tracks the capacity to create systems with g, and that tracks certain other capacities A, B, C (where a parameter tracks a capacity roughly if increasing the parameter beyond a relevant point tends to increase the capacity). The “generality thesis” in the talk says that there is such a parameter. Given the generality thesis (and given slightly tighter definitions of the key notions), it follows that given any system with the capacity to create a system with g greater than its own, then absent defeaters (which are defined as anything that prevents systems from manifesting their capacities), capacities A, B, and C will explode. I didn’t argue for the generality thesis in the talk, but prima facie it is plausible for parameters such as the ability to design algorithms, and indeed for “g” from the intelligence literature, with respect to various correlated capacities. One can certainly argue about just what sort of correlations and self-amplification it is reasonable to expect — but nothing depends on vague notions such as “intelligence”.

    Likewise, not much depends on extrapolation. Given a correlated self-amplifying parameter g, the explosion will take care of itself. The only point where something like extrapolation is needed is in the claim that we will get to the point where self-amplification can happen. Here, what was needed was the claim that we will create human-level AI (i.e. a human level of the parameter g) using an extendible method (one such that we have the capacity to implement the method better, and such that implementing the method better will lead to increased values of g). From there the result follows. Perhaps something like “extrapolation” is involved in the claim that there will be human-level AI created by an extendible method, but I think the claim is prima facie plausible, and at least the substantive assumption is made clear.

    Re evolution: as I said to Eliezer at the Summit, the relevant sort of evolution for our purposes here includes evolution by artificial selection as well as natural selection. That is entirely standard in the field of evolutionary computation (a field I know very well), and avoids the worries above. Of course artificial selection raises other issues (what do we select for, does this give the evolved AI some information about ourselves?) — but that’s par for the course. In any case, evolution doesn’t play any essential role in the arguments of the talk. Enough people seem to have gotten distracted by it that I will probably drop it from future versions.

    “Leaking” consists in AI systems coming to be in a position to control real-world effectors outside their virtual world, of course. As I said in the talk, full leakproofness is impossible or pointless (a display screen is such an effector). And it is likely that when it comes to AI++, attempts at containment are futile. Nevertheless, it seems reasonable to hold that when it comes to AI and perhaps AI+, attempts at containment are not entirely futile: at least, they significantly increase the chances of a controllable outcome in the early stages, compared to creating AI outside simulated worlds. So if we are putting together maxims for maximizing the chances of a benign singularity, creation of AI in a simulated world would seem to be high among them. Of course this is no panacea — much, much else is required too.

    As for my career, it’s going just fine. A rant by some guy on a blog won’t hurt it. I haven’t put myself forward as any sort of singularity expert. I talked on the topic at the Summit because I was invited to (I’m a singularity amateur, as I said there), and at CUNY because they asked for a very accessible talk (I first offered them a technical talk on Kaplan’s paradox, but they asked for something easy). As I said in both talks, I’m offering only very sketchy and speculative thoughts here, not a rigorous philosophical argument. But I think there is a place for that in philosophy.

    Certainly, if someone thinks that we shouldn’t speculate about a highly significant possibility in the future because it “probably” will not occur, I’d say that they need to rethink elementary decision theory. If there is even a 5% chance that the singularity will occur we need to think seriously about it. If it occurs, that changes everything. So serious philosophical and scientific thought is required here. I hope that serious academics will not be put off by the possibility of a hostile reception, and will engage in serious thinking about these future possibilities.

  2. Re: artificial selection – it seems likely that we will see both intelligent selection and a form of intelligently-directed mutation – since we will have minimally intelligent agents before we have made a superintelligence.

    …and if we have automated deductive and inductive reasoning, Occam’s razor, intelligent mutations and selection and a generate-and-test cycle, then that begins to look more like engineering design than what we have seen in biology so far.

    Technically, it may still be “evolution” – but it can really help to throw in some qualifiers – perhaps something like: “evolution – but not as we know it” :-)
