To follow up on the previous post, I think that the critique by Massimo Pigliucci (a philosopher at the City University of New York) of David Chalmers’ Singularity talk does make some good points, but I found his ad hominem arguments so repulsive that it was difficult to bring myself to read past the beginning. I would have the same reaction to a pro-Singularity piece that opened with the same level of ad hominem. (Recall that when I went after Jacob Albert and Maxwell Barbakow for their ignorant article on the Singularity Summit, I focused on their admission that they didn’t understand any of the talks, treating that as a negative indicator of their intelligence and knowledge, not on insulting their haircuts.) If anything, put the ad hominem arguments at the end, so that they don’t bias people before they’ve read the real objections.
Pigliucci is convinced that Chalmers is a dualist, which is not exactly true — he is a monist who takes consciousness, rather than spacetime and matter, as fundamental. I used to be on Dennett’s side of the argument and believed there was no hard problem to speak of, but I was eventually moved to somewhere in between Chalmers and Dennett: I really do believe there is an interesting hard problem to be solved, but I doubt that solving it will require the introduction of new laws of physics or new ontological primitives. I understand why people are skeptical of the relevance of Chalmers’ theories of consciousness, but the ideas are quite subtle, and it took me two or three reads of his landmark paper before I even started to pick up on the concept he was trying to convey. It may be that Pigliucci does understand Chalmers’ ideas and considers them useless anyway.
Moving on to the actual critique, Pigliucci accuses Chalmers of saying that because computers are getting faster, we can extrapolate that AI will eventually happen. I do vaguely agree with Chalmers on that one, though the extrapolation is quite fuzzy. Since brains are machines that behave according to (as yet unknown) principles but known basic laws (physics and chemistry), faster computers would surely facilitate the brain’s emulation, or at the very least the instantiation of its basic operating principles in another substrate. I’m not sure why this is controversial, unless people are conceiving of the brain as containing some magic sauce that cannot be emulated in another finite state machine.
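To give a trivial illustration of what “instantiating basic operating principles in another substrate” can mean, here is a textbook leaky integrate-and-fire neuron in a few lines of Python. This is my own sketch with illustrative constants, not anything from Chalmers’ talk, and real neurons are enormously more complicated, but it shows that lawful neural dynamics are straightforwardly computable:

```python
# Toy leaky integrate-and-fire neuron, a standard textbook model.
# The constants below are illustrative, not measured values.

V_REST = -65.0    # resting membrane potential, mV
V_THRESH = -50.0  # spike threshold, mV
TAU = 10.0        # membrane time constant, ms
DT = 0.1          # integration time step, ms

def spike_times(input_drive, duration_ms):
    """Euler-integrate the membrane potential; return spike times in ms."""
    v = V_REST
    spikes = []
    for step in range(int(duration_ms / DT)):
        # Leak toward rest plus a constant driving input (mV per ms).
        v += DT * ((V_REST - v) / TAU + input_drive)
        if v >= V_THRESH:
            spikes.append(round(step * DT, 1))
            v = V_REST  # reset after firing
    return spikes

print(spike_times(input_drive=2.0, duration_ms=100.0))
# Expect a regular spike train: the drive pushes v past threshold every ~14 ms.
```

A whole-brain emulation would be this writ unimaginably large, but nothing about the step from biological dynamics to computed dynamics invokes magic.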
Even if we don’t yet understand intelligence, as Pigliucci points out, that doesn’t mean it will remain unknown indefinitely. Chalmers himself allows in his talk that it may take hundreds of years to solve AI. My view is that anyone who confidently says AI will very likely not be possible in the next 500 years is being overconfident, and is likely engaging in mystical mind-worship out of an irrational sentimental desire to preserve the mystery of the mind. Given the scientific knowledge we’ve gained over the last 500 years (practically all of it), it’s quite far-fetched to say confidently that intelligence will elude reverse-engineering over the next 500 or so. If biology can be reverse-engineered on many levels, so can intelligence.
Pigliucci then points out that Chalmers is lax with his definitions of the terms “AI”, “AI+”, and “AI++”, and I agree. He could use at least a couple more slides to define those terms properly. Pigliucci then argues that the burden of proof is on Chalmers because he is making an unusual claim. I agree with that too. Chalmers is approaching as philosophy an issue that really needs detailed scientific arguments to back it up. On the other hand, within groups where those arguments are already accepted (like the Singularity Summit audience), philosophy is indeed possible. Some philosophizing has to rest on scientifically argued foundations that are not shared among all thinkers. Isn’t it exciting how interdependent philosophy and science are, and how one can simply perish without the other?
I disagree with Pigliucci that the “absent defeaters” points are not meaningful. Chalmers is obviously arguing that something extraordinary would need to happen for his outlined scenario not to occur, and that business as usual over the longer term will involve AI++ rather than its absence. “Defeaters” include things like thermonuclear war, runaway global warming, etc., which Chalmers did concretely point out in his talk (at least in the Singularity Summit version). Pigliucci says, “But if that is the case, and if we are not provided with a classification and analysis of such defeaters, then the entire argument amounts to ‘X is true (unless something proves X not to be true).’ Not that impressive.” Maybe Chalmers should have spent more time describing the defeaters, but I don’t think that all arguments of the form “X is true (unless something proves X not to be true)” are meaningless. For instance, in physics, falling objects accelerate at 9.8 m/s² unless there is air friction, unless they get hit by another object mid-fall, unless they spontaneously explode, etc., and the basic law still has meaning, because it applies widely enough to be useful.
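To make the analogy concrete, here is a minimal numerical sketch (my own illustration, with a made-up drag coefficient) showing that the idealized law stays predictive while a “defeater” like air resistance merely qualifies it:

```python
# Toy illustration of the "law plus defeaters" structure of g = 9.8 m/s^2:
# idealized free fall vs. fall with a simple quadratic drag "defeater".

G = 9.8          # gravitational acceleration, m/s^2
DRAG = 0.02      # made-up drag coefficient per unit mass, 1/m
DT = 0.01        # integration time step, s

def fall_velocity(duration, drag=0.0):
    """Euler-integrate downward velocity for `duration` seconds."""
    v = 0.0
    for _ in range(int(duration / DT)):
        v += (G - drag * v * v) * DT
    return v

print(fall_velocity(5.0))        # ~49 m/s: matches the bare law v = g*t
print(fall_velocity(5.0, DRAG))  # ~22 m/s: drag caps v near terminal velocity
```

The bare law predicts about 49 m/s after five seconds; the drag “defeater” caps the speed near terminal velocity instead. The law remains useful even though it holds only absent defeaters, which is exactly how I read Chalmers’ clause.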
I agree with Tim Tyler in the comments that defining intelligence is not the huge issue Pigliucci makes it out to be. I think g is good enough as an approximate definition (is Pigliucci familiar with the literature on g, such as Gottfredson?), and demanding unreasonably detailed definitions of intelligence, even though everyone has a perfectly good intuitive sense of what it means, seems to be just a way of discouraging any intelligent conversation on the topic whatsoever. If one would like a better definition of intelligence, I strongly recommend Shane Legg’s PhD thesis Machine Superintelligence, whose first part gives a definition and a good survey of past attempts at one. I doubt that many will read it, though, because people like it when intelligence is mysterious. Mysterious things seem cooler.
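For those who want the flavor of the thesis without reading it, the universal intelligence measure Legg develops with Marcus Hutter looks roughly like this (my paraphrase of their notation):

```latex
% Universal intelligence of an agent \pi (after Legg and Hutter):
% expected performance V^{\pi}_{\mu} summed over all computable
% environments \mu in a class E, weighted so that simpler environments
% (lower Kolmogorov complexity K(\mu)) count for more.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

In plain language: an agent is more intelligent the better it achieves goals across a simplicity-weighted range of environments. Whatever its practical limitations, it is a precise, non-mysterious definition.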
Pigliucci then says that AI has barely made any progress over the last few decades because human intelligence is “non-algorithmic”. You mean that it doesn’t follow a procedure for turning data into knowledge and outputs? I don’t see how that could be the case. Many features of human intelligence have already been duplicated in AIs, but as soon as something is duplicated (like master-level chess), it suddenly loses its status as an indicator of intelligence. By moving the goalposts like this, AI can keep “failing” right up until the day before the Singularity. Even a Turing Test-passing AI would not be considered intelligent by many people, who I’m sure would find some obscure reason to discount it.
After the deployment of the above mentioned highly questionable “argument,” things just got bizarre in Chalmers’ talk. He rapidly proceeded to tell us that A++ will happen by simulated evolution in a virtual environment — thereby making a blurred and confused mix out of different notions such as natural selection, artificial selection, physical evolution and virtual evolution.
I agree… sort of. When I was sitting in the audience at the Singularity Summit and Chalmers started to talk about virtual evolution, I immediately suspected that he had not studied Darwinian population genetics, and was using the word “evolution” in the hand-wavey layman’s sense rather than the strict biological sense. If I recall correctly, someone (I think it was Eliezer) got up at the end of Chalmers’ talk and pointed out that creating intelligence via evolution would require a practically unimaginable amount of computing power, simulating the entire history of the Earth. Yet I don’t understand why Pigliucci believes such a thing would be impossible in principle: if evolution could create intelligence out of real atoms on Earth, then simulated evolution could (eventually, given enough computing power) create intelligence out of simulated atoms. The required computing power could be prohibitively massive, but our current inability to simulate reality precisely enough to reproduce some phenomenon X means only that we don’t yet know enough about the phenomenon, or that we lack the computing power, not that it is impossible in principle. Science will eventually uncover the underlying rules of everything whose rules it is theoretically possible to uncover (excluding, for instance, causally disconnected universes), and that includes intelligence, creativity, imagination, humor, dreaming, etc.
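For readers unfamiliar with the strict sense of the term, here is a minimal sketch of simulated evolution (my own toy, nothing from Chalmers’ talk): a population of bit-string genomes under mutation and fitness-proportional selection.

```python
import random

# Toy simulated evolution: evolve bit-strings toward all-ones.
# Fitness = number of ones; selection + mutation, no crossover.

GENOME_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.01
GENERATIONS = 200

def fitness(genome):
    return sum(genome)

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Fitness-proportional (roulette) selection of parents; +1 avoids
    # zero weights when a genome is all zeros.
    parents = random.choices(population,
                             weights=[fitness(g) + 1 for g in population],
                             k=POP_SIZE)
    population = [mutate(p) for p in parents]

print(max(fitness(g) for g in population))  # typically near GENOME_LEN
```

The gulf between evolving twenty-bit strings and evolving minds is exactly Eliezer’s point about computing power, but the selection mechanism itself is the same, and nothing about it is impossible in principle.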
Pigliucci then remarks:
Which naturally raised the question of how do we control the Singularity and stop “them” from pushing us into extinction. Chalmers’ preferred solution is either to prevent the “leaking” of AI++ into our world, or to select for moral values during the (virtual) evolutionary process. Silly me, I thought that the easiest way to stop the threat of AI++ would be to simply unplug the machines running the alleged virtual world and be done with them. (Incidentally, what does it mean for a virtual intelligence to exist? How does it “leak” into our world? Like a Star Trek hologram gone nuts?)
The burden really is on Chalmers here to explain himself. “Leaking out” would consist of an AI building real-world robotics to serve as its eyes, ears, arms, and legs, or recruiting human servants to do the same. Pigliucci probably thinks of the virtual and physical worlds as quite distinct, whereas someone of my generation, who grew up witnessing the intimate connection between the real world and the Wired, views them more as overlapping magisteria. Still, I can understand the skepticism about the “leaking out” point, and it requires more explanation. Massimo, the reason unplugging would not be so simple is that an AI would probably exist as an entity distributed across many information networks, though that is my opinion, not Chalmers’. From Chalmers’ point of view, I think the concern is that the AI could simply deceive its programmers into believing it was friendly, which is why long-term evaluation in virtual worlds would be necessary. On that view, unplugging fails not because we couldn’t do it, but because a deceived overseer wouldn’t want to.
Then the level of unsubstantiated absurdity escalated even faster: perhaps we are in fact one example of virtual intelligence, said Chalmers, and our Creator may be getting ready to turn us off because we may be about to leak out into his/her/its world. But if not, then we might want to think about how to integrate ourselves into AI++, which naturally could be done by “uploading” our neural structure (Chalmers’ recommendation is one neuron at a time) into the virtual intelligence — again, whatever that might mean.
Massimo, he is referring to the simulation argument and the Moravec transfer. The simulation argument can be explored at simulation-argument.com, and the Moravec transfer is summarized at the Mind Uploading home page. I know these are somewhat unusual concepts that should not be referred to so cavalierly, but you might consider reserving your judgment just a little longer, until you’ve read the academic papers on these ideas. Mind uploading/whole brain emulation, for instance, has been analyzed in detail in a report from the Future of Humanity Institute at Oxford University.
Pigliucci starts to wrap up:
Finally, Chalmers — evidently troubled by his own mortality (well, who isn’t?) — expressed the hope that A++ will have the technology (and interest, I assume) to reverse engineer his brain, perhaps out of a collection of scans, books, and videos of him, and bring him back to life. You see, he doesn’t think he will live long enough to actually see the Singularity happen. And that’s the only part of the talk on which we actually agreed.
Yes, it makes sense that we’d look to the possibility of smarter-than-human intelligences helping us solve the engineering problem of aging. Since human biochemistry is non-magical (just like the brain — surprise!), it will only be a matter of time before we figure out how to repair metabolic damage faster than it builds up. I’m quite skeptical that Chalmers could be genuinely revived from his books and talks, but perhaps an interesting simulacrum could be fashioned. While we’re at it, we can bring back Abe Lincoln and his iconic stovepipe hat.
The reason I went on for so long about Chalmers’ abysmal performance is because this is precisely the sort of thing that gives philosophy a bad name. It is nice to see philosophers taking a serious interest in science and bringing their discipline’s tools and perspectives to the high table of important social debates about the future of technology. But the attempt becomes a not particularly funny joke when a well known philosopher starts out by deploying a really bad argument and ends up sounding more cuckoo than trekkie fans at their annual convention. Now, if you will excuse me I’ll go back to the next episode of Battlestar Galactica, where you can find all the basic ideas discussed by Chalmers presented in an immensely more entertaining manner than his talk.
I disagree that the topics investigated by Chalmers — human-level artificial intelligence, artificial superintelligence, safety issues around AI, methods of creating AI, the simulation argument, whole brain emulation, and the like — are intellectually disrespectable. In fact, hundreds of academics have published very interesting books and papers on these important topics. Still, I think Chalmers could have done a better job of explaining himself, and he assumed too much esoteric knowledge on the part of his audience. A talk suited to the Singularity Summit should not be repeated so casually to other groups. Yet it’s his career, and if he wants to take risks like that, he may have to pay the price: criticism from folks like Pigliucci, some of whose gripes may be legitimate. I also think Pigliucci probably speaks for many others in his critiques, which is a big part of why they’re worth taking apart and analyzing.