Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

30Nov/09

Hanson: Philosophy Kills

Robin Hanson found a skeptical Bryan Caplan when the former explained his positions on cryonics to the latter. ("The more I furrowed my brow, the more earnestly he spoke.") Caplan said:

What disturbed me was when I realized how low he set his threshold for [cryonics] success. Robin didn’t care about biological survival. He didn’t need his brain implanted in a cloned body. He just wanted his neurons preserved well enough to “upload himself” into a computer. To my mind, it was ridiculously easy to prove that “uploading yourself” isn’t life extension. “An upload is merely a simulation. It wouldn’t be you,” I remarked. …

“Suppose we uploaded you while you were still alive. Are you saying that if someone blew your biological head off with a shotgun, you’d still be alive?!” Robin didn’t even blink: “I’d say that I just got smaller.” … I’d like to think that Robin’s an outlier among cryonics advocates, but in my experience, he’s perfectly typical. Fascination with technology crowds out not just philosophy of mind, but common sense.

Hanson responded with an articulate explanation of causal functionalism and the illusory quality of the mind/matter distinction:

Bryan, you are the sum of your parts and their relations. We know where you are and what you are made of; you are in your head, and you are made out of the signals that your brain cells send each other. Humans evolved to think differently about minds versus other stuff, and while that is a useful category of thought, really we can see that minds are made out of the same parts, just arranged differently. Yes, you “feel,” but that just tells you that stuff feels, it doesn’t say you are made of anything besides the stuff you see around and inside you.

Although the argument may seem to be about cryonics on the surface, it is really about the viability of uploading.

Filed under: philosophy 10 Comments
24Nov/09

Greg Fish: Against Causal Functionalism

Greg Fish, a science writer with a popular blog who contributes to places like Business Week and Discovery News, has lately been advancing a Searleian criticism of causal functionalism. For instance, here and here. Here is an excerpt from the latter:

A Computer Brain is Still Just Code

In the future, if we model an entire brain in real time on the level of every neuron, every signal, and every burst of the neurotransmitter, we’ll just end up with a very complex visualization controlled by a complex set of routines and subroutines.

These models could help neurosurgeons by mimicking what would happen during novel brain surgery, or provide ideas for neuroscientists, but they’re not going to become alive or self aware since as far as a computer is concerned, they live as millions of lines of code based on a multitude of formulas and rules. The real chemistry that makes our brains work will be locked in our heads, far away from the circuitry trying to reproduce its results.

Now, if we built a new generation of computers using organic components, the simulations we could run could have some very interesting results.

On his blog, he says:

The actual chemical reactions that decide on an action or think through a problem don’t take place and the biological wiring that’s the crucial part of how the whole process takes place isn’t there, just a statistical approximation of it.

This is just another version of vitalism. Computers lack the "vital spark" necessary to create the "soul", even if they implement the functions of intelligence and self-reflection more effectively than the biological entity that inspired their creation. But those functions are what create intelligence and self-reflection, not magic chemistry-that-can-never-ever-be-simulated-even-in-principle.

There is quite a bit of fuzziness in chemical reactions themselves, and not all this fuzziness is necessary to implement intelligence or "self-awareness".

Say we have a molecular dynamics simulation of the brain in complete and utter detail. It behaves exactly the same as the intelligence that it is "simulating". You can say "it's just a simulation", but it can achieve all the same things that the original can, including being your friend or even possibly killing you. In such circumstances, "it's just a simulation" is quite pointless hairsplitting. Certainly, some atomic configurations are conscious and others are not, but there is no vital force that biological molecules possess that high-resolution simulations of those biological molecules would not also possess.

If it walks like a duck, and quacks like a duck, it's still possible that it's not a duck, but if it has a perfect emulation of a duck brain and can walk around in a duck body, then it may as well be a duck.

Filed under: AI, philosophy 22 Comments
19Nov/09

Vague Complexity, Precise Complexity

The word "complexity" is a confusing one. There are two types of complexity -- the vague, layman's term, which seems to mean something like a great chain of being ("the more like us humans it is, the more complex it must be"), and the precise, mathematical term, Kolmogorov complexity, which refers to the measure of computational resources needed to specify the object. If you are familiar with the latter concept, that's what you start to think of whenever someone says "complexity", and people using the layman's sense of the term start sounding vague and/or confused. People working in AI tend to mean Kolmogorov complexity when they say "complexity", so if you hang around with people like that for long enough, it gets ingrained into you.

Since Kolmogorov complexity has a nice mathematical definition, it's very precise. It turns out that lots of not-so-cool things are really complex, like the structure of bread mold, chaotic fluid eddies, or Hadamard's billiards. The definition of Kolmogorov complexity is agnostic towards what kind of complexity you mean. A random series of bits a quadrillion digits long is more complex than a human being, but it isn't particularly more interesting than any other random bitstring. (Update: Actually, I am wrong about this (see the comments), because I was thinking about what I thought of as the "functionally relevant" features of human bodies that make them different from other similar piles of chemicals, but wasn't considering low-level details like precise atomic configurations. If I do, it's more like 10^29 bits, if we assume that it would take about 143 bits per atom to specify their location, type, electron states, etc. So my revised statement would be "A random series of bits 10^30 digits long is more complex than a human being, but it isn't particularly more interesting than any other random bitstring.")
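
Kolmogorov complexity itself is uncomputable, but you can get a crude upper bound on it by running a real compressor over the object. Here is a minimal Python sketch along those lines (the string sizes, the 7 × 10^27 atom count, and the reuse of the 143-bits-per-atom figure are my illustrative assumptions, not exact values):

```python
import os
import zlib

def complexity_upper_bound_bits(data: bytes) -> int:
    """Crude upper bound on Kolmogorov complexity: size in bits after zlib compression."""
    return len(zlib.compress(data, 9)) * 8

# A highly regular megabyte compresses down to almost nothing...
regular = b"abcdefgh" * 125_000     # 1,000,000 bytes of repeating pattern
# ...while a random megabyte is essentially incompressible.
noise = os.urandom(1_000_000)       # 1,000,000 bytes of noise

print(complexity_upper_bound_bits(regular))   # a few thousand bits -- tiny compared to the input
print(complexity_upper_bound_bits(noise))     # ~8,000,000 bits -- close to the raw size

# Back-of-envelope version of the update above: specifying a human atom-by-atom.
atoms_in_human = 7e27    # rough order-of-magnitude estimate (assumption)
bits_per_atom = 143      # figure used in the post for location, type, electron states, etc.
print(f"{atoms_in_human * bits_per_atom:.1e} bits")   # ~1e30 bits, the order of magnitude cited above
```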

Describing, with precise mathematics, what distinguishes interesting-to-human complexity from complexity-in-general is the last great task for our civilization. Once we do that, we'll have solved AI, and come up with a theory for generating any interesting picture, song, book, or other work of art that we want. We'll have automated creativity, insight, genius, lateral thinking, and inspiration, all with lifeless algorithms. There will always be an exponential computational universe to explore, so I doubt we'd run out of fun, but we'll know a lot more about its boundaries and features than we do today, if we choose to have that knowledge.

Humans have a very confined range of interests, algorithmically speaking, so we have a tendency to take complexity-in-general and try to reify it through our reality tunnel into human-understandable complex structures, which often include faces. Imagine staring into a bright screen with a specific pattern for 10 minutes, then turning to a blank wall. That's our situation as humans, but the imprinting happened before we were born and is based on subconscious hidden priors. The human species as a whole suffers from the Dunning-Kruger effect in that we think this is normal, even when our hidden priors fail in spectacular and hilarious ways.

The Dunning-Kruger effect is so strong in some humans that they actually believe that their way of thinking is normative, and that no higher qualitative intelligence levels above Homo sapiens exist. This is analogous to thinking that the Earth is the center of the universe.

Filed under: philosophy 8 Comments
15Nov/09

Toby Ord on BBC for Giving What We Can

A friend and associate of mine, Oxford philosopher Toby Ord, has gained some major coverage on the BBC website. Congratulations, Toby! Toby has pledged 10% of his annual salary, plus any yearly earnings above £20,000, to charities fighting poverty in the developing world. He projects that will amount to about £1M over the course of his career, which he has calculated could save 500,000 years of healthy life.

Toby is participating in what I glibly call "utility war" -- a worldwide war not for money or power, but to achieve the greatest good for the greatest number (positive utility). This could be the war to end all wars. A war we can be pleased to fight.

For more information, see Giving What We Can.

Filed under: ethics, philosophy No Comments
13Nov/09

The Dream-Computer Interface

Several notes on dreams and the dream-computer interface idea.

Dr. J. Allan Hobson, a leading dream researcher at Harvard, is publicizing his hypothesis about dreaming with a press release, "Dreams may have an important physiological function". According to Dr. Hobson, the function of dreams is physiological -- a sort of "mental practice" for the waking state. Hobson said that "dreams represent a parallel consciousness state that is running continuously, but which is normally suppressed while the person is awake".

If his hypothesis is correct, it has an important implication for rationality. In the mind sciences, people tend to drastically overweight the significance of the ghost or soul -- represented by one's conscious experience and "free will" in various theories. If progress in cognitive science has taught us anything, it is that this ghost is both an illusion and far less significant than we, in our vanity, think it is. Hobson's theory is that dreaming has a physiological function, an unflattering reflection on pet theories that assign dreaming a psychological role. It could also explain why animals without a neocortex can dream: dreaming would not be contingent on higher mental functions that are unique to humans. This makes sense, since one would expect far more evolutionary complexity to depend on and be associated with evolutionarily ancient features than with extremely recent ones like general intelligence, though there are a surprising number of features seemingly uniquely associated with general intelligence.

Dr. Hobson is also known for Dream Debate, a 154-minute DVD video where he admirably pounds nails into Freud's coffin, taking on the dream theory that just won't die. Freudian dream interpretation is an example of apophenia, where instead of distrusting yourself and demanding empirical evidence for your theory, you just make shit up that sounds cool and pass it on to others. Apophenia is appealing because anyone can do it, and accordingly it underlies the delusions of psychic phenomena, "synchronicity", believing God is talking to you when you pray to him, the Forer effect (which underlies astrology and fortune-telling), and -- let's not forget -- Freudian psychology.

Another dream-related update has to do with better brain-computer interfacing technologies being worked on by Ed Boyden at the MIT Media Lab. When I wrote "Brain-Computer Interfaces for Manipulating Dreams", I assumed that microelectrode arrays would be used to interface with the brain on a finely detailed level, accompanied by tiny holes drilled into the skull which would be carefully resealed. Of course, some of my readers recoiled in horror at this idea, even if it would allow them to become oneironauts of a most magnificent variety. With Dr. Boyden's new approach of stimulating neurons using optics, we'd still have to carve holes in the skull (to get the optics in), but the interface would be optical rather than electrical, which is significantly less invasive. I notice that Wired's coverage of Boyden earlier this year is very brief. No one seems to have jumped on this story yet, except for Technology Review, which is pretty amazing.

High-throughput circuit screening of intact brains, which Boyden's work could enable, would be a cognitive science revolution of the highest order. Here it is, creeping up on us, and no journalists are giving it the coverage it deserves... there are plenty of other stories like this, but unfortunately I don't have the time to write them all up because there aren't enough hours in the day.

As always, happy dreaming!

9Nov/09

Analysis of Massimo Pigliucci’s Critique of David Chalmers’ Talk on the Singularity

To follow up on the previous post, I think that the critique by Massimo Pigliucci (a philosopher at the City University of New York) of David Chalmers' Singularity talk does have some good points, but I found his ad hominem arguments so repulsive that it was difficult to bring myself to read past the beginning. I would have the same reaction to a pro-Singularity piece with the same level of introductory ad hominem. (Recall that when I was going after Jacob Albert and Maxwell Barbakow for their ignorant article on the Singularity Summit, I was focusing on their admission of not understanding any of the talks and using that as a negative indicator of their intelligence and knowledge, not insulting their haircuts.) If anything, put the ad hominem arguments at the end, so that they don't bias people before they've read the real objections.

Pigliucci is convinced that Chalmers is a dualist, which is not exactly true -- he is a monist with respect to consciousness rather than spacetime and matter. I used to be on Dennett's side of the argument and believed there was no hard problem to speak of, but eventually I was moved to somewhere in-between Chalmers and Dennett, and really do believe that there is an interesting hard problem to be solved, but I doubt that solving it will require the introduction of new laws of physics or ontological primitives. I understand why there are people skeptical of the relevance of Chalmers' theories of consciousness, but the ideas are quite subtle and it took me 2-3 reads of his landmark paper before I started to even pick up on the concept he was trying to transmit. It may be that Pigliucci does understand Chalmers' ideas and considers them useless anyway.

Moving on to the actual critique, Pigliucci accuses Chalmers of saying that because computers are getting faster, we can extrapolate that AI will eventually happen. I think I do vaguely agree with Chalmers on that one, though the extrapolation is quite fuzzy. Since the brain is a machine that behaves according to (as yet unknown) principles but known basic laws (physics and chemistry), faster computers would surely facilitate its emulation, or at the very least the instantiation of its basic operating principles in another substrate. I'm not sure why this is controversial, unless people are conceiving of the brain as including a magic sauce that cannot be emulated in another finite state machine.

Even if we don't yet understand intelligence, as Pigliucci points out, that doesn't mean it will remain unknown indefinitely. Chalmers even points out in his talk that he thinks it will take hundreds of years to solve AI. My view is that if anyone confidently says that AI will very likely not be possible in the next 500 years, they're being overconfident and likely engaging in mystical mind-worship and a desire to preserve the mystery of the mind out of irrational sentimentality. Given the scientific knowledge we've gained over the last 500 years (practically all of it), it's quite far-fetched to say confidently that intelligence will elude reverse-engineering over the next 500 or so years. If biology can be reverse-engineered on many levels, so can intelligence.

Pigliucci then points out that Chalmers is lax with his definitions of the terms "AI", "AI+", and "AI++", which I agree with. He could use at least a couple more slides to define those terms better. Pigliucci then argues that the burden of proof is on Chalmers because he is making an unusual claim. I agree with that also. Chalmers is approaching as philosophy an issue that really could use detailed scientific arguments to back it up. On the other hand, within groups where these arguments are already accepted (like the Singularity Summit), philosophy is indeed possible. Some philosophizing has to rest on scientifically argued foundations that are not shared in common among all thinkers. Isn't it exciting how philosophy and science are so interdependent and how one can just perish without the other?

I disagree with Pigliucci that the "absent defeaters" points are not meaningful. Chalmers is obviously arguing that something extraordinary would need to happen for his outlined scenario not to occur, and that business as usual over the longer term will involve AI++, rather than its absence. "Defeaters" include things like thermonuclear war, runaway global warming, etc., which Chalmers did concretely point out in his talk (at least in the Singularity Summit version). Pigliucci says, "But if that is the case, and if we are not provided with a classification and analysis of such defeaters, then the entire argument amounts to “X is true (unless something proves X not to be true).” Not that impressive." Maybe Chalmers should have spent more time describing the defeaters, but I don't think that all arguments of the form "X is true (unless something proves X not to be true)" are meaningless. For instance, in physics, objects fall with an acceleration of 9.8 m/s^2 unless there is air friction, unless they get hit by another object in mid-fall, unless they spontaneously explode, etc., and the basic law still has meaning, because it applies often enough to be useful.

I agree with Tim Tyler in the comments that defining intelligence is not the huge issue that Pigliucci makes it out to be. I do think that g is good enough as an approximate definition (is Pigliucci familiar with the literature on g, such as Gottfredson?), and asking for unreasonably detailed definitions of intelligence, even though everyone has a perfectly good intuitive sense of what it means, seems to just be a way of discouraging any intelligent conversation on the topic whatsoever. If one would like better definitions of intelligence, I would strongly recommend Shane Legg's PhD thesis Machine Superintelligence, which gives a definition and a good survey of past attempts at one in its first part. I doubt that many will read it though, because people like it when intelligence is mysterious. Mysterious things seem cooler.
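
For the curious, the definition at the core of Legg's thesis (the Legg-Hutter universal intelligence measure, reproduced here from memory, so treat the notation as a sketch rather than a quotation) scores an agent by its expected reward across all computable environments, weighted by their simplicity:

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

where E is the set of computable environments (with suitably bounded total reward), K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is agent π's expected total reward in μ. Simple environments dominate the sum, but performing well across many of them is what earns a high score.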

Pigliucci then says that AI has barely made any progress over the last few decades because human intelligence is "non-algorithmic". You mean that it doesn't follow a procedure to turn data into knowledge and outputs? I don't see how that could be the case. Many features of human intelligence have already been duplicated in AIs, but as soon as something is duplicated (like master-level chess), it suddenly loses status as an indicator of intelligence. By moving the goal posts, AI can keep constantly "failing" until the day before the Singularity. Even a Turing Test-passing AI would not be considered intelligent by many people, because I'm sure they would find some obscure reason to dismiss it.

Pigliucci continues:

After the deployment of the above mentioned highly questionable “argument,” things just got bizarre in Chalmers’ talk. He rapidly proceeded to tell us that A++ will happen by simulated evolution in a virtual environment — thereby making a blurred and confused mix out of different notions such as natural selection, artificial selection, physical evolution and virtual evolution.

I agree... sort of. When I was sitting in the audience at the Singularity Summit and Chalmers started to talk about virtual evolution, I immediately realized that Chalmers had likely not studied Darwinian population genetics, and was using the word "evolution" in the hand-wavey layman's sense rather than the strict biological one. If I recall correctly, someone (I think it was Eliezer) got up at the end of Chalmers' talk and pointed out that creating intelligence via evolution would require a practically unimaginable amount of computing power, simulating the entire history of the Earth. Yet I don't understand why Pigliucci believes that such a thing would be impossible in principle -- if evolution could create intelligence out of real atoms on Earth, then simulated evolution could (eventually, given enough computing power) create intelligence out of simulated atoms. Of course, the amount of computing power required could be prohibitively massive, but our present inability to simulate reality precisely enough to reproduce some phenomenon X just means that we either don't know enough about the phenomenon yet or lack the computing power, not that simulating it is impossible in principle. Science will eventually uncover the underlying rules of everything whose rules it is theoretically possible to uncover (excluding, for instance, causally disconnected universes), and that includes intelligence, creativity, imagination, humor, dreaming, etc.

Pigliucci then remarks:

Which naturally raised the question of how do we control the Singularity and stop “them” from pushing us into extinction. Chalmers’ preferred solution is either to prevent the “leaking” of AI++ into our world, or to select for moral values during the (virtual) evolutionary process. Silly me, I thought that the easiest way to stop the threat of AI++ would be to simply unplug the machines running the alleged virtual world and be done with them. (Incidentally, what does it mean for a virtual intelligence to exist? How does it “leak” into our world? Like a Star Trek hologram gone nuts?)

The burden really is on Chalmers here to explain himself. "Leaking out" would consist of an AI building real-world robotics or servants to serve as its eyes, ears, arms, and legs. Pigliucci probably thinks of the virtual and physical worlds as quite distinct, whereas someone of my generation, who grew up witnessing the intimate connection between the real world and the Wired, views them more as overlapping magisteria. Still, I can understand the skepticism about the "leaking out" point, and it requires more explanation. Massimo, the reason why unplugging would not be so simple is that an AI would probably exist as an entity distributed across many information networks -- though that is my opinion, not Chalmers'. From Chalmers' point of view, I think the concern is that the AI would simply deceive the programmers into believing that it was friendly, which is why long-term evaluations in virtual worlds are necessary. Unplugging would not be that simple because, having been deceived, we would not want to unplug the AI.

Pigliucci says:

Then the level of unsubstantiated absurdity escalated even faster: perhaps we are in fact one example of virtual intelligence, said Chalmers, and our Creator may be getting ready to turn us off because we may be about to leak out into his/her/its world. But if not, then we might want to think about how to integrate ourselves into AI++, which naturally could be done by “uploading” our neural structure (Chalmers’ recommendation is one neuron at a time) into the virtual intelligence — again, whatever that might mean.

Massimo, he is referring to the simulation argument and the Moravec transfer concepts. The simulation argument can be explored at simulation-argument.com, and the Moravec transfer is summarized at the Mind Uploading home page. I know that these are somewhat unusual concepts that should not be referred to so cavalierly, but you might consider reserving your judgment just a little bit longer until you read academic papers on these ideas. Mind uploading/whole brain emulation has been analyzed in detail by a report from the Future of Humanity Institute at Oxford University.

Pigliucci starts to wrap up:

Finally, Chalmers — evidently troubled by his own mortality (well, who isn’t?) — expressed the hope that A++ will have the technology (and interest, I assume) to reverse engineer his brain, perhaps out of a collection of scans, books, and videos of him, and bring him back to life. You see, he doesn’t think he will live long enough to actually see the Singularity happen. And that’s the only part of the talk on which we actually agreed.

Yes, it makes sense that we'd reach out to the possibility of smarter-than-human intelligences to help us solve the engineering problem of aging. Since human biochemistry is non-magical (just like the brain -- surprise!), it will only be a matter of time before we start figuring out how to repair metabolic damage faster than it builds up. I'm quite skeptical about Chalmers being genuinely revived from his books and talks, but perhaps an interesting simulacrum could be fashioned. While we're at it, we can bring back Abe Lincoln and his iconic stovepipe hat.

Pigliucci's conclusion:

The reason I went on for so long about Chalmers’ abysmal performance is because this is precisely the sort of thing that gives philosophy a bad name. It is nice to see philosophers taking a serious interest in science and bringing their discipline’s tools and perspectives to the high table of important social debates about the future of technology. But the attempt becomes a not particularly funny joke when a well known philosopher starts out by deploying a really bad argument and ends up sounding more cuckoo than trekkie fans at their annual convention. Now, if you will excuse me I’ll go back to the next episode of Battlestar Galactica, where you can find all the basic ideas discussed by Chalmers presented in an immensely more entertaining manner than his talk.

I disagree that the topics investigated by Chalmers -- human-level artificial intelligence, artificial superintelligence, safety issues around AI, methods of creating AI, the simulation argument, whole brain emulation, and the like -- are intellectually disrespectable. In fact, there are hundreds of academics who have published very interesting books and papers on these important topics. Still, I think Chalmers could have done a better job of explaining himself, and assumed too much esoteric knowledge in his audience. A talk suited to Singularity Summit should not be so casually repeated to other groups. Yet, it's his career, so if he wants to take risks like that, he may have to pay the price -- criticism from folks like Pigliucci, some of whose gripes may be legitimate. I also think that Pigliucci probably speaks for many others in his critiques, which is a big part of why I think they're worth taking apart and analyzing.

19Oct/09

The Connection Between Stimuli and Pleasure/Pain is Arbitrary, an Objective Fact that Has Relatively Little to Do with One’s Personal Tech Habits

My thoughts on sex after the Singularity were picked up by a blogger on CNET, Chris Matyszczyk, so I thought I'd react a little bit. He writes:

Indeed, Retrevo's findings are so disturbing that I wonder whether the roboticists are right to suggest that sex should be a matter of adjusting one's own chemistry rather than attempting to consort with another human. To wit, in the words of blogger Michael Anissimov, one of the "leading thinkers in the radical tech community" who were invited to pontificate in the lustrous pages of H Plus magazine: "The connection between certain activities and the sensation of pleasure lies entirely in our cognitive architecture, which we will eventually manipulate at will."

I am haunted by the drastic prognostications by the salivators over The Singularity about the future of sex. Indeed, some words of Anissimov are rattling around my head like those of a particularly angry former lover. Speaking of this beautiful future, he said: "I could make any experience in the world highly pleasurable or highly displeasurable. I could make sex suck and staring at paint drying the greatest thing ever."

I'm not trying to sell a particular future. It is a physical fact about our brains that the connections between stimuli and pleasure/displeasure are arbitrary and exist mostly for evolutionary reasons. There is no fundamental reason why we won't eventually be able to play around with them. This is a fact that has existed in the abstract since the dawn of brains -- I didn't make it up. Ever since the first nervous systems evolved, their pleasure/pain-stimuli connections had to be directed by evolution and natural selection.

The point of my comments in that sex article is that these connections are arbitrary and we will eventually modify them if we wish, because the mind is not magical, it's "just" a machine. Matyszczyk's slightly uncomfortable reaction to this objective fact shows that he hasn't been exposed to it enough. The alternative to my position is projectionism -- pretending that certain features of reality, like sex, are inherently fun rather than fun because evolution made it that way. There is nothing wrong with understanding that things are fun or not fun primarily because we evolved to interpret them as such. No activity is inherently anything. Amusement parks are not inherently fun, they're just fun because of the complex interactions between external stimuli and our brains. An alien from the Pleiades, or even certain human beings, might find an amusement park horrifying.

Drawing a connection between realizing the arbitrary linkage between stimuli and pleasure and the report he cited, which simply suggests that 35% or something of people under 35 check Twitter "after sex", is foolish. One is an abstract philosophical/cognitive science issue, the other has to do with technophilia among people under middle age. I can be a person who realizes the arbitrary linkage between stimuli and pleasure and be the most "human-like" dude you can imagine, like one of the inhabitants of Zion in that incredibly lame party scene at the beginning of the second Matrix movie.

It is also very foolish to suggest that just because someone offers their thoughts on sex after the Singularity, that one is "salivating" over the Singularity, or whatever. I have a very well-rounded and healthy life, similar to many of the "well-adjusted" stiffs who squirm awkwardly in their chairs whenever conversation moves to topics more abstract than the latest tech gossip, the latest episode of Mad Men, or the wine and food. I have no idea if Matyszczyk really thinks like that, but condemning me to "salivation" for spending 10 minutes writing down my thoughts on sex after the Singularity is not a good sign. There seems to be some assumption that if someone is interested in any significant way in a non-mainstream topic, then they must be "obsessed" or "salivating" over it. It seems like a not-so-subtle way of socially punishing people who have the gall to focus on non-mainstream topics.

Indeed, I don't even see what's so wrong about checking Twitter "after sex". The statement elicits a mental image of someone dashing right away from sex to checking Twitter, but that is misleading. This is just shock imagery that makes it easier to promote your study. Maybe you're someone that has sex several times a day, and if you spent half an hour each time after sex staring into your partner's eyes and chatting lovingly, you would never have a chance to really check out Twitter. Maybe older people make a bigger deal out of sex because they have it less often. If it isn't offensive to start reading a book or (heaven forbid) watch television 5-10 minutes after you're done getting it on, then why the hell is it such a huge deal to check out Twitter? Maybe people actually like to move on to other things after they have sex, because they're like, not as horny any more, because they just (surprise!) had sex. Jesus Christ.

18Oct/09

Future Shock Levels as Point Estimates

My friend and associate Peter de Blanc recently put up an interesting post on how the point-estimate nature of popular futurist prediction signifies a fundamentally non-probabilistic way of thinking about the future and possible future technologies. We tend to think in black-and-white, yes-or-no terms rather than probabilities, because it's easier for us to handle. For instance, most people don't represent the likelihood of catastrophic climate change as a probability -- they tend to think in terms of "it will happen" or "it won't". I find myself falling into this way of thinking constantly, and have to exert deliberate effort to preserve a probabilistic frame of mind.
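
To make the contrast concrete, here is a toy Python sketch (the distribution and all numbers are invented purely for illustration): a point estimate stores one date, while a probabilistic forecast can be queried for any event you care about and revised as evidence comes in.

```python
import random

# Point-estimate style: "it happens in 2045", full stop.
point_estimate = 2045

# Probabilistic style: an (entirely made-up) distribution over arrival years.
def sample_arrival_year() -> float:
    return 2010 + random.lognormvariate(3.5, 0.6)   # median roughly 2043, long right tail

samples = [sample_arrival_year() for _ in range(100_000)]
p_before_2045 = sum(year < 2045 for year in samples) / len(samples)
p_after_2200 = sum(year > 2200 for year in samples) / len(samples)

print(f"P(before 2045) ~ {p_before_2045:.2f}")
print(f"P(after 2200)  ~ {p_after_2200:.2f}")
# The point estimate can only answer "when?"; the distribution can also answer
# "how likely by date X?", and can be updated smoothly as new evidence arrives.
```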

9Oct/09

Does the Universe in Fact Contain Almost No Information?

Another dude I met at the Summit who I liked was Singularitarian Max Tegmark. He was a lot taller than I imagined. My favorite paper of his has always been "Does the Universe in Fact Contain Almost No Information?", which fits in with a theory I came up with independently (and which I'm pretty sure has been postulated elsewhere): that we probably live in the simplest possible universe that can contain conscious entities. Another interesting paper, from 2007, is "Shut up and calculate", which explores Max's concept of a "level IV" universe that contains every mathematically possible structure.

Filed under: philosophy No Comments
9Oct/09

Risks with Low Probabilities and High Stakes

One of the people I met at the Summit who I got along with was Toby Ord. Toby is the mind behind Giving What We Can. I've looked at his website and papers before, but now I'm back for more. You can read along by checking out "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes", a paper by Toby Ord, Rafaela Hillerbrand, and Anders Sandberg. Here is a random quote:

Flawed arguments are not rare. One way to estimate the frequency of major flaws in academic papers is to look at the proportion which are formally retracted after publication. While some retractions are due to misconduct, most are due to unintentional errors. Using the MEDLINE database (Cokol, Iossifov et al. 2007) found a raw retraction rate of 6.3 ⋅ 10^-5, but used a statistical model to estimate that the retraction rate would actually be between 0.001 and 0.01 if all journals received the same level of scrutiny as those in the top tier. This would suggest that P(¬A) > 0.001, making our earlier estimate rather optimistic. We must also remember that an argument can easily be flawed without warranting retraction.
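
The paper's central move can be written out in a few lines: if an argument A says the probability of catastrophe X is tiny, the probability that the argument itself is flawed puts a floor under your final estimate. A minimal sketch, with the 0.001 flaw rate taken from the passage above and the other numbers invented for illustration:

```python
# P(X) = P(X|A) * P(A) + P(X|not-A) * P(not-A)
p_flawed = 0.001        # lower-bound flaw rate for top-tier journals, from the quoted estimate
p_x_if_sound = 1e-12    # risk claimed by the argument, assuming it is sound (illustrative)
p_x_if_flawed = 1e-6    # fallback prior on the risk if the argument tells you nothing (illustrative)

p_x = p_x_if_sound * (1 - p_flawed) + p_x_if_flawed * p_flawed
print(f"{p_x:.1e}")     # ~1.0e-09: the flawed-argument branch, not the 1e-12 claim, dominates
```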

I would guess that 1 in 50 scientific papers have flawed arguments, if not more. Of all the presentations at the Summit, I found Robin Hanson's talk on science among the most valuable. (Videos will be online in a few weeks, not longer than that.) He basically says that scientists work to please their institutions and that science is about gaining status, not finding the truth. Robin points out that most people who give large amounts of money to academic institutions are surprisingly uninterested in the specifics of what the scientists there actually do. What a shocker that an academic is willing to speak truth to other academics. I can't wait until he gains greater prominence, to see what twisted delusionists will step forward to attack him.

Filed under: philosophy No Comments
6Aug/09

Phil Goetz: Exterminating Life is Rational

Phil Goetz has a nice post up at Less Wrong arguing that we will inevitably eliminate ourselves as a species, eventually, unless one of the following things happens:

* We can outrun the danger: We can spread life to other planets, and to other solar systems, and to other galaxies, faster than we can spread destruction.
* Technology will not continue to develop, but will stabilize in a state in which all defensive technologies provide absolute, 100%, fail-safe protection against all offensive technologies.
* People will stop having conflicts.
* Rational agents incorporate the benefits to others into their utility functions.
* Rational agents with long lifespans will protect the future for themselves.
* Utility functions will change so that it is no longer rational for decision-makers to take tiny chances of destroying life for any amount of utility gains.
* Independent agents will cease to exist, or to be free (the Singleton scenario).

He looks at each of these possibilities one by one.

Filed under: philosophy 2 Comments
5Jun/09

Bad News for Conservatives

From Eurekalert: "Easily grossed out? You might be a conservative!" Excerpt:

Liberals and conservatives disagree about whether disgust has a valid place in making moral judgments, Pizarro noted. Conservatives have argued that there is inherent wisdom in repugnance; that feeling disgusted about something -- gay sex between consenting adults, for example -- is cause enough to judge it wrong or immoral, even lacking a concrete reason. Liberals tend to disagree, and are more likely to base judgments on whether an action or a thing causes actual harm.

Actual harm -- what a wild concept!

Just to save this post from turning into a political flame war, let me point out that there are certainly good things about some forms of "conservatism", such as capitalism, but when it comes to moral reasoning among some conservatives, I am frequently disturbed.

Filed under: philosophy 22 Comments