On Religion

Looking around, there isn’t much evidence of divine intervention in human society. All the claims of “miracles” or “divine help” seem to be either too far in the past to test, badly documented, riddled with loopholes and logical fallacies, or all three. This lack of divine intervention has led to an enormous amount of suffering in the world- just think of how many people could have been saved if God had pulled a few strings and stopped the Holocaust. This rules out a God that is both benevolent and omnipotent- he can be one or the other, or neither, but not both, or we’d observe a different experimental result. This leaves three possibilities:

1. God is neither omnipotent nor benevolent. In this case, you might as well say that he isn’t really God, so this is equivalent to saying “God does not exist”.

2. God is benevolent but not omnipotent- he wants to help us but can’t.

3. God is omnipotent but not benevolent, so he could intervene to help but chooses not to, making him evil.

You can effectively rule out #3, because if God were evil, the vast majority of evil utility functions would have him simply wipe us out rather than merely refrain from interfering in bad situations. Given an evil God, therefore, it would be overwhelmingly likely that we would not exist, contradicting experiment.

#1 is the conclusion that most people who think about this problem come to, primarily because it’s simple. But if you assume that the Universe is infinite- as seems to be the general assumption nowadays- then it is mathematically certain that God must exist, because any possible combination of atoms will be repeated an infinite number of times. This leaves possibility #2- there is a God, in fact there are infinitely many Gods, but they’re all outside our past light cone, so they can’t help us. Note that there are also an infinite number of malevolent Gods, so this could be construed as good luck rather than bad luck.

This leaves the question of what to do- we still have the problem of suffering, after all, and a God outside our light cones obviously can’t respond to prayers. The best solution is to create our own God, or a close facsimile thereof, that we can design to be nice in every possible aspect ahead of time. Exactly how to do this is a very complicated subject, but that should be the goal of anyone looking to get rid of the huge amounts of suffering on Earth today.

Conservation of Annoyance

The fundamental law of Conservation of Annoyance:

When there are two or more widely known ways to accomplish a task X, they will all be equally annoying.

The reasoning behind this principle is rather simple, and extends to optimization processes other than humans (but only for large groups). Note that for optimization processes in general, you should replace “annoyance” with “negative utility” to avoid anthropomorphizing.

When there are multiple methods to accomplish some goal X, each one will have many factors that could be annoying- how much time you have to spend, how much money it costs, how nice people are, what the chances are of injury, etc. To simplify things, I’m going to assume that each individual in the population has the same annoyance factors- everything is equally annoying to everyone. Obviously, people will choose whichever method involves the least annoyance. But one of the key factors in calculating annoyance is how crowded something is. The more people there are trying to use the same method, the more the method becomes overloaded, increasing wait times and so forth.

Whenever method A is a lot less annoying than method B, therefore, crowds of people will shift to A, making A more annoying and B less annoying. This process continues until there’s no incentive for anyone to shift; after that, personal factors cause people to switch between the methods in rough proportion to how many are already using each one, preserving the relative proportions. Thus, in any large population which uses multiple methods for accomplishing the same goal, the population will redistribute itself among the methods until they are all equally annoying.
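
Here is a minimal simulation sketch of this equilibration (the base-annoyance and congestion numbers below are made up purely for illustration): each round, part of the crowd on the more annoying method defects to the less annoying one, and the two annoyance levels converge to the same value.

    # Hypothetical parameters: annoyance(i) = base[i] + congestion[i] * users[i]
    def simulate(pop=10_000, base=(2.0, 5.0), congestion=(0.004, 0.002), rounds=100):
        users = [pop / 2, pop / 2]
        for _ in range(rounds):
            annoy = [base[i] + congestion[i] * users[i] for i in (0, 1)]
            worse = annoy.index(max(annoy))
            gap = annoy[worse] - annoy[1 - worse]
            # moving m people shrinks the gap by m * (c0 + c1); move half that many
            movers = min(users[worse], 0.5 * gap / sum(congestion))
            users[worse] -= movers
            users[1 - worse] += movers
        return [round(base[i] + congestion[i] * users[i], 2) for i in (0, 1)]

    print(simulate())  # -> [17.33, 17.33]: both methods end up equally annoying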

If you dehomogenize the population- if you get rid of the “everyone finds everything equally annoying” assumption- CoA becomes more complicated but still basically correct. It still holds true for the average member of the population, but a highly atypical member may find one method a lot less annoying than another. Thus, if you wish to reduce annoyance, you can seek out situations which are highly atypical for people who share your characteristics. But being in a highly atypical environment is itself perceived as annoying, at least by evolved creatures, so there’s a limit to how much annoyance you can get rid of.

If you have information asymmetry, you can get weak violations of CoA when small segments of the population know of a method that’s not available to anyone else- if you don’t know something exists, you can’t use it. However, information asymmetries go to zero as the population goes to infinity, because it only takes one person to leak something onto the Internet, and the larger a group’s membership (in absolute terms), the sooner a leak assimilates the group’s knowledge into the general population. Thus, as population goes to infinity, the number of people privy to any piece of secret information stays fairly constant, and so becomes an ever smaller portion of the population as a whole.

The Movies

Everyone knows that movies are fictional. There’s no particular reason why a movie would bear any resemblance whatsoever to reality; movies are designed to entertain, not to make predictions about things. Yet every time someone mentions future technology- nanotech, cloning, genetic engineering, neural interfaces- the very first thing most people think of is movies. A movie comparison isn’t just one possible reaction- it’s the first reaction, the one that kicks in by default. And once someone mentions the movie, people automatically start making other comparisons- the technology will do X like it was in the movie, it will be developed in manner Y like it was in the movie, and it will have effect on society Z like it did in the movie.

This mechanism works very well in everyday conversations between humans. You see a situation developing like a previous one, so you make an analogy. Thing X will probably happen next, then thing Y, then Z, just like it did last year. This reasoning usually works, because everyday social situations now are much the same as they were ten thousand years ago (see here for an excellent essay that explains this in detail). And so the human brain has been automatically trained, both through evolution and through years of experience, to auto-import concepts between analogous situations. Historically, this has even helped us with science and engineering- if you don’t know how to do something, you can look at something that seems similar, import concepts and see if they work.

The problem with doing this with movies is that movies haven’t been vetted by reality. If something actually happens, you at least know that it’s physically possible and has a decent likelihood. But a movie writer can throw concepts together with no concern whatsoever for whether anything would actually happen that way. A movie villain, or hero, can pull off a master plan that has dozens of possible flaws; this doesn’t seem implausible because each flaw isn’t that likely to cause failure. The cumulative effect of the flaws is to make the plan’s execution an impossibility or freak occurrence, but who cares? When you find these errors and point out how totally impossible movies are, most people will just shrug it off and say “It’s only a movie”, and then carry on trying to use the movie’s concepts to think. So it isn’t enough to realize consciously that movies are fake- if you want to do legitimate reasoning about the future, you have to change your thinking processes to automatically exclude anything imported from Hollywood.

Falsification and Proof

Humans have an annoying habit of dodging the truth by whatever means necessary. Even when the truth is blatantly obvious, we often persist in believing things that we wish to be true rather than believing things that are true. The ideas of “proof” and “disproof” are often used as convenient escape hatches- if we don’t want to believe something, we can always say “Oh, but you can’t prove that” and persist in believing it anyway. Or if we do want to believe it, we can just challenge people to prove it isn’t true. According to Bayes’ Theorem, nothing is ever “provable” or “disprovable”, and so this retort is both literally true and effective at putting the opponent on the defensive.

The problem with this “argument” is that it carries zero information value. It is mathematically impossible to get a probability of 1 or 0 for statement X without assuming a probability of 1 or 0 somewhere within the probability calculation. And this holds true regardless of what X is- it doesn’t matter whether it talks about the growth of potted plants or the teleportation of qubits. Saying that something isn’t “proven” or “disproven” therefore doesn’t distinguish between different values of X, and so it is entirely useless for determining the actual probability of X. The flip side of this is that the “arguer” can be lazy, and not care at all about what X is, and can still babble on about “proof” and sound somewhat convincing.
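
To see this numerically, here is a quick sketch (the likelihood numbers are invented for illustration): start with any prior strictly between 0 and 1 and apply Bayes’ Theorem repeatedly with strong but imperfect evidence. The posterior creeps toward 1 but never reaches it.

    def update(prior, p_e_if_true=0.99, p_e_if_false=0.01):
        # one application of Bayes' Theorem for a single piece of evidence
        num = prior * p_e_if_true
        return num / (num + (1 - prior) * p_e_if_false)

    p = 0.5
    for _ in range(5):
        p = update(p)
        print(p)  # 0.99, 0.99989..., ... approaches but never equals 1.0
    # reaching exactly 1 would require a prior or a likelihood of exactly 0 or 1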

But why should anyone even care whether a statement has been “proven” or “disproven”? Saying that something must be true or cannot be true sounds nice- it provides a kind of emotional fortitude- but it has nothing to do with whether we should care about it. After all, it is possible under the rules of quantum mechanics that we will all spontaneously turn into potted plants, yet nobody has waking up as a petunia on their mind. The key question in determining relevance is “what is the differential between scenario A, where X is true, and scenario B, where X isn’t true?”. The inability to answer this key question, or even to realize it is being asked, has probably caused more difficulties than every popular political scapegoat combined.

Free Will

“Free will” is often used as an excuse by various philosophers, theologians and religious leaders to “prove” that no computer could ever hope to match the power of the human brain. A computer, so the argument goes, has its actions predetermined by a set of rules (source code), while humans are free to make decisions based on whatever they feel will lead to the best possible future. In the language of physics, modern computers are deterministic, while humans are supposed to either have “souls” or be subject to weird quantum effects that classical computers can’t emulate. Even if you added a quantum RNG to the computer, it would still be a linear, logical machine- every effect could be traced back to its cause.

Determinism is widely held to be incompatible with “free will”, because the idea of free will implies that you can’t know your choice until after you’ve made it, while determinism implies that every decision can be traced back to its cause. But there is a mathematical theorem called Rice’s Theorem that says that it is impossible to build a computer that will predict the output of an arbitrary Turing machine. In other words, given an arbitrary decision maker faced with a choice between A and B, it is computationally impossible to determine what the decision maker will choose in advance of running the program (in the same way it is impossible to determine whether a Turing machine halts), even if the universe is completely deterministic.
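
The core of the argument is the usual diagonalization trick, sketched below (the predict function is hypothetical, assumed only for the sake of contradiction; nothing like it can actually be implemented, which is the point):

    def predict(agent):
        # Hypothetical perfect predictor: returns 'A' or 'B', whichever
        # the given agent will actually choose. Assumed to exist only
        # for the sake of contradiction.
        ...

    def contrarian():
        # consult the predictor about ourselves, then do the opposite
        return 'B' if predict(contrarian) == 'A' else 'A'

    # If predict(contrarian) returns 'A', contrarian returns 'B', and vice
    # versa, so no predictor can be right about every decision-maker- even
    # in a completely deterministic universe.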

Quantum randomness is also widely held to be incompatible with free will, because under quantum theory, it’s just some quantum switch making that choice between A and B, not you. But the brain is so hugely complex that a decision almost never comes down to a single quantum event- neural logic of some sort is usually involved. The decision-maker therefore actually exists within our neural circuitry, and since any piece of that circuitry forms a part of who we are, it is actually us making the decision rather than God playing dice (unless you believe that God deliberately manipulates quantum events in the brain, which would be a very interesting hypothesis to test).

Generalized Bayes’ Theorem

Bayes’ Theorem is one of the most elegant mathematical statements ever produced by humankind. (For those not familiar with Bayes’ Theorem, please go here for an in-depth introduction.) But unfortunately, it can only evaluate the posterior probability of a single event. A real-life Bayesian analyzer or decision system is likely to include the probabilities for thousands of different events, and while these can all be computed by applying Bayes’ Theorem individually to each one, this strikes me as mathematically inelegant. Therefore, I would like to present a generalization of the Theorem. Take a probability vector O that lists the prior probabilities for a set of mutually exclusive and exhaustive events A1 through An:

    O = (p(A1), p(A2), ..., p(An))

Then take an event P, and generate a vector P’ that lists the conditional probabilities of P given A1 through An:

    P’ = (p(P|A1), p(P|A2), ..., p(P|An))

The posterior probability of A1 through An given P will then be the probability vector:

    (O ∘ P’) / (O · P’)

where the operator in the numerator is the entrywise or Hadamard product and the operator in the denominator is the common dot product.
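
Here is a minimal sketch of this formula in code (NumPy is used purely for illustration, and the function name is my own): the numerator is the entrywise product of the two vectors, the denominator their dot product, and with two elements it reduces to ordinary Bayes’ Theorem, as noted at the end of the proof below.

    import numpy as np

    def generalized_bayes(priors, likelihoods):
        # priors: p(A1)..p(An) for mutually exclusive, exhaustive events
        # likelihoods: p(P|A1)..p(P|An)
        O = np.asarray(priors, dtype=float)
        Pv = np.asarray(likelihoods, dtype=float)
        return (O * Pv) / np.dot(O, Pv)  # Hadamard product over dot product

    # two-event sanity check against ordinary Bayes' Theorem:
    print(generalized_bayes([0.3, 0.7], [0.9, 0.2]))  # [0.6585... 0.3414...]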

Proof: Take any event Ak from A1 through An. The posterior probability of Ak given P will then be, according to Bayes’ Theorem:

    p(Ak|P) = p(P|Ak) * p(Ak) / (p(P|Ak) * p(Ak) + p(P|AkC) * p(AkC))

According to the above formula, it will be equal to:

    p(Ak|P) = p(P|Ak) * p(Ak) / (p(P|A1) * p(A1) + p(P|A2) * p(A2) + ... + p(P|An) * p(An))

The numerators are obviously the same, but notice that the p(P|AkC)*p(AkC) in Bayes’ Theorem can be extended, since the Aks are mutually exclusive and exhaustive:

    p(P|AkC) * p(AkC) = p(P ∧ AkC) = p(P ∧ (A1 ∨ A2 ∨ ... ∨ A(k-1) ∨ A(k+1) ∨ ... ∨ An))

Note that the chain of or statements excludes Ak. Using the distributive property of propositional logic:

    = p((P ∧ A1) ∨ (P ∧ A2) ∨ ... ∨ (P ∧ A(k-1)) ∨ (P ∧ A(k+1)) ∨ ... ∨ (P ∧ An))

Again, because the Aks are mutually exclusive:

    = p(P ∧ A1) + p(P ∧ A2) + ... + p(P ∧ A(k-1)) + p(P ∧ A(k+1)) + ... + p(P ∧ An)

And finally, converting back to the original form:

    = p(P|A1) * p(A1) + p(P|A2) * p(A2) + ... + p(P|A(k-1)) * p(A(k-1)) + p(P|A(k+1)) * p(A(k+1)) + ... + p(P|An) * p(An)

The two equations are hence equal, since the p(P|Ak) * p(Ak) term in the denominator fills in the missing term in the summation. Note that in the simple case where the probability vector has two elements, this reduces to Bayes’ Theorem, since p(A2) = p(A1C).

Human Equivalence

The simplest definition of human-equivalent AI is an AI that can do everything a human could do, given equivalent hardware. Human-equivalence is really a superset of Turing’s famous test for AI intelligence; a human-equivalent AI (given a human body) should be able to fool everyone into thinking it is human, even if they live with it and interact with it in person for years on end. However, the common implication of human equivalence is that an AI will be capable of doing only the things that a human can do. Thus, if we can develop a human-equivalent AI, the natural implication is that it will simply become another intelligent player on the world stage, perhaps with a few added abilities like large memory, quick recall and computer-like arithmetic skills.

But in order to master the enormous range of abilities that humans have, an AI would have to be very, very good at reprogramming itself. Computer code doesn’t just magically adapt to doing new things- it has to be written out, in full detail, every time you want the computer to do a new task. To do something as simple as baking a pie, the AI would have to implement the pie-baking code within the time it would take a human to do the same task (probably a few minutes or so). And if the AI can write a program to bake a pie, then it could also write any number of programs to take over computers via the Internet and capture their processing power for its own use.

Even if this happened today, when we don’t have any nanotech assemblers sitting around in people’s houses waiting for a remote takeover, such an AI could wreak enormous amounts of havoc if it wasn’t specifically designed to be Friendly. The bandwidth alone used in taking over the Internet would result in the quick collapse of thousands of business systems worldwide, and control of the news networks could easily be used to send the human species into a panic. Even in the best-case scenario where the AI is magically shut down a few seconds afterward and so doesn’t get a chance to develop nanotechnology or other Earth-destroying weapons, trillions of dollars in damage would very quickly result.

Human-equivalence is thus well past the point where the AI has to be designed to be Friendly, if we don’t want to all wake up one morning and find that we have no cellphones, bank accounts, radio stations, websites, or computer-managed international shipping networks. It takes surprisingly little intelligence to wreak havoc upon the world- heck, even humans manage to do it, despite our laughable logical capabilities (how long does it take you to evaluate the statement !(((5 >= 3) | (sqrt(196) != 14)) && ((98*17)^2 / 19 > 215)))? Even general intelligence probably isn’t necessary, so long as the AI has the capability to design new code to actively evade human network security procedures.
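
For comparison, here is that statement translated into Python (reading | as “or”, && as “and”, ! as “not”, and ^2 as squaring); a computer evaluates it in microseconds:

    import math

    result = not (((5 >= 3) or (math.sqrt(196) != 14))
                  and ((98 * 17) ** 2 / 19 > 215))
    print(result)  # False: both inner clauses are true, so the negation is false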

Nanotechnology

There is a great deal of buzz in the media about nanotechnology, and most transhumanists would agree that nanotech is an extremely big deal. However, the vast majority of people consuming the media, and even the people producing it, seem to have no idea what exactly nanotechnology is, or what implications it has for human society.

When Eric Drexler first published Engines of Creation back in 1986, a huge number of new, interrelated ideas were introduced into memespace in very quick succession. We, as creatures of the savanna, aren’t used to this- new developments in prehistoric Africa tended to happen one at a time, and could eventually be explained using pre-existing knowledge. Without an existing reference point to triangulate to- who ever heard of self-replicating tiny machines?- people simply made one up. The easiest target was the word “nanotechnology”- since the root “nano” means “small”, “nanotechnology” must be any technology that exists on a small scale.

Once the triangulation was made, a whole bunch of other ideas that were floating around became tied to the concept of “small” as the key element. Visions of unlimited control over the structure of all matter are very attractive, and so everyone working with things on the nanoscale saw the obvious implications for marketing. Research and engineering projects that involved doing anything on the molecular level- no matter how trivial- rapidly picked up the buzzword, and soon even companies like HP were using it, primarily as an effort to sound “cool”.

For a thorough study of what exactly nanomachines and molecular manufacturing are capable of, I recommend Eric Drexler’s Engines of Creation, along with his Nanosystems, which deals with technical details and specific proposals for how to make nanotechnology work. Both are quite readable and freely available online. In summary, however, the important concepts surrounding “nanotechnology” are-

  1. Molecular manufacturing. This means the capability to put atoms into place at arbitrary locations, giving us the ability to control the exact structure of matter down to the molecular level. Using MM, you can literally manufacture anything that’s physically possible, using nothing but the raw elements that make it up and a set of blueprints.
  2. Nanomachines. These are small, independent devices that can manipulate atoms and are themselves molecule-sized. Nanomachines will allow us to have modern technology with us, automatically, all the time, without having to carry it, maintain it or even know that it’s there.
  3. Self-replication. Self-replication will allow a machine to make copies of itself. Each of the copies can then make more copies, and so the population of machines (given suitable resources) can grow exponentially, like any biological population. A self-replicating assembler will allow things to be made in very large quantities from a single original factory plus raw materials, because you only have to build the factory once (see the quick calculation after this list).
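
To get a feel for what exponential self-replication buys you, here is a back-of-the-envelope sketch (the seed mass, target mass, and doubling time are invented for illustration):

    import math

    seed_kg = 1.0        # one seed assembler package
    target_kg = 1e12     # a billion tonnes of manufacturing capacity
    doublings = math.ceil(math.log2(target_kg / seed_kg))
    print(doublings)     # 40 doublings
    # at a hypothetical one doubling per hour, that's under two days,
    # given sufficient raw materials and energy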

While the various nano-scale research projects at universities, such as smaller microchips, are certainly interesting, the vast majority of them don’t have the power to change our lives the way any of these three could. The hijacking of “nanotechnology” to refer to these relatively unimportant matters was entirely predictable, and the same thing will happen to the word “Singularity” once it becomes cool enough to be used to market things. It’s truly amazing what lengths the brain will go to in order to keep itself ignorant.

Optimization Processes

As William Paley first pointed out in the early nineteenth century, complex patterns that serve a specific goal require some kind of cause. If the pattern just came about by luck, it’d be no different in origin than all the other random patterns in the universe, so there’d be no reason to think that it would be exceptionally good at fitting a set of criteria. These causes fall into four primary categories:

  1. Dumb luck. In a dumb luck process, candidate patterns are generated simply by chance, and then checked to see if they match the criteria. If they don’t, they get thrown away and another one gets picked. Dumb luck can never produce anything particularly complex, because the time it takes to produce a pattern of N bits scales with 2^N; even for modest N, the time required rapidly exceeds the time until the heat death of the universe.
  2. Evolution. Under evolution, a random pattern is picked at the beginning, and random mutations are made to this pattern. If the new pattern fits the criteria better than the original, the mutation is kept; if not, the mutation is thrown away. The power of evolution comes from being able to keep improvements from the previous generation, and so build up greater and greater complexity. Generating a complex pattern of N bits using evolution scales roughly with N (see the sketch after this list), but evolution is also hamstrung by its tendency to get stuck at local maxima, the large number of generations required, and the need to maintain existing complexity against degenerative mutations.
  3. Intelligence. Intelligence is a very complex subject, but designing a pattern to solve a problem seems to involve drawing on pre-existing ideas that are related to the problem, and then using logic and selection to figure out how to bring them together into a working pattern. I haven’t seen any studies dealing with how long it takes a human to design a pattern of N bits, but more importantly, the number of attempts required seems to scale with log(N). This is very important because attempts can only be done serially; after all, there’s no point in trying again with a new approach if you don’t know whether the approach you’re working on is viable. This, in practice, means that intelligence is hugely faster than evolution, requiring only a few generations of complete, working models to design machinery where evolution would require millions of generations.
  4. God. He is frequently invoked as an explanation for events that seem to be suited to a specific goal, or as an explanation for the origin of life. This achieves nothing at all, because you’re immediately faced with the question of where God’s complexity came from. The two answers to this that I’ve heard are “I don’t know” and “It was always there”, and if we’re going to accept these as viable answers to the complexity problem, we might as well say that the pattern in question was always there and be done with it.
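
Here is a toy comparison of #1 and #2 (the pattern length and fitness rule are invented for illustration): dumb luck has to hit all N bits at once, while the evolutionary loop keeps every improvement, so its cost grows roughly linearly with N instead of exponentially.

    import random

    N = 20
    target = [random.randrange(2) for _ in range(N)]

    def fitness(bits):
        return sum(b == t for b, t in zip(bits, target))

    def dumb_luck():
        # guess whole patterns at random: expected ~2^N tries
        tries = 1
        while [random.randrange(2) for _ in range(N)] != target:
            tries += 1
        return tries

    def evolution():
        # flip one random bit at a time, keeping only improvements
        pattern = [random.randrange(2) for _ in range(N)]
        steps = 0
        while pattern != target:
            steps += 1
            mutant = pattern[:]
            mutant[random.randrange(N)] ^= 1
            if fitness(mutant) > fitness(pattern):
                pattern = mutant
        return steps

    print(evolution())   # typically well under a hundred steps for N = 20
    print(dumb_luck())   # expected around 2^20, roughly a million tries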

The Importance of Politics

Politics, the art of getting and keeping power, is so old that it has been effectively programmed into our brains by evolution. It is not even unique to humans- most primates, and a large portion of other mammals, establish social structures and vie for power within them. Politics is distinct from most other human activities, in that the specifics of who’s controlling what change endlessly, but the nature of the game never changes at all. Most human endeavors follow a simple progression: think of goal A, accomplish goal A, forget about goal A. But politics, like sports, can never be finished because the players are frequently replaced and so there is no opportunity for a permanent structure to form. (Note that although abstract structures, such as the US Constitution, can survive for centuries, personal power structures such as Caesar’s Triumvirate never do.)

Politics thus acts as a huge time sink, soaking up the energy of billions of human brains, without ever accomplishing much. National politicians often acquire a reputation for corruption and power-mongering, and can be voted out of office, but politics also stretches down to the personal level, pervading every level of organization in society. Indeed, for those who don’t have much else to do (students, prisoners, society wives), politics becomes the main occupation of daily life. Studying these interactions, under the headings of sociology and evolutionary psychology, is fascinating, but we have far more important things to do as a species.

Historically, politics has had little influence over the development of the human species as a whole. Technological growth has remained remarkably predictable, and it has historically been a matter of little importance whether empire X or empire Y held power, as both usually were fairly similar. However, with modern technology, the concentration of influence allows governments to hold a great deal of power, usually without realizing it. You need a certain kind of society to develop technology- fairly open, with no secret police who are paranoid about new ideas, and rich enough to guarantee a good chunk of the populace a comfortable living. And so if we’re ever going to develop ultratechnology, we need to live in this kind of an environment so we’re not too busy worrying about bullets flying over our heads to think.

Most Americans, living comfortable lives, lose sight of the fact that peace and prosperity are a rare anomaly historically. It was only seventy years ago that the US was in the grip of a deep economic depression and breadlines were a common sight on the street. It was only a hundred and fifty years ago that the US was ravaged by a four-year-long civil war, killing a significant percentage of the population and laying waste to huge tracts of land. Modern-day first world countries have already done most of the work for us: we’ve managed to avoid famine, world war, bloody coups and economic crashes for over sixty years now. Therefore, the most important thing to handle is maintaining the standard of living that we have now, so that we can continue to progress.

Most transhumanist groups are small and not very political, and this is not likely to change in the near future, so realistically, we have little to no hope of altering history in an attempt to maintain a friendly environment. However, what we can do is to be aware of the international situation, be ready to react to any catastrophes, and most importantly have a backup plan in case something does go horrifically wrong. People are now recognizing the deadly threats associated with nanotechnology, AGI, genetics and other future technologies, and I thank the Lifeboat Foundation for trying to extend awareness of these threats. But we need to recognize that violence, poverty, and mayhem are not anomalies, and have actually been the norm for most of civilization’s history. What would we do tomorrow if there was a nuclear war? If the banks failed, and half of America went broke? (As a side note, half of America already is broke, having no net assets- we simply don’t realize it.) If the dollar crashed on the international markets and all money was suddenly worth 70% less? If there were another terror attack and martial law was declared? If there were an oil crisis and gas went to $12/gallon? There are obviously many more possible scenarios, but these questions should be answered, and they should be answered now. It’s not like there’s a shortage of deadly risks waiting to catch us off guard and stab us through the jugular.