Advertising

Suppose that I make widgets for a living. To make a widget, I need to buy five different kinds of supplies, from five different companies. I then need to process the parts in a factory, machine them, assemble them, package them, and transport them to the wholesaler so they can be sold. I also advertise in the local paper, to popularize my widgets and increase sales. If I have a bad year, and need to cut expenses:

- If I stop buying from any one of my five upstream suppliers, production will drop to zero and I’ll go broke.

- If I don’t pay the electric bill for the factory, the machines will stop running, production will drop to zero and I’ll go broke.

- If I don’t pay the salaries of the factory workers, they’ll quit, production will drop to zero and I’ll go broke.

- If I don’t pay the trucking company to ship the product, or the water company, or the property taxes, etc…

The only real leeway is in the advertising budget. If I don’t advertise, sales will start to drop off, but that’s okay- I can always advertise later, after cash flow starts to pick up again. In a conventional company, each product has a high unit cost; getting each individual product to the consumer costs a significant fraction of the end price. The unit cost generally has very little leeway, because if you cut one link in the production chain, the whole thing goes down and the company’s out of business.

In a “new economy” information company, on the other hand, most of the expense is in the capital cost of generating the information. Information, once produced, is ridiculously easy to distribute- all you need is a web server, or a $1 cardboard box and a CD. The only large unit cost is the advertising, because that’s the only expensive thing that’s needed to generate each additional sale. Advertising, and marketing in general, should therefore take up a much larger portion of the budget if the company is data-based. Data-based companies can also become so bloated that they stop producing anything of value, and continue coasting along by selling the same old information over and over again.
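To make the contrast concrete, here is a toy sketch of the two cost structures (a minimal illustration; every number in it is invented, not taken from any real company):

```python
# Toy comparison of marginal (per-sale) cost structure: a conventional
# widget maker versus an information company. All numbers are invented
# purely to illustrate the shape of the argument.

ADVERTISING_PER_SALE = 1.50  # assumed ad spend needed to win one extra sale

# Widget maker: each extra unit consumes supplies, power, labor, shipping.
widget_marginal = {"supplies": 4.00, "power": 0.50, "labor": 2.50,
                   "shipping": 1.00, "advertising": ADVERTISING_PER_SALE}

# Information company: serving a download (or mailing a $1 box and CD)
# is nearly free, so advertising dominates the cost of each extra sale.
info_marginal = {"distribution": 0.10, "advertising": ADVERTISING_PER_SALE}

for name, costs in [("widget", widget_marginal), ("info", info_marginal)]:
    total = sum(costs.values())
    share = costs["advertising"] / total
    print(f"{name}: ${total:.2f} per extra sale, {share:.0%} advertising")
# -> widget: advertising is ~16% of marginal cost; info: ~94%.
```

In both cases advertising is the only line that can be cut without halting production; the difference is that for the information company, it is nearly all of the marginal cost.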

Terrorism Is Not An Existential Risk

Western culture has suffered from a chronic case of availability bias for the past six years, because of one single event: September 11th. Terrorism is now a popular topic; reporters talk about it all the time, because they know the public will pay attention. Terrorism is the primary justification for the Department of Homeland Security, which currently has a $45 billion annual budget. Terrorism was the primary cause of the Iraq War, which is now *the* major issue in American politics, and soaks up cash to the tune of more than $100 billion a year. Even in transhumanist circles, “bioterrorism” or “nuclear terrorism” is commonly cited as a serious risk to human civilization. To an outsider, it would seem like terrorism is a constant, ongoing problem, like in the Israeli border towns, which are shelled by random rocket fire year-round.

Looking at the historical record, we have September 11th, and, and… that’s pretty much it. September 11th is the only large-scale terrorist attack on US soil so far this decade. Nothing else has happened during the past six years to justify the continued spending of hundreds of billions of dollars on counterterrorism efforts. Nothing else has happened during the past six years to justify the prominence of terrorism as a political issue. Almost *all* of the focus on terrorism has resulted from that one single incident- if it weren’t for 9/11, terrorism would be somewhere down on the list between OSHA and meat inspectors, just like it was before 9/11.

I commonly see “terrorism” being brought up as a serious risk to life-as-we-know-it. If ultratechnologies like molecular manufacturing become widely available, any given terrorist group does become much more dangerous. But the common, present-day scenarios- “terrorists plant a dirty bomb in Chicago” or “terrorists spread smallpox into the New York City water supply”- are totally imaginary. To be blunt, we simply made them up. Nobody has ever set off a dirty bomb with the intent of killing civilians. Nobody has built a dirty bomb with the intent of killing civilians. Nobody has been caught buying radioactive material for the purpose of killing civilians. Nobody has launched a large-scale biowarfare attack. Nobody has procured the necessary equipment to launch such an attack. Nobody has been found with serious, industrially viable blueprints to build a nuclear weapon. The list just goes on and on and on….

To be clear, many of these scenarios are plausible, and if warning signs of an attack start to appear, we should certainly respond. We should also have contingency plans in place, to deal with any plagues or radioactive releases that do happen. But by focusing on overly specific action-movie scenarios, we may be neglecting threats which are much more general, and therefore much more likely. A smallpox epidemic, or a rogue MNT device, or a nuclear weapons factory, could come from anywhere. How would it happen? I don’t know. I would never have thought of terrorism as a possible cause seven years ago, so it’s very likely that there is some cause I can’t think of today.

Black Holes and Particle Accelerators

A general summary of the recent discussion of the existential threat from colliders, such as the LHC.

- The Large Hadron Collider is the latest shiny new particle accelerator in Europe, scheduled to come online in mid-2008. It will be able to collide protons at an energy of around 14 TeV, and heavy nuclei at much higher energies (over one thousand TeV).

- The high-energy collisions in the LHC might produce exotic new particles, such as microscopic black holes or stable strangelets. These particles could interact with regular matter and start a chain reaction, which wouldn’t stop until the Earth was destroyed.

- Particle physics is a multi-billion dollar field of research, employing thousands of people worldwide. If there is a possibility of existential risk from doing high-energy physics, it will be very difficult to mitigate this risk without clear evidence and a lot of lobbying power.

- Cosmic rays have been observed with energies in the 3*10^8 TeV range- five orders of magnitude beyond even the LHC’s heavy-nuclei collisions. These cosmic rays have been bombarding us since the planet was formed four billion years ago, and so we know that they are not dangerous.

- Standard general relativity predicts that a black hole must have a minimum mass on the order of the Planck mass, equivalent to an energy of 2.4*10^15 TeV. Such a black hole would decay almost instantly, and could not even be directly measured (it would decay before other particles could interact with it, much like the top quark). A black hole with a lifetime of one nanosecond, under general relativity, requires a total energy of 1.3*10^26 TeV, or around five thousand megatons of TNT (the arithmetic is spelled out just after this list).

- We know that general relativity breaks down at the quantum scale; however, we don’t have a good theory of quantum gravity that can tell us exactly where. Some models (for instance, those with large extra dimensions, which lower the effective Planck scale) predict that black holes with a mass in the TeV range should last around 10^-26 seconds, rather than the 10^-90 seconds predicted by standard theory. If this holds, the LHC could produce and detect miniature black holes, although they would be far too short-lived to represent a threat.

- Although cosmic rays are similar to LHC collisions, there are some differences; a cosmic ray has net momentum relative to the Earth, while a head-on LHC collision does not, so anything exotic produced by a cosmic ray would fly off at relativistic speed while an LHC product could be left nearly at rest. Cosmic rays also collide with nitrogen or oxygen nuclei in the upper atmosphere, while LHC protons collide only with each other.

- If black holes are generated by cosmic ray collisions, they must be incapable of eating an entire planet or star. If a star were eaten by a black hole, the infalling matter would heat up, creating a huge radiation source with a unique spectral signature. No such sources have been observed, in this galaxy or any other.
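As a sanity check on the nanosecond figure in the list above, here is the arithmetic, assuming the standard semiclassical Hawking evaporation formula (my assumption- the original doesn’t say which formula was used):

```latex
% Hawking lifetime of a Schwarzschild black hole of mass M:
t \;\approx\; \frac{5120\,\pi\,G^{2}M^{3}}{\hbar c^{4}}
\qquad\Longrightarrow\qquad
M \;=\; \left(\frac{t\,\hbar c^{4}}{5120\,\pi\,G^{2}}\right)^{1/3}

% Setting t = 1 ns and plugging in constants:
M \;\approx\; 2.3\times10^{2}\ \mathrm{kg},
\qquad
E \;=\; Mc^{2} \;\approx\; 2.1\times10^{19}\ \mathrm{J}
       \;\approx\; 1.3\times10^{26}\ \mathrm{TeV}

% At 4.2*10^15 J per megaton of TNT, that is roughly five thousand
% megatons, matching the figure quoted in the list.
```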

Effective Philanthropy

Suppose that I donate $10 to the Red Cross. The money will be put into a bank account, along with millions of other donations, before being distributed to the various departments. It will then be split up, and sent around the world to dozens of different locations. Hopefully, somewhere along the line, it will be used to help somebody, somewhere, with something-or-other. The causal chain is there, between the donation and the benefit, but it’s so intertwined and mixed up that our brains can’t keep track of it. That is, if we even know what the chain is; the Red Cross lists donations under broad headings, such as “measles initiative” and “international response fund”, that don’t tell us what the money is buying. Most charities don’t even do that much- the money just goes into a general slush fund.

With transhumanist organizations, the problem is even worse, as transhumanist technologies are so complicated that deriving a benefit may require hundreds of different links between the donation and the result. Rationally, contributing to just about any transhumanist organization has a higher expected utility than contributing to the Red Cross. However, our brains do not follow the laws of rationality; when we make an effort, and see no benefit, and make another effort, and see no benefit, the short-term feedback system tries to shut down whichever part of the brain is making the effort. Therefore, I propose that every transhumanist organization which relies upon donations should put some percentage of the money, say 10%, towards something which is near-term, simple, and obviously beneficial. Some sort of easily understood benefit is necessary to get non-transhumanists to donate, and even experienced transhumanists would probably donate more if the money went to something concrete. After all, even if *transhumanism* is the best thing since sliced bread, there’s no guarantee any particular organization is actually helping the Cause ™.

Save The Children ™

At the risk of getting burned for heresy, I’d like to ask: Why should we value the lives of children so much more than other people’s? There’s an obvious reason for it in evolutionary psychology, but my model predicts that we should have come up with dozens of rational-sounding justifications for it, like we did for meat-eating and mindless entertainment. Surely someone, somewhere, has come up with a nice-sounding, pseudoscientific explanation of why it is worthwhile to go to such extreme measures as putting blocking software on all public computers to “protect” children. Congress, which is well-known for passing verbose book-length bills, didn’t include any justification for the Child Online Protection Act in the bill; it simply asserted that the “protection of minors” is a “compelling government interest”. Even websites against the act rarely questioned the premise of needing to “protect children”; they simply argued that it was ineffective, had undesirable side effects, or violated the First Amendment.

Dispersion Bias

If you drive home from work, and find a large rock lying in the middle of the road, your brain will automatically file it into the “large rock” category. You can reasonably expect a large rock to be quite heavy, hard, and have almost no value. If you find a mountainside with ten thousand large rocks, your brain will not automatically multiply the original rock’s characteristics by ten thousand. “Ten thousand” or “a hundred thousand” large rocks just look like “a lot”; if there’s any feedback from the sheer number of large rocks, it will come from concepts which apply to the single large rock- you expect ten thousand large rocks to be heavy, hard, and have almost no value just like the original rock.

Those ten thousand rocks, if their chemical composition is typical for the Earth’s crust, contain roughly a troy ounce of gold. Yet we don’t think of “gold” when we imagine rocks, no matter how big you make the numbers; even if we could somehow visualize a googol rocks, gold would be the last thing on our minds. A troy ounce of gold is a nice, neat, single object that we can file away. A tiny flake of gold embedded in several tons of quartz isn’t worth noticing, and so we don’t notice it, even if there are a billion of them- the “notice/don’t notice” filter doesn’t go away if you add on enough zeroes.
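For the curious, here is the back-of-envelope arithmetic behind that troy ounce (the rock mass and crustal abundance are my own assumed figures, not ones given above):

```python
# Rough check: how much gold is in ten thousand large rocks of average
# crustal composition? Assumes ~1 tonne per rock and a crustal gold
# abundance of ~4 parts per billion by mass.
ROCK_MASS_G = 1_000_000   # one large rock, assumed ~1 tonne
GOLD_ABUNDANCE = 4e-9     # assumed mass fraction of gold in typical crust
TROY_OUNCE_G = 31.1

rocks = 10_000
gold_g = rocks * ROCK_MASS_G * GOLD_ABUNDANCE
print(f"{gold_g:.0f} g of gold, about {gold_g / TROY_OUNCE_G:.1f} troy oz")
# -> 40 g of gold, about 1.3 troy oz: the right ballpark for the claim.
```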

Exploiting the mental disjoint between a single thingy and ten million thingies is a popular tool of spammers, get-rich-quick schemers, and other crooks. If you lose ten bucks to a scam, or spend thirty seconds deleting junk mail, it isn’t worth taking action or filing a complaint. The net effect is a huge drain on the world economy, with billions of dollars wasted every year, but we don’t visualize billions of dollars; we visualize a loss of ten bucks or a one-minute inconvenience, because that’s all any single person ever sees. Using this power for Good And Not Evil ™ could have an enormous impact on the world. Distributed computing projects already harness as many FLOPs as the largest supercomputers, and the government collects so much money, a few dollars at a time, that NASA’s $17 billion budget looks small by comparison. Imagine if there were a little “donate ten bucks to charity” box on tax returns.

The Optimistic Scenario

When a brain tries to analyze the outcomes of an event, it tends to immediately leap to the most optimistic scenario. If you apply for a job, the first thing you tend to envision is getting the job, plus an extra benefits package and the promise of a promotion in six months. If you meet someone new, they’ll turn out to share all your interests and you’ll be friends for decades. If you are awaiting the results of a medical test, it will turn out that you are eligible to earn $5,000 by participating in a research program with no risk and little effort required. And on and on it goes.

The best-case scenario is usually painfully unlikely, so thinking about the situation a bit more will force the brain to think of a second scenario. This one isn’t as good as the first, but doesn’t usually end in disaster and can be a comfortable “fallback position” to think about if the first scenario doesn’t work out. The brain will then divide incoming evidence into support for the first hypothesis and support for the second hypothesis; it may take a great deal of evidence to create the mental image of a none-of-the-above, third scenario which is not a minor modification of the original two.

The mental imagery of two competing scenarios creates the illusion that the two are roughly comparable in probability. This is usually not so- the optimistic scenario may have a likelihood of one in a million, or it may even be physically impossible. To counteract this, you can try to envision- in detail!- lots of possible scenarios in which different things go wrong. You can also deliberately avoid thinking about the optimistic scenario, as it’s a waste of mental resources to dwell on something which probably won’t happen.

Practice Makes Failure

Say that you want to become a really good tennis player. To become good at tennis, you have to practice- you have to do the same things over and over and over, so that you can identify any mistakes you are making and train your body to make the right movements. But suppose that your idea of tennis is to hit the ball as far as you possibly can. You can spend months practicing this, only to find out that you’ve become a worse tennis player than you were when you started. In fact, no amount of practice hitting the ball hard can make you a really good tennis player.

Practice is, in effect, a bet on your current state of knowledge. If you want to become better, there must exist a discrepancy between where you are now- your actual state- and where you want to go, your ideal state. The function of practice is to move you from your current state to your desired state; practice lets you examine yourself and your models and see why they fail to match up. Once a failure is identified, you can work to eliminate it, and then identify a new one. Ultimately, practice will bring you arbitrarily close to wherever you are steering yourself (within the bounds of the laws of physics).

In order to start practicing, your goal has to be specific enough (e.g., “get a job teaching” or “find a chemical that will separate water and ethanol”) to create a precise image of where you want to be. But normally, any goal specific enough to practice is only a subgoal of something larger and more diffuse (“make more money” or “solve the energy crisis”). And so the expected utility of your practice depends on the probability of the statement “achieving the subgoal will work towards achieving the supergoal”. This statement is empirically falsifiable, and obeys the normal laws of rationality- being able to swing your arm three times a second will not help you with your goal of having a happy life in suburbia.

Considering all the emphasis the human species has put on practice, and the years of effort most of us spend in training, we could probably put a little more work towards finding out whether what we are practicing makes sense. A cheap way to do this is through external observation; simply pick some friends who have already achieved the goal you’re trying to work towards, and check in with them to see if you’re improving. If we want to get where we are going, we need to look outside and see where we are headed. We also need to be willing to adjust course if necessary (remember the bias towards honoring sunk costs).

The Prior Information Problem

Suppose that I have strong Bayesian evidence for an alien invasion tomorrow. Obviously, nobody is going to believe me if I call up the BBC and tell them that the aliens are coming. But I still want to have proof that I knew about it ahead of time, so that we can predict and head off future alien invasions. How can I prove- after it has already happened- that I knew about the invasion ahead of time? In general, how can you present Bayesian evidence for the hypothesis “I predicted XYZ” after XYZ has happened?

The problem here is that the amount of information available increases with time. If I wanted to prove that I was alive today, I could simply hold up a copy of today’s New York Times. This can’t be faked, because the information content of today’s New York Times is only available today- it wasn’t available in 1970. But you cannot repeat the process in reverse; if you want to prove you were alive in 1970, you can’t just hold up a copy of the 1970 New York Times, because any idiot can go look up that issue in the archives.

The only solution I have been able to think of is for the prior knowledge to be verified by a trusted third party. Note that trust is essential; we generally trust the postal service, but there’s nothing stopping them (in principle) from stamping a 1970 postmark on an old letter. Does anyone have a better solution?
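One refinement worth noting (a sketch, and still a variant of the trusted-third-party idea): you don’t have to show the third party the prediction itself. Commit to a cryptographic hash of it in some timestamped public record, and reveal the text only after the event; anyone can then check that the revealed text matches the old hash. The function names below are just illustrative:

```python
# Sketch of a hash commitment for timestamped predictions. The only
# trusted component is a timestamped public record (a notary, a
# classified ad, etc.) where the digest is published ahead of time.
import hashlib
import secrets

def commit(prediction: str) -> tuple[str, str]:
    """Return (digest, nonce). Publish the digest now; keep the rest secret."""
    nonce = secrets.token_hex(16)  # random salt, so short texts can't be guessed
    digest = hashlib.sha256((nonce + prediction).encode()).hexdigest()
    return digest, nonce

def verify(prediction: str, nonce: str, digest: str) -> bool:
    """After the event, anyone can check the revealed text against the digest."""
    return hashlib.sha256((nonce + prediction).encode()).hexdigest() == digest

digest, nonce = commit("The aliens invade tomorrow.")
# ...publish `digest` in the timestamped record, wait for the invasion...
assert verify("The aliens invade tomorrow.", nonce, digest)
```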

The Relatedness Factor

There is a complex functional adaptation in humans for recognizing human faces- to function, we need to be able to tell each other apart. Faces seem so natural to us that we forget how tiny the differences we’re detecting are. The difference between the facial expressions for “happy” and “angry” is less than two centimeters, yet most people can detect these expressions from over five meters away. People without this adaptation have to use their general image-processing machinery for faces; “identifying a person” falls into the same mental category as “identifying a sheep”. Most of us couldn’t keep track of one specific sheep even if we had to. It would certainly be much harder than keeping track of one specific person, or even one specific car- sheep just all look alike.

The information required, in terms of entropy or Kolmogorov complexity, to specify a facial expression is roughly the same for humans, sheep, and most other mammals. Sheep faces, for the most part, act like human faces- they have the same basic layout and are deformed in the same manner. The input to the brain is similar in both cases. The difference in perception arises when the information is processed; the sheep-face information just gets thrown on the garbage heap, while the human-face information is relayed to other systems. Even if your frontal cortex knows you are supposed to remember this particular sheep, the message isn’t automatically relayed to the rest of the brain. The visual cortex evolved in an environment where you couldn’t make conscious decisions, so the only built-in way to adapt to new tasks was Pavlovian conditioning- if you see the sheep over and over again, it’s probably important. Because conscious decision-making is so new, this limitation applies to most of our brain functions; you can’t just decide to make your memories of “playing a game” or “researching relativity” available- you have to train yourself.

Once you have enough training for a basic familiarity with something, your brain will start to dissect it into its component building blocks. These building blocks are not necessarily physical; they are made up of the concepts that associate to our memories. A car, when you first encounter it, may be just “a thing that takes people places”; as you gain more experience, you associate it with smaller-scale concepts like “gasoline” and “exhaust”, as well as larger-scale concepts like “traffic jam” and “rules of the road”. The things we think about in our everyday life- the things we are most familiar with- may have hundreds of different building blocks, all stored in our subconscious ready for use. If you light up a map of thingspace with the stuff we think about in our everyday lives, the stuff that glows most brightly will bleed into adjacent regions. I, personally, have never seen a car that makes toast, but I can easily imagine one by stacking “toaster” on top of “car”, inserting a toaster into the list of other car building-blocks like “dashboard” and “cup holder”. But if you try to stack two new concepts, like “tangent vectors” and “directional derivatives”, you run into a roadblock- you haven’t built up the network of associations yet, and so the ways in which they are connected are still hidden and non-obvious.

Following the inference chain backwards, there should be a strong correlation between how much detail we observe (or, equivalently, how much variability there is in the category, or how many bits of information we process) and how closely something is related to our everyday life. Most of us see cars every day, and so we can readily distinguish between a Ford and a Ferrari. When we see a car, we instantly notice what color it is, what condition it’s in, how many people it seats, roughly what its mass is, and how much it would cost to buy. Trains, a much less frequent mode of transportation, have fewer details attached; I’d only know the mass of a train, or how many people it seats, or how powerful the engine is to within an order of magnitude, even if I studied it for a few minutes. When you get to totally obsolete forms of travel, such as the horse-and-buggy, you don’t pick up any details at all- I have no clue what the defining characteristics of a particular horse-and-buggy setup are; they all look the same. Remember, though, that the variation is there regardless of whether you perceive it. People had ways of telling horse-and-buggy setups apart back in the olden days, but if you rattled them off to me, I’d instantly forget them; they would just bounce off, as if you were describing the intricacies of a particular can of garbage.