Expert Bribery

Bribing an expert is, in general, not illegal.

This seems to be an important principle, so it will be stated twice for emphasis: Bribing an expert is, in general, not illegal. It’s illegal to bribe public servants and union representatives, and under some circumstances it’s illegal to bribe a corporate employee (without the employer’s consent). But there’s no law against giving doctors, lawyers, scientists, engineers, etc. large sums of money in exchange for their support of you, your company or your product.

The public in general seems to place a nontrivial amount of confidence in experts of various sorts: expert witnesses at trials, expert professionals on commercials, expert guests on TV shows. But it is not only commonplace but standard procedure for all of these people to have their pockets lined in exchange for supporting a certain point of view. Even on an international scale, the only thing preventing ExxonMobil from simply writing checks to all the world’s climatologists is the large fraction of scientists who really do value ethics over money, and the obvious negative PR that would result.

Supposed To

On the general principle that our actions shouldn’t be caused by mysterious unnamed forces, I propose that we deliberately eliminate the words “supposed to” from daily conversation. Consider the sentence:

“We’re supposed to meet tomorrow at noon.”

It’s not at all clear who desires us to meet at noon. Bill? Steve? The government? God? This sort of sentence actually makes the mind projection fallacy worse, by imbuing objects with a supposed-to-ness, rather than considering how agents impose their desires on objects. If I think we should all meet at 11, arguing with a “supposed to” carries an extra cost, because the other party in the argument is removed from explicit consideration.

“We should meet tomorrow at noon.”

This is a clearer sentence; the implication is that I, specifically, want us to meet tomorrow at noon.

“Mike wants us to meet tomorrow at noon.”

This is clearer still, as the agent doing the supposing is named explicitly. If you don’t like it, you can start thinking about Mike’s psychology, and how to convince him otherwise.

A Quick Guide to Handling the Recession

Note that kids, although they obviously have emotional implications far beyond those of “stuff”, really need to be included among the things you shouldn’t get if you can’t afford them. The average cost of raising a child in America is well over $100K; this doesn’t include college, or the huge subsidy given to every child in the form of the public school system.

The Repugnant Hypothesis

In population ethics, a linear summing over utilities leads you to the Repugnant Conclusion: if you get 3^^^^3 people together, and have them all experience the smallest possible amount of joy, the expected positive utility will exceed that of our entire civilization. This is the negation of Pascal’s Mugging, although it has wider implications: if true, it means that we should fill the universe with cheaply reproducible happiness, even if we have to eliminate most of the happiness-related mind states that humans currently experience. Peter de Blanc has proven that, if you have an unbounded utility function, you will always wind up in situations like these, and so the problem must be with our estimates of our utility functions and not our ideas about happiness.
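The arithmetic behind that conclusion can be sketched in a few lines. The population and utility figures below are invented purely for illustration, and utility is measured in integer micro-utils so the sums stay exact:

```python
# Under linear summing, total utility = (number of people) * (utility per person).
civilization_total = 7 * 10**9 * 50 * 10**6  # hypothetical: 7 billion people at 50 utils each
minimal_joy = 1                              # one micro-util: the smallest possible amount of joy

# The smallest population of barely-happy people whose total outweighs civilization's:
N = civilization_total // minimal_joy + 1
assert N * minimal_joy > civilization_total  # the huge bland population wins the sum
```

However small the per-person joy, some finite population always tips the sum, which is why de Blanc’s argument locates the problem in the unbounded utility function rather than in the arithmetic.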

However, given the current state of the world, we should still consider the Repugnant Hypothesis: What if the sum of utility over most human lives is negative, rather than positive? In the real world, even if this is true, it shouldn’t matter that much; we still have hope. But it could have raised some awkward questions if we were rational enough to consider it in, e.g., AD 1200. Should we no longer have children? Should we try to kill as many people as possible, except for those who are known to be happy? How does the utility created by destroying the planet balance against the potential future utility, if life in the future turned out to be better?

Singularity Summit 2008 Registration Now Open

Singularity Summit 2008: Opportunity, Risk, Leadership takes place October 25 at the intimate Montgomery Theater in San Jose, CA, the Singularity Institute for Artificial Intelligence announced today. Now in its third year, the Singularity Summit gathers the smartest people around to explore the biggest idea of our time: the Singularity.

Keynotes will include Ray Kurzweil, updating his predictions in The Singularity is Near, and Intel CTO Justin Rattner, who will examine the Singularity’s plausibility. At the Intel Developer Forum on August 21, 2008, he explained why he thinks the gap between humans and machines will close by 2050. “Rather than look back, we’re going to look forward 40 years,” said Rattner. “It’s in that future where many people think that machine intelligence will surpass human intelligence.”

“The acceleration of technological progress has been the central feature of this century,” said computer scientist Dr. Vernor Vinge in a seminal paper in 1993. “We are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.”

Singularity Summit 2008 will feature an impressive lineup:

* Dr. Ruzena Bajcsy, pioneering AI and robotics researcher
* Dr. Eric Baum, AI researcher, author of What is Thought?
* Marshall Brain, founder of HowStuffWorks, author of Robotic Nation
* Dr. Cynthia Breazeal, robotics professor at MIT, creator of Kismet
* Dr. Peter Diamandis, chair and CEO of X PRIZE Foundation
* Esther Dyson, entrepreneur, investor, philanthropist
* Dr. Pete Estep, chair and CSO of Innerspace Foundation
* Dr. Neil Gershenfeld, director of MIT Center for Bits and Atoms, author of Fab
* Dr. Ben Goertzel, CEO of Novamente, director of research at SIAI
* John Horgan, science journalist, author of The Undiscovered Mind
* Ray Kurzweil, CEO of Kurzweil Technologies, author of The Singularity is Near
* Dr. James Miller, author of forthcoming book on Singularity economics
* Dr. Marvin Minsky, one of AI’s founding fathers, author of The Emotion Machine
* Dr. Dharmendra Modha, cognitive computing lead at IBM Almaden Research Center
* Bob Pisani, news correspondent for financial news network CNBC
* Justin Rattner, VP and CTO of Intel Corporation
* Nova Spivack, CEO of Radar Networks, creator of Twine semantic-web application
* Peter Thiel, president of Clarium, managing partner of Founders Fund
* Dr. Vernor Vinge, author of original paper on the technological Singularity
* Eliezer Yudkowsky, research fellow at SIAI, author of Creating Friendly AI
* Glenn Zorpette, executive editor of IEEE Spectrum

Registration details are available at

About the Singularity Summit

Each year, the Singularity Summit attracts a unique audience to the Bay Area, with visionaries from business, science, technology, philanthropy, the arts, and more. Participants learn where humanity is headed, meet the people leading the way, and leave inspired to create a better world. “The Singularity Summit is the premier conference on the Singularity,” Kurzweil said. “As we get closer to the Singularity, each year’s conference is better than the last.”

The Summit was founded in 2006 by long-term philanthropy executive Tyler Emerson, inventor Ray Kurzweil, and investor Peter Thiel. Its purpose is to bring together and build a visionary community to further dialogue and action on complex, long-term issues that may transform the world. Its host organization is Singularity Institute for Artificial Intelligence, a 501(c)(3) nonprofit organization studying the benefits and risks of advanced artificial intelligence systems.

Singularity Summit 2008 partners include Clarium Capital, Cartmell Holdings, Twine, Powerset, United Therapeutics, IEEE Spectrum, DFJ, X PRIZE Foundation, Long Now Foundation, Foresight Nanotech Institute, Novamente, SciVestor, Robotics Trends, and MINE.


Singularity Summit
Tyler Emerson, 650-353-6063

Deleting Writing

The principle behind most CAPTCHAs is the pattern recognition capability of the human brain; we can see patterns and separate them instinctively, even when contemporary software has a hard time doing so. A modern computer, for instance, has a hard time reading text with horizontal lines drawn through it, but humans can do so easily, by recognizing the pattern of horizontal lines and separating them from the letters.

As it turns out, this makes deleting written information quite difficult; even if you cross something out, scribble over it, and strike out every word, it’s usually still legible. The simplest solution is to write over the text with a random string of letters, preferably several times; since the original letters are indistinguishable from the overwritten ones, the text quickly becomes unrecoverable. A five-letter word overwritten three times, for instance, leaves four superimposed letters at each position, giving 4^5 = 1024 possible original words.
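A minimal sketch of the attacker’s problem (the column letters and helper name here are invented for illustration): each position of the overwritten word shows a jumble of superimposed letters, and the attacker must consider every way of picking one letter per position.

```python
from itertools import product

def candidate_words(columns):
    """Each column is the set of letters superimposed at one position:
    the original letter plus the random overwrites, which an attacker
    cannot tell apart. Return every word consistent with the columns."""
    return {''.join(w) for w in product(*columns)}

# A five-letter word overwritten three times: each position shows four
# indistinguishable letters (assuming no accidental repeats).
columns = [set('hqzx'), set('ejkr'), set('lwmv'), set('lgfy'), set('obun')]
words = candidate_words(columns)
print(len(words))         # 4^5 = 1024 candidate words
print('hello' in words)   # True -- the original hides among them
```

Each additional overwrite multiplies the candidate set by another factor per position, so a handful of passes is enough to bury the original word.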

Rationalizing Life

Suppose that you got a job as a Wall Street financier, worked eighty-hour weeks for ten years to make a lot of money, and finally retired with a huge pile of cash. You wanted to spend the money you’d worked so hard for, and so you went out and bought a large house in a nice neighborhood. You moved in, bought furniture, and lived there for a while, but due to a mistake in the electrical wiring, the entire house and everything in it burned to the ground. What do you do next?

If you haven’t studied rationality, you’ll immediately begin to make up reasons for why your house burning down was not only a good thing, but necessary for you to live a happy life. You can now sue the electric company and make a ton of money. You can collect on your expensive insurance policy and make a ton of money. You can move somewhere else and meet new people. You can travel the world and see how the other half lives. It doesn’t matter what the specifics are; the human brain is amazingly good at inventing reasons for why bad things happen.

All else being equal, most people would prefer to live in a world where bad things never happen at all. Such a world is blatantly imaginary, so we like to imagine that the world is unbalanced, but fair: X amount of bad can happen, so long as 2X amount of good happens later to make up for it. Living in a world governed by the laws of physics, where X amount of bad can happen this year, and Y amount of good can happen next year, and the two are largely random and unrelated unless you keep track of quarks, seems to scare people.

This type of rationalization seems to occur fairly frequently, even among people who have no explicit supernatural beliefs. The primary reason for this seems to be the trickiness of causality; when we say “A caused B”, we usually mean “~A -> ~B”, and there are almost always ten gazillion values for A that can be inserted to fit this requirement. I don’t know how common this is among, say, MIT graduates, but it’s certainly common among MIT rejects.

I haven’t found any explicit guide for how to dissolve this class of mistakes. For most irrationalities, you can make at least some progress by realizing that you’re being irrational, seeing a few examples where irrationality leads to bad outcomes, and thinking of new ways to study the world. But this type seems to be difficult to handle psychologically. To quote Eliezer:

“What would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another as you watched, unless you yourself died first? (…) I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?”

Trading Strategies

Everyone has their own pet strategy for making money on Wall Street, but so far, I haven’t seen any systematic analysis of which ones work. People, as often as not, just guess at random and hope for the best. To help eliminate this gaping void in our knowledge, I propose a competition in trading strategies. Proposed rules:

- Any publicly traded asset is fair game. This includes stocks, bonds, commodities, derivatives, etc.

- You cannot borrow more money to invest; you’re stuck with what you have at the start. Leverage is cheating, and it’s also dangerous (margin calls helped drive the crash of 1929).

- The winning strategy is the one that makes the most money from a starting asset base, over a set time period, using historical data.

- There should be different categories for different starting asset bases ($100K, $1M, $10M, etc.) and different time scales (2 years, 5 years, etc.).

- Strategies may not use hindsight. For this reason, naming specific companies or assets (e.g., GE stock or IBM bonds) is not allowed. Absolute times (e.g., buy tech stock in 1996 and sell in 2000) are also not allowed.

- Strategies must be sane enough to be describable in a dozen or so pages of text, which must be understandable by anyone reasonably well versed in economics. This does not include citations, references to prior work, justification, the research process used, etc., just the details of how to make money.
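As a sketch of how entries might be scored under these rules, here is a toy single-asset backtester. The strategy interface, price series, and numbers are all hypothetical, not part of the proposed rules:

```python
def score_strategy(strategy, prices, starting_cash):
    """Minimal backtest: walk through a historical price series, let the
    strategy pick a target fraction of the portfolio to hold in the asset
    (0.0 to 1.0 -- no borrowing, so leverage is impossible), and report
    the final portfolio value."""
    cash, units = starting_cash, 0.0
    for day, price in enumerate(prices):
        value = cash + units * price
        fraction = strategy(day, price)           # sees only current/past data
        fraction = min(max(fraction, 0.0), 1.0)   # clamp: leverage is cheating
        units = value * fraction / price
        cash = value - units * price
    return cash + units * prices[-1]

# Toy run: buy-and-hold on a made-up four-day price series.
prices = [100.0, 105.0, 98.0, 110.0]
final = score_strategy(lambda day, price: 1.0, prices, 100_000.0)
print(round(final))  # 110000: fully invested, +10% over the period
```

A real scoring harness would add multiple assets, transaction costs, and the hindsight restrictions above, but the clamp on the invested fraction is all it takes to enforce the no-leverage rule mechanically.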

I will personally fund such a competition with $250, so long as the rules are reasonably similar. Any other takers, or rules I should include?

EDIT: I am aware that many traders put a lot of emphasis on “instinct” or having a “feel for the market”. Large-scale markets developed far too recently to allow for the evolution of specialized adaptations, so our gut feelings about them are likely to be wildly inaccurate.