Rationality is the process of deriving true beliefs from observations. We know of a number of useful heuristics which can help us derive true beliefs, but how are these rules generated? Humanity didn’t start off knowing that postulating spirits as an explanation was, in general, a bad way of finding things out. The two main general-purpose optimization techniques, trial-and-error and natural selection, both require an enormous number of trials (O(2^n) and O(n), respectively). It took us thousands of years, and an enormous collaborative effort, to stumble upon something as low-complexity as the scientific method. Trying to derive anything more complicated by just guessing and checking is probably hopeless.
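As a toy illustration of that gap, here is a minimal sketch in Python. Everything in it is invented for the example: an n-bit "target belief," a blind enumerator standing in for trial-and-error, and a greedy bit-flipper standing in for selection with feedback.

```python
# Toy comparison of the two search strategies on an n-bit "belief" string.
# The target string and both search procedures are illustrative inventions,
# not anyone's actual model of rationality or evolution.

def exhaustive_trials(target):
    """Blind trial-and-error: enumerate bitstrings until the target turns up."""
    n = len(target)
    for i in range(2 ** n):
        guess = [(i >> b) & 1 for b in range(n)]
        if guess == target:
            return i + 1  # trials used; worst case 2^n

def greedy_trials(target):
    """Selection-style search with feedback: flip each bit once, keep improvements."""
    current = [0] * len(target)

    def dist(x):
        return sum(a != b for a, b in zip(x, target))

    trials = 0
    for b in range(len(target)):
        candidate = current.copy()
        candidate[b] ^= 1
        trials += 1
        if dist(candidate) < dist(current):
            current = candidate
    return trials  # always exactly n trials

target = [1, 0, 1, 1, 0, 1, 0, 0]
print(exhaustive_trials(target))  # 46 trials for this 8-bit target
print(greedy_trials(target))      # 8 trials
```

The point of the sketch: with no error signal the trial count scales as 2^n, while even crude selection pressure brings it down to n, which is why searching idea space by pure guess-and-check stalls so badly.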

Eliezer’s OB posts have touched upon this problem before, but so far as I am aware, there are no published meta-rationality heuristics of any significant complexity. To name a simple example of such a heuristic, studying lots of cognitive science will give you more knowledge as to how the brain can fail, which leads to the discovery of ways to correct such failures. As another example, doing a stack trace of the brain whenever a wrong belief is corrected will, if successful, identify the specific pattern of thought that caused the wrong belief. I’m sure there are more to be found, somewhere in idea space.

Personal Communications

(DISCLAIMER: This blog is not affiliated with Skype in any manner whatsoever.) Skype is now offering free videoconferencing, free calls to anyone using Skype, and unlimited calling to anywhere in the US and Canada for $3 a month. Landline providers regularly charge $40 a month or more for equivalent service; minimal cellphone plans usually start at $30 a month. I haven’t done a survey of other VoIP providers, but their costs are probably similar to Skype’s, and if they aren’t, they soon will be due to price competition. I am, quite frankly, amazed that otherwise technically literate people (including myself, until recently) are unaware of this. One of the classic hallmarks of “the future” in Hollywood is that everyone can videoconference from anywhere on the planet; didn’t they get the memo?

New Mailing List

To quote Rolf Nelson:

“Announcement: “fai-logistics” is a new archived discussion group to discuss the most effective ways of aiding the development of Friendly AI. It is not a discussion group for any other aspects of AGI.

Membership criteria: You must be serious about working towards ensuring that the Friendliness problem of AGI gets solved. In other words, if you’re only reading this because you’re bored and looking for something to do with your time, don’t bother joining. Also, please understand the difference between logistics and direct solutions; other lists exist if you wish to bring up your pet theory of how to implement FAI (or, for that matter, explain why you personally believe FAI is unnecessary.)

To join, send me an email with a short description of why I should believe you are who you claim you are, and why I should believe you fit the above membership criteria. (Probably any past or current SIAI staff or donors who want to join would automatically meet the membership criteria, any others will probably be decided on a case-by-case basis, depending on what kind of mood I’m in that day.)

Disclaimer: This new discussion group is not affiliated with, or endorsed by, the SIAI.

List administration: Rolf Nelson is the sole administrator and Self-Appointed List Tyrant.”

FAI-logistics is hosted by Google Groups. Contact Rolf Nelson for further information.

Science Prediction Markets

The market for crackpot science is huge: just look at how many people believe in creationism, even though it’s been thoroughly debunked for over a hundred and fifty years. Currently, this market exists only in memetic terms: people try to convince each other of ideas they believe are right, and the most convincing idea wins. Why not use the same concept to set up an actual, money-based market, like the prediction markets over at Intrade?

The James Randi Educational Foundation has already set up a protocol for testing new ideas, as part of their million-dollar paranormal challenge. A science-based futures contract should be fairly simple to implement, as long as every theory can be rigorously tested in this manner. If the test shows a positive result, one party gets $10 or whatever the contract amount is; if the test cannot be performed or shows a negative result, neither party gets anything. Is there anyone with a few tens of thousands in capital who would like to implement this?
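A minimal sketch of the settlement rule as described above, in Python; the function name and outcome labels are my own invention:

```python
def settle_contract(test_result, amount=10.0):
    """Settle one science futures contract under the rule described above.

    test_result: "positive", "negative", or "untestable"
    Returns (payout to the side backing the theory, payout to the other side).
    """
    if test_result == "positive":
        return (amount, 0.0)  # a positive Randi-style test pays the backer
    return (0.0, 0.0)         # negative or unperformable test: nobody is paid

print(settle_contract("positive"))    # (10.0, 0.0)
print(settle_contract("untestable"))  # (0.0, 0.0)
```

Note that, as written, the rule pays nothing to skeptics; a standard binary prediction-market contract would instead pay the "no" side on a negative result, which is probably what a real exchange would implement.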

Effective Adhocracy

DISCLAIMER: I am a great admirer of, and a donor to, both the Lifeboat Foundation and the Singularity Institute. I have nothing against either organization, or any of the people involved. However, I do feel that we need to implement a more effective overall policy.

Transhumanism, as it currently stands, is the ultimate adhocracy. There’s little central authority, and no overarching plan to bring the planet from where it stands now into a happy, utopian future. This is, perhaps, a good thing; we have all seen overly rigid, overly broad plans collapse when a Black Swan comes along. However, transhumanism cannot continue to operate in this manner and simultaneously handle large ultratechnology projects.

Open-source software projects, which do manage to get things done, are an excellent example of why this type of system requires certain starting conditions. A typical open-source project has several full-time or highly dedicated developers, a larger group of people who make the occasional contribution, an even larger “following” of hundreds or thousands of people who participate in forums and such, and a userbase that can range from tens of thousands to millions. There is little formal hierarchy, and people can choose to work on whatever they want to. Many people were amazed when this structure became common during the 1990s, but we know from experience that it works.

However, open-source projects naturally fulfill a number of preconditions, none of which are currently met by transhumanism. To name a few of the more important ones:

1). Any given open-source project has a single, clearly defined codebase that everyone contributes to. At most, there may be several alternative versions, or a “stable” and a “beta” version. For the most part, everyone has the same knowledge base. This does not apply to transhumanism, or to any of its variants. There’s no central database of knowledge, no great repository of wisdom which aspiring students can learn from, no great shelf of manuscripts which have been vetted and checked for accuracy and completeness. The end result is that people are still arguing about things which were effectively resolved back in 2002.

Eliezer’s Overcoming Bias posts contain relatively little unpublished material; what makes them remarkable is their thorough systematizing of bits and pieces of disjointed knowledge. While I still dispute their effectiveness as a recruiting tool, they should be extremely effective in bringing everyone up to the same basic knowledge level. We definitely need to do more on this front; SIAI has an opening for a Senior Science Writer, but this is a full-time position, so the qualifications required are rather hefty. In the meantime, I suggest that SIAI or someone else hire transhumanist freelance bloggers on a part-time or volunteer basis, to write about already-researched material which needs to be formalized and systematized. Many such bloggers are quite articulate and knowledgeable about the subject matter, and until we can find someone to do this full-time, it seems like the only alternative to further tying up Eliezer. Due to unavoidable external circumstances, I am unable to participate in such an effort, at least for the time being.

As a further interim measure, there are already many informal collections of transhumanist-themed essays scattered around the Internet. It would be fairly easy to, with the authors’ permission, download these and store them on a central server for public access. I have done this myself for archival purposes, in case of nuclear war or another Internet-destroying catastrophe, but I lack the time required to make such a database up-to-date and easily accessible.

2). Open-source projects are not usually vulnerable to sudden, nonrecoverable catastrophe caused by maliciousness or incompetence. The standard open-source security policy is “given enough eyeballs, all bugs are shallow”; i.e., if there is a security hole, people will see it and fix it if they’re allowed to look at the code. This works well enough for protecting users’ PCs; it will not work with ultratechnology, where a single slip can cause global catastrophe.

Luckily, the military has already invented a reasonable system for handling secret information: keep it locked up in a vault somewhere, and only admit people who have proven themselves trustworthy. Anything important enough to be kept hush-hush should be formally stamped “SECRET” and thrown into an encrypted database (with, say, 4096-bit RSA keys protecting the symmetric keys). The alternative is leaking bits and pieces of supposedly secret information all over the place. Human brains simply aren’t reliable enough to keep track of ten thousand bits of secret information at once, and people who attempt to do so (e.g., spies) are known to have high failure rates.

3). Any large-scale open-source project will have some form of TODO list, with high-priority, medium-priority and low-priority tasks. Identifying which tasks need the most attention is difficult, but it’s a lot better than releasing version 1.2 with a bug that overwrites the hard drive. Existential risk, which is far more serious than most PC data, is currently handled very poorly by comparison. Among intellectuals, the same amount of attention is commonly given to asteroid impacts and nanotechnological disasters, despite the six-orders-of-magnitude-plus difference in probability; in the popular media, the disparity is even more extreme. I have already written about this subject, and I plan to revisit the area more formally when I have more time available.
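The prioritization argument can be made concrete with expected-loss arithmetic. The probabilities below are hypothetical order-of-magnitude placeholders chosen only to match the six-orders-of-magnitude claim above, not actual risk estimates:

```python
# Hypothetical per-century probabilities, for illustration only.
risks = {
    "asteroid impact": 1e-8,
    "nanotechnological disaster": 1e-2,
}
impact = 1.0  # treat both as total losses, so expected loss == probability

# Rank by expected loss (probability times impact), highest first.
ranked = sorted(risks.items(), key=lambda kv: kv[1] * impact, reverse=True)
for name, p in ranked:
    print(f"{name}: expected loss ~ {p * impact:.0e}")
```

A TODO list sorted this way would put essentially all of the attention on the higher-probability risk, rather than splitting coverage evenly the way intellectuals and the popular media currently do.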

4). Open-source projects are built on computer programming languages, which anyone can learn fairly easily. There are hundreds of thousands of professional programmers in the US alone, and amateur programmers probably outnumber them many times over. By contrast, a random Joe SL1 or Jimmy SL2 would need to spend years covering large inferential distances before publishing original research papers. This goes for everyone, no matter how intelligent; I’m quite confident that Eliezer 1999 would be much more effective after spending a few months learning about things that Eliezer 2008 pointed him to.

This means that getting big projects done will require significant numbers of full-time employees. Full-time employees, to be blunt, are expensive as hell: both Google and Microsoft have around $500K of cash on hand per employee. Currently, both sides of the equation are lacking; there’s no significant pool of people to hire from, and there’s no infrastructure to hire into. Finding the former will probably be a lot easier than the latter, but both will undoubtedly require years of effort.