DISCLAIMER: I am a great admirer of, and a donor to, both the Lifeboat Foundation and the Singularity Institute. I have nothing against either organization, or any of the people involved. However, I do feel that we need to implement a more effective overall policy.
Transhumanism, as it currently stands, is the ultimate adhocracy. There’s little central authority, and no overarching plan to bring the planet from where it stands now into a happy, utopian future. This is, perhaps, a good thing; we have all seen overly rigid, overly broad plans collapse when a Black Swan comes along. However, transhumanism cannot continue to operate in this manner, and simultaneously handle large ultratechnology projects.
Open-source software projects, which do manage to get things done, are an excellent example of why this type of system requires certain starting conditions. A typical open-source project has several full-time or highly dedicated developers, a larger group of people who make the occasional contribution, an even larger “following” of hundreds or thousands of people who participate in forums and such, and a userbase that can range from tens of thousands to millions. There is little formal hierarchy, and people can choose to work on whatever they want to. Many people were amazed when this structure became common during the 1990s, but we know from experience that it works.
However, open-source projects naturally fulfill a number of preconditions, none of which are currently met by transhumanism. To name a few of the more important ones:
1). Any given open-source project has a single, clearly defined database of code that everyone contributes to. At most, there may be several alternative versions, or a “stable” and a “beta” version. For the most part, everyone has the same knowledge base. This does not apply to transhumanism, or to any of its variants. There’s no central database of knowledge, no great repository of wisdom which aspiring students can learn from, no great shelf of manuscripts which have been vetted and checked for accuracy and completeness. The end result is that people are still arguing about things which were effectively resolved back in 2002.
Eli’s Overcoming Bias posts contain relatively little unpublished material; what makes them remarkable is their thorough systematizing of disjointed bits and pieces of knowledge. While I still dispute their effectiveness as a recruiting tool, they should be extremely effective in bringing everyone up to the same basic knowledge level. We definitely need to do more on this front; SIAI has an opening for a Senior Science Writer, but this is a full-time position, so the qualifications required are rather hefty. In the meantime, I suggest that SIAI or someone else hire transhumanist freelance bloggers on a part-time or volunteer basis, to write about already-researched material which needs to be formalized and systematized. Many such bloggers are quite articulate and knowledgeable about the subject matter, and until we can find someone to do this full-time, it seems like the only alternative to further tying up Eliezer. Due to unavoidable external circumstances, I am unable to participate in such an effort, at least for the time being.
As a further interim measure: many informal collections of transhumanist-themed essays are already scattered around the Internet. It would be fairly easy to download these, with the authors’ permission, and store them on a central server for public access. I have done this myself for archival purposes, in case of nuclear war or some other Internet-destroying catastrophe, but I lack the time to keep such a database up-to-date and easily accessible.
2). Open-source projects are not usually vulnerable to sudden, nonrecoverable catastrophe caused by malice or incompetence. The standard open-source security policy is “given enough eyeballs, all bugs are shallow”; i.e., if there is a security hole, people will spot and fix it, provided they’re allowed to look at the code. This works well enough for protecting users’ PCs; it will not work with ultratechnology, where a single slip can cause global catastrophe.
Luckily, the military has already invented a reasonable system for handling secret information: keep it locked up in a vault somewhere, and only admit people who have proven themselves trustworthy. Anything important enough to be kept hush-hush should be formally stamped “SECRET” and thrown into an AES-256-encrypted database. The alternative is leaking bits and pieces of supposedly secret information all over the place. Human brains simply aren’t reliable enough to keep track of ten thousand bits of secret information at once, and people who attempt to do so (e.g., spies) are known to have high failure rates.
3). Any large-scale open-source project will have some form of TODO list, with high-, medium- and low-priority tasks. Identifying which tasks need the most attention is difficult, but it’s a lot better than releasing version 1.2 with a bug that overwrites the hard drive. Existential risk, where the stakes are vastly higher than any PC’s data, is currently triaged very poorly by comparison. Among intellectuals, asteroid impacts and nanotechnological disasters commonly receive the same amount of attention, despite the six-orders-of-magnitude-plus difference in probability; in the popular media, the disparity is even more extreme. I have already written about this subject, and I plan to revisit the area more formally when I have more time available.
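The triage problem described in this point is, at bottom, an expected-severity ordering problem, the same kind of sorting an open-source bug tracker does. A minimal sketch in Python; every probability and severity figure below is an illustrative placeholder, not an actual risk estimate:

```python
# Sketch: ordering a risk "TODO list" by expected severity
# (probability x severity), the way a bug tracker ranks issues.
# All numbers are illustrative placeholders, NOT real estimates.
risks = [
    {"name": "asteroid impact",       "p_per_century": 1e-8, "severity": 1.0},
    {"name": "nanotech disaster",     "p_per_century": 1e-2, "severity": 1.0},
    {"name": "hard-drive-eating bug", "p_per_century": 0.5,  "severity": 1e-6},
]

# Compute a priority score for each risk.
for r in risks:
    r["priority"] = r["p_per_century"] * r["severity"]

# Highest expected severity first.
ranked = sorted(risks, key=lambda r: r["priority"], reverse=True)

for r in ranked:
    print(f'{r["name"]}: priority {r["priority"]:.2e}')
```

Even with made-up numbers, the point survives: two risks with similar headline severity can differ by six orders of magnitude in priority once probability is factored in.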
4). Open-source projects are built on programming languages, which anyone can learn fairly easily. There are thousands of professional programmers in the US alone, and amateur programmers probably outnumber them many times over. By contrast, a random Joe SL1 or Jimmy SL2 would need to spend years crossing large inferential distances before he could publish original research. This goes for everyone, no matter how intelligent; I’m quite confident that Eliezer 1999 would have been much more effective after spending a few months studying the things Eliezer 2008 could point him to.
This means that getting big projects done will require significant numbers of full-time employees. Full-time employees, to be blunt, are expensive as hell: both Google and Microsoft hold around $500K of cash-on-hand per employee. Currently, both sides of the equation are lacking: there is no significant pool of people to hire from, and no infrastructure to hire them into. Building the former will probably be a lot easier than building the latter, but both will undoubtedly require years of effort.