Getting anything done in this world, almost anything at all, is very expensive. Either you have a venture that makes more money than it consumes (a business), a volunteer effort (often including personnel lacking in motivation, time, resources, and/or experience), a university (funded by tuition and endowment payments) or that elusive beast, the non-profit, funded by individual and corporate giving.
Usually, the free market does a pretty good job of motivating people to do stuff that other people want. For instance, I can analyze some emerging technology like holographic displays for a company interested in the field, and get some money out of the deal. Maybe that sort of research can get a little boring, but they want it, so they pay me, right?
But the free market economy sometimes fails, or leads to suboptimal outcomes. It satisfies our vices just as much as our virtues. The market for alcohol, tobacco, prostitution, and gambling is enormous. Addiction to these vices costs our society billions of dollars a year in treatment and lost potential. The actuaries can give it a number, but when you get down to it, these losses are unquantifiable. Unfortunately, governmental nanny attempts at setting things right often blow up in our faces. Hence libertarians advocate a completely hands-off approach. The whole situation is a morass.
Just as the free market sometimes encourages the production and purchase of substances and lifestyles that don't represent the best of human activity, it can inflate the price of essential products or services whenever there is economic incentive to do so. People in South America (and elsewhere) are suffering because brand-name drugs are sold at sky-high prices, and their governments are pushing to revoke patents and allow the production of generic versions of these drugs. Tragedies of the Commons occur on a regular basis, to a greater or lesser degree. The emission of greenhouse gases, for instance. Some of these suboptimal configurations are being curtailed by legislative action, but as mentioned above, the nanny approach often fails.
The Tragedy of the Commons that concerns me most is that of international security. Each nation has an incentive to promote international security only insofar as it protects that nation and, to a lesser degree, its allies. Even more so, each nation has an incentive to develop powerful weapons and large armies to increase its bargaining power in international politics. Even though it isn't widely publicized, we are in the largest arms race in the history of humanity right now, in 2007. Rudy Giuliani has suggested doubling the size of the United States military, and if he gets elected, he could actually take steps toward achieving that. Over $1 trillion is spent on militaries worldwide, half of it in the United States. Even if your job is sweeping a floor, part of your salary goes to building killing machines.
Contemplating how many resources should be diverted to the military is a Catch-22. On the one hand, a nation should have a large and strong military so that it is taken seriously in bargaining over international treaties and the unspoken rules that promote peace. In a very real sense, power can be used for peace, and to deny it is naive. On the other hand, a military buildup pressures other countries to follow suit, leading to an arms race from which there is no escape. It's like Lewis Carroll's Red Queen -- we have to keep running just to stay in the same place.
As we inch forward into the 21st century, I and many others foresee new weapons of mass destruction becoming available, weapons that can annihilate every living thing on this planet in weeks or even days. The everyday person has little interest in or concern about such eventualities, but the cognoscenti (or should I say Illuminati?) of academia, business, and government are not idiots, and they see what's coming. If you want to learn more, read "Some Limits to Global Ecophagy" by Robert Freitas.
We are rushing towards an arms race of much greater magnitude than the Cold War. Rather than dumb nukes delivered via huge, conspicuous ICBMs, we will have massive clouds of robotic drones which creep, crawl, bounce, slither, and fly into every nook and cranny of the enemy's hardware or even bodies, capable of dismantling them from within. We are already seeing the beginning of it today. Look at the success of the Predator UAV and minesweeping drones. In a couple of decades, soldiers will be more like mages, directing and controlling lethal swarm intelligence, than riflemen using a souped-up slingshot to send hunks of metal in a straight line.
The danger begins when we hand over more responsibilities and control to the swarm intelligence. Humans are dangerous and sadistic enough, but there is reason to believe that true artificial intelligence will have trouble sympathizing with humans or grasping our moral norms without a large amount of special (expensive and difficult) programming. Moral acumen is not a side dish that comes free of charge with the main meal of intelligence. Our distinctive morality was produced by millions of years of evolution in social groups, and contains numerous complex elements, some poorly understood: reading the facial cues of others, modeling others' circumstances and states of mind, projecting others' current and future intentions contingent on different hypothetical actions, wrestling with vague philosophical questions like "what does a typical human really want?", and so on. One might object that for military robotics these complexities are beside the point: won't the robots all be directed by intelligent human operators? But assuming that humans will stay on top forever (intellectually or physically) is a recipe for disaster.
The free market economy is the dominant force in the world. The world's governments, the wealthiest existing entities, want better weapons to give themselves weight in international relations. Without international arms reduction treaties, which can be quite difficult to enact, the arms race persists indefinitely. Even if I have a button that can destroy the opposing country with a single press, what if they destroy that facility before I have a chance to push it? So I must build multiple such facilities, ad infinitum. The only way to avoid slamming into the wall is to put on the brakes ahead of time.
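The deadlock described above can be made concrete with a toy game-theory sketch (the payoff numbers below are hypothetical, chosen purely for illustration): arming is each side's best reply no matter what the rival does, so mutual armament is the only stable outcome, even though mutual disarmament would pay both sides more.

```python
# Toy model of the security dilemma as a two-player game.
# Payoff values are illustrative assumptions, not data from the essay.
from itertools import product

MOVES = ["disarm", "arm"]

# PAYOFF[(my_move, their_move)] = my payoff; higher is better.
# Disarming while the rival arms is the worst outcome (exploited);
# mutual disarmament is best overall, but arming is always the safer reply.
PAYOFF = {
    ("disarm", "disarm"): 3,   # peace dividend
    ("disarm", "arm"):    0,   # exploited
    ("arm",    "disarm"): 4,   # bargaining leverage
    ("arm",    "arm"):    1,   # costly standoff
}

def best_reply(their_move):
    """The move that maximizes my payoff given the rival's move."""
    return max(MOVES, key=lambda m: PAYOFF[(m, their_move)])

def nash_equilibria():
    """Profiles where neither side gains by unilaterally switching."""
    return [
        (a, b)
        for a, b in product(MOVES, repeat=2)
        if best_reply(b) == a and best_reply(a) == b
    ]

print(nash_equilibria())  # -> [('arm', 'arm')]
```

With these payoffs, "arm" strictly dominates "disarm" for both players, so the only equilibrium is the costly standoff -- exactly the trap that only a binding treaty (changing the payoffs themselves) can escape.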
We need more international cooperation, more global unity, but most of all, technologies that are inherently protective rather than aggressive -- so that even if humans remain all too human, things still turn out all right in the end. Friendly artificial intelligence would be the most powerful tool in this category. If we could create AI that is verifiably friendly, not just in some abstract technical way but in ways that are clear as day, even to a child, then we've hit the technological jackpot. The Friendly AI could then invent and apply additional protective technologies for our benefit. We would have little to fear from such an AI morphing out of control, because it would care about its own integrity even more than we do, or ever could.
The question is, how do we build a Friendly AI? Besides the huge challenge of developing an artificial intelligence to begin with, there is the additional challenge of deciding what "friendly" is supposed to mean. At this point, I often defer to Nick Bostrom's maxipok principle: rather than arguing forever over the specifics, we should attempt to maximize the probability of an OK outcome for everyone. One model that has been suggested for Friendly AI is that of a prototypical altruist, with certain changes, like the absence of a self-centered goal system. If such an AI were programmed correctly, it would not consider "self" a moral entity worthy of special treatment, and would truly be concerned with the good of all.
Several other models have been proposed for creating Friendly AI, but delving into them is a contentious and philosophically complex project. Before getting distracted by the difficulties of creating such AI, it's worth asking in the abstract whether such AI is worth developing at all. I definitely believe so. Humans are inherently self-interested -- wouldn't it be nice if all that selfishness could be diluted with agents who are truly altruistic and care about the human race? Then we might avoid shooting ourselves in the foot as this Dangerous Century proceeds. And our descendants and future selves will thank us for it.