J. Storrs Hall at the Foresight Institute has responded to my recent post about the challenges of self-replication. Specifically, the line where I refer to the Foresight Institute and the Center for Responsible Nanotechnology:
What is remarkable are those that seem to argue, like Ray Kurzweil, the Foresight Institute, and the Center for Responsible Nanotechnology, that humanity is inherently capable of managing universal self-replicating constructors without a near-certain likelihood of disaster.
Dr. Hall responds:
From this he jumps with very few intervening arguments ("there are terrorists out there") to a conclusion that we need a benevolent world dictatorship ("singleton"), which might need to be a superhuman self-improving AI. This seems a wildly illogical leap, but surprisingly appears to be almost an article of faith in certain parts of the singularitarian community and Washington, DC. Let us examine the usually unstated assumptions behind it:
A singleton need not be a benevolent world dictatorship. It is simply a "world order in which there is a single decision-making agency at the highest level", as defined by …