Here’s a writeup.
Embedded below is an interview conducted by Adam A. Ford at The Rational Future. Topics covered include:

- What is the Singularity?
- Is there a substantial chance we will significantly enhance human intelligence by 2050?
- Is there a substantial chance we will create human-level AI before 2050?
- If human-level AI is created, is there a good chance vastly superhuman AI will follow via an “intelligence explosion”?
- Is acceleration of technological trends required for a Singularity? (e.g. Moore’s Law and hardware trajectories, faster progress in AI research)
- What convergent outcomes in the future do you think will increase the likelihood of a Singularity? (e.g. the emergence of markets, the evolution of eyes)
- Does AI need to be conscious or have human-like “intentionality” in order to achieve a Singularity?
- What are the potential benefits and risks of the Singularity?