There are 88 videos in the Accelerating Future video group on Vimeo, all created by Jeriaska. They include substantial content from events such as Foresight’s Unconference 2007, Aging 2008, AGI 08, AGI 09, Global Catastrophic Risks 08, and BIL. This Vimeo group is the best source for video content from those conferences and will be updated with videos from future ones.
Stanford Professor Emeritus Martin Hellman, a friend of SIAI and the Lifeboat Foundation, was recently featured in a press release by Stanford University, “Chance of nuclear war is greater than you think: Stanford engineer makes risk analysis”:
What are the chances of a nuclear world war? What is the risk of a nuclear attack on United States soil? The risk of a child born today suffering an early death due to nuclear war is at least 10 percent, according to Martin Hellman, a tall, thin and talkative Stanford Professor Emeritus in Engineering.
Nuclear tensions in Iran and North Korea are increasing the need to take a long look at how the United States handles weapons of mass destruction, Hellman said.
Auto manufacturers assess the risk of injury to drivers, and engineers assess potential risks of a new nuclear power plant. So why haven’t we assessed the risk of nuclear conflict based on our current arms strategy? Hellman and a group of defense experts, Nobel laureates and Stanford professors are calling for an in-depth analysis.
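To see why even a small annual probability adds up, here is a minimal back-of-the-envelope sketch in Python. The annual probabilities and the 80-year lifespan are illustrative assumptions of mine, not Hellman’s figures:

```python
# Toy cumulative-risk calculation: if nuclear war has a small but constant
# annual probability, what is the chance it happens within one lifetime?
# The annual probabilities here are illustrative assumptions, not Hellman's.

def lifetime_risk(annual_prob: float, years: int = 80) -> float:
    """Probability of at least one occurrence in `years` independent years."""
    return 1.0 - (1.0 - annual_prob) ** years

for p in (0.001, 0.005, 0.01):
    print(f"annual risk {p:.2%} -> lifetime risk {lifetime_risk(p):.1%}")
```

Even a 0.1 percent annual risk compounds to roughly 8 percent over an 80-year life, which gives a feel for the kind of arithmetic behind a headline figure like 10 percent.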
You might have heard this already, but here is CNET’s coverage of Henry Markram’s talk. Their article is titled, “Artificial brain in 10 years, apocalypse soon after?”
I think Markram is wrong about being able to model an intelligent virtual human brain within 10 years. I can’t wait for his talk to appear online so I can examine the details he presents for that projection (if any). Still, it seems plausible to me that whole brain emulation could be achieved within 20-30 years.
Brian Wang has some more background on the NYT article.
Here is the site for the panel they were talking about. Presidential Panel, sounds pretty fancy.
The people there look pretty old; I’m seeing a sea of grey. Nothing wrong with that, but I’m starting to notice an unusual pattern: younger people (under roughly 45) worry about an intelligence explosion, and older people worry less. Is it because younger people are irrational, or because older people have difficulty picking up new ideas? Vinge, Joy, and Kurzweil aren’t young, so there are exceptions. Either way, if younger people retain their concern about the intelligence explosion as they age, that view will begin to dominate.
My suspicion is that most older people have grown up so long within the human order of things that they have great difficulty imagining a superintelligence showing up and rearranging everything. Notice I say MOST.
Following on the heels of a Singularity-related article from just a couple months ago, “The Coming Superbrain”, New York Times journalist John Markoff has penned another Singularity-related article, “Scientists Worry Machines May Outsmart Man”.
What is nice about these articles is that they are not solely about Kurzweil’s Singularity, and they actually branch out to consider issues of roboethics and the intelligence explosion.
The topic of this latest article is a conference convened at Asilomar by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss the dangers of advanced AI. What is interesting is that the conference seems to have been convened as a reaction to what our community has been doing over the past decade: agitating publicly about the dangers of advanced AI.
In describing the meeting, the article begins like this:
The meeting on the future of artificial intelligence was organized by Eric Horvitz, a Microsoft researcher who is now president of the association.
Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems …
Here’s an interesting interview with Vladimir Vapnik, a leader in statistical learning theory, titled “Learning Has Just Started”. Vapnik is affiliated with the Computer Learning Research Centre at Royal Holloway, University of London. If you look over their personnel, you may also recognize Ray Solomonoff, creator of the concept of algorithmic probability.
H/t to Daniel Burfoot at Less Wrong.
The start of Large Hadron Collider operations has been delayed again. Again!
The joke is that the delays keep happening because the LHC would kill us all if it worked: anthropically, we should expect to find ourselves in a universe with a high population, one where human extinction keeps failing to occur for “mysterious” reasons. (To clarify, I lean towards thinking this is false.)
Can anyone say something about why they think the Doomsday Argument is false? I’ve read some rebuttals but found them unconvincing. I understand that all this stuff is fuzzy, but I still haven’t been argued out of it.
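For anyone who hasn’t worked through it, here is a minimal numerical sketch of the Doomsday Argument’s Bayesian core under the self-sampling assumption. The birth rank, the two hypotheses, and the even prior are all illustrative assumptions, not anyone’s published numbers:

```python
# Toy Doomsday Argument under the self-sampling assumption: treat your birth
# rank as a uniform random draw from everyone who will ever live, then update
# a prior over the total number of humans. All figures are illustrative.

RANK = 60e9  # rough count of humans born so far (a commonly cited ballpark)

hypotheses = {
    "doom soon (100 billion humans total)": 100e9,
    "doom late (100 trillion humans total)": 100e12,
}
prior = {name: 0.5 for name in hypotheses}  # even prior odds, by assumption

# Likelihood of observing birth rank RANK given total N is 1/N (uniform over
# ranks), provided RANK <= N; otherwise the hypothesis is ruled out.
posterior = {
    name: prior[name] * ((1.0 / total) if RANK <= total else 0.0)
    for name, total in hypotheses.items()
}

norm = sum(posterior.values())
for name, weight in posterior.items():
    print(f"{name}: {weight / norm:.1%}")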
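```

The update favors “doom soon” by a factor of 1,000 here, which is the whole force of the argument; the rebuttals I’ve seen attack the sampling assumption or the choice of prior, not this arithmetic.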
1. Feynman’s proposal to achieve molecular manufacturing.
2. A historical note on the idea: Heinlein’s fictional Waldoes. Waldoes in the story were (a) self-replicating (“Reduplicating”) and (b) scale-shifting (“Pantograph”).
3. Why hasn’t the Feynman Path been attempted, or at least studied and analyzed?
4. The Feynman Path involves more than MEMS.
5. Is it worth starting now?
6. Some of the open questions.
7. Outline of the steps to make a Feynman Path roadmap.
8. An example of prior work which suggests that 1/1000th scale is a good place to start on the Feynman Path (a toy version of the scale arithmetic follows this list).
9. Promising candidate technologies for fabricating key components or steps, and considerations for the Feynman Path.
10. The Feynman Path initiative is a specific, concrete proposal.
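To put a number on the pantograph idea in items 2 and 8, here is a toy calculation. The 1/4-per-generation ratio is the one Feynman suggested in “There’s Plenty of Room at the Bottom”; the starting and target scales are my own assumptions for illustration:

```python
import math

# Toy arithmetic for the scale-shifting ("Pantograph") step of the Feynman
# Path. Feynman's talk proposed building each generation of tools at roughly
# 1/4 the linear scale of the last; the start and target scales below are
# illustrative assumptions.

START_M = 1e-3   # 1 mm: roughly the 1/1000th scale mentioned in item 8
TARGET_M = 1e-9  # ~1 nm: molecular dimensions
RATIO = 4        # linear shrink factor per generation, per Feynman

generations = math.ceil(math.log(START_M / TARGET_M, RATIO))
final_scale = START_M / RATIO**generations
print(f"{generations} generations of 1/{RATIO} scaling: "
      f"{START_M:.0e} m -> {final_scale:.1e} m")
```

About ten generations sounds modest on paper; each one, of course, would be its own engineering program at an unfamiliar scale.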
Jamais Cascio recently appeared on the History Channel’s program “That’s Impossible!” The episode, only the second of the series, was called “Real Terminators”. Here is a series of clips where he appears:
Great stuff, Jamais.
I missed this when it originally happened, but apparently IEEE Spectrum was named a National Magazine Award finalist for its June 2008 special issue on the Singularity, as mentioned in a recent press release they sent out. This is more proof that writing about the Singularity is journalistic dynamite. The bandwagoning has already begun: the sooner you put out an article on the topic, the easier it will be to brag that you were on top of the story at a relatively early point.
What I like about the issue is that the people behind it are very anti-Singularity, as confirmed by the “back story” article and subsequent posts on the IEEE blog, yet they were compelled to cover the topic anyway because it’s such a hot topic right now. You can tell an idea is doing well when people have no choice but to cover it.