This Universcale flash app is really impressive. I found the most interesting part around the micro/nanoscale. It includes data points on the very smallest electronics as well as organic molecules.
It was recently proposed that dark energy is just an illusion, an artifact of the difference in collapse speed between matter-dense regions of space and the voids. If this is true, it would be a fascinating discovery, letting us say that we actually understand 70% of the mass-energy of the universe. The remaining portion to explain would be dark matter. Despite their misleadingly similar names, the only thing dark matter and dark energy have in common is that we don't know where either comes from. Both could be mere artifacts of our interpretations.
1. The Asimov Laws comment:
While pouring over code for days, lets hope they remember to put in the 3 laws of Robotics.
2. The "I'm worried because of movies" comment:
This shit is scaring me. In every movie involving AI the human race has struggled against robots, computers, or whatever you'd like to call them. If you let AI have physical responsibilities and give it the ability to learn it's only natural that they will evolve and decide to kill humans. Computers can evolve faster than humans and it is almost certain as demonstrated by evolution that they will want to destroy us. There are mutualistic relationships in the natural world, but I personally don't think computers will want us to live like we are right now.
I know some of you will laugh at this, but this is not a joke to me and you should wake up and smell the coffee. If AI is developed it should never be given the right to develop itself physically without giving it restraints that leave the computer unable to expand past a certain point.
Both of these comments are typical of the average person, and like many average-level thoughts on difficult topics, they're superficial and unconstructive. Asimov's laws wouldn't work: negative commands ("don't do this") are useless compared to positive commands ("do this"). Unless what you want a robot or AI to do is entirely implicit in the positive commands, the goal structure is unlikely to be self-consistent. Asimov's laws were a plot device invented half a century ago; we aren't going to get anywhere by pretending they would actually help, or that they're a legitimate way of thinking about AI ethics.
It's smart to be concerned about the future of AI, and to "wake up and smell the coffee" with regard to the fact that we won't be the only intelligent species on this planet for much longer; many transhumanists need to do exactly that. However, saying "it's only natural that they will evolve and decide to kill humans" is the classic boring anthropomorphism that kills all serious discussion of AI ethics before it can even get started. It's like trying to do math without any coherent concept of number. Humans need to realize that everything we consider "natural" and "normal" about certain psychological patterns is entirely contingent on our historical experience in a pin-sized corner of the totality of mindspace. There is no automatic connection between intelligence level and goal content, except insofar as they sometimes come from the same underlying causal process (in our case, evolution), so saying "once AIs surpass us in intelligence, they'll want to kill us" is ridiculous Darwinomorphism. By Darwinomorphism, I mean the unfounded assumption that an intelligently programmed intelligence will share the psychological features common to all minds shaped by Darwinian evolution.
Anyone who holds either of these two beliefs - that Asimov's laws are a decent idea, or that AIs will inevitably behave in some anthropomorphic way - is essentially signaling that they can't contribute to the serious discussion of "what dynamic goals do we give the first AI, and what structure should implement those goals?" At present, the community that can seriously discuss these issues seems to number only around 100 people, which is unfortunate, because the clock is ticking and several thousand would be far preferable.