Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

11Jan/11

IBM Cat Brain Nonsense in the Zeitgeist

I found another ridiculous article on IBM's so-called "cat brain" at TechNewsWorld, titled "IBM Researchers Go Way Beyond AI With Cat-Like Cognitive Computing". I run into articles like this all the time doing AI-related searches, so even though they were published a year ago, their deception remains in full effect. That so many people actually believe what IBM implies shows how fundamentally confused 99% of the population (including geeks) is about AI in general. Here's a quote from the article:

IBM researchers have developed a cognitive computer simulation that mimics the way a cat brain processes thought, and they expect to be able to mimic human thought processes within a decade. "A cognitive computer could quickly and accurately put together the disparate pieces of any complex data puzzle and help people make good decisions rapidly," said Daniel Kantor, medical director of Neurologique.

Mimics the way a cat brain processes thought. They actually wrote that. So people believe that a computer processing cat thought existed in 2009, yet don't expect a computer that mimics human thought for hundreds of years, or ever? People really do believe this (I probably did at one point long ago), because they were brought up on bizarre Judeo-Christian ideas that elevate human thought to a supernatural status which cannot be replicated in a computer. It's entirely unscientific, but even many so-called "secular humanists" believe in mystical human exceptionalism. "We're nowhere close to understanding the brain", they claim, despite thousands of detailed textbooks and hundreds of thousands of articles on the brain and mind.

It's true that we're nowhere near understanding all the microcircuitry of the brain, but we have to distinguish between functionally relevant cognitive complexity and incidental cognitive complexity. Most of the complexity in a bird is incidental to being a bird, not fundamentally necessary for flight -- airplanes fly without feathers. It may be possible to create AGI without understanding much about the human brain at all.

Filed under: AI 3 Comments
28Dec/10

New Singularity Institute Publications in 2010

Here's the source.

Basic AI Drives and Catastrophic Risks (Carl Shulman, 2010)
Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics (Nick Tarleton, 2010)
Economic Implications of Software Minds (S. Kaas, S. Rayhawk, A. Salamon and P. Salamon, 2010)
From mostly harmless to civilization-threatening: pathways to dangerous artificial general intelligences (Kaj Sotala, 2010)
Implications of a software-limited singularity (Carl Shulman, Anders Sandberg, 2010)
Superintelligence does not imply benevolence (Joshua Fox, Carl Shulman, 2010)
Timeless Decision Theory (Eliezer Yudkowsky, 2010)

The above are papers; below are presentations:

How intelligible is intelligence? (Anna Salamon, Stephen Rayhawk, Janos Kramar, 2010)
Whole Brain Emulation and the Evolution of Superorganisms (Carl Shulman, 2010)
What can evolution tell us about the feasibility of artificial intelligence? (Carl Shulman, 2010)

If you value this research, donate to the Singularity Institute via PayPal, and your donation will be matched. At Less Wrong, various users are announcing the level of their contributions. The user "Rain", who donated $2,700, commented on the site about why he donates to SIAI.

18Dec/10

Marvin Minsky Quote on Randomness in AI

I found this on Marvin Minsky's Wikipedia page:

Minsky appears in an artificial intelligence koan (attributed to his student, Danny Hillis) from the Jargon File:

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
"What are you doing?" asked Minsky.
"I am training a randomly wired neural net to play Tic-tac-toe," Sussman replied.
"Why is the net wired randomly?", asked Minsky.
"I do not want it to have any preconceptions of how to play," Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?" Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.

What I actually said was, "If you wire it randomly, it will still have preconceptions of how to play. But you just won't know what those preconceptions are." --Marvin Minsky

I'm actually sort of pleased that so many folks in Artificial Intelligence somehow believe in the power of total randomness. It will hold them back from success, giving more time for Friendly AI to be developed properly and cautiously.
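To make Minsky's point concrete, here is a tiny sketch of my own (hypothetical toy code, nothing to do with Sussman's actual PDP-6 program): an untrained, randomly wired net already "prefers" some outputs over others, and different random wirings prefer different ones. The preconceptions are there; you just didn't choose them.

```python
import numpy as np

def random_net(seed, hidden=16):
    """Build a tiny randomly wired two-layer net, with no training at all."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((2, hidden))
    W2 = rng.standard_normal((hidden, 1))
    return lambda x: np.tanh(np.tanh(x @ W1) @ W2)

# Four toy game positions, encoded as two features each (hypothetical).
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

for seed in (0, 1, 2):
    net = random_net(seed)
    # Before any training, each wiring already ranks the positions
    # differently -- a built-in, unchosen inductive bias.
    print(seed, net(positions).ravel().round(3))
```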

Filed under: AI 16 Comments
16Dec/10

Singularity Summit 2010 Videos: Shane Legg on Universal Measures of Intelligence

Shane Legg at The Singularity Summit 2010 -- Universal measures of intelligence from Singularity Institute on Vimeo.

This was my favorite talk of the Summit.

Shaky camera... :( Next Summit I'll have to watch the live video feed the entire time to make sure it stays steady.

Filed under: AI, singularity, videos 3 Comments
2Dec/10

Josh Tenenbaum: Bayesian Models of Human Inductive Learning

Here's the link. Abstract:

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, hidden properties of objects, or the existence of causal relations from just one or a few relevant observations -- far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? I will argue that people's everyday inductive leaps can be understood as approximations to Bayesian computations operating over structured representations of the world, what cognitive scientists have called "intuitive theories" or "schemas". For each of several everyday learning tasks, I will consider how appropriate knowledge representations are structured and used, and how these representations could themselves be learned via Bayesian methods. The key challenge is to balance the need for strongly constrained inductive biases -- critical for generalization from very few examples -- with the flexibility to learn about the structure of new domains, to learn new inductive biases suitable for environments which we could not have been pre-programmed to perform in. The models I discuss will connect to several directions in contemporary machine learning, such as semi-supervised learning, structure learning in graphical models, hierarchical Bayesian modeling, and nonparametric Bayes.
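To give a flavor of what such models look like, here is a minimal sketch (my own toy code, not Tenenbaum's) of Bayesian concept learning in the style of his "number game": a handful of candidate hypotheses, a "size principle" likelihood that favors smaller hypotheses as examples accumulate, and a posterior that drives generalization from just a few observations.

```python
# Which concept over 1..100 generated the examples we saw?
hypotheses = {
    "even":          [n for n in range(1, 101) if n % 2 == 0],
    "odd":           [n for n in range(1, 101) if n % 2 == 1],
    "powers_of_two": [2 ** k for k in range(1, 7)],   # 2..64
    "mult_of_ten":   list(range(10, 101, 10)),
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}   # uniform prior

def posterior(data):
    """P(h | data) via Bayes' rule with the size-principle likelihood."""
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in data):
            # Each example is assumed drawn uniformly from the concept,
            # so P(data | h) = (1/|h|)^n -- smaller concepts win fast.
            scores[h] = prior[h] * (1.0 / len(extension)) ** len(data)
        else:
            scores[h] = 0.0
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

def p_in_concept(x, data):
    """Posterior predictive: how likely x belongs to the same concept."""
    return sum(p for h, p in posterior(data).items() if x in hypotheses[h])

data = [16, 8, 2, 64]
print(posterior(data))          # "powers_of_two" dominates "even"
print(p_in_concept(32, data))   # high: generalizes to unseen powers
print(p_in_concept(6, data))    # low: rejects mere evenness
```

After seeing 16, 8, 2, and 64, "powers of two" overwhelmingly beats "even numbers" even though both are consistent with the data, which is why generalization sharpens so quickly from so few examples.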

Filed under: AI, science, videos 3 Comments
24Nov/10

AGI-2010 Videos Online

Here's the lot of them. Speakers include Marcus Hutter, Ben Goertzel, Moshe Looks, Randal Koene, and many new faces. I'm starting with Randal Koene on "Neural Mechanisms of Reinforcement Learning":

Randal A. Koene - Neural Mechanisms of Reinforcement Learning from Raj Dye on Vimeo.

Filed under: AI, videos 2 Comments
11Nov/10

Starcraft AI Competition Results Posted

Starcraft AI is jacked up and good to go! Last month UC Santa Cruz's Expressive Intelligence Studio ("exploring the intersection of artificial intelligence, art, and design") held the Starcraft AI Competition, where bots were pitted against each other and against human players. 29 teams submitted bots. The matches were played with the Brood War expansion.

This competition is interesting to me because 1) Starcraft: Brood War is my all-time favorite multiplayer game, 2) it's many times more complicated than chess or Go, 3) the game requires real-time decision-making skills, and 4) the best known strategies are highly complex, involving extensive micromanagement of individual units. Some professional Starcraft tournament players input hundreds of commands per minute. A medium-level player like myself probably inputs 20-30 commands per minute as the game starts to pick up. One well-known way to beat novices is to take the optimal route to mass-producing the cheapest unit, then rush the enemy base. This requires extensive micromanagement, but once you know how to do it, it is easy. What I'm most interested in is not AI that can achieve that (the AI that comes with the game can), but AI that can do better in later gameplay, where dozens of different units with unique abilities are introduced and there is a combinatorial explosion of possibilities.
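For illustration, the rush strategy reduces to a very small decision policy. Here's a hypothetical sketch (the GameState fields and action names are invented; a real bot would use the BWAPI interface the competition bots were built on):

```python
from dataclasses import dataclass

@dataclass
class GameState:
    """Stand-in for what a real bot API would expose (hypothetical)."""
    minerals: int
    worker_count: int
    army_count: int

def decide(state: GameState, rush_size: int = 12) -> str:
    """Next high-level action for a naive mass-and-rush bot."""
    if state.worker_count < 9:
        return "build_worker"          # saturate the early economy first
    if state.army_count < rush_size:
        return "build_cheapest_unit"   # e.g. Zerglings, Zealots, Marines
    return "attack_enemy_base"         # rush once the threshold is hit

print(decide(GameState(minerals=200, worker_count=6, army_count=0)))
```

The hard part, of course, is everything this sketch leaves out: the micromanagement during the attack itself, and the later-game combinatorial explosion described above.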

In the end, the champion human player, =DoGo=, a World Cyber Games 2001 competitor, beat all the AIs, but apparently it was close. This is impressive because an AI can "pay attention" to every unit on the map at once, while a human player has to remember everything and can only focus on one unit at a time, even if just for a fraction of a second. Kotaku Australia has coverage of the event, but the real coverage is at the EIS blog, which has detailed technical results for all the matches.

Filed under: AI 11 Comments
24Oct/10

Skype Co-Founder: “We Need to Ensure That a Self-Correcting System Will Stay True to its Initial Purpose”

A Singularity Institute donor and Singularity Summit sponsor, Skype co-founder Jaan Tallinn understands the risk of advanced artificial intelligence. Estonian Public Broadcasting recently covered his remarks on the topic:

Jaan Tallinn, one of the founders of Skype, believes humans may succeed in creating artificial intelligence by midcentury.

Tallinn told uudised.err.ee that in order to create artificial intelligence, two important problems need to be solved. "First, we need to ensure that a self-correcting system will stay true to its initial purpose. Secondly, we need to solve a more difficult problem -- to determine what we actually want. What are those initial goals for a computer that is given super intelligence?" Tallinn asked.

He added that there could be negative outcomes if artificial intelligence is more powerful than humans but cannot interpret human values. "If a computer needs to get carbon atoms, and it doesn't care about humans, then it would think the easiest place to get them is from humans. It would be more difficult to acquire them from the air," said Tallinn.

It is hard to say what qualifies as artificial intelligence, said Enn Tõugu, senior researcher of the Cybernetics Institute at the Tallinn University of Technology. "I can't really even tell you what exactly is intelligence, intellect, reason or knowledge," he said.

"I tend to think that we can talk about intelligence as a human quality that computers can possibly attain. To some degree, it already is so. For example, I see such beginnings in Google," Tõugu said.

To modify the above slightly: to achieve safe artificial intelligence, those two problems must be solved -- they won't necessarily be. Creating artificial intelligence without making a self-modifying system stable, or without accurately specifying what we want, could be a species-ending disaster, and it is entirely possible. In fact, economic pressures may make it more likely than the alternative -- Friendly AI.

Filed under: AI, friendly ai 45 Comments
24Sep/10

Stephen Omohundro: The Basic AI Drives

More info on Stephen, courtesy of commenter Bettina:

Stephen Omohundro http://selfawaresystems.com/

Via Wikipedia:

He graduated from Stanford University with degrees in Physics and Mathematics. He received a Ph.D. in Physics from the University of California, Berkeley and published the book Geometric Perturbation Theory in Physics based on his thesis.

At Thinking Machines Corporation, he developed Star Lisp, the first programming language for the Connection Machine, with Cliff Lasser. From 1986 to 1988, he was an Assistant Professor of Computer Science at the University of Illinois at Urbana-Champaign and cofounder of the Center for Complex Systems Research.

He subsequently joined the International Computer Science Institute (ICSI) in Berkeley, California, where he led the development of the object-oriented programming language Sather in 1990 and developed novel neural network and machine learning algorithms. He was later a research scientist at the NEC Research Institute, working on machine learning and computer vision, and was a co-inventor of U.S. Patent 5,696,964, "Multimedia Database Retrieval System Which Maintains a Posterior Probability Distribution That Each Item in the Database is a Target of a Search".

He then started the consultancy OLO Software, and is now President of Self-Aware Systems in Palo Alto, California. He has been an advisor to the Singularity Institute for Artificial Intelligence since April 2007.

Filed under: AI, friendly ai, videos 2 Comments
23Sep/10

io9 Continues to Perpetuate Ridiculous “IBM Simulated a Cat Brain” Meme

In a recent post at io9, Esther Inglis-Arkell perpetuates the stupid claim that IBM successfully simulated a cat cortex in a computer, a claim the site first made right after IBM's announcement. Doesn't anyone consider it odd that we have supposedly simulated a cat's brain, yet full-resolution simulations of the brains of lower animals, including insects, are nowhere to be found? There isn't even a simulation of a flatworm that displays behavioral isomorphism to a real flatworm -- and behavioral isomorphism is exactly what we would expect from a real simulation.

That the writers and editors of io9 don't even question this news item shows that their knowledge of the technology they write about is very poor. This is what happens when you focus too hard on pop culture -- there's no time for real science reading. The end result is poor coverage and the perpetuation of obviously false memes. Perhaps io9 should stick to covering sketches of Wookiees and UFOs, and leave science/AI reporting to others.

Shortly after IBM's announcement, computational neuroscientist Henry Markram of EPFL's Blue Brain Project called the announcement a "hoax", which I covered in November 2009. I'm not sure I would call it a "hoax" myself, but I do think that Dharmendra Modha's website and announcement were deliberately crafted to mislead the media into thinking his team had simulated a cat brain, rather than merely creating a cat-SCALE simulation of point neurons without even one thousandth the complexity of real neurons. The key science result, the observation of oscillations in the neural network, is trivial, yet it is presented as profound. Also unmentioned is that Prof. Eugene Izhikevich has already created similar models with as many as a hundred billion point neurons, so "human-scale" neural net simulations already exist. Such investigation is apparently beyond the capabilities of the teams running the Internet's pop-sci blogging community.
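For a sense of how simple these point neurons are, here is a sketch following the published toy network from Izhikevich's 2003 paper "Simple Model of Spiking Neurons" (my own translation to Python with NumPy; not IBM's or Izhikevich's code). Each neuron is two equations plus a reset rule, and rhythmic population oscillations emerge almost for free:

```python
import numpy as np

rng = np.random.default_rng(42)
Ne, Ni = 800, 200                              # excitatory, inhibitory
re, ri = rng.random(Ne), rng.random(Ni)

a = np.concatenate([0.02 * np.ones(Ne), 0.02 + 0.08 * ri])  # recovery rate
b = np.concatenate([0.20 * np.ones(Ne), 0.25 - 0.05 * ri])  # recovery coupling
c = np.concatenate([-65 + 15 * re**2, -65 * np.ones(Ni)])   # reset voltage
d = np.concatenate([8 - 6 * re**2, 2 * np.ones(Ni)])        # post-spike bump
S = np.hstack([0.5 * rng.random((Ne + Ni, Ne)),             # random synapses:
               -rng.random((Ne + Ni, Ni))])                 # +excite, -inhibit

v = -65.0 * np.ones(Ne + Ni)                   # membrane potentials (mV)
u = b * v                                      # recovery variables
rate = []                                      # spikes per millisecond

for t in range(1000):                          # simulate one second
    I = np.concatenate([5 * rng.standard_normal(Ne),
                        2 * rng.standard_normal(Ni)])       # noisy input
    fired = v >= 30                            # threshold crossing = spike
    rate.append(int(fired.sum()))
    v[fired] = c[fired]                        # reset the fired neurons
    u[fired] += d[fired]
    I += S[:, fired].sum(axis=1)               # deliver spikes to targets
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)  # two 0.5 ms half-steps
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)  # for numerical stability
    u += a * (b * v - u)

# Plot `rate` and the population fires in waves -- exactly the kind of
# network "oscillation" that gets reported as a headline result.
print(sum(rate), "spikes in one simulated second")
```

Scale this up to a billion such units and you have a "cat-scale" simulation; none of it brings you any closer to simulating a cat.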

Read Henry Markram's comments on why IBM's announcement is profoundly misleading.

Filed under: AI 7 Comments
15Sep/10

Michael Anissimov: “Don’t Fear the Singularity, but Be Careful: Friendly AI Design” at Foresight 2010 Conference

Michael Anissimov: "Don't Fear the Singularity, but Be Careful: Friendly AI Design" at Foresight 2010 Conference from Foresight Institute on Vimeo.

Filed under: AI, videos 1 Comment
7Sep/10

Jaron Lanier: The End of Human Specialness

Lanier's latest eye-roller is up at The Chronicle of Higher Education.

Decay in the belief in self is driven not by technology, but by the culture of technologists, especially the recent designs of antihuman software like Facebook, which almost everyone is suddenly living their lives through. Such designs suggest that information is a free-standing substance, independent of human experience or perspective. As a result, the role of each human shifts from being a "special" entity to being a component of an emerging global computer.

Uh, OK. I agree in some sense... on Facebook, I've said in response to David Pearce that the site "makes us more trivial people than ever" and shortens our attention spans. I often find myself agreeing with "Luddite" Andrew Keen, who is unfairly put down by open-everything fanatic and geek darling Larry Lessig. Even from this natural "Luddite" perspective that I hold, Lanier's article still seems odd.

Used in moderation, Facebook does have the potential to enrich lives and humanness rather than turn everything into information. If you know any teenagers, you can see how easily and seamlessly they integrate online messaging with real-world mutual interest and even obsession. If anything, technology enables a kind of hyper-sociality for them that makes most people over 35 uncomfortable.

Also, it's different when you're part of the club versus outside it. I have noticed a syndrome whereby famous people tend to shy away from Facebook, as if it were too plebeian for their tastes. They never even really try it. As far as I can tell from simple searches, Lanier is too cool to have a Facebook page at all.

Even Andrew Keen has a Facebook page, with the humorous tagline "the anti-christ of Silicon Valley".

Lanier writes:

This shift has palpable consequences. For one thing, power accrues to the proprietors of the central nodes on the global computer. There are various types of central nodes, including the servers of Silicon Valley companies devoted to searching or social-networking, computers that empower impenetrable high finance (like hedge funds and high-frequency trading), and state-security computers. Those who are not themselves close to a central node find their own cognition gradually turning into a commodity. Someone who used to be able to sell commercial illustrations now must give them away, for instance, so that a third party can make money from advertising. Students turn to Wikipedia, and often don't notice that the acceptance of a single, collective version of reality has the effect of eroding their personhood.

Wikipedia has some problems, but by and large it massively increases knowledge. If I smashed every American's television and made them read Wikipedia in the time they spent watching TV and movies, people in bars and on the street would be a lot less boring to talk to. Not everyone is as wealthy as Mr. Lanier and can buy as many books as they like. Still, there is obviously a place for knowledge outside of Wikipedia. Used as a starting point rather than the final word, Wikipedia is a fantastic tool. Just because some people lazily use it as the final word does not mean it is universally bad. The same people would have used dead-tree encyclopedias as the final word anyway.

This shift in human culture is borne by software designs, and is driven by a new sort of "nerd" religion based around a core belief that a global brain is not only emerging but will replace humanity. It is often claimed, in the vicinity of institutions like Silicon Valley's Singularity University, that the giant global computer will upload the contents of human brains to grant them everlasting life in the computing cloud.

Interestingly, I may be part of the "nerd religion" Lanier is describing, if the religion consists of believing that human-friendly Artificial General Intelligence could do a tremendous amount of good in the world and is worth pursuing vigorously. However, I consider talk of global brains to be essentially nonsense. A choir is only as good as its worst member, and human cognition and organizations are constrained by similar rules. No single unit of contribution to any project can be better than the brilliance of the smartest human involved, and the only reason we're so oblivious to this is that humans are the only general intelligences we evolved to model and think about. We also don't like to think any thoughts that make ourselves and our society seem less than awesome.

The problem is that social feelings create such positive affect that we want to ignore the simple truth: a group of humans is just that -- a group of humans -- and not a superintelligence as defined by Bostrom or Vinge.

Still, I do think it would be cool to be an upload in some kind of computing cloud, so maybe there is a connection here.

There is right now a lot of talk about whether to believe in God or not, but I suspect that religious arguments are gradually incorporating coded debates about whether to even believe in people anymore.

Maybe this signifies movement towards non-anthropocentric theories of personhood and ethics? If so, sounds swell to me.

Filed under: AI, transhumanism 18 Comments