IBM Cat Brain Nonsense in the Zeitgeist

I found another ridiculous article on IBM’s so-called “cat brain” at TechWorldNews, titled “IBM Researchers Go Way Beyond AI With Cat-Like Cognitive Computing”. I run into these articles all the time doing AI-related searches, so even though they were published a year ago, their deception remains strongly in effect. The fact that so many people actually believe what IBM implies shows how fundamentally confused 99% of the population (including geeks) is about AI in general. Here’s a quote from the article:

IBM researchers have developed a cognitive computer simulation that mimics the way a cat brain processes thought, and they expect to be able to mimic human thought processes within a decade. “A cognitive computer could quickly and accurately put together the disparate pieces of any complex data puzzle and help people make good decisions rapidly,” said Daniel Kantor, medical director of Neurologique.

Mimics the way a cat brain processes thought. They actually wrote that. So people believe a computer that processes cat thought existed in 2009, but don’t expect a computer that mimics human …

Read More

New Singularity Institute Publications in 2010

Here’s the source.

Basic AI Drives and Catastrophic Risks (Carl Shulman, 2010)
Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics (Nick Tarleton, 2010)
Economic Implications of Software Minds (S. Kaas, S. Rayhawk, A. Salamon and P. Salamon, 2010)
From mostly harmless to civilization-threatening: pathways to dangerous artificial general intelligences (Kaj Sotala, 2010)
Implications of a software-limited singularity (Carl Shulman, Anders Sandberg, 2010)
Superintelligence does not imply benevolence (Joshua Fox, Carl Shulman, 2010)
Timeless Decision Theory (Eliezer Yudkowsky, 2010)

The above are papers; below are the presentations:

How intelligible is intelligence? (Anna Salamon, Stephen Rayhawk, Janos Kramar, 2010)
Whole Brain Emulation and the Evolution of Superorganisms (Carl Shulman, 2010)
What can evolution tell us about the feasibility of artificial intelligence? (Carl Shulman, 2010)

If you value this research, donate to the Singularity Institute via PayPal, and your donation will be matched. At Less Wrong, various users are announcing the level of their contributions. The user …

Read More

Marvin Minsky Quote on Randomness in AI

I found this on Marvin Minsky’s Wikipedia page:

Minsky is an actor in an artificial intelligence koan (attributed to his student, Danny Hillis) from the Jargon file:

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. “What are you doing?” asked Minsky. “I am training a randomly wired neural net to play Tic-tac-toe,” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play,” Sussman said. Minsky then shut his eyes. “Why do you close your eyes?” Sussman asked his teacher. “So that the room will be empty.” At that moment, Sussman was enlightened.

What I actually said was, “If you wire it randomly, it will still have preconceptions of how to play. But you just won’t know what those preconceptions are.” –Marvin Minsky
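Minsky’s point is easy to demonstrate. Here’s a minimal sketch (a toy example of my own, not Sussman’s actual setup; the architecture and sizes are arbitrary): a tiny randomly wired network, shown an empty tic-tac-toe board, already scores some squares higher than others. The preconceptions are there, just hidden in the random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny "randomly wired" net: 9 board squares in, 9 move scores out.
# The exact architecture doesn't matter; the point is only that random
# weights already encode move preferences before any training.
W1 = rng.normal(size=(9, 16))
b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 9))
b2 = rng.normal(size=9)

def move_scores(board):
    """Score all 9 squares for a board given as a length-9 array of -1/0/+1."""
    hidden = np.tanh(board @ W1 + b1)
    return hidden @ W2 + b2

empty = np.zeros(9)
scores = move_scores(empty)
print(np.round(scores, 2))                       # not uniform: the net already "prefers" moves
print("favorite opening square:", int(np.argmax(scores)))
```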

I’m actually sort of pleased that so many folks in Artificial Intelligence somehow believe in the power of total randomness. It will hold them back from success, giving more time for …

Read More

Josh Tenenbaum: Bayesian Models of Human Inductive Learning

Here’s the link. Abstract:

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, hidden properties of objects, or the existence of causal relations from just one or a few relevant observations — far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? I will argue that people’s everyday inductive leaps can be understood as approximations to Bayesian computations operating over structured representations of the world, what cognitive scientists have called “intuitive theories” or “schemas”. For each of several everyday learning tasks, I will consider how appropriate knowledge representations are structured and used, and how these representations could themselves be learned via Bayesian methods. The key challenge is to balance the need for strongly constrained inductive biases — critical for generalization from very few examples — with the flexibility to learn about the structure of new domains, to learn new inductive biases suitable for environments which we could not have …
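To make the “Bayesian computations over structured representations” idea concrete, here’s a toy sketch in the spirit of Tenenbaum’s number game (the hypothesis space and uniform prior below are my own illustrative choices, not taken from the talk). A handful of examples is enough to concentrate the posterior on a structured hypothesis, because smaller hypotheses that still fit the data earn more likelihood per example (the size principle).

```python
# Toy Bayesian concept learning over numbers 1..100.
domain = range(1, 101)
hypotheses = {
    "even numbers":    {n for n in domain if n % 2 == 0},
    "powers of two":   {n for n in domain if (n & (n - 1)) == 0},
    "multiples of 10": {n for n in domain if n % 10 == 0},
    "all numbers":     set(domain),
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior for simplicity

def posterior(examples):
    """P(h | examples) under strong sampling: likelihood = (1/|h|)^n if h covers all examples."""
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in examples):
            scores[h] = prior[h] * (1.0 / len(extension)) ** len(examples)
        else:
            scores[h] = 0.0
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

print(posterior([16]))             # "powers of two" already favored over broader hypotheses
print(posterior([16, 8, 2, 64]))   # posterior nearly all on "powers of two"
```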

Read More

Starcraft AI Competition Results Posted

Starcraft AI is jacked up and good to go! Last month UC Santa Cruz’s Expressive Intelligence Studio (“exploring the intersection of artificial intelligence, art, and design”) held the Starcraft AI Competition, where bots were pitted against each other and against human players. Twenty-nine teams submitted bots, and matches were played using the Brood War expansion.

This competition is interesting to me because 1) Starcraft: Brood War is my all-time favorite multi-player game, 2) it’s many times more complicated than chess or Go, 3) the game requires real-time decision-making skills, and 4) the best known strategies are highly complex, involving extensive micromanagement of individual units. Some professional Starcraft tournament players input hundreds of commands per minute. A medium-level player like myself probably inputs 20-30 moves per minute as the game starts to pick up. One way to win easily against novices is to take the optimal route to mass-producing the cheapest unit (the build order is well known), then rush the enemy base. This can only be done with extensive micromanagement, but once you know …
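For flavor, here’s a toy, framework-agnostic sketch of that cheap-unit rush logic. The GameState fields, costs, and thresholds are hypothetical stand-ins for illustration; real competition bots play against the actual game via BWAPI and look nothing like this.

```python
from dataclasses import dataclass, field

@dataclass
class GameState:
    minerals: int = 50
    worker_count: int = 4
    cheap_unit_count: int = 0
    enemy_base_known: bool = False
    orders: list = field(default_factory=list)

RUSH_SIZE = 6          # attack once this many cheap units are massed (illustrative)
UNIT_COST = 50         # cost of the cheapest combat unit (illustrative)
TARGET_WORKERS = 9     # enough workers to sustain production (illustrative)

def decide(state: GameState) -> GameState:
    """One decision tick: build economy first, mass cheap units, then rush."""
    if state.worker_count < TARGET_WORKERS and state.minerals >= UNIT_COST:
        state.orders.append("train worker")
        state.minerals -= UNIT_COST
        state.worker_count += 1
    elif state.minerals >= UNIT_COST:
        state.orders.append("train cheap combat unit")
        state.minerals -= UNIT_COST
        state.cheap_unit_count += 1
    if state.cheap_unit_count >= RUSH_SIZE:
        target = "enemy base" if state.enemy_base_known else "scout for enemy base"
        state.orders.append(f"attack-move: {target}")
    return state

# Quick run-through (income is not modeled, so we start with a mineral bank):
state = GameState(minerals=700)
for _ in range(12):
    state = decide(state)
print(state.orders)
```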

Read More

Skype Co-Founder: “We Need to Ensure That a Self-Correcting System Will Stay True to its Initial Purpose”

A Singularity Institute donor and Singularity Summit sponsor, Skype co-founder Jaan Tallinn understands the risk of advanced artificial intelligence. Estonian Public Broadcasting recently covered his remarks on the topic:

Jaan Tallinn, one of the founders of Skype, believes humans may succeed in creating artificial intelligence by midcentury.

Tallinn told uudised.err.ee that in order to create artificial intelligence, two important problems need to be solved. “First, we need to ensure that a self-correcting system will stay true to its initial purpose. Secondly, we need to solve a more difficult problem — to determine what we actually want. What are those initial goals for a computer that is given super intelligence?” Tallinn asked.

He added that there could be negative outcomes if artificial intelligence is more powerful than humans but cannot interpret human values. “If a computer needs to get carbon atoms, and it doesn’t care about humans, then it would think the easiest place to get them is from humans. It would be more difficult to acquire them from the air,” said Tallinn.

It is hard to …

Read More

Stephen Omohundro: The Basic AI Drives

More info on Stephen (thank you, commenter Bettina):

Stephen Omohundro http://selfawaresystems.com/

Via Wikipedia:

He graduated from Stanford University with degrees in Physics and Mathematics. He received a Ph.D. in Physics from the University of California, Berkeley and published the book Geometric Perturbation Theory in Physics based on his thesis.

At Thinking Machines Corporation, he developed Star Lisp, the first programming language for the Connection Machine, with Cliff Lasser. From 1986 to 1988, he was an Assistant Professor of Computer science at the University of Illinois at Urbana-Champaign and cofounder of the Center for Complex Systems Research. He subsequently joined the International Computer Science Institute (ICSI) in Berkeley, California, where he led the development of the object-oriented programming language Sather in 1990 and developed novel neural network and machine learning algorithms. He subsequently was a Research scientist at the NEC Research Institute, working on machine learning and computer vision, and was a co-inventor of U.S. Patent 5,696,964, “Multimedia Database Retrieval System Which Maintains a Posterior Probability Distribution That Each Item in the Database is a Target of …

Read More

io9 Continues to Perpetuate Ridiculous “IBM Simulated a Cat Brain” Meme

In a recent post at io9, Esther Inglis-Arkell perpetuates the stupid claim that IBM successfully simulated a cat cortex in a computer, a claim the site first made right after IBM’s announcement. Doesn’t anyone consider it odd that we have supposedly simulated a cat’s brain, yet full-resolution simulations of the brains of lower animals, including insects, are nowhere to be found? There isn’t even a simulation of a flatworm that displays behavioral isomorphism to a real flatworm, which is exactly what we would expect from a genuine simulation.

That the writers and editors of io9 don’t even question this news item shows that their knowledge of the technology they write about is very poor. This is what happens when you focus too hard on pop culture — there’s no time for real science reading. The end result is poor coverage and the perpetuation of obviously false memes. Perhaps io9 should stick to covering sketches of Wookies and UFOs, and leave science/AI reporting to others.

Shortly after IBM’s announcement, computational neuroscientist Henry Markram at EPFL’s Blue Brain Project called …

Read More

Jaron Lanier: the End of Human Specialness

Lanier’s latest eye-roller is up at The Chronicle of Higher Education.

Decay in the belief in self is driven not by technology, but by the culture of technologists, especially the recent designs of antihuman software like Facebook, which almost everyone is suddenly living their lives through. Such designs suggest that information is a free-standing substance, independent of human experience or perspective. As a result, the role of each human shifts from being a “special” entity to being a component of an emerging global computer.

Uh, OK. I agree in some sense… on Facebook, I’ve said in response to David Pearce that the site “makes us more trivial people than ever” and shortens our attention spans. I often find myself agreeing with “Luddite” Andrew Keen, who is unfairly put down by open-everything fanatic and geek darling Larry Lessig. Even from this natural “Luddite” perspective that I hold, Lanier’s article still seems odd.

Facebook does have the potential to enrich lives and humanness rather than turn everything into information, when it is used in moderation. If you know …

Read More