Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

23 Jun 2011

Two Approaches to AGI/AI

There are two general approaches to AGI/AI that I'd like to draw attention to: not "neat" and "scruffy", the standard division, but "brain-inspired" and "not brain-inspired".

Accomplishments of not brain inspired AI:

  • Wolfram Alpha (in my opinion the most interesting AI today)
  • spam filters
  • DARPA Grand Challenge victory (Stanley)
  • UAVs that fly themselves
  • clever game AI
  • AI that scans credit card records for fraud
  • the voice recognition AI that we all talk to on the phone
  • intelligence gathering AI
  • Watson and derivatives
  • Deep Blue
  • optical character recognition (OCR)
  • linguistic analysis AI
  • Google Translate
  • Google Search
  • text mining AI
  • OpenCog
  • AI-based computer aided design
  • the software that serves up user-specific Internet ads
  • pretty much everything

Accomplishments of brain-inspired AI:

  • Cortexia, a bio-inspired visual search engine
  • Numenta (no product yet)
  • Neural networks, which have proven highly limited
  • ???? (tell me below and I'll add them)

One place where brain-inspired AI always shows up is in science fiction. In the real world, AI has very little to do with copying neurobiology, and everything to do with abstract mathematics and coming up with algorithms that work for the job, regardless of their similarity to human cognitive processing.

Filed under: AI
23 Jun 2011

Responding to Alex Knapp at Forbes

From Mr. Knapp's recent post:

If Stross’ objections turn out to be a problem in AI development, the “workaround” is to create generally intelligent AI that doesn’t depend on primate embodiment or adaptations. Couldn’t the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

But Anissmov’s first point here is just magical thinking. At the present time, a lot of the ways that human beings think is simply unknown. To argue that we can simply “workaround” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence. Indeed, it’s become pretty clear that even human thinking and animal thinking is quite different. For example, it’s clear that apes, octopii, dolphins and even parrots are, to certain degrees quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different than that of humans. And I don’t mean on a different level — I mean actually different. On this point, I’d highly recommend reading Temple Grandin, who’s done some brilliant work on how animals and neurotypical humans are starkly different in their perceptions of the same environment.

My first point is hardly magical thinking -- all of machine learning works to create learning systems that do not copy the animal learning process, which is only understood at a vague level anyway. Does Knapp know anything about the way existing AI works? It's not based on trying to copy humans, but often on improving an abstract mathematical quality called inference. (Sometimes it's just a collection of heuristics and custom-built algorithms, but again, that isn't copying humans.) Approximations of Solomonoff induction work quite well on a variety of problems, regardless of the state of comparing human and machine intelligence. Many "AI would have to be exactly like humans to work, because humans are so awesome, so there" proponents, like Knapp and Stross, talk as if Solomonoff induction doesn't exist.
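To make "approximate Solomonoff induction" concrete, here is a minimal sketch. It is a toy of my own construction, not anyone's production system: repeating bit patterns stand in for programs, and each "program" gets prior weight 2^-length, so shorter explanations dominate the prediction.

```python
# Toy Solomonoff-style induction: weight each hypothesis by 2^(-description
# length), keep the ones consistent with the observed bits, and predict the
# next bit by weighted vote. Repeating bit patterns stand in for programs.
from itertools import product

def hypotheses(max_len=8):
    """Enumerate tiny 'programs': every repeating bit pattern up to max_len.
    The pattern itself serves as the program's description."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def predicts(pattern, observed):
    """True if repeating `pattern` reproduces the observed prefix exactly."""
    reps = len(observed) // len(pattern) + 1
    return (pattern * reps)[:len(observed)] == observed

def next_bit_probability(observed):
    """Posterior probability that the next bit is '1', mixing all consistent
    hypotheses with prior weight 2^(-len(pattern))."""
    weight_one = weight_total = 0.0
    for pattern in hypotheses():
        if predicts(pattern, observed):
            w = 2.0 ** -len(pattern)
            weight_total += w
            if pattern[len(observed) % len(pattern)] == "1":
                weight_one += w
    return weight_one / weight_total

print(next_bit_probability("010101"))  # near 0: the short pattern "01"
                                       # dominates, so "0" is predicted next
```

The point is structural: nothing in that loop consults neuroscience, yet it is a (crude) general-purpose learner.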

Answering how much or how little of the human brain is known is a fairly subjective question. The MIT Encyclopedia of the Cognitive Sciences runs over 1,000 pages, and is full of information about how the brain works. Bayesian Brain is another tome that discusses, mathematically, how the brain works:

A Bayesian approach can contribute to an understanding of the brain on multiple levels, by giving normative predictions about how an ideal sensory system should combine prior knowledge and observation, by providing mechanistic interpretation of the dynamic functioning of the brain circuit, and by suggesting optimal ways of deciphering experimental data. Bayesian Brain brings together contributions from both experimental and theoretical neuroscientists that examine the brain mechanisms of perception, decision making, and motor control according to the concepts of Bayesian estimation.

After an overview of the mathematical concepts, including Bayes' theorem, that are basic to understanding the approaches discussed, contributors discuss how Bayesian concepts can be used for interpretation of such neurobiological data as neural spikes and functional brain imaging. Next, contributors examine the modeling of sensory processing, including the neural coding of information about the outside world. Finally, contributors explore dynamic processes for proper behaviors, including the mathematics of the speed and accuracy of perceptual decisions and neural models of belief propagation.

The fundamentals of how the brain works, as far as I can see, are known, not unknown. We know that neural firing patterns can be modeled in Bayesian terms, as responses to external stimuli mediated by internal connection weights. We know the brain is divided into functional modules, and we have a quite detailed understanding of certain modules, like the visual cortex. We know enough about the hippocampus in animals that scientists have artificially recreated a part of it to restore memory function in rats.
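As an illustration of what the quoted blurb means by an ideal sensory system combining prior knowledge and observation, here is a one-function sketch of Bayesian cue combination. The numbers are invented for the example; only the precision-weighting formula is standard.

```python
# Hedged illustration of the "Bayesian brain" idea: an ideal observer
# combines a Gaussian prior with a noisy Gaussian observation by weighting
# each according to its precision (inverse variance).

def combine(prior_mean, prior_var, obs, obs_var):
    """Posterior over a stimulus given a Gaussian prior and Gaussian noise."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs)
    return post_mean, post_var

# Prior: the stimulus is usually near 0 degrees; observation: a noisy 10.
print(combine(prior_mean=0.0, prior_var=1.0, obs=10.0, obs_var=4.0))
# -> (2.0, 0.8): the estimate is pulled toward the prior, which is exactly
#    the normative behavior the Bayesian Brain book describes.
```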

Intelligence is a type of functionality, like the ability to take long jumps, but far more complicated. It's not mystically different from any other form of complex specialized behavior -- it's still based on noisy neural firing patterns in the brain. To say that we have to exactly copy a human brain to produce true intelligence, if that is what Knapp and Stross are thinking, is anthropocentric in the extreme. Did we need to copy a bird to produce flight? Did we need to copy a fish to produce a submarine? Did we need to copy a horse to produce a car? No, no, and no. Intelligence is not mystically different.

We already have a model for AI that is absolutely nothing like a human -- AIXI.
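For reference, AIXI picks each action by expectimax over every program q (on a universal Turing machine U) that reproduces the interaction history so far, weighting each program by its simplicity. Modulo notation, Hutter's definition reads:

```latex
% AIXI action selection: expected future reward, averaged over all programs
% q consistent with the history, each weighted by 2^(-length of q).
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \left[ r_k + \cdots + r_m \right]
  \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Nothing in that expression refers to neurons, embodiment, or primate fitness.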

Being able to quantify the difference between human and machine intelligence would be helpful for machine learning, but I'm not sure why it would be absolutely necessary for any form of progress.

As for universal measures of intelligence, here's Shane Legg taking a stab at it:
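His and Marcus Hutter's headline definition, as best I can restate it, scores an agent pi by its expected performance across all computable environments mu, weighted by simplicity via Kolmogorov complexity K:

```latex
% Legg-Hutter universal intelligence: E is the set of computable
% environments, V is the agent's expected total reward in environment mu.
\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi
```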

Even if we aren't there yet, Knapp and Stross should be cheering on the incremental effort, not standing on the sidelines frowning and toasting the eternal superiority of Homo sapiens sapiens. Wherever AI is today, can't we agree that we should make a responsible effort towards beneficial AI? Isn't that important? Even if we think true AI is a million years away -- because if it were closer, that would mean human intelligence isn't as complicated and mystical as we had wished?

As to Anissmov’s second point, it’s definitely worth noting that computers don’t play “human-level” chess. Although computers are competitive with grandmasters, they aren’t truly intelligent in a general sense – they are, basically, chess-solving machines. And while they’re superior at tactics, they are woefully deficient at strategy, which is why grandmasters still win against/draw against computers.

This is true, but who cares? I didn't say they were truly intelligent in the general sense. That's what is being worked towards, though.

Now, I don’t doubt that computers are going to get better and smarter in the coming decades. But there are more than a few limitations on human-level AI, not the least of which are the actual physical limitations coming with the end of Moore’s Law and the simple fact that, in the realm of science, we’re only just beginning to understand what intelligence, consciousness, and sentience even are, and that’s going to be a fundamental limitation on artificial intelligence for a long time to come. Personally, I think that’s going to be the case for centuries.

Let's build a computer with true intelligence first, and worry about "consciousness" and "sentience" later, then.

23 Jun 2011

Forbes Blogger Alex Knapp on “What is the Likelihood of the Singularity?”

Alex Knapp over at Forbes is writing a series of blog posts around Charles Stross' recent Singularity criticisms. Knapp goes after my last post pretty enthusiastically, so check it out.

Filed under: singularity
22 Jun 2011

Response to Charles Stross’ “Three arguments against the Singularity”

Stross:

super-intelligent AI is unlikely because, if you pursue Vernor's program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

"Human-equivalent AI is unlikely" is a ridiculous comment. Human level AI is extremely likely by 2060, if ever. (I'll explain why in the next post.) Stross might not understand that the term "human-equivalent AI" always means AI of human-equivalent general intelligence, never "exactly like a human being in every way".

If Stross' objections turn out to be a problem in AI development, the "workaround" is to create generally intelligent AI that doesn't depend on primate embodiment or adaptations.

Couldn't the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

I don't get the point of the last couple sentences. Why not just pursue general intelligence rather than "enhancements to primate evolutionary fitness", then? The concept of having "motivations of its own" seems kind of hazy. If the AI is handing me my ass in Starcraft 2, does it matter if people debate whether it has "motivations of its own"? What does "motivations of its own" even mean? Does "motivations" secretly mean "motivations of human-level complexity"?

I do have to say, this is a novel argument that Stross is putting forward. I haven't heard that one before. As far as I know, Stross must be one of the only non-religious thinkers who believes human-level AI is "unlikely", presumably indefinitely "unlikely". In a literature search I conducted in 2008 looking for academic arguments against human-level AI, I didn't find much -- mainly just Dreyfus' What Computers Can't Do and the people who argued against Kurzweil in Are We Spiritual Machines? "Human-level AI is unlikely" is one of those ideas that Romantics and non-materialists find emotionally appealing, but backing it up is another matter.

(This is all aside from the gigantic can of worms that is the ethical status of artificial intelligence; if we ascribe the value inherent in human existence to conscious intelligence, then before creating a conscious artificial intelligence we have to ask if we're creating an entity deserving of rights. Is it murder to shut down a software process that is in some sense "conscious"? Is it genocide to use genetic algorithms to evolve software agents towards consciousness? These are huge show-stoppers — it's possible that just as destructive research on human embryos is tightly regulated and restricted, we may find it socially desirable to restrict destructive research on borderline autonomous intelligences ... lest we inadvertently open the door to inhumane uses of human beings as well.)

I don't think these are "showstoppers" -- there is no government on Earth that could search every computer for lines of code that are possibly AIs. We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don't want my self-driving car to argue with me about where we want to go today. I don't want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos.

All it takes is for some people to build a "volitional" AI and there you have it. Even if 99% of AIs are tools, there are organizations -- like the Singularity Institute -- working towards AIs that are more than tools.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements. While such an AI might accidentally jeopardize its human's well-being, it's no more likely to deliberately turn on it's external "self" than you or I are to shoot ourselves in the head. And it's no more likely to try to bootstrap itself to a higher level of intelligence that has different motivational parameters than your right hand is likely to grow a motorcycle and go zooming off to explore the world around it without you.

YOU want AI to be like this. WE want AIs that do "try to bootstrap [themselves]" to a "higher level". Just because you don't want it doesn't mean that we won't build it.

21 Jun 2011

Existential Risk Reduction Career Network

Reducing the probability of human extinction is more important than anything else, because humans are the only known source of "intelligence", "creativity", and "values", and if we die, the universe becomes boring. No one in the future will care that you saw a funny movie. They will care whether you helped Earth-originating intelligent life survive its self-destructive adolescent phase.

For those who wish to make their lives actually mean something, there's the existential risk reduction career network:

http://lesswrong.com/lw/4lg/existential_risk_reduction_career_network

Interested in donating to existential risk reduction efforts? Would you like to exchange career information with like-minded others? Then you should consider the Existential Risk Reduction Career Network! ("X Risk Network" for those short on time.) From the front page of the website:

"This network is for anyone interested in donating substantial amounts (relative to income) to non-profit organizations focused on the reduction of existential risk, such as SIAI, FHI, and the Lifeboat Foundation. [...] We are a community of people assisting each other to increase our resources available for contribution. Members discuss the strengths and weaknesses of different careers, network, share advice on job applications and career advancement, assist others with finding interviews, and occasionally look for qualified individuals to hire from within the network."

For more details, including on the process of requesting invitations, head on over to the front page at http://www.xrisknetwork.com/

Keep in mind that the network is for students as well, not just those currently on the job market. The network also hosts discussion of long-term job strategy, school admissions, and internship possibilities.

Join an elite group of far-sighted individuals by contributing at least 5% of your income to existential risk reduction charities.

15 Jun 2011

“How to Pitch Articles” Now on H+ Magazine Website

My article on how to pitch articles to H+ magazine has been slightly improved and is now posted on H+ magazine.

Topics to inspire you:

  • How can the transhumanist philosophy be applied to daily life?
  • Quantified Self topics
  • Is change actually accelerating? If so, what is the evidence?
  • What technologies pose major risks and why?
  • What are the next steps for robotics and AI?
  • What is happening in genomics?
  • What is the future of energy?
  • Is culture getting friendlier to the future?
  • What will the year 2020 be like?
  • What will the year 2030 be like?
  • What will the year 2050 be like?
  • What will the year 2100 be like?
  • Book reviews (Robopocalypse)
  • Movie reviews (Limitless)
  • Conference/event reviews
  • Cool new businesses and initiatives in the transhumanist space
  • Philosophical issues
  • Other cultural commentary
  • Space, space stations, spaceships, satellites, planetary colonization
  • Topics similar to content in Scientific American and Popular Mechanics

Send your pitch ideas to editor@hplusmagazine.com. I look forward to seeing your ideas!

Filed under: meta, transhumanism
11 Jun 2011

How to Pitch Articles to H+ Magazine

I'm the new Managing Editor at H+ magazine, which in practical terms means I need to come up with five good articles a week to publish. The magazine gets a lot of traffic so it's a good place to share information with other transhumanists.

1. Come up with an idea, or find a company, product, or news story worth covering. Ideally you have had personal experience with the subject and are uniquely suited to write about it. If not, you should be ready to quote someone who has.

2. Send the pitch to editor@hplusmagazine.com. That goes into my inbox. Include links to samples of your other writing. (If you want to write articles for H+ magazine but haven't written serious blog posts yet, you might want to try that first.)

3. If you get the go-ahead, investigate the story and get a quote from an expert in the area you're writing about. Take notes. The article should primarily be reporting, not speculation or personal opinion. Editorials are welcome, but they're harder to write than straightforward informative articles. If you do want to insert a little speculation, save it for the end.

4. Write the article. Between 500 and 1000 words is ideal. The less experienced you are at writing, the shorter and more concise it should be. Follow Singularity writing advice. Omit needless words. Remember the Most Important Writing Rule. Most likely, what you write will be boring not because you're stupid, but because you aren't bending over backwards far enough to please the audience. Make each sentence matter.

5. Use the inverted pyramid structure that is common to all news and magazine articles. The five Ws come first: who, what, when, where, why, and sometimes how. Then come the most important details of your story. Why should we care? That should be answered within two or three sentences of the beginning. Why is reading this article worth the reader's precious time? Why should I read this article in my free time instead of going hiking, visiting the beach, or reading something better written? If your idea isn't good enough to occupy the reader's time, don't bother.

That's it! Follow these simple guidelines, and your article will be accepted and you will become famous overnight. Within the transhumanist community, anyway. :)

10 Jun 2011

Humanity+ Summer Fundraiser

Humanity+, which used to be known more descriptively (but less concisely and less media-friendly) as the World Transhumanist Association, is running a fundraiser this summer:

Thanks to a generous matching grant by the Life Extension Foundation and other major donors, if we raise $15,000 independently, we will secure a total of $30,000 in funding for Humanity+ this summer, enabling the organization to shift into a higher gear. Any gift you make to Humanity+ will be matched dollar-for-dollar until July 31st.

Donate today!

http://www.humanityplus.org/match

Filed under: transhumanism
9 Jun 2011

Thanks for Adding Yourself to the Map

Thanks to everyone who is participating in the transhumanist collaborative map project. After just six days we have almost 100 pins on the map and over 20,000 views. I see that many people in the Bay Area and New York are being shy and not adding themselves...

Be sure to pass the link around to your friends who are transhumanists, so we can build a better picture of the movement worldwide. This is a unique and foresighted group! We should learn a little more about one another.

Filed under: transhumanism
8 Jun 2011

Steve Wozniak a Singularitarian?

Wozniak:

Apple co-founder Steve Wozniak has seen so many stunning technological advances that he believes a day will come when computers and humans become virtually equal, but with machines having a slight advantage in intelligence.

Speaking at a business summit held at the Gold Coast on Friday, the once co-equal of Steve Jobs in Apple Computers told his Australian audience that the world is nearing the likelihood that computer brains will equal the cerebral prowess of humans.

When that time comes, Wozniak said that humans will generally withdraw into a life where they will be pampered into a system almost perfected by machines, serving their whims and effectively reducing the average men and women into human pets.

Widely regarded as one of the innovators of personal computing with his works on putting together the initial hardware offerings of Apple, Wozniak declared to his audience that "we're already creating the superior beings, I think we lost the battle to the machines long ago."

I always think of this guy when I go by Woz Way in San Jose.

So, if artificial intelligence can become smarter than humans, shouldn't we be concerned with maximizing the probability of a positive outcome, instead of just saying that AI will definitely do X and that there's nothing we can do about it, or engaging in some juvenile fantasy that we humans can directly control all AIs forever? (We can indirectly "control" AI by setting its initial conditions favorably. That is all we can do; the alternative is to ignore the initial conditions.)

3 Jun 2011

Collaborative Map of Transhumanists Worldwide

Updating this map is a little tricky: you have to be invited as a collaborator by someone who already is one. If you know someone already on the map, you can ask them for an invite; otherwise, fill in your email address in the form below. Then you can invite anyone else to collaborate; you just need their email address. I promise I won't sell your address to spammers. This list is only for adding people to the map.

View Transhumanists Worldwide in a larger map

2 Jun 2011

Foresight @ Google: 25th Anniversary & Reunion Weekend

Interested in emerging technologies?
Fascinated by the potential in transformative nanotech?
Come explore the future with...

FORESIGHT@GOOGLE
25th Anniversary Conference Celebration and Reunion Weekend
Google HQ in Mountain View, CA
June 25-26 2011

A rockstar lineup includes keynotes:

• JIM VON EHR - Founder/President of Zyvex,
the world's first successful molecular nanotech company
• BARNEY PELL, PhD - Cofounder/CTO of Moon Express, competing for Google's Lunar X PRIZE

With speakers and panelists including:
• WILLIAM ANDREGG - Founder/CEO of Halcyon Molecular
• MIKE GARNER, PhD - Chair of ITRS Emerging Research Materials
• MIKE NELSON - CTO of NanoInk
• LUKE NOSEK - Co-founder of PayPal, Founders Fund Partner
• PAUL SAFFO, PhD - Wired, NYT-published strategist & forecaster
• SIR FRASER STODDART, PhD - Knighted for creation of molecular "switches" and a new field of nanochemistry
• THOMAS THEIS, PhD - IBM's Director of Physical Sciences

For the full speaker roster, as well as information on our exclusive 25th Anniversary Banquet, see our conference website:

http://www.foresight.org/reunion

Space is limited!

For $50 off, register now with the special discount code just for AF readers: ACCELERATING

I hope to see you all there!