Michael Wilson on AGI Funding

On AGIRI’s general mailing list, Michael Wilson of Bitphase AI, Ltd., responds to the question, “how can you tell when an AGI project is worth investing in?”:

There have been many, many well-funded AGI projects in the past, public and private. Most of them didn’t produce anything useful at all. A few managed some narrow AI spinoffs. Most of the directors of those projects were just as confident about success as Ben (Goertzel) and Peter (Voss) are. All of them were wrong. No one on this list has produced any evidence (publicly) that they can succeed where all previous attempts failed other than cute PowerPoint slides – which all the previous projects had too. All you can do is judge architecture by the vague descriptions given, and the history of AI strongly suggests that even when full details are available, even so-called experts completely suck at judging what will work and what won’t. The chances of arbitrary donors correctly ascertaining what approaches will work are effectively zero. The usual strategy is to judge by hot buzzword count and apparent …

Read More

Predictability of AI

From complexity theorist Richard Loosemore on the AGI list:

It is entirely possible to build an AI in such a way that the general course of its behavior is as reliable as the behavior of an Ideal Gas: you can’t predict the position and momentum of all its particles, but you sure can predict such overall characteristics as temperature, pressure and volume.
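Loosemore’s analogy can be illustrated with a quick simulation – a minimal sketch in arbitrary units, not a real gas model: each particle’s velocity is individually unpredictable, yet the bulk average that plays the role of “temperature” is the same on every run.

```python
import random
import statistics

def sample_gas(n_particles, seed):
    """Draw one velocity component per particle from a normal distribution
    (the 1-D Maxwell-Boltzmann form at some fixed temperature; units are
    arbitrary for this illustration)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n_particles)]

n = 100_000
runs = [sample_gas(n, seed) for seed in range(3)]

# Per-particle values differ completely between the three runs...
print([round(r[0], 3) for r in runs])

# ...but the bulk "temperature" (mean squared velocity) is nearly
# identical every time, because it averages over 100,000 particles.
temps = [statistics.fmean(v * v for v in r) for r in runs]
print([round(t, 3) for t in temps])  # all very close to 1.0
```

The micro-state is hopeless to predict, but the macro-quantity concentrates tightly around its expected value – the same statistical point Loosemore is making about an AI’s overall behavioral tendencies.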

Without any sophisticated theory of minds in general, predicting the future behavior of any given artificial intelligence can seem impossible – who’s to say that it won’t reprogram itself arbitrarily at any time, if it has the capacity to do so?

The issue is that capacity does not necessarily signify desire. In humans, desire comes from our evolutionary history – every desire, no matter how seemingly unrelated, evolved because it contributed somehow to our inclusive fitness. Art, literature, philosophy, gossip – few people realize that these domains of human endeavor are in fact evolutionarily programmed subgoals of the evolutionary supergoal: the increase of inclusive fitness, which encompasses both our ability to survive and give birth to children that …

Read More

Green Goo a La Mode

On Nobel Intent, minimal genomes are being discussed. The organisms in question are endosymbionts – bacteria that take up residence inside animal cells, forming a symbiotic relationship. Apparently some of these species have extremely tiny genomes:

[...] there’s a second paper on the endosymbiont in a related species, a psyllid, that makes the first genome look big. In this case, the bacterial genome has been whittled down to an extremely gene-rich 166 kilobases with 182 genes. Over 97 percent of that genome codes for something; in fact, nearly a full percent of it codes for parts of two genes at once.
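A quick back-of-envelope check of the quoted figures – the genome size, gene count, and coding fraction come from the excerpt above; the average gene length is my own derived estimate:

```python
# Figures quoted for the psyllid endosymbiont's genome.
genome_bp = 166_000      # 166 kilobases
gene_count = 182
coding_fraction = 0.97   # "over 97 percent ... codes for something"

# Derived: total coding bases and average length per gene.
coding_bp = genome_bp * coding_fraction
avg_gene_len = coding_bp / gene_count
print(round(avg_gene_len))  # roughly 885 bp per gene
```

An average of roughly 885 base pairs per gene is typical for bacterial genes, which is consistent with the interpretation that almost nothing non-functional remains in this genome.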

What do I take away from this? Well, aside from general scientific interest, I think that the successful existence of minimal-genome organisms in nature shows us how low the complexity threshold for engineering green goo may be. That is, artificial variants of natural organisms with much greater physical performance, such that they will be capable of entirely displacing the original population. If other organisms are dependent on the displaced organism …

Read More

What is the Singularity?

“What is the Singularity?” is the Singularity Institute’s introduction to the Singularity, written by Eliezer Yudkowsky in 2002. According to many, it’s the best introduction to the Singularity out there. As I thought a reminder would be helpful, I’m posting the document here in its entirety:

“The Singularity is the technological creation of smarter-than-human intelligence. There are several technologies that are often mentioned as heading in this direction. The most commonly mentioned is probably Artificial Intelligence, but there are others: direct brain-computer interfaces, biological augmentation of the brain, genetic engineering, ultra-high-resolution scans of the brain followed by computer emulation. Some of these technologies seem likely to arrive much earlier than the others, but there are nonetheless several independent technologies all heading in the direction of the Singularity – several different technologies which, if they reached a threshold level of sophistication, would enable the creation of smarter-than-human intelligence.

A future that contains smarter-than-human minds is genuinely different in a way that goes beyond the usual visions of a future filled with bigger and better gadgets. Vernor Vinge …

Read More

Paul Phillips on the Singularity

World-famous poker player Paul Phillips, nicknamed “Dot-Com”, has won over $2,200,000 playing poker live. Here’s what he has to say about the Singularity on his blog:

More and more, I have come to believe that the future of the human race hangs on one thing and one thing only: whether we can reach the singularity before the enemies of civilization gain enough traction to plunge the entire planet into dystopia. And more and more I fear we are going to lose the race. Kurzweil has predicted for a long time the singularity will arrive around 2040 and I think this is as good a prediction as can be made, but it depends on the continued application of the law of accelerating returns. A few well placed nukes would push the ETA back more than a bit. And if enough of the underpinnings of civilization are smashed, there will be no chance.

My sincere belief that this race is the ONLY thing that matters with respect to the future of our species is why I don’t …

Read More

A Nuclear Reactor in Every Home

Sometime between 2020 and 2040, we will invent a practically unlimited energy source that will solve the global energy crisis. This unlimited source of energy will come from thorium. A summary of the benefits, from a recent announcement of the start of construction for a new prototype reactor:

There is no danger of a meltdown like the one at the Chernobyl reactor. It produces minimal radioactive waste. It can burn plutonium waste from traditional nuclear reactors. It is not suitable for the production of weapons-grade materials. Global thorium reserves could cover our energy needs for thousands of years.

If nuclear reactors can be made safe and relatively cheap, how popular could they get?

It depends on how cheap we’re talking about. Most reactor designs that utilize thorium use molten salt (or lead) as a coolant. Even though they were developed as early as 1954, molten-salt-cooled reactors are a relatively immature technology. Interestingly enough, the first nuclear reactor to provide usable amounts of electricity was a molten salt reactor. Three were built as part of the …

Read More

After NK test, what can be done to reduce nuclear threat?

Via Eurekalert:

Scholars and policy analysts examine global security questions

In the wake of the announcement of a nuclear test by North Korea, new questions have been raised about proliferation and the threat of nuclear terrorism. Is nuclear terrorism preventable? What steps has the United States already taken to avoid a nuclear catastrophe and what steps should be taken in the future?

Scholars, scientists, and policymakers, including Graham Allison, Sam Nunn, and William Perry, address these crucial questions in articles that are currently available online in the September volume of SAGE Publication’s The ANNALS of The American Academy of Political and Social Science. The volume is edited by Allison of the Belfer Center for Science and International Affairs, John F. Kennedy School of Government, Harvard University.

Of particular interest in light of North Korea’s claim that it has conducted a nuclear test are Allison’s article “Flight of Fancy,” which traces the chain of events a Korean nuclear test might set in motion, Perry’s article “Proliferation on the Peninsula: Five North Korean Nuclear Crises,” Sam Nunn’s “The Race between …

Read More

Hiroshima resets “peace clock” after NK nuclear test

From the Pink Tentacle:

The Hiroshima Peace Memorial Museum’s Peace Watch Tower, which records the number of days since the last nuclear test, was reset on October 10, one day after North Korea conducted an underground nuclear test.

The peace clock’s two digital displays show the number of days since the US atomic bombing of Hiroshima and the number of days since the last nuclear test was conducted. Before being reset on Monday, the clock read 40, the number of days since the US conducted a subcritical nuclear test at the end of August.

The clock was set up on August 6, 2001 on the 56th anniversary of the 1945 U.S. atomic bombing of Hiroshima. Over the past 5 years, the clock has been reset 11 times following each of the nuclear tests conducted by the US (some in cooperation with the UK) and Russia.

Museum director Koichiro Maeda says, “We are concerned that more nations will start to believe their national security can be strengthened by possessing nuclear weapons. It is extremely foolish.” The museum is …

Read More

Defining the Singularity

From a recent email to the Singularity mailing list:

The Singularity definitions being presented here are incredibly confusing and contradictory. If I were a newcomer to the community and saw this thread, I’d say that this word “Singularity” is so poorly defined, it’s useless. Everyone is talking past each other. As Nick Hay has pointed out, the Singularity was originally defined as smarter-than-human intelligence, and I think that this definition remains the most relevant, concise, and resistant to misinterpretation.

It’s not about technological progress. It’s not about experiencing an artificial universe by being plugged into a computer. It’s not about human intelligence merging with computing technology. It’s not about things changing so fast that we can’t keep up, or the accretion of some threshold level of knowledge. All of these things might indeed follow from a Singularity, but might not, making it important to distinguish between the possible effects of a Singularity and what the Singularity actually is. The Singularity actually is the creation of smarter-than-human intelligence, but there are many speculative scenarios about what would happen thereafter …

Read More

North Korea Must be Stopped

So North Korea thinks they can test nuclear weapons now… just great. Even though it was a huge dud, just like some of their missile launches, this event is nothing short of a disaster, one of the biggest of the century thus far. North Korea is one of the biggest arms dealers in the world, and its leader is totally insane – significantly more insane than Iran’s Ahmadinejad, who is loved by millions of moderate and intelligent Iranian citizens. In North Korea, if you are caught speaking out against the government, you are sent to the gulag to suffer, along with three generations of family closest to you. So if a college-age kid speaks out against the regime, his parents and grandparents get to be worked to death too.

It is thought that as many as a million of North Korea’s 23 million people are imprisoned in these camps. Saddam Hussein may have killed hundreds of thousands, and silenced those who spoke out against him, but he did not maintain an institution of suffering of this size …

Read More

Response to “What is friendly?”

Over at the Streeb-Greebling diaries, Bob Mottram watched the Google video of the Risks of AGI panel and writes:

In this video a panel of luminaries discuss the future risks which advanced forms of AI might pose. Much hinges upon the idea of “friendliness”, and trying to ensure that decisions made by powerful intelligences will always be somewhat in tune with human desires. The elephant in the room here, though, is that there really is no good definition for what qualifies as “friendly”. What’s a good decision for me might not be a good decision for someone else. When humans make decisions they’re almost never following Asimov’s zeroth law.

Asimov’s zeroth law is “A robot may not injure humanity, or, through inaction, allow humanity to come to harm.”

It’s not really “an elephant in the room”. There is a common definition for “friendly”, and it is accepted by many in the field:

“A “Friendly AI” is an AI that takes actions that are, on the whole, beneficial to humans and humanity; benevolent rather than malevolent; nice …

Read More

Fantastic New Paper by Jason Matheny on Extinction Risk

An area of study more important than any other is that of extinction risks. An average-intelligence person devoting their life to the study and mitigation of existential risks can accomplish far more ethical good than lifetimes of work by thousands of the best and brightest politicians, scientists, writers, and programmers. Morality-wise, it’s a pursuit that blows all others out of the water. Why? Because the negative value represented by the possibility of existential disaster is much greater in magnitude than all the other evils in the world, including poverty, torture, disease, and tyranny. We can’t make a better world if we’re dead.

If our species survives this century and goes on to colonize the stars, the people who were instrumental in minimizing the probability of risk during this century will deserve a lot of the credit. If you choose to devote your life to mitigating existential risk and actually end up having a significant impact, you could actually be famous for the rest of eternity. Think about that!

This is why it’s of such massive importance whenever a new paper …

Read More