First Reference to RSI in Fiction?

What follows is possibly the first reference to AI/robotic recursive self-improvement in fiction, from all the way back in 1935. Quote from Technovelgy:

In this story of a future Earth, humanity had all of its needs met by a device – an intelligent machine.

“You have forgotten your history, and you have forgotten the history of the Machine, humans…”

“On the planet Dwranl, of the star you know as Sirius, a great race lived, and they were not too unlike you humans. …they attained their goal of the machine that could think. And because it could think, they made several and put them to work, largely on scientific problems, and one of the obvious problems was how to make a better machine which could think.

The machines had logic, and they could think constantly, and because of their construction never forgot anything they thought it well to remember. So the machine which had been set the task of making a better machine advanced slowly, and as it improved itself, it advanced more and more …

AI and Effective Sagacity, by Mitchell Howe

In the field of AI, the supergoal is to create an information processing system that does something truly significant. (Whether this something is good, bad, of financial worth to a few, of world-ending importance to many, etc., depends upon who is doing the programming and how successful they are at it.) The seemingly essential subgoal that defines AI research is to create a system that can both learn and improve itself in a self-reinforcing manner, eventually meeting the end objective of significant action. Some minimal yet critical combination of software elegance and hardware capability is required to get to this point.

Discussion often lingers on the questions of how near to the capacity of the human brain such a system would need to be in order to meet this goal, or even what degree of the human brain's architecture might be required. I believe such questions are largely meaningless because they lose sight of the only supergoal – that such a system sustainably learn and improve, leading to eventual significant action.

Consider this in light of the debate about whether a …

Transvision 2007 Pictures

Click above to see my images (and a few by others) of the Transvision 2007 conference, held last week in Chicago. Regrettably, I left my camera charger at home so I only caught the first half of the conference. George Dvorsky posted his photos here. There is also a video of Ray Kurzweil’s acceptance speech for the H.G. Wells award, given each year to an outstanding transhumanist. Previous winners include Aubrey de Grey, Ramez Naam, and Charlie Stross.

New Blogs on Accelerating Future

Visit new blogs on the Accelerating Future domain:

Black Belt Bayesian
Life, the Universe, and Everything

The authors are Steven and Tom. Feel free to welcome them in the comments. Here are my favorite recent posts from Black Belt Bayesian and Life, the Universe, and Everything:

Transhumanist Buzzword Bingo (I really love this)
Star Trek as Bad Futurism
Speedrunning Through Life
Optimization processes
Reduction to QED
Friendly AI Must Be Designed

Both these guys are extremely bright and committed to transhumanism. I’m pleased to have them as a part of Accelerating Future’s growing family of bloggers.

Do Humans Have the Right to Enhance Themselves?

This poll was on CNN a while ago. I don’t remember the article associated with it.

Interesting results. I discussed a similar CNN poll here last June.

My answers are “Yes, but within limits”, and somewhere in between #1 and #2 for the second question. I welcome some limits, such as limits on the rate at which entities are allowed to reproduce. For more on why this is necessary, see The Future of Human Evolution by philosopher Nick Bostrom.

When significant transhumanist technologies become available and start percolating throughout the population, some may opt to live in human-only societies. Should certain segments of humanity be allowed to ban all augmented persons and enhancement prosthetics from their countries?

The Word “Singularity” Has Lost All Meaning

Yes, it’s come to that point. The word “Singularity” has been losing meaning for a while now, and whatever semblance of a unified or coherent definition there ever was has long since faded over the horizon. Rather than any single idea, “Singularity” has become a signifier for a general cluster of ideas, some interrelated; some, blatantly not. These ideas include: exponential growth, transhuman intelligence, mind uploading, singletons, the popularity of the Internet, the feasibility of life extension, some developmentally predetermined “next step in human evolution”, the feasibility of strong AI, the feasibility of advanced nanotechnology, some odd spiritual-esque transcension, and the question of whether human development is primarily dictated by technological or social forces. Quite frankly, it’s a mess.

Anytime someone gets up in front of an audience and starts talking about the “Singularity” without carefully defining exactly what they mean and don’t mean, each audience member will think of an entirely different set of concepts, draw their own conclusions from that unique set, and interpret everything they hear afterward in light of those conclusions, …

Top 10 Transhumanist Technologies

Transhumanists advocate the improvement of human capacities through advanced technology. Not just technology as in gadgets you get from Best Buy, but technology in the grander sense of strategies for eliminating disease, providing cheap but high-quality products to the world’s poorest, improving quality of life and social interconnectedness, and so on. Technology we don’t notice because it’s blended into the fabric of the world, but whose absence we would immediately notice if it became unavailable. (Ever tried to travel to another country on foot?) Technology needn’t be expensive – indeed, if a technology is truly effective, it will pay for itself many times over.

Transhumanists tend to take a longer-than-average view of technological progress, looking not just five or ten years into the future but twenty years, thirty years, and beyond. We realize that the longer you look forward, the more uncertain the predictions get, but one thing is quite certain: if a technology is physically possible and obviously useful, human (or transhuman!) ingenuity will see to it that it gets built eventually. As we …

Kaj Sotala: Why Care About Artificial Intelligence?

Kaj Sotala, a fellow supporter of both the Lifeboat Foundation and Singularity Institute, has published a new article, “Why care about artificial intelligence?” to follow up on his “Artificial intelligence within our lifetime?” article, which I covered in March.

The main thrust of the article is that AIs could potentially be much, much more powerful than human beings, and therefore we have an important stake in how their motivational systems are constructed. The main talking points are:

Artificial intelligences can do everything humans can
Limitations of the human mental architecture
Limitations of the human hardware
Comparative human/AI evolution and initial resources
Considerations and implications of superhuman AI
Controlling AI: Enabling factors
Controlling AI: Limiting factors
Immense risks, immense benefits
Summary and implications

Also recently published by Kaj on his site are the works, “Transhumanism: Happiness, Equality, Choice”, …
