Say that the mind were non-physical, metaphysical, or whatever. Still, we know that physical brains give rise to minds, so mass-producing physical brains would still allow us to mass-produce non-physical minds. So, pure reductionism is not even necessary to carry the point I was making in the previous post.
The key discovery of human history is that minds are ultimately mechanical, operating according to physical principles, and that there is no fundamental distinction between the bits of organic matter that process thoughts and bits of organic matter elsewhere. This is called reductionism (in the second sense):
Reductionism can mean either (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents. This can be said of objects, phenomena, explanations, theories, and meanings.
This discovery is interesting because it implies that (1) minds, previously thought to be mystical, can in principle be mass-produced in factories, and (2) the human mind is just one possible type of mind, which can theoretically be extended or permuted in millions of different ways.
Because of the substantial economic, creative, and moral value of intelligent minds relative …
I am fascinated by the possibility of using fullerenes to build eternal structures, or, if not eternal, at least extremely long-lasting ones. Fullerenes already exist today. See?
Above are aggregated diamond nanorods (ADNRs). The name “hyperdiamond” has recently been coined for this material.
ADNRs, produced by compressing fullerite (a crystalline solid made of fullerenes, the cage-like molecules composed entirely of carbon), are the hardest and least compressible known material. Their bulk modulus, a measure of resistance to compression, is 491 gigapascals (GPa), exceeding that of diamond, which is only about 445 GPa. For comparison, the bulk modulus of steel is about 160 GPa, glass about 30 GPa, and bone just 15 GPa.
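To put those bulk-modulus numbers in perspective, here is a rough sketch of what they imply. In linear elasticity, the fractional volume change of a material under hydrostatic pressure P is approximately P/K, where K is the bulk modulus. The figures below are the ones quoted above; the 1 GPa test pressure is just an illustrative choice.

```python
# Approximate volume compression under hydrostatic pressure,
# using the small-strain estimate dV/V ~= -P / K.
# Bulk moduli (GPa) are the values quoted in the post above.

BULK_MODULUS_GPA = {
    "ADNR (hyperdiamond)": 491,
    "diamond": 445,
    "steel": 160,
    "glass": 30,
    "bone": 15,
}

def compression_percent(pressure_gpa: float, bulk_modulus_gpa: float) -> float:
    """Approximate percentage volume reduction under the given pressure."""
    return 100.0 * pressure_gpa / bulk_modulus_gpa

# Under 1 GPa (about 10,000 atmospheres) of hydrostatic pressure:
for material, k in BULK_MODULUS_GPA.items():
    print(f"{material:22s} {compression_percent(1.0, k):5.2f}% volume reduction")
```

Under that pressure, a block of ADNR shrinks by only about 0.2% in volume, while bone shrinks by nearly 7%; the linear approximation breaks down for large compressions, but it conveys the scale of the difference.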
What else? This black stuff:
Look how dark it is. Something made out of that would be hard to see at night. Also, its melting point would be several thousand degrees Celsius.
The image above shows one of the longest nanotube forests ever created. The nanotubes are about 8 mm long.
An article I often point people to is “Why We Need Friendly AI”, an older (2004) article by Eliezer Yudkowsky on the challenge of Friendly AI:
There are certain important things that evolution created. We don’t know that evolution reliably creates these things, but we know that it happened at least once. A sense of fun, the love of beauty, taking joy in helping others, the ability to be swayed by moral argument, the wish to be better people. Call these things humaneness, the parts of ourselves that we treasure – our ideals, our inclinations to alleviate suffering. If human is what we are, then humane is what we wish we were. Tribalism and hatred, prejudice and revenge, these things are also part of human nature. They are not humane, but they are human. They are a part of me; not by my choice, but by evolution’s design, and the heritage of three and a half billion years of lethal combat. Nature, bloody in tooth and claw, inscribed each base of my DNA. That is the tragedy of the …
I thought this was interesting.
The recent discovery of super-Earths (masses less than or equal to 10 Earth masses) has initiated a discussion about conditions for habitable worlds. Among these is the mode of convection, which influences a planet’s thermal evolution and surface conditions. On Earth, plate tectonics has been proposed as a necessary condition for life. Here we show that super-Earths will also have plate tectonics. We demonstrate that as planetary mass increases, the shear stress available to overcome resistance to plate motion increases while the plate thickness decreases, thereby enhancing plate weakness. These effects contribute favorably to the subduction of the lithosphere, an essential component of plate tectonics. Moreover, uncertainties in achieving plate tectonics in the one Earth-mass regime disappear as mass increases: super-Earths, even if dry, will exhibit plate tectonic behaviour.
A new paper by Eliezer Yudkowsky is online on the SIAI publications page, “Complex Value Systems are Required to Realize Valuable Futures”. This paper was presented at the recent Fourth Conference on Artificial General Intelligence, held at Google HQ in Mountain View.
Abstract: A common reaction to first encountering the problem statement of Friendly AI (“Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”) is to propose a single moral value which allegedly suffices; or to reject the problem by replying that “constraining” our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a “positive outcome”, despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI …
From Seth Baum:
Global catastrophic risks (GCR) are risks of events that could significantly harm or even destroy human civilization at the global scale. GCR is related to the concept of existential risk, which is risk of events that would cause humanity to no longer exist. (Note that Nick Bostrom, who coined the term existential risk, defines it in a slightly different way.) Prominent GCRs include climate change, nuclear warfare, pandemics, and artificial general intelligence. Due to the breadth of the GCRs themselves and the issues that GCRs raise, the study of GCR is quite interdisciplinary.
According to a range of ethical views, including my views, reducing GCR should be our top priority as individuals and as a society. In short, if a global catastrophe occurs, then not much else matters, since so much of what we might care about (such as human wellbeing, the wellbeing of non-human animals, or the flourishing of ecosystems) would be largely or entirely wiped out by the catastrophe. The details about prioritizing GCR are a bit more complicated than this (and are …
Robert Ettinger, a hero among many transhumanists for fathering the cryonics movement, has been cryopreserved at age 92 in Clinton Township, Michigan. He died on Saturday, July 23.
Ettinger’s 1962 book The Prospect of Immortality and 1972 book Man into Superman inspired many transhumanists to think beyond the “inevitability” of death.
Ben Best was quoted by KurzweilAI.net on the suspension:
“Robert Ettinger deanimated [Saturday] at around 4 p.m. Eastern Time,” said Ben Best, president of the Cryonics Institute. “He was under hospice care and had an ice bath sitting by his bedside. His pronouncement and initiation of cooling was very rapid. The perfusion went well and he is now in the cooling box. Much more later.”
Ettinger’s 1962 book was a turning point in human history. It represented the first time when people acquired the ambition to preserve …
From SIAI blog:
The Singularity Institute is proud to announce the expansion of our research efforts with our new Research Associates program!
Research associates are chosen for their excellent thinking ability and their passion for our core mission. Research associates are not salaried staff, but we encourage their Friendly AI-related research outputs by, for example, covering their travel costs for conferences at which they present academic work relevant to our mission.
Our first three research associates are:
Vladimir Nesov, a decision theory researcher, holds an M.S. in applied mathematics and physics from the Moscow Institute of Physics and Technology. He helped Wei Dai develop updateless decision theory, in pursuit of one of the Singularity Institute’s core research goals: that of developing a ‘reflective decision theory.’
Peter de Blanc, an AI …
1. Amusing Ourselves to Death 2. Ten Futuristic Materials 3. Top 10 Transhumanist Technologies 4. Brain-Computer Interfaces for Manipulating Dreams 5. The Benefits of a Successful Singularity 6. Six Places to Nuke for Multiplier Effects 7. Response to Charles Stross’ “Three arguments against the Singularity” 8. How Can I Incorporate Transhumanism into my Daily Life? 9. A Nuclear Reactor in Every Home 10. Wish 11. Terraformed Mars 12. Why “Transhumanism” is Unnecessary 13. Hard Takeoff Sources 14. X-Seed 4000 15. Kurzweil’s 2009 Predictions 16. The Illusion of Control in an Intelligence Amplification Singularity 17. Collaborative Map of Transhumanists Worldwide 18. Continuing Discussion with Mr. Knapp at Forbes 19. Paul Graham’s Disagreement Hierarchy 20. The Final Weapon
I haven’t read this, I’m just posting it because other people are talking about it.
Ray Kurzweil, the prominent inventor and futurist, can’t wait to get nanobots into his brain. In his view, these devices will be equipped with a variety of sensors and stimulators and will communicate wirelessly with computers outside of the body. In addition to providing unprecedented insight into brain function at the cellular level, brain-penetrating nanobots would provide the ultimate virtual reality experience.