
The Cambridge Project for Existential Risk


December 5th, 02012 by Alex Mensing


Human technology is undoubtedly getting more powerful every year, and our destructive potential is no exception. The Cold War notion of ‘mutually assured destruction‘ was unthinkable for most of human history, as was the ability to fundamentally alter the climate of the planet on which we rely. As the capabilities of our technologies continue to grow, what are the ways in which we become increasingly able to bring about our own demise as a species?

Martin Rees and Huw Price of the University of Cambridge and Skype founder Jaan Tallinn teamed up to investigate and mitigate that very possibility. In founding the Centre for the Study of Existential Risk, they explain their motivation:

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.
Rees, Price, and Tallinn agree that scientists need to pay more attention to this issue. The long-term future of humanity is at stake, and we need to understand more clearly the power that we wield in the modern world, and how to avoid using it destructively. The issue can, in fact, be extended beyond our own species. As Stewart Brand concluded in his summary of co-founder Martin Rees’ SALT talk: “Now that we are stewards of this planet, we are responsible for maintaining life’s possibilities in this cosmic neighborhood.”
This entry was posted on Wednesday, December 5th, 02012 at 10:02 am and is filed under Futures, Long Term Thinking, Technology.


The Lunar 02013


December 13th, 02012 by Charlotte

The universe may be governed by quantum probability and uncertainty, but we can nevertheless predict the movements of bodies in our solar system with considerable accuracy. For a preview of how the Moon will behave in 02013, this video offers an animated choreography of its phases and libration as it orbits our planet.
And for detailed information about specific dates of your choosing, NASA offers this handy tool.
This entry was posted on Thursday, December 13th, 02012 at 7:43 am and is filed under "Long Shorts", The Big Here.

Brain Preservation Now!


Analysis by John Smart
Mr. Smart presents a strong case for brain preservation research in his latest post, “Preserving the Self for Later Brain Emulation: What Features Do We Need?” The post analyzes three distinct information-processing layers in the brain (electrical activity, short-term chemical changes, and long-term molecular changes) and asks whether brain plastination might preserve enough information from these processes to comprehensively preserve memory and personality. This is the most thorough analysis of the requirements for brain preservation I have seen yet.
The Brain Preservation Foundation also administers the Brain Preservation Technology Prize, which offers over $100,000 to the first group to preserve a whole human brain in such a way that all the relevant ultrastructure is preserved for the long term.
I commend the efforts of Mr. Smart and Dr. Hayworth, and encourage you to look into supporting their work. The positive impact on society resulting from reliable brain preservation could be among the top benefits we can obtain prior to the Singularity.

Think Twice: A Response to Kevin Kelly on ‘Thinkism’


In late 2008, tech luminary Kevin Kelly, the founding executive editor of Wired magazine, published a critique of what he calls “thinkism”: the idea of smarter-than-human Artificial Intelligences with accelerated thinking and acting speeds developing science, technology, civilization, and physical constructs at faster-than-human rates. The argument over “thinkism” is central to the question of whether Artificial Intelligence could quickly transform the world once it passes a certain threshold of intelligence, a possibility known as the “intelligence explosion” scenario.

Kelly begins his blog post by stating that “thinkism doesn’t work”, specifically meaning that he doesn’t believe that a smarter-than-human Artificial Intelligence could rapidly develop infrastructure to transform the world.  After using the Wikipedia definition of the Singularity, Kelly writes that Vernor Vinge, Ray Kurzweil and others view the Singularity as deriving from smarter-than-human Artificial Intelligences (superintelligences) developing the skills to make themselves smarter, doing so at a rapid rate. Then, “technical problems are quickly solved, so that society’s overall progress makes it impossible for us to imagine what lies beyond the Singularity’s birth”, Kelly says. Specifically, he alludes to superintelligence developing the science to cure the effects of human aging faster than they accumulate, thereby giving us indefinite lifespans. The notion of the Singularity is roughly that the creation of superintelligence could lead to indefinite lifespans and post-scarcity abundance within a matter of years or even months, due to the vastly accelerated science and robotics that superintelligence could develop. Obviously, if this scenario is plausible, then it might be worth devoting more resources to developing human-friendly Artificial Intelligence than we are currently. A number of eminent scientists are beginning to take the scenario seriously, while Kelly stands out as an interesting critic.

Kelly does not dismiss the Singularity concept out of hand, saying “I agree with parts of that. There appears to be nothing in the composition of the universe, or our minds, that would prevent us from making a machine as smart as us, and probably (but not as surely) smarter than us.” However, he then rejects the hypothesis, saying, “the major trouble with this scenario is a confusion between intelligence and work. The notion of an instant Singularity rests upon the misguided idea that intelligence alone can solve problems.” Kelly quotes the Singularity Institute article, “Why Work Towards the Singularity”, arguing it implies an “approach [where] one only has to think about problems smartly enough to solve them.” Kelly calls this “thinkism”.

Kelly brings up concrete examples, such as curing cancer and prolonging life, stating that these problems cannot be solved by “thinkism.” “No amount of thinkism will discover how the cell ages, or how telomeres fall off”, Kelly writes. “No intelligence, no matter how super duper, can figure out how the human body works simply by reading all the known scientific literature in the world and then contemplating it.” He then highlights the necessity of experimentation in deriving new knowledge and working hypotheses, concluding that, “thinking about the potential data will not yield the correct data. Thinking is only part of science; maybe even a small part.”

Part of Kelly’s argument rests on the idea that there are fixed-rate external processes, such as the metabolism of a cell, which cannot be sped up to provide more experimental data than they would otherwise. He explains that “there is no doubt that a super AI can accelerate the process of science, as even non-AI computation has already sped it up. But the slow metabolism of a cell (which is what we are trying to augment) cannot be sped up.” He also uses physics as an example, saying “If we want to know what happens to subatomic particles, we can’t just think about them. We have to build very large, very complex, very tricky physical structures to find out. Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider, they will know nothing new.” Kelly acknowledges the potential of computer simulations but argues they are still constrained by fixed-rate external processes, noting, “Sure, we can make a computer simulation of an atom or cell (and will someday). We can speed up these simulations many factors, but the testing, vetting and proving of those models also has to take place in calendar time to match the rate of their targets.”

Continuing his argument, Kelly writes: “To be useful artificial intelligences have to be embodied in the world, and that world will often set their pace of innovations. Thinkism is not enough. Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems. There won’t be instant discoveries the minute, hour, day or year a smarter-than-human AI appears. The rate of discovery will hopefully be significantly accelerated. Even better, a super AI will ask questions no human would ask. But, to take one example, it will require many generations of experiments on living organisms, not even to mention humans, before such a difficult achievement as immortality is gained.”

Concluding, Kelly writes: “The Singularity is an illusion that will be constantly retreating — always “near” but never arriving. We’ll wonder why it never came after we got AI. Then one day in the future, we’ll realize it already happened. The super AI came, and all the things we thought it would bring instantly — personal nanotechnology, brain upgrades, immortality — did not come. Instead other benefits accrued, which we did not anticipate, and took long to appreciate. Since we did not see them coming, we look back and say, yes, that was the Singularity.”

This fascinating post of Kelly’s raises many issues, the two most prominent being:

1) Given sensory data X, how difficult is it for agent Y to come to conclusion Z?
2) Can experimentation be accelerated past the human-familiar rate or not?

These will be addressed below.

There are many interesting examples in human history of situations where people “should” have realized something but didn’t. For instance, the ancient Egyptians, Greeks, and Romans had all the necessary technology to manufacture hot air balloons, but apparently never thought of it. It wasn’t until 1783 that the first historic hot-air balloon flew. It is possible that ancient civilizations did build hot-air balloons and left no archeological evidence of their remains; one hot air balloonist has argued that the Nazca lines were meant to be viewed by prehistoric balloonists. My guess is that, although the ancients may have been clever enough to manufacture hot air balloons, they probably never did. The point is that they could have built them, but didn’t.

Inoculation and vaccination are another relevant example. A text from 8th-century India includes a chapter on smallpox and mentions methods of inoculating against the disease. Given that the value of inoculation was known in India by c. 750 AD, it would seem that the modern age of vaccination should have arrived prior to 1796. After the provision of safe water, no other intervention reduces mortality and increases population growth as much as vaccines do. Aren’t a thousand years enough time to go from the basic principle of inoculation to the notion of systematic vaccination? It could be argued that the discovery of the cell (1665) was a limiting factor; if knowledge of cells had been available to 8th-century Indians, perhaps they would have been able to develop vaccines and save the world from hundreds of millions of unnecessary deaths.

Lenses, which are no more than precisely curved pieces of glass, are fundamental to scientific instruments such as the microscope and the telescope, and they are at least 2,700 years old: the Nimrud lens, discovered at the Assyrian palace of Nimrud in modern-day Iraq, demonstrates their antiquity. Its discoverer noted that he had seen inscriptions on Assyrian artifacts so small that he suspected a lens had been used to create them. There are numerous references to, and evidence of, lenses in antiquity. The Visby lenses, found in an 11th-to-12th-century Viking town, are sophisticated aspheric lenses with a resolution of 25–30 µm. Even after eyeglasses became widespread around 1280, it took almost 400 years for lenses to develop to the point where microscopes could reveal cells. Given that lenses are as old as they are, why did it take our ancestors so incredibly long to develop them into microscopes and telescopes?

A final example concerns complex gear mechanisms and analog computers in general. The Antikythera mechanism, dated to about 100 BC, consists of about 30 precisely interlocked bronze gears designed to display the locations in the sky of the Sun, Moon, and the five planets known at the time. Why did it take more than 1,400 years for mechanisms of similar complexity to be constructed again? At the time, Greece was a developed civilization of about 4–5 million people. It could be that a civilization of sufficient size and stability to produce complex gear mechanisms did not come into existence until 1,400 years later. Perhaps a simple lack of ingenuity is to blame. The exact answer is unknown, but we do know that the mechanical basis for constructing bronze gears of similar quality existed for a long time; it just wasn’t put to use.

It apparently takes a long time for humans to figure some things out. There are numerous historic examples where all the pieces of a puzzle were on the table, there was just no one who put them together. The perspective of “thinkism” suggests that if the right genius were alive at the right time, he or she would have put the pieces together and given civilization a major push forward. I believe that this is borne out by contrasting the historical record with what we know today.

It takes a certain amount of information to come to certain conclusions. There is a minimum amount of information required to identify an object, plan a winning strategy in a game, model someone’s psychology, or design an artifact. The more intelligent or specialized the agent is, the less information it needs to reach the conclusion. Conclusions may be “good enough” rather than perfect, in other words, “ecologically rational”.

An example is how good humans are at recognizing faces. The experimental data show that we are fantastic at this: in one study, half of the respondents correctly identified a portrait of Napoleon Bonaparte even though the image measured a mere 6×7 pixels.

MIT computational neuroscientist Pawan Sinha found that, given 12 by 14 pixels’ worth of visual information, his experimental subjects could accurately recognize 75 percent of the face images in a set containing a mix of faces and other objects. Sinha also programmed a computer to identify face images with a high success rate. A New York Times article quotes Dr. Sinha: “These turn out to be very simple relationships, things like the eyes are always darker than the forehead, and the mouth is darker than the cheeks. If you put together about 12 of these relationships, you get a template that you can use to locate a face.” There are already algorithms that can identify faces from databases which include only a single picture of an individual.
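The flavor of such a “ratio template” can be sketched in a few lines of code. The regions and the rule list below are hypothetical stand-ins invented for illustration, not Sinha’s actual template; the point is only how little machinery brightness-relationship classification requires.

```python
# Toy sketch of a Sinha-style "ratio template" face detector.
# The regions and rules below are illustrative inventions,
# not Sinha's actual template.

def mean_brightness(image, region):
    """Average pixel value over a rectangular region (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = region
    pixels = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(pixels) / len(pixels)

# Hypothetical regions for a 12-wide, 14-tall thumbnail.
REGIONS = {
    "forehead":  (0, 3, 3, 11),
    "left_eye":  (3, 5, 3, 6),
    "right_eye": (3, 5, 8, 11),
    "cheeks":    (6, 9, 3, 11),
    "mouth":     (9, 11, 4, 10),
}

# Each rule says: the first region should be darker than the second.
RULES = [
    ("left_eye", "forehead"),
    ("right_eye", "forehead"),
    ("mouth", "cheeks"),
]

def looks_like_face(image, min_matches=3):
    """Return True if enough brightness relationships hold."""
    matches = sum(
        1 for darker, lighter in RULES
        if mean_brightness(image, REGIONS[darker])
        < mean_brightness(image, REGIONS[lighter])
    )
    return matches >= min_matches
```

A real detector would slide such a template over an image at several scales, but even this toy version shows why a dozen coarse relationships can suffice to locate a face.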

These results are relevant because they are examples where humans or software programs make correct judgments with extremely small amounts of information, less than we would intuitively think necessary. The 6×7 picture of Napoleon can be specified by about 168 bits. Who would imagine that hundreds of people in an experimental study could identify a historic individual from a photo containing only 168 bits of information? It shows that humans have cognitive algorithms that are highly effective and specialized at extracting such information. Perhaps we could make huge scientific breakthroughs if we had different cognitive algorithms specialized at engaging unfamiliar, but highly relevant, data sets.
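The 168-bit figure follows from simple arithmetic, assuming 4 bits of grayscale per pixel (an assumption that reproduces the article’s number; the original bit depth is not stated):

```python
import math

# 6x7-pixel thumbnail at an assumed 4 bits (16 gray levels) per pixel.
width, height = 6, 7
bits_per_pixel = 4
total_bits = width * height * bits_per_pixel
print(total_bits)  # 42 pixels * 4 bits = 168 bits

# For comparison: merely *selecting* one person out of a million
# candidates requires only log2(1,000,000) ~ 20 bits of information.
selection_bits = math.ceil(math.log2(1_000_000))
print(selection_bits)  # 20
```

The gap between 168 bits of input and the 20-odd bits needed to name the answer is exactly the work the brain’s specialized face machinery is doing.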

The same could apply to observations and conclusions of all sorts. The amount of information needed to make breakthroughs in science could be less than we think. We do know that new ways of looking at the world can make a tremendous difference in uncovering true beliefs. A civilization without science might exist for a long time without accumulating significant amounts of objective knowledge about biology or physics. For instance, the Platonic theory of classical elements persisted for thousands of years.

Then, science came along. In the century following the formalization of the scientific method in Francis Bacon’s Novum Organum (1620), there was rapid progress in science and technology, fueled by this new worldview. By 1780, the Industrial Revolution was in full swing. If the Scientific Method had been invented and applied in ancient Greece, progress that would have seemed mind-boggling and impossible at the time, like the Industrial Revolution, might have been achieved within a century or two. The Scientific Method increased the marginal usefulness of each new piece of knowledge humanity acquired, giving it a more logical and epistemologically productive framework than was available in the pre-scientific haze.

Could there be other organizing principles of effective thinking analogous to the Scientific Method that we’re just missing today? It seems hard to rule it out, and quite plausible. The use of Bayesian principles in inference, which has led to breakthroughs in Artificial Intelligence, would be one candidate. Perhaps better thinkers could discover such principles more rapidly than we can, and make fundamental breakthroughs with less information than we would currently anticipate being necessary.

A key factor defining feats of intelligence or cleverness is surprise. A higher intelligence sees the solution no one else saw, looking past the surface of a problem to the general principles and features that allow it to be understood and resolved. A classic, if clichéd, example is Albert Einstein deriving the principles of special relativity while working as a patent clerk in Bern, Switzerland. His ideas were considered radically counterintuitive, but proved correct. The concept of the speed of light being constant for all observers regardless of their velocity had no precedent in Newtonian physics or common sense. It took a great mind to think about the universe in a completely new way.

Kelly rejects the notion of superintelligence leading to immortality when he says, “this super-super intelligence would be able to use advanced nanotechnology (which it had invented a few days before) to cure cancer, heart disease, and death itself in the few years before Ray had to die. If you can live long enough to see the Singularity, you’ll live forever [...] The major trouble with this scenario is a confusion between intelligence and work.” Kelly highlights “immortality” as being very difficult to achieve through intelligence and its fruits alone, but this understanding is relative. Medieval peasants would see rifles, freight trains, and atomic bombs as very difficult to achieve. Stone Age man would see bronze instruments as difficult to achieve, if he could imagine them at all. The impression of difficulty is relative to intelligence and the tools a civilization has. To very intelligent agents, a great deal of tasks might seem easy, including vast categories of tasks that less intelligent agents cannot even comprehend.

Would providing indefinite lifespans (biological immortality) to humans be extremely difficult, even for superintelligences? Instead of saying “yes” based on the evidence of our own imaginations, we must confess that we don’t know. This doesn’t mean that the probability is 50% — it means we really don’t know. We can come up with a tentative probability, say 10%, and iterate based on evidence that comes in. But to say that it will not happen with high confidence is impossible, because a lesser intelligence cannot place definite limits (outside of, perhaps, the laws of physics) on what a higher intelligence or more advanced civilization can achieve. To say that it will happen with high confidence is also impossible, because we lack the information.
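The “tentative probability, say 10%, and iterate” procedure described above is just Bayesian updating. A minimal sketch, in which the evidence stream and its likelihood ratios are invented purely for illustration:

```python
# Bayesian updating of a tentative 10% estimate.
# The likelihood ratios below are invented for illustration.

def update(prob, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    odds = prob / (1 - prob)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.10  # tentative prior probability
# Each piece of evidence contributes a likelihood ratio:
# P(evidence | hypothesis true) / P(evidence | hypothesis false).
for lr in [2.0, 0.5, 3.0]:  # hypothetical evidence stream
    p = update(p, lr)
print(round(p, 3))  # 0.25
```

The point of the sketch is the shape of the procedure, not the numbers: the estimate moves with each observation, and confident conclusions in either direction require evidence we do not yet have.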

The general point is that one of the hallmarks of great intelligence is surprise. The discovery of gunpowder must have been a surprise. The realization that the earth orbits the Sun and not vice versa was a surprise. The derivation of the laws of motion and their universal applicability was a surprise. The creation of the steam engine led to surprising results. The notion that we evolved from apes surprised and shocked many. The idea that life was not animated by a vital force but in fact operated according to the same rules of chemistry as everything else was certainly surprising. Mere human intelligence has surprised us time and time again with its results — we should not be surprised to be surprised again by higher forms of intelligence, if and when they are built.

One of Kelly’s core arguments is that experimentation to derive new knowledge and the “testing, vetting and proving” of computer models will require “calendar time”. However, it is possible to imagine ways in which the process of experimentation and empirical verification could be accelerated to faster-than-human-calendar speeds.

To start, consider variance in the performance of human scientists. There are historic examples of periods when scientific and technological progress was very rapid. The most recent and perhaps most striking example was World War II. Within six years, the following technologies were invented: radar, jet aircraft, ballistic missiles, nuclear power and weapons, and general-purpose computers. So, despite fixed-rate external processes limiting the rate of experimentation, innovation was temporarily accelerated anyway. The rate of innovation was arguably three to four times greater than in a comparable period before the war; though the exact factor is subjective, few historians would disagree that rapid scientific innovation occurred during WWII.

Why was this? Several factors may be identified: 1) increased military spending on research, 2) more scientists due to better training connected to the war effort, 3) researchers working harder and with more motivation than they otherwise would, 4) second-order effects resulting from larger groups of brilliant people interacting with one another in a supportive environment, as in the Manhattan Project.

An advanced Artificial Intelligence could employ all these strategies to accelerate its own speed of research and development. It could 1) amass a large amount of resources in the form of physical and social capital, and spend them on research, 2) copy itself thousands or millions of times using available computers to ensure there are many researchers, 3) possess perfect patience, perpetual alertness, and accelerated thinking speed to work harder than human researchers can, and 4) benefit from second-order effects by utilizing electronic communication between its constituent researcher-agents. To the extent that accelerated innovation is possible with these strategies, an Artificial Intelligence could exploit them to the fullest degree possible.

Of course, experimentation is certainly necessary to make scientific progress — many revolutions in science begin with peculiar phenomena that are artificially magnified with the aid of carefully designed experiments. For instance, the double-slit experiment in quantum mechanics demonstrates the wave-particle duality of light, a phenomenon not typically observed in everyday circumstances. Determining the details of how different chemicals intermingle to produce reaction products has required millions of experiments, and understanding biology has required many millions more. Only strictly observational facts, such as the cellular structure of life or the surface features of the Moon, can be assessed through direct observation. To determine how metabolic processes actually work, or what is underneath the surface of the Moon, requires experimentation and trial and error.

There are four concrete ways in which experimentation might be accelerated beyond the typical human pace: conducting experiments faster, conducting them more efficiently, conducting them in parallel, and choosing the most informative experiments in the first place. Kelly argues that “the slow metabolism of a cell (which is what we are trying to augment) cannot be sped up”. But this is not entirely clear. It should be possible to build chemical networks that simulate cellular processes while operating more quickly than cellular metabolisms do. In addition, it is not clear that a comprehensive understanding of cells would be necessary to achieve biological immortality. Indefinite biological lifespans might be more readily achieved by repairing cellular damage and removing chemical junk faster than they accumulate, rather than by keeping all cells in a state of perpetual youth, which seems to be what Kelly implies is necessary. In fact, it may be possible to develop therapies for repairing the damage of aging with our current biological knowledge; since we aren’t superintelligences, it is impossible to tell. But Kelly makes an error when he assumes that keeping all cells in a state of perpetual youth, or total understanding, is required for indefinite lifespans. This shows how even small differences in knowledge between humans can make an all-important difference in research targets and agendas. The difference in knowledge between humans and superintelligences would make that difference larger still.
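The maintenance framing can be made concrete with a toy model: damage need not be prevented, only removed faster than it accrues. All rates here are invented; nothing about real biology is implied beyond the qualitative threshold behavior.

```python
# Toy model of the repair-vs-accumulation framing of aging.
# All rates are invented for illustration.

def damage_trajectory(accumulation, repair_capacity, years):
    """Net damage per year, floored at zero, over a lifespan."""
    damage = 0.0
    history = []
    for _ in range(years):
        damage = max(0.0, damage + accumulation - repair_capacity)
        history.append(damage)
    return history

no_repair = damage_trajectory(accumulation=1.0, repair_capacity=0.0, years=80)
with_repair = damage_trajectory(accumulation=1.0, repair_capacity=1.2, years=80)

print(no_repair[-1])    # 80.0: damage grows without bound
print(with_repair[-1])  # 0.0: repair outpaces accumulation
```

The qualitative point survives any choice of numbers: once repair capacity exceeds the accumulation rate, damage stays bounded indefinitely, with no requirement that every cell be held in perpetual youth.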

Considering these factors highlights the earlier point that the perceived difficulty of a given advance, like biological immortality, is strongly influenced by the framing of the necessary prerequisites to achieve that advance, and the intelligence doing the evaluation. Kelly’s framing of the problem is that massive amounts of biological experimentation would be necessary to derive the knowledge to repair the body faster than it breaks down. This may be the case, but it might not be. A higher intelligence might be able to achieve equivalent insights with ten experiments that a lesser intelligence would require a thousand experiments to uncover.

The rate of useful experimentation by superhuman intelligences will depend on factors such as 1) how much data is needed to make a given advance and 2) whether experiments can be accelerated, simplified, or made massively parallel.

Research in biology, medicine, and chemistry has exploited highly parallel robotic systems for experiments, a field called high-throughput screening (HTS). One paper describes a machine that simultaneously introduces 1,536 compounds to 1,536 assay plates, performing 1,536 chemical experiments at once in a completely automated fashion and determining 1,536 dose-response curves per cycle. Only 23 nanoliters of each compound is transferred. This highly miniaturized, highly parallel, high-density mode of experimentation has only begun to be exploited thanks to advances in robotics. If such robotics could be manufactured cheaply at massive scale, one can imagine warehouses full of machines conducting many hundreds of millions of experiments simultaneously.
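Rough throughput arithmetic makes the “hundreds of millions” figure plausible. Only the 1,536 wells per cycle comes from the text; the 10-minute cycle time and the machine count are assumptions chosen for illustration:

```python
# Back-of-the-envelope throughput for the HTS setup described above.
# Assumptions: a 10-minute cycle and a 1,000-machine warehouse.
wells_per_cycle = 1536               # from the text
cycles_per_day = 24 * 60 // 10       # assumed 10-minute cycle
per_machine_per_day = wells_per_cycle * cycles_per_day
print(per_machine_per_day)           # 1536 * 144 = 221184

machines = 1000                      # hypothetical warehouse
print(machines * per_machine_per_day)  # 221184000, i.e. ~221 million/day
```

Even with these conservative assumptions, a single warehouse clears two hundred million dose-response measurements per day; scaling the robotics, not the science, is the bottleneck.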

Another method of accelerating experimentation would be to improve microscale manufacturing and to construct experiments using the minimum possible quantity of matter. For instance, instead of dropping weights off the Leaning Tower of Pisa, construct a microscale vacuum chamber and drop a cell-sized diamond grain in that chamber. Thousands of physics experiments could be conducted in the time it would require to conduct one experiment by the traditional method. With better sensors, you can conduct an experiment on ten cells that with inferior sensors would necessitate a million cells. More fine-grained control of matter can allow an agent to extract much more information from a smaller experiment that costs less and can be run faster and massively parallel. It is conceivable that an advanced Artificial Intelligence could come up with millions of hypotheses and test them all simultaneously in one small building.

In his 1993 paper defining the Singularity, Vernor Vinge called the hypothetical post-Singularity world “a regime as radically different from our human past as we humans are from the lower animals”. Kelly, meanwhile, said that for artificial intelligences to amass scientific knowledge and make breakthroughs (like biological immortality) would require detailed models, and the “testing, vetting and proving of those models” requires “calendar time”. These models will “take years, or months, or at least days, to get results”. Since the comparison between different species is sometimes seen as a model for plausible differences between humans and superintelligences, let’s apply that model to the context of experiments that Kelly is referring to. Do humans create effects in the world faster than squirrels? Yes. Are humans qualitatively better at working towards biological immortality than squirrels? Yes. Do humans have a fundamentally superior understanding of the universe than squirrels do? It would be safe to say that we do.

The comparison with squirrels sounds absurd because concepts like biological immortality and “understanding the universe” are fuzzy at best from the perspective of a squirrel. Analogously, there may be stages of comprehending reality that are fundamentally more advanced than our own and accessible only to higher intelligences. In this way, the “calendar time” of humans would have no more meaning to a superintelligence than “squirrel time” has to human life. It is not just a matter of speed — though higher intelligences can do much more in much less time — but of the general category of thoughts that can be processed, objectives that can be imagined, and plans that can be achieved. The objectives and methods of a higher intelligence would be on a completely different level from those of a lower intelligence, different in kind, not degree.

There are several reasons why it makes sense to assume that qualitatively smarter-than-human intelligence, that is, qualitative differences on the order of the difference between humans and squirrels or greater, should be possible. The first reason concerns the speed of human neurons relative to artificial computing machinery. Modern computers operate at billions of serial operations per second, while human neurons operate at only a couple hundred serial operations per second. Since most acts of cognition must occur within one second to be evolutionarily useful, and must include redundancy and fault tolerance, the brain is constrained to problem solutions involving roughly 100 serial steps or fewer. What about the universe of possible solutions to cognitive tasks that require more than 100 serial steps? If the computer you are using had to implement every meaningful operation in 100 serial steps, the vast majority of common algorithms in use today would have to be thrown out. In the space of possible algorithms, it quickly becomes obvious that constraining a computer to 100 serial steps is an onerous limitation. Expanding this space by a factor of ten million seems likely to lead to significant qualitative improvements in intelligence.
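The serial-depth argument in numbers. The neuron firing rate and CPU clock below are order-of-magnitude figures, not precise measurements:

```python
# Order-of-magnitude version of the serial-depth argument.
neuron_ops_per_sec = 200            # ~hundreds of serial steps/second
cpu_ops_per_sec = 2_000_000_000     # ~billions of serial ops/second

# A cognitive act that must finish in one second is limited to:
max_serial_steps_brain = neuron_ops_per_sec * 1
print(max_serial_steps_brain)       # ~200 steps, the "100 steps" regime

# Ratio of available serial depth between silicon and neurons:
depth_ratio = cpu_ops_per_sec // neuron_ops_per_sec
print(depth_ratio)                  # 10000000: the factor of ten million
```

This is where the “factor of ten million” in the paragraph above comes from: it is simply the ratio of serial operation rates, and it bounds how much deeper a chain of dependent reasoning steps a fast substrate can execute in the same wall-clock second.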

The second reason concerns neurological hardware and software. There are relatively few hardware differences between human and chimpanzee brains. The evidence actually supports the notion that primate brains are more distinct from non-primate brains than human brains are from those of other primates, and that the human brain is merely a primate brain scaled up for a larger body, with an enlarged prefrontal cortex. One quantitative study of human vs. chimpanzee brain cells came to this conclusion:

Despite our ongoing efforts to understand biology under the light of evolution, we have often resorted to considering the human brain as an outlier to justify our cognitive abilities, as if evolution applied to all species except humans. Remarkably, all the characteristics that appeared to single out the human brain as extraordinary, a point off the curve, can now, in retrospect, be understood as stemming from comparisons against body size with the underlying assumptions that all brains are uniformly scaled-up or scaled-down versions of each other and that brain size (and, hence, number of neurons) is tightly coupled to body size. Our recently acquired quantitative data on the cellular composition of the human brain and its comparison to other brains, both primate and nonprimate, strongly indicate that we need to rethink the place that the human brain holds in nature and evolution, and to rewrite some basic concepts that are taught in textbooks. The human brain has just the number of neurons and nonneuronal cells that would be expected for a primate brain of its size, with the same distribution of neurons between its cerebral cortex and cerebellum as in other species, despite the relative enlargement of the former; it costs as much energy as would be expected from its number of neurons; and it may have been a change from a raw diet to a cooked diet that afforded us its remarkable number of neurons, possibly responsible for its remarkable cognitive abilities.

In other words, it appears as if our exceptional cognitive abilities are the direct result of having more neurons rather than neurons in differing arrangements or relative quantities. If this continues to be confirmed in subsequent analyses, it implies, all else equal, that scaling up the number of neurons in the human brain could lead to intelligence differentials similar to those between humans and chimps. Given the evidence above, this should be our default assumption; we would need specific reasoning or evidence to assume otherwise.

A further reason why qualitatively smarter-than-human intelligence seems possible is that the higher intelligence of humans and primates appears to have something to do with self-awareness and complex self-referential loops in thinking and acting. The evolution of primate general intelligence appears correlated with the evolution of brain structures that control, manipulate, and channel the activity of other brain structures in a contingent way. For instance, a region called the pulvinar was called the brain’s “switchboard operator” in a recent study, though there are dozens of brain areas that could be given this description. Of the 52 Brodmann areas in the cortex, at least seven are “hub areas” that lie near the top of a self-reflective control hierarchy: areas 8, 9, 10, 11, 12, 25, and 28. Given that these areas play important roles in what we consider higher intelligence, yet evolved relatively recently and remain comparatively undeveloped, it is quite plausible that there is a lot of room for improvement here, and that qualitative intelligence improvements could result.

Imagine a brain that has “hub areas” which can completely reprogram other brain modules on a fine-grained level, the sort of reprogramming and flexibility only currently available in computers. Instead of only being able to reprogram a few percent of the information content of our brains, like we have now, a mind that can reprogram 100 percent of its own information content would allow limitless room for fast, flexible cognitive adaptation. Such a mind could quickly reprogram itself to suit the task at hand. Biological intelligences can only dream of this kind of adaptiveness and versatility. It would open up a vast new space not only for functional cognition but also appreciation of aesthetics and other higher-order mental traits.

Say that we could throw open the hood of the brain and enhance it. How would that work?

To understand how “smarter-than-human intelligence” would work requires an overview of how the brain works. The brain is a very complicated machine. It operates entirely according to the laws of physics, and includes specific modules designed to handle different tasks. Consider our capability for identifying faces: it is clear that our brains have specific neural hardware adapted to rapidly identifying human faces. We don’t have the same hardware for rapidly identifying lizard faces; to us, every lizard is just a lizard. To a lizard, different lizard faces might intuitively appear highly distinct, but to humans, a species for which there was no adaptive value in differentiating lizard faces, they all look the same.

The paper “Intelligence Explosion: Evidence and Import” by Luke Muehlhauser and Anna Salamon reviews some features of what Eliezer Yudkowsky calls the “AI Advantage” — inherent advantages that an Artificial Intelligence would have over human thinkers as a natural consequence of its digital properties. Because many of these properties are so key to understanding the “cognitive horsepower” behind claims of “thinkism”, I’ve chosen to excerpt the entire section on “AI Advantages” here, minus references (you can find those in the paper):

Below we list a few AI advantages that may allow AIs to become not only vastly more intelligent than any human, but also more intelligent than all of biological humanity. Many of these are unique to machine intelligence, and that is why we focus on intelligence explosion from AI rather than from biological cognitive enhancement.

Increased computational resources. The human brain uses 85–100 billion neurons. This limit is imposed by evolution-produced constraints on brain volume and metabolism. In contrast, a machine intelligence could use scalable computational resources (imagine a “brain” the size of a warehouse). While algorithms would need to be changed in order to be usefully scaled up, one can perhaps get a rough feel for the potential impact here by noting that humans have about 3.5 times the brain size of chimps, and that brain size and IQ correlate positively in humans, with a correlation coefficient of about 0.35. One study suggested a similar correlation between brain size and cognitive ability in rats and mice.

Communication speed. Axons carry spike signals at 75 meters per second or less. That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly. (Of course, this also depends on the efficiency of the algorithms in use; faster hardware compensates for less efficient software.)

Increased serial depth. Due to neurons’ slow firing speed, the human brain relies on massive parallelization and is incapable of rapidly performing any computation that requires more than about 100 sequential operations. Perhaps there are cognitive tasks that could be performed more efficiently and precisely if the brain’s ability to support parallelizable pattern-matching algorithms were supplemented by support for longer sequential processes. In fact, there are many known algorithms for which the best parallel version uses far more computational resources than the best serial algorithm, due to the overhead of parallelization.

Duplicability. Our research colleague Steve Rayhawk likes to describe AI as “instant intelligence; just add hardware!” What Rayhawk means is that, while it will require extensive research to design the first AI, creating additional AIs is just a matter of copying software. The population of digital minds can thus expand to fill the available hardware base, perhaps rapidly surpassing the population of biological minds. Duplicability also allows the AI population to rapidly become dominated by newly built AIs, with new skills. Since an AI’s skills are stored digitally, its exact current state can be copied, including memories and acquired skills—similar to how a “system state” can be copied by hardware emulation programs or system backup programs. A human who undergoes education increases only his or her own performance, but an AI that becomes 10% better at earning money (per dollar of rentable hardware) than other AIs can be used to replace the others across the hardware base—making each copy 10% more efficient.

Editability. Digitality opens up more parameters for controlled variation than is possible with humans. We can put humans through job-training programs, but we can’t perform precise, replicable neurosurgeries on them. Digital workers would be more editable than human workers are. Consider first the possibilities from whole brain emulation. We know that transcranial magnetic stimulation (TMS) applied to one part of the prefrontal cortex can improve working memory. Since TMS works by temporarily decreasing or increasing the excitability of populations of neurons, it seems plausible that decreasing or increasing the “excitability” parameter of certain populations of (virtual) neurons in a digital mind would improve performance. We could also experimentally modify dozens of other whole brain emulation parameters, such as simulated glucose levels, undifferentiated (virtual) stem cells grafted onto particular brain modules such as the motor cortex, and rapid connections across different parts of the brain. Secondly, a modular, transparent AI could be even more directly editable than a whole brain emulation—possibly via its source code. (Of course, such possibilities raise ethical concerns.)

Goal coordination. Let us call a set of AI copies or near-copies a “copy clan.” Given shared goals, a copy clan would not face certain goal coordination problems that limit human effectiveness. A human cannot use a hundredfold salary increase to purchase a hundredfold increase in productive hours per day. But a copy clan, if its tasks are parallelizable, could do just that. Any gains made by such a copy clan, or by a human or human organization controlling that clan, could potentially be invested in further AI development, allowing initial advantages to compound.

Improved rationality. Some economists model humans as Homo economicus: self-interested rational agents who do what they believe will maximize the fulfillment of their goals. On the basis of behavioral studies, though, Schneider (2010) points out that we are more akin to Homer Simpson: we are irrational beings that lack consistent, stable goals. But imagine if you were an instance of Homo economicus. You could stay on a diet, spend the optimal amount of time learning which activities will achieve your goals, and then follow through on an optimal plan, no matter how tedious it was to execute. Machine intelligences of many types could be written to be vastly more rational than humans, and thereby accrue the benefits of rational thought and action. The rational agent model (using Bayesian probability theory and expected utility theory) is a mature paradigm in current AI design.

It seems likely to me that Kevin Kelly does not really understand the AI advantages of increased computational resources, communication speed, increased serial depth, duplicability, editability, goal coordination, and improved rationality, and how these abilities could be used to accelerate, miniaturize, parallelize, and prioritize experimentation to such a degree that the “calendar time” limitation could be surpassed. The calendar of a powerful AI superintelligence might be measured in microseconds rather than months. Different categories of beings have different calendars to which they are most accustomed. In the time it takes for a single human neuron to fire, a superintelligence might have decades of subjective time to contemplate the mysteries of the universe.

Part of the initial insight that prompted the perspective that Kelly calls “thinkism” was that the brain is a machine which can be accelerated by porting its crucial algorithms to a different substrate, namely a computer, and running them faster. The brain works through algorithms, that is, systematic procedures. For example, take the visual cortex, the part of the brain that processes what you see. This region of the brain is actually relatively well understood. The first layers capture surface features such as lines, darkness, and light. Deeper layers make out shapes, then motion, then specifics such as which face belongs to which person. It gets so specific that scientists have measured individual neurons that recognize celebrities like Bill Clinton or Marilyn Monroe.

The algorithms that underlie our processing of visual information are understood on a basic level, and it is only a matter of time until all the other cognitive algorithms are understood as well. When they are, they will be implemented on computers and sped up by a factor of thousands or millions. Human neurons fire about 200 times every second, while computer chips cycle 2,000,000,000 times every second.

What would it be like to be a mind running at ten million times human speed? If your mind is really, really fast, events on the outside would seem really, really slow. All the elapsed time from the founding of Rome to the present day could be experienced subjectively in about two hours. All the time from the emergence of Homo sapiens to the present day could be experienced in a week. All the time from the extinction of the dinosaurs to the present day could be experienced in about six and a half years. Imagine how quickly a mind could accrue profound wisdom running at such an accelerated speed; the “wisdom” of a 90-year-old would seem childlike by comparison.
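A quick sketch of this subjective-time arithmetic, deriving the ten-million-fold speedup from the neuron and chip rates mentioned earlier, with round historical durations:

```python
# Subjective-time conversions at a 10-million-fold speedup
# (2 GHz chips vs. 200 Hz neurons). Durations are round figures.
SPEEDUP = 2_000_000_000 // 200   # = 10,000,000

def wall_clock_hours(subjective_years):
    """External hours needed to experience the given subjective years."""
    return subjective_years * 365.25 * 24 / SPEEDUP

print(round(wall_clock_hours(2_765), 1))              # Rome to today: ~2.4 hours
print(round(wall_clock_hours(200_000) / 24, 1))       # Homo sapiens: ~7.3 days
print(round(wall_clock_hours(65_000_000) / 8766, 1))  # dinosaurs: ~6.5 years
```

(8766 is the number of hours in a year of 365.25 days.)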

To visualize concretely the kind of arrangement in which these minds could exist, imagine a computer a couple hundred feet across made of dense nanomachinery situated at the bottom of the ocean. Such a computer would have far more computing power than the entire planet today, similar to the way that a modern smartphone has more computing power than the entire world in 1960. Within this computer would exist virtual worlds practically without end; their combined volume far exceeding that of the solar system, or perhaps even the galaxy.

In his post, Kelly seems to acknowledge that minds could be vastly accelerated and magnified in this way: remarkably, he just doesn’t think that this would translate to increased wisdom, performance, ability, or insight significantly beyond the human level. To me, at first impression, the notion that a ten million times speedup would have a negligible effect on scientific innovation or progress seems absurd. It appears obvious that it would have a world-transforming impact. Let’s look at the argument more closely.

The disagreement between Singularitarians such as Vinge and Kurzweil and skeptics such as Kelly seems to be about what sorts of information-acquisition and generation procedures can be imported into this vastly accelerated world and which cannot. In his hard sci-fi book Diaspora, author Greg Egan calls the worlds of these enormously accelerated minds “polises”, which make up the vast bulk of humanity in 2975. Vinge and Kurzweil see the process of knowledge acquisition and creation as being something that can in principle be sped up, brought “within the purview of the polis”, whereas Kelly does not.

Above, I argued that the benefits of experimentation can be accelerated by running experiments faster, parallelizing them, using less matter, and choosing the right experiments. But what about less controversial information flow from world to polis? To build the polis to begin with, you’d have to be able to emulate — not just simulate — the human mind in detail, that is, copy all of its relevant properties. Since the human brain is among the most complex objects in the known universe, if not the most complex, this implies that a vast variety of less complex objects could be scanned and inputted to the polis in a similar fashion. Trees, for instance, could be mass-inputted into the virtual environment of the polis, consuming thousands or millions of times less computing power than the sentient inhabitants. It goes without saying that nonbiological, inanimate background features such as landscapes could be input into the polis with a bare minimum of difficulty.

Once a process can be simulated with a reasonable level of computing power, it can be inputted into the polis and run at a tens-of-millions-fold speedup. Newtonian physics, for instance. Today, we use huge computers to perform molecular dynamics simulations on aggregates of a few hundred atoms, simulating a few microseconds of their activity. With futuristic nanocomputers built by superintelligent Artificial Intelligences, macro-scale systems could be simulated for hours of activity at a very affordable cost in computing power. Such simulations would allow these intelligences to extract predictive regularities, or “rules of thumb”, which would allow them to avoid simulating these systems in such excruciating detail in the future. Instead of requiring full-resolution molecular dynamics simulations to extrapolate the behavior of large systems, they might resolve a set of several thousand generalities that allow these systems to be predicted and understood with a high degree of confidence. This has essentially been the process of science for hundreds of years, except with direct observations in place of simulations. With enough computing power, fast simulations can be “similar enough” to real-life situations that genuine wisdom and insight can be derived from them.

Though real, physical experimentation will be needed to verify the performance of models, those facets of the models that are verified will be quickly internalized by the polis, allowing it to simulate real-world phenomena at millions of times the real-world speed. Once a facet of a real-world system is internalized, understanding it instantly becomes a matter of routine, just as today the design of a massive bridge has become a matter of routine, a matter of running calculations based on the known laws of physics. Though from our current perspective the complexities of the world of biology seem intimidating, the capability of superintelligences to quickly conduct millions of experiments in parallel and internalize knowledge once it is acquired will quickly dissolve these challenges, as our recent ancestors dissolved the challenge of precision engineering.

I have only scratched the surface of the reasons why innovation and progress by superintelligences will predictably surpass the “calendar time” to which humanity has grown so accustomed. As humans routinely perform cognitive feats that bewilder the brightest squirrel or meadow vole, superintelligent scientists and engineers will leave human scientists and engineers in the dust, as if all our prior accomplishments were scarcely worth mentioning. It may be psychologically challenging to come to terms with such a possibility, but it would really just be the latest in an ongoing trend of human vanity being upset by the realities of a godless cosmos.

The Singularity is something that our generation needs to worry about — in fact, it may be the most important task we face. If we are going to create higher intelligence, we want it on our side. The benefits of success would be beyond our capacity to imagine, and will likely include the end of scarcity, war, disease, and suffering of all kinds, and the opening up of a whole new cognitive and experiential universe. The challenge is an intimidating one, but one that our best will rise to meet.

A future interest to work towards

There isn’t enough in the world.

Not enough wealth to go around, not enough space in cities, not enough medicine, not enough intelligence or wisdom. Not enough genuine fun or excitement. Not enough knowledge. Not enough solutions to global problems.

What we need is more. And we need it soon. The world population has grown by a billion people in just the last dozen years. Instead of turning back the clock, we must move towards the future.

There is a bare minimum that we should demand out of the future. Without this bare minimum, we’re just running in place. Here is what I think that minimum is:

1) More space
2) More health
3) More water
4) More time
5) More intelligence


First off, we need more space. There are seven billion people on this planet.

There is actually a lot of space on this earth. About 90 million square kilometers of land isn’t covered in snow or mountains. That’s about 5,000 times larger than the New York City metro area. Less than 1% of this land has any appreciable population density. Everywhere outside of Europe, there are vast districts the size of Texas with no more than a few thousand people. The world is “crowded” because of logistics, not space.
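A quick sanity check on these land figures; the ~17,000 km² land area for the New York City metro is my own round assumption, not a number given above:

```python
# Checking the land arithmetic. The NYC metro land area is an
# assumed round figure (~17,000 km^2).
habitable_km2 = 90_000_000      # land not covered in snow or mountains
nyc_metro_km2 = 17_000          # assumed NYC metro land area
people = 7_000_000_000

print(round(habitable_km2 / nyc_metro_km2))     # ~5,294 NYC-metro-sized areas
print(round(habitable_km2 / people * 100, 1))   # ~1.3 hectares per person
```

Over a hectare of snow-free, mountain-free land per person is the sense in which the world is logistically, not physically, crowded.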

The main constraints on space are transportation and infrastructure rather than lack of actual land. Most population centers are around the coast and its natural harbors. Is this because the rest of the land is uninhabitable? No. It’s because being on the coast drives the local economy. What if you can drive the economy without the coast?

With better technologies, we can decentralize infrastructure and spread out more. The most important factors are energy and water. If you can secure these and cheap transportation, many areas can be made habitable.

For energy, the only way to get around centralized solutions is to make your own. Looking forward 10-20 years, this means solar panels. Only solar panels have the versatility needed to work anywhere. Full-spectrum solar panels can generate energy even if the sky is grey. Right now, the main barrier is cost, but the cost of solar panels has been dropping by 7% annually for the last 30 years. If this trend continues, by 2030 solar electricity will cost half that of coal electricity.
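Compounding that 7% annual decline shows where the 2030 claim comes from, under two assumptions of mine: coal’s cost stays flat, and solar today costs roughly twice coal per kilowatt-hour:

```python
# Compounding the cited 7%/year solar cost decline from 2012 to 2030.
# Assumes coal stays flat and solar currently costs ~2x coal (both
# are illustrative assumptions, not figures from the text).
decline = 0.93                 # fraction of cost retained each year
years = 2030 - 2012
solar_vs_coal_now = 2.0        # assumed current price ratio

solar_vs_coal_2030 = solar_vs_coal_now * decline ** years
print(round(solar_vs_coal_2030, 2))   # ~0.54: about half the cost of coal
```

Eighteen years of 7% declines cut costs by roughly a factor of 3.7, which is what turns “twice as expensive” into “half the price.”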

The other limitation is transportation. Physical distance creates expense and stress. Yet, better technologies over the last hundred years have revolutionized transportation and completely changed the face of cities. The vast majority of adults either drive or use efficient mass transit systems. A hundred years ago, we used horses. In twenty years, we will use self-driving cars. In forty years, better navigational AI and nanotech will allow aircars. Flying cars will definitely be developed — they will just need to be piloted by software programs smart enough to do all the work.

To spread out without destroying the environment, our manufacturing processes will have to be made clean. There are two ideal ways to go about this: growing products with synthetic biology and molecular manufacturing. In the event that both of these methods prove intractable, advanced robotics alone will allow for highly automated and precise manufacturing processes without waste.

The planet is not crowded! Our technology just sucks. More efficient technology is also better for the environment. This is not a choice. We either develop the technology we need to live anywhere, or suffer in increasingly cramped cities.


Human health is the foundation of everything.

Human health, however, is sorely lacking. Those in developing countries suffer from terrible diseases, while many in developed countries are overweight and cannot exert themselves. Only the wealthiest 1% in the world can afford healthy, diverse, flavorful foods. Roughly 150,000 people die every day, some 100,000 of them from age-related disease: about 20,000 from heart disease, 17,000 from stroke, 3,450 from traffic accidents, and 3,400 from malaria.

What can cure these maladies? Science and medicine.

The key to medicine is making people who don’t get sick to begin with. Many of the plagues on human health can be viewed as special cases of the general problem that the cells of the body are not reprogrammable or replaceable. The body naturally reprograms and replaces cells, but it is eventually overcome. We must amplify the natural ability of the body to deal with disease and the ravages of aging. There are two ways in which this may be thoroughly accomplished: artificial cells or microscale robots.

Tiny machines called MEMS (microelectromechanical systems) have already been implanted into the human body thousands of times, and have a wide range of desirable properties for medicine. To augment the immune system, these machines will have to be much more sophisticated. Robert Freitas has designed a wide variety of microscale machines for improving human health, including artificial red and white blood cells. Fabricating these machines will require nanoscale manufacturing.

An artificial cell with a non-standard design might be made impervious to pathogens, which rely on certain biological universals which could be modified in artificial cells. Cells artificially produced using the patient’s genetic code would be at home in the body and provide superior disease immunity and longevity. Artificial stem cells could be introduced to tissue to produce these new cells indefinitely. Yet, artificial cells do not currently exist. PACE, programmable artificial cell evolution, a project funded by the EU in 2004-2008, did some interesting work, but a true programmable artificial cell is still a ways off. Given the tremendous demand there would be for such cells, their eventual development seems highly likely. Cells are already being genetically reprogrammed for a variety of purposes in plants and animals. Artificial cells are the next step. If we can reprogram our own cells quickly, disease can be averted, possibly completely.


The most crucial necessity of life is water. Millions suffer and die without it. Vast tracts of good land are empty and dead because of its absence. In some areas, good water is expensive. Some geopolitical experts foresee wars fought over water.

Though you might think that developed countries like the United States have the issue of water squared away, we certainly don’t. Here’s an example: this spring, it looked like the US corn crop was going to be a record-breaker. By late July, a combination of drought and heat had caused the US corn crop to shrivel to a 10-year low. US corn is a foundation on which much of the world’s food supply is based. Cheaper and more plentiful water could have saved the corn from drought. Furthermore, water demand is expected to exceed supply in more than 10 US cities by 2050.

Civilization is closely tied to the plentiful availability of fresh water. Modern societies cannot exist without it. Because water is such a foundational aspect of human existence, technologies that increase its availability can improve quality of life greatly. Making water more available would also allow us to colonize more remote places, addressing the issue of open space.

A few examples of currently existing water technologies that could be game-changing include nano-filters, machines that extract water from the air, and waterproof sand. And once you have water, you can usually grow food.

Perhaps the most exciting water technologies are machines that extract water from the air, called atmospheric water generators. These devices were only invented in 2006; DARPA put millions towards getting them developed, and after years of slow progress there has finally been a breakthrough. A $50,000 machine can extract 10,000 liters of water from the air a day! In arid regions such as the deserts of Iraq, a similar machine can extract 2,200 gallons a day. A machine that costs a mere $1,300 can extract 20 liters a day — enough for plenty of applications. A machine that extracts 5,000 liters a day costs $170,000, and a version that runs entirely on solar power — no power input needed! — is $250,000. This is brand new stuff, and very exciting. It’s enough to make an oasis in the middle of a desert. The Sahara Forest Project is doing exactly that.
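One rough way to compare these generators is capital cost per liter of daily capacity, using the figures quoted above (ignoring running costs, lifetime, and climate, so this is only a first-pass comparison):

```python
# Comparing the atmospheric water generators above by capital cost
# per liter of daily capacity (figures as quoted; rough comparison).
generators = {
    "large ($50k)":  (50_000, 10_000),   # (price in $, liters per day)
    "small ($1.3k)": (1_300, 20),
    "mid ($170k)":   (170_000, 5_000),
}
results = {name: round(price / liters, 1)
           for name, (price, liters) in generators.items()}
for name, dollars_per_daily_liter in results.items():
    print(name, dollars_per_daily_liter)   # $ per liter/day of capacity
```

By this crude metric the big machine is over ten times more capital-efficient than the small one, which is typical of economies of scale in process equipment.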


No matter how much wealth or happiness we have, or how many friends, we all get old and die. We need more time.

Death is eventually inevitable, but staving it off for as long as possible seems like a good plan. In ancient Rome, the average lifespan was 28. In 1900 in the US, the average lifespan was only 47. By 2010, it was 78. This means that the average lifespan during the 20th century increased by more than a quarter of a year per year!

This didn’t happen by magic — it happened through science. Vaccines, antibiotics, modern agriculture, and hundreds of thousands of other facets of modern medicine were all developed throughout the 20th century. And the process isn’t slowing down. The longer we live, the longer we continue to live. Someone born in 1980 might naively expect to live 70 years, to 2050, but if lifespans continue lengthening at the historic rate, that person’s expected lifespan would actually be well over 90, allowing them to live into the 2070s!
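The “lifespan grows while you live” effect can be modeled as a fixed point; the quarter-year-per-year rate is the historic figure from above, and the linear extrapolation is my simplification:

```python
# Fixed-point model of rising life expectancy: each calendar year
# lived adds `rate` extra expected years, so L = base + rate * L.
# Linear continuation of the historic rate is a simplification.
base = 70.0    # life expectancy at birth for someone born in 1980
rate = 0.25    # historic gain: ~a quarter year per calendar year

lifespan = base
for _ in range(200):           # iterate to convergence
    lifespan = base + rate * lifespan

print(round(lifespan, 1))      # ~93.3 years, reaching into the 2070s
```

Algebraically the fixed point is base / (1 - rate) = 70 / 0.75 ≈ 93.3; a modestly faster rate of improvement pushes the figure past 100.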

Life is a beautiful thing. The human body is just a complex machine. If it can be repaired faster than it breaks down, we could live very long lives — perhaps hundreds of years, maybe even thousands. There are no fundamental principles of nature preventing this from happening. We are just taught a lot of comforting lies about the metaphysical meaning of death to make it easier to swallow. The zeitgeist gives our current lifespans a level of inherent mystique and meaning they don’t actually have.

Our bodies break down and age for seven clearly definable reasons: cancer, mutations in mitochondria, junk inside cells, junk outside cells, cell loss, cells losing the ability to divide, and extracellular crosslinks. The last of these was discovered in the 1970s, and not a single additional source of age-related damage has been identified in all of medicine since then, so it seems likely that this list is comprehensive. If we can “just” solve these problems, then we may find ourselves in a society where people only die from diseases, war, or accidents. If that could be achieved, the average lifespan could be 800-900 or more, depending on the frequency of war and disease.
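The 800-900 figure follows from treating lifespan as geometric once age-related decline is gone; the 1-in-850 annual risk below is an illustrative assumption of mine, not a figure cited above:

```python
# With aging cured, annual mortality would be roughly flat with age,
# making lifespan geometric with mean 1/p. The 1-in-850 yearly risk
# of death from accident, war, or disease is an illustrative guess.
p = 1 / 850                      # assumed flat yearly chance of death
mean_lifespan = 1 / p            # mean of a geometric distribution
survive_1000 = (1 - p) ** 1000   # odds of living past a millennium

print(round(mean_lifespan))      # 850 years, in the 800-900 range
print(round(survive_1000, 2))    # ~0.31
```

The mean scales directly with safety: halve the annual risk and the expected lifespan doubles, which is why the figure depends on “the frequency of war and disease.”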

“Immortality” is actually not all that rare in nature. A feature on turtles in the June 2002 issue of Discover magazine, “Can Turtles Live Forever?”, explored the possibility that turtles do not age conventionally, but simply die due to accidents and disease that affect turtles at all ages equally. An influential monograph published in 2008 developed the theory behind this in detail. Caleb Finch, a professor in the neurobiology of aging at USC, has proposed rockfish, turtles, and bristlecone pines as candidate species that do not age.

It could be that we are not far from beginning to develop cures for the major causes of aging. Within a few years, the genomes for all the candidate species exhibiting very long lifespans will be sequenced, and we will gain insight into what gives these species such long lives. There may be certain proteins or metabolic tricks that they utilize to stave off age-related decline. If these are identified, they could lead to drugs or other therapies that radically extend human lifespans. Not all age-related damage needs to be repaired for organisms to live indefinitely — damage must simply be repaired at a faster rate than it accumulates. Thus, people living in a society where average lifespan is extended by more than one year per year would enjoy indefinite lifespans, even though no “elixir of immortality” or any such thing had been developed. Some of us alive today might live to enjoy such a society.

For more scientific detail on this, see an article I wrote in 2009.


If we had all the space, health, water, and lifespan in the world, what would we be missing? Intelligence. Not just intelligence as in book smarts, but intelligence in the more sublime sense of understanding each other and the world around us on a visceral level. “Compassion” is a sub-category of the kind of intelligence I am talking about.

We tend to think of “intelligence” as running on a scale from village idiot to Einstein, but in reality, all of humanity is just a little dot on a huge scale of intelligence ranging from worms to posthuman superintelligences. The fact that our species found itself at this particular level of intelligence is just a cosmic accident. If our planet were a more dangerous place, humans would have been forced to evolve higher levels of intelligence just to cope with the perils of leaving the forest. Why can we store only 4-9 items in working memory and not 27-30? The answer is that we evolved on an arbitrary planet, and humanity happened to reach whatever arbitrary level of intelligence was sufficient to build a civilization.

Why can’t we reprogram our own brains? Why didn’t we launch the Industrial Revolution 100,000 years ago instead of 350 years ago? Why don’t we immediately “get” complex concepts? We contrive mystical-sounding reasons for explaining away our characteristic level of intelligence as a species, and rarely even think about it because everyone in the species has the same limitations.

These limitations need not last forever. Imagine being able to perceive 50-dimensional objects, or colors in the infrared and ultraviolet ranges. Imagine being able to appreciate the subtle connections between millions of different domains of art or science rather than a few dozen. In principle, all of this could be possible. We’d have to augment our brains somehow, possibly with brain-computer interfaces, or maybe through more organic approaches. This is a line of research that is already in progress, and interesting results are being achieved every year.

Although the concept of brain-computer interfaces makes some of us squirm, the brain-computer interfaces of the future would have to be non-invasive and safe to be practical at all. To interface with millions of micron-sized neurons, a system would have to be delicate and sophisticated. It may be possible to coax the natural gene expression networks in the brain to produce more neurons or configure them in better arrangements. What nature gave us is not necessarily the ideal brain or mind — just what was practical for it at the time. We should regard the intellect of Homo sapiens as a good first draft — but improvements on that draft are inevitable.

I hope that this has been an informative overview of what the future could offer you.

People alive today are different from the generations that came before us — we have greater expectations of the world and of reality itself. Instead of merely surviving, we strive towards a higher cosmic purpose rooted in science and logic instead of superstition and dogma. Science and technology are giving us the tools to create a paradise on Earth. We can use them for that, or use them to blow each other to smithereens. The choice is ours.

Wednesday 30 January 2013

Comprehensive Copying Not Required for Uploading


Recently, there was some confusion on the part of biologist P.Z. Myers regarding the Whole Brain Emulation Roadmap report by Anders Sandberg and Nick Bostrom at the Future of Humanity Institute.

The confusion arose when Prof. Myers made incorrect assumptions about the 130-page roadmap from reading a 2-page blog post by Chris Hallquist. Hallquist wrote:

The version of the uploading idea: take a preserved dead brain, slice it into very thin slices, scan the slices, and build a computer simulation of the entire brain.

If this process manages to give you a sufficiently accurate simulation…

Prof. Myers objected vociferously, writing, “It won’t. It can’t.”, subsequently launching into a reasonable attack against the notion of scanning a living human brain at nanoscale resolution with current fixation technology. The confusion is that Prof. Myers is criticizing a highly specific idea, the notion of exhaustively simulating every axon and dendrite in a live brain, as if that were the only proposal or even the central proposal forwarded by Sandberg and Bostrom. In fact, on page 13 of the report, the authors present a table that includes 11 progressively more detailed “levels of emulation”, ranging from simulating the brain using high-level representational “computational modules” to simulating the quantum behavior of individual molecules. In his post, Myers writes as if the 5th level of detail, simulating all axons and dendrites, is the only path to whole brain emulation (WBE) proposed in the report (it isn’t), and also as if the authors are proposing that WBE of the human brain is possible with present-day fixation techniques (they aren’t).

In fact, the report presents Whole Brain Emulation as a technological goal with a wide range of possible routes to its achievement. The narrow method that Myers criticizes is only one approach among many, and not one that I would think is particularly likely to work. In the comments section, Myers concurs that another approach to WBE could work perfectly well:

This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.

An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.

But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule.

But, the report does not mandate that a “brute force dismantling and reconstruction of every cell in the brain” is the only way forward for uploading. This makes it look as if Myers did not read the report, even though he claims, “I read the paper”.

Slicing and scanning a brain will be necessary but by no means sufficient to create a high-detail Whole Brain Emulation. Indeed, it is difficult to imagine how the salient features of a brain could be captured without scanning it in some way.

What Myers seems to be objecting to is a kind of dogmatic reductionism, a “brain in, emulation out” direct scanning approach that is not actually being advocated by the authors of the report. The report is non-dogmatic, writing that a two-phase approach to WBE is required, where “The first phase consists of developing the basic capabilities and settling key research questions that determine the feasibility, required level of detail and optimal techniques. This phase mainly involves partial scans, simulations and integration of the research modalities.” In this first phase, there is ample room for figuring out what the tissue actually does. Then, that data can be used to simplify the scanning and representation process. The required level of understanding vs. blind scan-and-simulate is up for debate, but few would claim that our current neuroscientific level of understanding suffices.

Describing the difficulties of comprehensive scanning, Myers writes:

And that’s another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue?

Measuring the epigenetic state of every nucleus is not likely to be required to create convincing, useful, and self-aware Whole Brain Emulations. No neuroscientist familiar with the idea has ever claimed this. The report does not claim this, either. Myers seems to be inferring this claim himself through his interpretation of Hallquist’s brusque 2-sentence summary of the 130-page report. Hallquist’s sentences need not be interpreted this way — “slicing and scanning” the brain could be done simply to map neural network patterns rather than to capture the epigenetic state of every nucleus.

Next, Myers objects to the idea that brain emulations could operate at faster-than-human speeds. He responds to a passage in “Intelligence Explosion: Evidence and Import”, another paper cited in the Hallquist post which claims, “Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly.” To this, Myers says:

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed… how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.

At first read, it almost seems in this objection as if Prof. Myers does not understand the concept that software can be run faster if it is running on a faster computer. After reading this post carefully, it doesn’t seem as if this is what he actually means, but since the connotation is there, the point is worth addressing directly.

Software is a series of electric signals passing through logic gates on computers. The software is agnostic to the processing speed of the underlying computer. The software is a pattern of electrons. The pattern is there whether the clock speed of the processor is 2 kHz or 2 GHz. When and if software is ported from a 2 kHz computer to a 2 GHz computer, it does not stand up and object to this “tweaking of the clock speed”. No “waving of hands” is required. The software may very well be unable to detect that the substrate has changed. Even if it can detect the change, it will have no impact on its functioning unless the programmers specifically write code that makes it react.

Changing the speed at which software runs is perfectly permissible. If the hardware can support the higher speed, pressing a button is all it takes to speed the software up. This is a simple point.
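This point can be made concrete with a minimal sketch (purely illustrative; the "mind" here is just an arbitrary deterministic computation, and the delay parameter is a crude stand-in for slower hardware): the program's output depends only on the program and its inputs, never on how fast the host executes it.

```python
import time

def run_simulation(steps, delay_per_step=0.0, state=0):
    """A deterministic toy 'mind'. The returned state depends only on the
    program and its inputs; delay_per_step crudely stands in for slower
    hardware and affects wall-clock time only, never the result."""
    for _ in range(steps):
        state = (state * 31 + 7) % 1_000_003  # arbitrary deterministic update
        if delay_per_step:
            time.sleep(delay_per_step)        # pretend the host is slower
    return state

slow = run_simulation(1_000, delay_per_step=0.0001)  # the "2 kHz" host
fast = run_simulation(1_000)                         # the "2 GHz" host
print(slow == fast)  # True: same pattern of computation at any execution speed
```

The computation never "notices" the delay; speeding the host up simply makes the identical result arrive sooner in wall-clock time.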

The crux of Myers’ objection seems to actually be about the interaction of the simulation with the environment. This objection makes much more sense. In the comments, Carl Shulman responds to Myers’ objection:

This seems to assume, contrary to the authors, running a brain model at increased speeds while connected to real-time inputs. For a brain model connected to inputs from a virtual environment, the model and the environment can be sped up by the same factor: running the exact same programs (brain model and environment) on a faster (serial speed) computer gets the same results faster. While real-time interaction with the outside would not be practicable at such speedup, the accelerated models could still exchange text, audio, and video files (and view them at high speed-up) with slower minds.

Here, there seems to be a simple misunderstanding on Myers’ part, where he is assuming that Whole Brain Emulations would have to be directly connected to real-world environments rather than virtual environments. The report (and years of informal discussion on WBE among scientists) more or less assumes that interaction with the virtual environment would be the primary stage in which the WBE would operate, with sensory information from an (optional) real-world body layered onto the VR environment as an addendum. As the report describes, “The environment simulator maintains a model of the surrounding environment, responding to actions from the body model and sending back simulated sensory information. This is also the most convenient point of interaction with the outside world. External information can be projected into the environment model, virtual objects with real world affordances can be used to trigger suitable interaction etc.”

It is unlikely that an arbitrary WBE would be running at a speed that lines it up precisely with the 200 Hz firing rate of human neurons, the rate at which we think. More realistically, the emulation is likely to be much slower or much faster than the characteristic human rate, which exists as a tiny sliver in a wide expanse of possible mind-speeds. It would be far more reasonable — and just easier — to run the WBE in a virtual environment with a speed suited to its thinking speed. Otherwise, the WBE would perceive the world around it running at either a glacial pace or a hyper-accelerated one, and have a difficult time making much sense of either.

Since the speed of the environment can be smoothly scaled with the speed of the WBE, the problems that Myers cites with respect to “turn[ing] it up to 11” can be duly avoided. If the mind is turned up to 11, which is perfectly possible given adequate computational resources, then the virtual environment can be turned up to 11 as well. After all, the computational resources required to simulate a detailed virtual environment would pale in comparison to those required to simulate the mind itself. Thus, the mind can be turned up to 11, 12, 13, 14, or far beyond with the push of a button, to whatever level the computing hardware can support. Given the historic progress of computing hardware, this may well eventually be thousands or even millions of times the human rate of thinking. Considering minds that think and innovate a million times faster than us might be somewhat intimidating, but there it is, a direct result of the many intriguing and counterintuitive consequences of physicalism.
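Shulman's point that the emulation and its environment can share one clock can be sketched as a simple co-simulation loop (a hypothetical structure for illustration, not taken from the report; both update functions are placeholders): mind and environment advance in lockstep through virtual time, so a faster host changes only how quickly virtual time elapses relative to the outside world.

```python
def step_mind(mind_state, sensory_input):
    """Placeholder brain-model update: advances one tick of virtual time."""
    return (mind_state + sensory_input) % 997

def step_environment(env_state, motor_output):
    """Placeholder environment-model update: advances one tick of virtual time."""
    return (env_state * 3 + motor_output) % 991

def run_coupled(ticks, mind_state=1, env_state=1):
    """Advance mind and environment in lockstep through virtual time.

    Host speed determines only how fast these ticks occur in wall-clock
    time; the mind/environment interaction itself is identical at any
    speedup, because both subsystems share the same virtual clock.
    """
    for _ in range(ticks):
        sensory = env_state % 10                      # environment -> senses
        mind_state = step_mind(mind_state, sensory)
        motor = mind_state % 10                       # mind -> actions
        env_state = step_environment(env_state, motor)
    return mind_state, env_state

print(run_coupled(1000) == run_coupled(1000))  # True: deterministic at any host speed
```

Because sensory input comes from the co-simulated environment rather than from real-time sensors, "turning up the dial" just means executing more lockstep ticks per wall-clock second; none of the timing relationships inside the loop change.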

Humanity+ Candidate Statement


Humanity+ board elections are coming up soon, and I will be running for reelection.

Here are the projects I will personally pursue if reelected to the board:

1.  Development of a well-cited, academic-quality Emerging Technologies Wiki. A seed of a couple hundred articles would attract serious thinkers in public policy, science, and technology, and build sorely needed credibility for Humanity+. I am willing to do most of the writing for this myself, but would appreciate qualified others sharing in the task. I’ve already built a list of 200 potential articles.

2.  Set up a mentorship program within Humanity+ where students can connect with established scientists and technologists in fields such as biotech, nanotech, personalized manufacturing, and medicine. Humanity+ has been serving this role informally for many years, and it should be encouraged. We would compile a list of interested students who apply, categorize them based on their research focus, and offer these lists to scientists interested in guiding promising young people. This will be a life-changer for brilliant students who don’t feel supported by their current advisors.

3.  Develop material to encourage local futurist salons: discussion topics, books of interest, how to host salons, when to start a group in your city, how to make it easier for transhumanists to find others in their area, etc.

4.  Brainstorm possibilities for increasing the profile of transhumanism as a political movement. Investigate hot-button issues that may be leveraged over the next five years to advance our goals without inviting backlash. This would mainly be a thought experiment, evaluating the possibility of action rather than direct action. The International Longevity Alliance is a good model in this area.

5.  Participate in the improvement of H+ magazine by encouraging a transition to themed monthly issues with high quality articles based on what is happening in sci/tech today rather than speculation about the future.

6.  Launch a membership drive by personally contacting those who might be interested in getting involved and sharing our projects with them. Find more ways that Humanity+ can directly benefit our membership.

In general, I take an inclusive stance towards proposals from outside the board, serving as a channel for interested members to get more involved in our core activities.

What I bring to the table:

* 10+ years of experience in transhumanist activism and non-profit management.
* Commitment to increasing the amount of useful activity Humanity+ performs. A magazine and a conference are not enough.
* Extensive knowledge base which can be written down and used to attract high-profile members.

If you are a full member of Humanity+, I’d appreciate your support. You can still join by signing up here.

Rapture versus MechaRapture

Interestingly, RaptureReady.com has a piece saying:

transhumanist anticipation of the singularity is comparable to the Christian perspective of the second coming of Jesus Christ

For those of you who reject the singularity as the “myth of the Nerds”: are you sure you want to find yourselves in agreement with RaptureReady.com? Ha!

Read all of it, but only if you're looking for entertainment:

In effect, a movement that believes its ultimate goal, and the light it would bring to the universe, stands in direct opposition to God... their ambition, like Satan's, will one day lead to a purely and simply physical confrontation with God himself. It is a battle that God will win.

Horned Gods, Bioethicists' Dread

On some Druid website:

Yet die we must. As Sherwin Nuland points out in his book How We Die, we die for the sake of our species; if we somehow found a way to live forever, we would quickly overwhelm the capacity of our environment and all perish like lemmings. 'Must', in biological terms, thus carries not only the ordinary sense of inevitability, but also a sense of purpose. Our need for death is personified in Herne the Hunter, sometimes identified with the Celtic Cernunnos. He is the god of slaughter, who takes away life for the sake of health and balance in the world.

Note the uncanny resemblance to an article The Onion wrote a year earlier.

Yahzi Coyote has a saying: "All that is necessary to defeat a theologian is to replay his arguments to him, replacing the word 'God' with any other word." Similarly, all that is necessary to defeat a bioethicist is to imagine his eyes glowing and his voice deepening each time he mentions death, decay, suffering, and necessity.

Singularity Research Challenge

SIAI (where I'm currently a volunteer) is running another matching-donation challenge. This time, you get to choose which specific projects to fund. Michael Nicholas has more details.

Here are some reasons to invest in reducing existential risk that you may not have considered before:

* Religions disperse, kingdoms fall apart, and works of science would have been invented by someone else anyway, but feats of existential risk reduction remain for all ages.
* Stories in which the world is excitingly saved necessarily depend on the actual situation turning out less excitingly.
* To make the world a better place, you must first keep the world a place.
* Think of it as extreme survivalism: everyone lives.
* Even if you believe that Armageddon is coming, wouldn't it be embarrassing if we went extinct before it happened?
* Existential risk reduction is just saving the whales, with extremely broad safety margins around the definition of "whale".
* If we go extinct, the terrorists win.

Quantum fauna

David Wallace has written an article on reductionism, emergence, and worlds in the many-worlds interpretation of quantum mechanics, which is as informative and accessible as his earlier writings:

Decoherence and ontology (or: how I learned to stop worrying and love FAPP)

Ultimately, however, the fact that a theory of the world is "unintuitive" is no argument against it, provided that it can be properly described in mathematical language. Our intuitions about what is "reasonable" or "imaginable" were designed to aid our ancestors on the savannahs of Africa, and the universe is under no obligation to conform to them.

I particularly enjoyed figure 2 and footnote 14.

Wallace also has a recent update on the program to derive quantum probabilities from decision theory:

A formal proof of the Born rule from decision-theoretic assumptions

I tend to think of Wallace's as the more orthodox interpretation of quantum mechanics; it's just that people don't know it yet.

Tuesday 29 January 2013

Singularity Summit 2009 in New York

SIAI is organizing the annual Singularity Summit on October 3rd and 4th, 2009. In contrast to the 2006-2008 summits, which were held in the Bay Area, it will take place in New York.

For people on the East Coast of the USA and in Europe in particular, the Summit seems a unique opportunity to see an impressive lineup of speakers from various fields on the kinds of topics this blog covers. Topics are largely based on the idea of the technological singularity, but look like they will include cognitive enhancement, neuroscience, philosophy of mind, nanotechnology, and prediction of the future. Speakers include David Chalmers, Ray Kurzweil, Philip Tetlock, and Peter Thiel.

The concept of the technological singularity recently received front-page coverage in the NYT, evidence that it is taking off in the media. However, if you come to the Summit or help spread the word, it is still early enough that you will get to say you were into this kind of thing before it became mainstream.

Personal experiments: fueled by innovation?


Another way personal experimentation might be worth it for me, yet not used up by those before me: there is so much innovation that there are constantly new things to test, even if people experiment a lot. Beeminder and Workflowy are new. The abilities to prompt yourself to do things with a mobile phone or eat Japanese food or use your computer in a vast number of ways are relatively new.

I doubt this explains much. The question applies to many things that have been around and not that different for a long time, e.g. wheat, motivation, reading, romantic arrangements. And even if Beeminder is new, many of the basic ideas must be old (e.g. ‘don’t break the chain’). As a society we don’t seem to have a much better idea of the effects of wheat on a person than we do of Beeminder.

Another way innovation could explain the puzzle is if all kinds of innovations change the value of all kinds of ancient things e.g. prevalence of internet use changes the effects of going to bed early or sitting in a certain way or doing something with your hair or knowing a lot of stories. If this is the case, experimentation is worth less than it seems, as the results will soon be out of date. So this goes under the heading ‘I’m wrong: experimentation isn’t worth it’, which would explain the puzzle, except the bit where everyone else perceives this and knows not to bother, and I don’t. I will get back to explanations of this form later.

Ask Questions That Matter


I know a lot of people who think of themselves as intellectuals. That is, they spend a substantial fraction of their free time dealing in ideas. Most of these people are mainly consumers who take in ideas, but don’t seem to do much with them, at least as far as anyone else ever sees. But others are more outward facing, talking and writing about ideas, often quite eagerly.

Oddly however, most of these idea dealers seem to define themselves mostly in terms of the answers they want to promote, instead of the questions they want to answer. Most idea-oriented Facebook status updates seem like this – saying yay for some answer they agree with. The few that deal in questions also seem to be mainly promoting them, saying yay for the sort of people who like that question.

Now yes, in addition to question-answering the world also needs some answer indexing, aggregation, and yes, sometimes even promotion. And yes, sometimes the world needs people to generate and even promote good questions. But my guess is that most intellectual progress comes from people who focus on a question to which they do not currently know the answer, and then try to answer it. Yes, people doing other things sometimes stumble on a new answer, but in general it helps to be looking in order to find.

I also know lots of academics, and they all have one or more research topics. And if you ask them they can usually phrase these topics in terms of questions they want to answer. And this is a big part of what makes academics more intellectually productive. But alas, few academics are able to articulate in much detail why it is important to the world that their questions get answered. They usually just invoke some vague associations, apparently considering it sufficient that some journal is willing to publish their answers. They seem to think it is someone else’s job to decide what questions are important. Unfortunately, most academic journal articles are answering pretty uninteresting questions.

So the important intellectual progress comes down to the rather small fraction of intellectuals who both define their focus in terms of a question, rather than an answer, and who bother to think about what questions actually matter. To these I salute and bow. They are the sweet thirst-quenching fount of progress.

A History Of Foom


I had occasion recently to review again the causes of the few known historical cases of sudden permanent increases in capacity growth rates in broadly capable systems: humans, farmers, and industry. For each of these transitions, a large number of changes appeared at roughly the same time. The problem is to distinguish the key change that enabled all the other changes.

For humans, it seems that the most proximate cause of faster human than non-human growth was culture – a strong ability to reliably copy the behavior of others allowed useful behaviors to accumulate via a non-genetic path. A strong ritual ability was clearly key. It also helped to have language, to live in large bands friendly with neighboring bands, to cook and travel widely, etc., but these may not have been essential. Chimps are pretty good at culture compared to most animals, just not good enough to support sustained cultural growth.

For farming, it seems to me that the key was the creation of long range trade routes along which domesticated seeds and animals could move. It was the accumulation of domestication innovations that most fundamentally caused the growth in farmers, and it was these long range trade routes that allowed innovations to accumulate so much faster than they had for foragers.

How did farming enable long range trade? Since farmers stay in one place, they are easier to find, and can make more use of heavy physical capital. Higher density living requires less travel distance for trade. But perhaps most important, transferable domesticated seeds and animals embodied innovations directly, without requiring detailed copying of behavior. They were also useful in a rather wide range of environments.

On industry, the first burst of productivity at the start of the industrial revolution was actually in the farming sector, and had little to do with machines. It appears to have come from ”amateur scientist” farmers doing lots of little local trials about what worked best, and then communicating them to farmers elsewhere who grew similar crops in similar environments, via “scientific society” like journals and meetings. These specialist networks could spread innovations much faster than could trade in seeds and animals.

Applied to machines, specialist networks could spread innovation even faster, because machine functioning depended even less on local context, and because innovations could be embodied directly in machines without the people who used those machines needing to learn them.

So far, it seems that the main causes of growth rate increases were better ways to share innovations. This suggests that when looking for what might cause future increases in growth rates, we also seek better ways to share innovations.

Whole brain emulations might be seen as allowing mental innovations to be moved more easily, by copying entire minds instead of having one mind train or teach another. Prediction and decision markets might also be seen as better ways to share info about which innovations are likely to be useful where. In what other ways might we dramatically increase our ability to share innovations?

Hail Scott Siskind


Scott Siskind gets it:

A democracy provides a Schelling point, … an option which might or might not be the best, but which is not too bad and which everyone agrees on in order to stop fighting. … In the six hundred fifty years between the Norman Conquest and the neutering of the English monarchy, Wikipedia lists about twenty revolts and civil wars. … In the three hundred years since the neutering of the English monarchy and the switch to a more Parliamentary system, there have been exactly zero. … Democracy doesn’t always perform optimally, but it always performs fairly, … and that is enough to prevent people from starting civil wars.

Academia is different. Its state resembles that of pre-democratic governments, when anyone could choose a side, claim it was legitimate, and then get into endless protracted fights with the partisans of other sides. If you believe ObamaCare will destroy the economy, you will have no trouble finding a prestigious academic who agrees with you. Then all you need to do is accuse the other academics of bias, or cherry-picking, or using the wrong statistical test, or any of the other ways to discredit scientists you don’t like. …

A democratic vote among the scientific establishment is insufficient to settle these topics. The most important problem is that it gives massive power to the people who determine who gets to be part of “the scientific establishment”. … So not having any Schelling point – being hopelessly confused about the legitimacy of academic ideas – sucks. But a straight democratic vote of academics would also suck and be potentially unfair.

Prediction markets avoid these problems. There is no question of who the experts are: anyone can invest in a prediction market. There’s no question of special interests taking it over; this just distributes free money to more honest investors. Not only do they escape real bias, but more importantly they escape perceived bias. It is breathtakingly beautiful how impossible it is to rail that a prediction market is the tool of the liberal media or whatever. …

Nate Silver might do better than a prediction market, I don’t know. But Nate Silver is not a Schelling point. Nobody chose him as Official Statistics Guy via a fair process. And if someone objected to his beliefs, they could accuse him of bias and he would have no recourse until it was too late. If a prediction market is almost as good as Nate, and it is also unbiased and impossible to accuse of bias, we have our Schelling point. …

Just as democracy made it harder to fight over leadership, prediction markets make it harder to fight over beliefs. We can still fight over values, of course – if you hate teenagers having sex, and I don’t care about it, we can debate that all day long. But if we want to know whether a certain law will raise the pregnancy rate, there will be only one correct answer, and it will only be a mouse-click away.

I think this would have more positive effects than anyone anticipates. If people took it seriously, not only would the gun control debate be over in an hour, but it would end on the objectively right side, whichever side that was. If single-payer would be better than Obamacare, we could implement single-payer and anyone who tried to make up horror stories about how it would destroy health care would be laughed out of the room. And once these issues have gone away, maybe we can reach the point where half the country stops hating the other half because of disagreements which are largely over factual issues. (more; HT Stephen Bachelor)

By the way, my futarchy paper will appear this year in the Journal of Political Philosophy. This is very close to the final version.