Wednesday, 30 January 2013

Comprehensive Copying Not Required for Uploading

Recently, biologist P.Z. Myers expressed some confusion regarding the Whole Brain Emulation Roadmap report of Anders Sandberg and Nick Bostrom at the Future of Humanity Institute.

The confusion arose when Prof. Myers made incorrect assumptions about the 130-page roadmap from reading a 2-page blog post by Chris Hallquist. Hallquist wrote:

The version of the uploading idea: take a preserved dead brain, slice it into very thin slices, scan the slices, and build a computer simulation of the entire brain.

If this process manages to give you a sufficiently accurate simulation…

Prof. Myers objected vociferously, writing, “It won’t. It can’t.”, and then launched into a reasonable attack on the notion of scanning a living human brain at nanoscale resolution with current fixation technology. The confusion is that Prof. Myers is criticizing a highly specific idea, the notion of exhaustively simulating every axon and dendrite in a live brain, as if that were the only proposal, or even the central proposal, put forward by Sandberg and Bostrom. In fact, on page 13 of the report, the authors present a table of 11 progressively more detailed “levels of emulation”, ranging from simulating the brain with high-level representational “computational modules” to simulating the quantum behavior of individual molecules. In his post, Myers writes as if the 5th level of detail, simulating all axons and dendrites, were the only path to whole brain emulation (WBE) proposed in the report (it isn’t), and as if the authors were proposing that WBE of the human brain is possible with present-day fixation techniques (they aren’t).
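
To make that range concrete, here is a rough sketch in code of the spread of the report’s levels. Only the levels this post actually names are filled in, and the phrasing is mine, not the report’s:

```python
# A rough sketch (not the report's actual table) of the spread of
# "levels of emulation" discussed above. Only the levels named in this
# post are filled in; the report lists 11 in total.
EMULATION_LEVELS = {
    1: "high-level representational computational modules",
    # ... intermediate levels omitted here ...
    5: "every axon and dendrite (the level Myers critiques)",
    # ... finer levels omitted here ...
    11: "quantum behavior of individual molecules",
}

for level, description in sorted(EMULATION_LEVELS.items()):
    print(f"Level {level:2d}: {description}")
```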

In fact, the report presents Whole Brain Emulation as a technological goal with a wide range of possible routes to its achievement. The narrow method that Myers criticizes is only one approach among many, and not one that I would think is particularly likely to work. In the comments section, Myers concurs that another approach to WBE could work perfectly well:

This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.

An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.

But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule.

But the report does not mandate that a “brute force dismantling and reconstruction of every cell in the brain” is the only way forward for uploading. This makes it look as if Myers did not read the report, even though he claims, “I read the paper”.

Slicing and scanning a brain will be necessary, but by no means sufficient, to create a high-detail Whole Brain Emulation. Indeed, it is difficult to imagine how the salient features of a brain could be captured without scanning it in some way.

What Myers seems to be objecting to is a dogmatically reductionist, “brain in, emulation out” direct-scanning approach that is not actually being advocated by the authors of the report. The report is non-dogmatic, stating that a two-phase approach to WBE is required, where “The first phase consists of developing the basic capabilities and settling key research questions that determine the feasibility, required level of detail and optimal techniques. This phase mainly involves partial scans, simulations and integration of the research modalities.” In this first phase, there is ample room for figuring out what the tissue actually does. That understanding can then be used to simplify the scanning and representation process. How much understanding is required, versus blind scan-and-simulate, is up for debate, but few would claim that our current neuroscientific level of understanding suffices.
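
As a concrete illustration of what “figuring out what the tissue actually does” can buy, consider the component-level approach Myers himself endorses above. The following is a minimal, hypothetical sketch, not anything specified in the report: a brain region modeled purely by its fitted input-output behavior, with no molecular detail at all.

```python
import numpy as np

# A hypothetical sketch of the component-level ("black box") approach:
# model a brain region by its input-output behavior, fitted to recordings,
# with no molecular detail. The names and the simple linear-nonlinear form
# are illustrative assumptions, not the report's specification.
class RegionModel:
    """Input-output model of one brain region, e.g. the nucleus accumbens."""

    def __init__(self, n_inputs: int, n_outputs: int, rng=None):
        rng = rng or np.random.default_rng(0)
        # In practice these weights would be fitted to recordings from
        # the real tissue, not drawn at random.
        self.W = rng.normal(size=(n_outputs, n_inputs))

    def step(self, inputs: np.ndarray) -> np.ndarray:
        # One of many possible fits: a linear stage plus a saturating
        # nonlinearity, mapping afferent activity to efferent activity.
        return np.tanh(self.W @ inputs)

accumbens = RegionModel(n_inputs=128, n_outputs=64)
firing_rates = accumbens.step(np.zeros(128))
```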

Describing the difficulties of comprehensive scanning, Myers writes:

And that’s another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue?

Measuring the epigenetic state of every nucleus is not likely to be required to create convincing, useful, and self-aware Whole Brain Emulations. No neuroscientist familiar with the idea has ever claimed this. The report does not claim this, either. Myers seems to be inferring this claim himself through his interpretation of Hallquist’s brusque 2-sentence summary of the 130-page report. Hallquist’s sentences need not be interpreted this way — “slicing and scanning” the brain could be done simply to map neural network patterns rather than to capture the epigenetic state of every nucleus.
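
To make the distinction concrete: under a connectomics reading of “slicing and scanning”, the scan output is a wiring graph, not a molecular inventory. A minimal sketch of that kind of output, with hypothetical names, might look like this:

```python
# A minimal sketch, assuming the goal of slice-scanning is connectomics:
# recover which neuron connects to which (and how strongly), not the
# epigenetic or molecular state of each cell. All names are hypothetical.
from collections import defaultdict

connectome = defaultdict(dict)   # pre-synaptic id -> {post-synaptic id: weight}

def record_synapse(pre: int, post: int, strength: float) -> None:
    """Add one synapse found while tracing a scanned slice stack."""
    connectome[pre][post] = strength

# Tracing software would emit calls like this for each detected contact:
record_synapse(pre=1017, post=2203, strength=0.8)
```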

Next, Myers objects to the idea that brain emulations could operate at faster-than-human speeds. He responds to a passage in “Intelligence Explosion: Evidence and Import”, another paper cited in the Hallquist post, which claims, “Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly.” To this, Myers says:

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed… how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.

At first read, this objection almost makes it sound as if Prof. Myers does not understand that software can be run faster on a faster computer. Reading his post carefully, that does not appear to be what he actually means, but since the connotation is there, the point is worth addressing directly.

Software is a pattern of electrical signals passing through logic gates, and that pattern is agnostic to the processing speed of the underlying computer. The pattern is the same whether the clock speed of the processor is 2 kHz or 2 GHz. When and if software is ported from a 2 kHz computer to a 2 GHz computer, it does not stand up and object to this “tweaking of the clock speed”. No “waving of hands” is required. The software may well be unable to detect that the substrate has changed, and even if it can detect the change, the change will have no effect on its functioning unless the programmers specifically write code that makes it react.

Changing the speed at which software runs is entirely routine. If the hardware can support the speed change, pressing a button is all it takes to speed the software up. This is a simple point.
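
The point can be demonstrated in a few lines. In this sketch, the delay parameter stands in for slower hardware; the computed result is identical regardless of how fast the steps execute:

```python
import time

# A minimal sketch of the claim that program behavior is independent of
# clock speed: the same state transitions occur whether each step takes
# a microsecond or a millisecond; only the wall-clock duration changes.
def simulate(steps: int, delay_s: float = 0.0) -> int:
    state = 0
    for _ in range(steps):
        state = (state * 31 + 7) % 1_000_003  # arbitrary deterministic update
        time.sleep(delay_s)                   # stand-in for slower hardware
    return state

# Same result on "fast" and "slow" hardware, just delivered sooner or later:
assert simulate(1000, delay_s=0.0) == simulate(1000, delay_s=0.001)
```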

The crux of Myers’ objection seems to actually be about the interaction of the simulation with the environment. This objection makes much more sense. In the comments, Carl Shulman responds to Myers’ objection:

This seems to assume, contrary to the authors, running a brain model at increased speeds while connected to real-time inputs. For a brain model connected to inputs from a virtual environment, the model and the environment can be sped up by the same factor: running the exact same programs (brain model and environment) on a faster (serial speed) computer gets the same results faster. While real-time interaction with the outside would not be practicable at such speedup, the accelerated models could still exchange text, audio, and video files (and view them at high speed-up) with slower minds.

Here, there seems to be a simple misunderstanding on Myers’ part, where he is assuming that Whole Brain Emulations would have to be directly connected to real-world environments rather than virtual environments. The report (and years of informal discussion on WBE among scientists) more or less assumes that interaction with the virtual environment would be the primary stage in which the WBE would operate, with sensory information from an (optional) real-world body layered onto the VR environment as an addendum. As the report describes, “The environment simulator maintains a model of the surrounding environment, responding to actions from the body model and sending back simulated sensory information. This is also the most convenient point of interaction with the outside world. External information can be projected into the environment model, virtual objects with real world affordances can be used to trigger suitable interaction etc.”
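
Shulman’s point can be phrased as a property of co-simulation: if the brain model and the environment model advance in lockstep logical ticks, the interaction history is a function of the programs alone, not of the hardware’s speed. A toy sketch, with hypothetical function names:

```python
# A toy sketch of Shulman's point: a brain model and a virtual environment
# advancing in lockstep logical time produce the same interaction history
# on any hardware; faster hardware just produces it sooner.
def run(brain_step, env_step, ticks: int):
    """Advance brain and environment together, one logical tick at a time."""
    percept, history = None, []
    for t in range(ticks):
        action = brain_step(percept)   # the model reacts to the last percept
        percept = env_step(action)     # the environment reacts to the action
        history.append((t, action, percept))
    return history

# The returned history is identical on a 2 kHz machine and a 2 GHz machine;
# only the wall-clock time needed to produce it differs.
history = run(lambda p: ("act", p), lambda a: ("sense", a), ticks=10)
```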

It is unlikely that an arbitrary WBE would be running at a speed that lines it up precisely with the roughly 200 Hz peak firing rate of human neurons, the characteristic rate at which we think. More realistically, the emulation is likely to run much slower or much faster than the characteristic human rate, which occupies a tiny sliver in a wide expanse of possible mind-speeds. It would be far more reasonable — and just easier — to run the WBE in a virtual environment at a speed suited to its thinking speed. Otherwise, the WBE would perceive the world around it running at either a glacial pace or a hyper-accelerated one, and would have a difficult time making sense of either.
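
A back-of-the-envelope calculation makes the mismatch vivid. The numbers below are illustrative only:

```python
# If a WBE thinks k times faster than a biological brain but its environment
# runs in real time, the world appears k times slower to it. Matching the
# environment's speedup to k restores a normal subjective flow of time.
def perceived_env_rate(mind_speedup: float, env_speedup: float) -> float:
    return env_speedup / mind_speedup

print(perceived_env_rate(1_000_000, 1.0))        # 1e-06: a glacial world
print(perceived_env_rate(0.01, 1.0))             # 100.0: a hyper-accelerated blur
print(perceived_env_rate(1_000_000, 1_000_000))  # 1.0: feels normal
```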

Since the speed of the environment can be smoothly scaled with the speed of the WBE, the problems that Myers cites with respect to “turn[ing] it up to 11” can be duly avoided. If the mind is turned up to 11, which is perfectly possible given adequate computational resources, then the virtual environment can be turned up to 11 as well. After all, the computational resources required to simulate a detailed virtual environment would pale in comparison to those required to simulate the mind itself. Thus, the mind can be turned up to 11, 12, 13, 14, or far beyond with the push of a button, to whatever level the computing hardware can support. Given the historic progress of computing hardware, this may well eventually be thousands or even millions of times the human rate of thinking. Considering minds that think and innovate a million times faster than us might be somewhat intimidating, but there it is, a direct result of the many intriguing and counterintuitive consequences of physicalism.
