New clues on how the brain works.
A good review article in the recent 13 December issue of New Scientist
discusses the latest work on brain patterning.
I see some interesting implications for optimizing learning and training.
Walter Derzko
Director Idea Lab
Toronto
wderzko at pathcom.com
(416) 588-1122
==============================
[Archive: 13 December 1997]
Wild Minds
The idea of the brain as a computer has thoroughly seduced us. But,
says John McCrone, the old grey matter may be just too sloppy for
such a neat metaphor
STANDING by a pond at London Zoo, grabbing a moment to talk
shop with an American colleague on what was supposed to be a
family outing, Karl Friston tried to describe a new vision of the
brain. Traditional thinking held that the brain was some kind of
computer, crunching its way through billions of inputs each second,
outputting consciousness. But said Friston, a theoretical
neurobiologist at London's Institute of Neurology, it is more as if
the arrival of those inputs provokes a widespread disturbance in the
brain.
Look, Friston told his Harvard friend, the brain is like this pond.
You throw in a pebble--the sensory input--and you get ripples.
That's the neurons responding. Sure, the pattern says something
about the way the pebble hit the surface. But the pond is already
covered in ripples caused by other pebbles, so the pattern appears a
little chaotic. And then once the ripples spread out far enough to
begin bouncing off the sides, he continued, the shape of the pond
begins to affect what is going on. The whole thing keeps evolving
and becoming more complex.
Yes, replied his friend, nodding furiously, and as we throw more
and more pebbles--or rather experiences--into the pond, we change
the kind of patterns it produces, and even the shape of the pond
itself. This system has a memory!
In the early 90s, in hundreds of private conversations like this,
mind
scientists were groping their way towards a fresh view of the
brain--one based on the idea that mental states are dynamically
evolved rather than clinically computed. Back then, the arguments
were little more than hand-waving exercises. People were familiar
with the new ideas about chaos, complexity and nonlinear systems
coming out of places like the Santa Fe Institute, but unsure how
they applied to the brain. Today, however, the dynamic revolution
is beginning to roll. At workshops and meetings around the world,
researchers like Friston are talking publicly about dynamic models
of the brain, and the evidence to support the new theories is
beginning to fall into place.
A replacement for the brain-as-computer model certainly seems
overdue. The textbook view has been that brain cells are simple
logic gates, adding and subtracting input spikes until some
threshold
level of charge is breached, at which point they convulse to produce
a spike of their own. The all-or-nothing nature of a cell's firing
promised to lift neurons clear of the usual soupy sloppiness of
cellular processes, allowing the brain to carry out digitally crisp,
noise-free calculations.
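To make that textbook picture concrete, here is a rough sketch in Python
of such a threshold neuron: inputs are added and subtracted until a
threshold level of charge is breached, and the cell fires an all-or-nothing
spike. The weights, threshold and leak values are purely illustrative, not
physiological.

# The "textbook" neuron described above: excitatory and inhibitory inputs
# are summed until a threshold is crossed, at which point the cell emits
# an all-or-nothing spike and resets. All numbers are illustrative.

def run_threshold_neuron(input_spikes, weights, threshold=1.0, leak=0.9):
    """input_spikes: per-timestep tuples of 0/1 spikes, one per input line.
    weights: synaptic weights, positive (excitatory) or negative (inhibitory)."""
    charge = 0.0
    output = []
    for spikes in input_spikes:
        charge = leak * charge + sum(w * s for w, s in zip(weights, spikes))
        if charge >= threshold:          # all-or-nothing firing
            output.append(1)
            charge = 0.0                 # reset after the spike
        else:
            output.append(0)
    return output

# Two excitatory inputs and one inhibitory input, over five timesteps.
print(run_threshold_neuron(
    input_spikes=[(1, 0, 0), (1, 1, 0), (0, 1, 1), (1, 1, 0), (0, 0, 1)],
    weights=[0.6, 0.6, -0.5],
))   # [0, 1, 0, 1, 0]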
The task for researchers was simply to discover how the output of
each cell encoded a message. In a chase likened to the hunt to crack
the genetic code, neuroscientists became obsessed with finding the
"neural code". They tried to discover whether the message was
contained in the strength of a spike, the average number of spikes
produced each second, or in the timing of the firing, with
information carried only on those spikes which were synchronised
with spikes from other cells (see "Dot dot dot, dash dash dash",
New Scientist, 18 May 1996, p 40).
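To see what is at stake, here is a small, purely illustrative sketch in
Python (the spike trains are synthetic) contrasting two of the candidate
readouts: an average firing rate, and a count of spikes synchronised
between two cells.

import numpy as np

# Illustrative spike trains from two cells, in 1 ms bins over 200 ms.
rng = np.random.default_rng(0)
cell_a = rng.random(200) < 0.05     # roughly 50 spikes per second on average
cell_b = rng.random(200) < 0.05

# Readout 1: a rate code, where only the average number of spikes per
# second carries the message.
rate_a = cell_a.sum() / 0.2         # spikes divided by 0.2 s of recording
print(f"mean rate of cell A: {rate_a:.0f} spikes/s")

# Readout 2: a timing code, where only spikes landing in the same
# millisecond bin as a spike from the other cell carry information.
synchronous = np.logical_and(cell_a, cell_b).sum()
print(f"synchronised spikes: {synchronous}")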
But the neurons have proved slippery customers. "For 30 years
we've been going along quite nicely, with lots of expensive
equipment, lots of expensive people and lots of papers being
produced, but finally the answers aren't there. We can't even say
what it is about the spike train of an individual neuron that
counts,"
says Rodney Douglas of the Institute of Neuroinformatics in Zurich.
Much worse for the idea of a simple, crackable neural code is the
smattering of recent findings which show that the output of any
individual neuron also depends on what the brain happens to be
thinking at the time. It's as if rather than the spikes combining to
produce conscious awareness, consciousness is able to decide how
the cells should spike.
The search for the neural code began in earnest in the 1960s with
David Hubel and Torsten Wiesel's Nobel prizewinning
demonstration that certain cells in the primary visual cortex--the
first part of the higher brain to receive sensory input from the
eyes--fired only in response to the sight of a line or edge, indeed,
only to a line of the correct slope. The neurons represented every
possible orientation of a line at each point of the visual field,
and
were lined up in the brain like dots on a TV screen, creating a
physical "map" of the input from the eye.
Other researchers soon showed that cells in different areas of the
sensory cortex made maps of the frequencies of sound, and even,
in the case of touch, of the contours of the body. In fact, the
entire
wrinkled surface of the cortex seemed to be a mosaic of mapping,
with the primary sensory areas being the first rung of a hierarchy
of
processing. The primary maps were reworked, as the message from
one layer of cells, supposedly encoded in the neurons' spike trains,
fed into the next. So, for example, about halfway up the visual
hierarchy, cells might fire in response to movement in a certain
direction and with a certain speed, or a certain shape of a certain
colour. Sensory qualities began to emerge. At the very top of the
hierarchy, neurons would react only to complete objects--say, the
sight of a face or a hand. Each rung of the hierarchy was built on
the digital clarity of the spike pattern of the neurons below,
providing a way for the brain to compute a precise, conscious
representation of the real world. That, at least, was the theory.
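As a toy illustration of that theory, the sketch below wires up a strictly
feedforward hierarchy in which each rung fires only when its preferred
combination of the all-or-nothing outputs beneath it is present. The three
stages and their features are invented for the example.

# Stage 1: "edge" cells fire for lines of a particular orientation.
# Stage 2: "shape" cells fire when the right edges are present together.
# Stage 3: an "object" cell fires only when the whole configuration is there.
# Each stage sees nothing but the binary output of the stage below.

def edge_cells(orientations_in_view):
    return {o: (o in orientations_in_view)
            for o in ("horizontal", "vertical", "diagonal")}

def shape_cells(edges):
    return {
        "corner": edges["horizontal"] and edges["vertical"],
        "cross": edges["vertical"] and edges["diagonal"],
    }

def object_cell(shapes):
    # A made-up "box detector" at the top of the hierarchy.
    return shapes["corner"] and not shapes["cross"]

edges = edge_cells({"horizontal", "vertical"})
shapes = shape_cells(edges)
print(object_cell(shapes))   # True: the message has climbed to the top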
The problem was that most of the evidence for the theory came
from studies of anaesthetised animals whose heads had been
propped up in front of screens with their eyelids pinned back.
When, in the late 1980s, researchers developed techniques that
made it easier to record neural impulses from awake animals, the
story of brain cells as simple switches, hard-wired to respond to
this
line or that movement, changed dramatically.
Take an experiment reported in Nature last year by neuroscientists
John Maunsell, at Baylor College of Medicine in Houston, Texas,
and Stefan Treue of the University of Tübingen in Germany. They
studied those neurons about halfway up the visual hierarchy that
deal with motion, in monkeys trained to watch moving dots on a
screen. When the monkeys did not have to follow any dot in
particular, the motion cells simply burst into life each time they
spotted a dot heading in their preferred direction. But as soon as
the
monkeys were asked to concentrate on a single dot--they had been
trained to do this without moving their heads or their eyes--the
cells
became picky. When the target dot came into view, the cells went
wild, doubling their firing rate, while the response from the same
neurons to non-target dots moving in the correct direction became
weaker.
It all makes good psychological sense. The cells turn the volume up
in response to movement that is the focus of attention, and mute it
in response to other movement. But it also raises the question of
how the brain's mental state is managing to transmogrify the cell's
spike pattern.
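One simple way to picture the effect is as a gain applied to a fixed tuning
curve: the same direction-selective cell, its response roughly doubled when
its preferred motion is the focus of attention and damped otherwise. The
Gaussian tuning curve and the numbers in this sketch are an illustrative
guess, not the published model.

import numpy as np

def motion_cell_response(direction_deg, attended, preferred_deg=90.0, width=30.0):
    """Firing rate (spikes/s) of a hypothetical direction-tuned cell;
    `attended` marks whether this motion is the current focus of attention."""
    baseline = 5.0
    tuning = np.exp(-0.5 * ((direction_deg - preferred_deg) / width) ** 2)
    gain = 2.0 if attended else 0.7      # roughly "doubled" versus "weaker"
    return baseline + gain * 40.0 * tuning

# Same physical stimulus, different mental state:
print(motion_cell_response(90.0, attended=False))   # about 33 spikes/s
print(motion_cell_response(90.0, attended=True))    # about 85 spikes/s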
Spooky
Neuroscientists dread any hint that something spooky might be
going on. They try to slide past the problem of the brain's mental
state interfering with the clarity of the long-sought neural code
with
euphemisms such as "selective attention effects" or
"state-dependent modulations".
Yet Maunsell admits that his findings strike to the heart of the
idea
that the brain works as an input-driven machine: "We are coming to
the end of one generation of effort," he predicts. "The next
generation is going to have to look at the whole system [and]
understand the effect that plans, decisions and actions can have on
what neurons do."
Maunsell and Treue are not the only ones who have been backed
into a corner by their own data. A rash of similar findings is
emerging from labs run by the likes of Robert Desimone at the
National Institute of Mental Health near Washington DC and
Richard Andersen at Caltech in Pasadena. One team of researchers
has even found that cells right at the bottom of the visual
hierarchy--those that take the "freshest" input from the eyes and
might be expected to be least influenced by the brain's mental
state--are also at its mercy.
David Leopold and Nikos Logothetis, both also at Baylor, reported
in Nature last year the results of an experiment in which monkeys
looked through stereoscopic displays so that each eye saw a
different image--gratings angled in different directions. The brain
makes sense of such a conflict by allowing the view of one eye to
dominate: the monkey is consciously aware of seeing only a single
image.
According to the old view of the brain, the cortex cells that get
their
input direct from the eyes shouldn't be involved in the mental
jiggery-pokery that suppresses the image from one eye--it should
happen higher up the hierarchy. Instead, Leopold and Logothetis
found that the firing of about a fifth of cells in the primary
visual
cortex depended on which image the monkeys signalled they were
seeing. Even at the lowest level, there was an attention effect.
Booming with the enthusiasm of an outsider who is beginning to be
proved right, Scott Kelso, a dynamicist who studies the brain and
behaviour at Florida Atlantic University in Boca Raton, claims that
results like these will only make sense once the old notion of the
brain processing encoded messages through nothing more than a
hierarchy of inputs and outputs is abandoned. Instead, he says,
neuroscience must make a fresh start and recognise that the brain is
a dynamical system--an organ that evolves its patterns of activity
rather than computes them.
The very word "dynamic" strikes fear into the hearts of many
researchers, relying as it does on the maths of chaos and complexity
theory. Jargon such as "metastability", "critical boundaries" and
"loosely coupled attractors" litters the papers. Still, the
champions
of the dynamic view stress that a few simple ideas are key.
Bursting forth
First, says Kelso, stop thinking of neurons as if they are
exchanging
messages. Instead (to use another of the hydraulic metaphors
favoured by dynamicists), the spike patterns of a cell are like a
whorl erupting in moving water--a local expression of a much wider
balance of forces. After all, it is no secret that most of the 5000
input lines to the average brain cell are actually parts of feedback
loops returning via neighbouring neurons, or those higher up the
hierarchy. Barely a tenth of the connections come from sense
organs or mapping levels lower in the hierarchy. Every neuron is
plumbed into a sea of feedback. The signals coming up the chain
may provide the seed of a response, but in the end, the cell's spike
patterns evolve in concert with how the rest of the brain is
reacting
to the stimulus. The spike pattern is less a crisp code and more the
chatterings of a system forever moving towards an equilibrium.
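A small sketch of that idea, with all weights and rates invented: a toy
network in which each cell's firing is mostly set by feedback from its
neighbours, the sensory input acts only as a seed, and the response has to
be iterated forward in time until the whole system settles.

import numpy as np

rng = np.random.default_rng(1)
n = 8
feedback = 0.12 * rng.standard_normal((n, n))   # recurrent feedback weights
np.fill_diagonal(feedback, 0.0)
sensory = np.zeros(n)
sensory[2] = 1.0                                # the "pebble": one driven cell

rate = np.zeros(n)
for step in range(30):
    # Each update mixes a little sensory drive and a little background tone
    # with a lot of feedback, then squashes the result to keep rates bounded.
    rate = np.tanh(0.2 * sensory + feedback @ rate + 0.1)

# The settled pattern involves every cell, not just the one driven by input.
print(np.round(rate, 2))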
This is good, as it means there is nothing spooky about how
thoughts and intentions (that is, mental states) shape the activity
of a
neuron, and vice versa. But it does mean that levels of
consciousness matter, especially if you are trying to make sense of
a neuron's spike train.
When a cell is firing in relative isolation--for example, when an
animal is unconscious--its response will be at its most hard-wired,
a
simple sum of its sensory or lower inputs. Like a ringing phone, the
neuron will announce that it has a message, but no one lifts the
receiver to get the conversation going. But as the experiments with
wide-awake monkeys show, as soon as a cell becomes drawn into
some greater wave of processing, its firing appears far less
hard-wired. Of course, it takes time for the wave to build up, which
is why attention effects usually show up about a tenth of a second
behind the first exposure to the focus of attention.
The second crucial change needed in the thinking about neural
processing, say the dynamicists, is to realise that the brain is
always
in a state of tension, its circuits drawn tight like the surface of
Friston's pond. Computer analogies suggest that the brain is a blank
screen until cells fire to light up a picture. But almost every
brain
cell is constantly firing, a fact that has long troubled
neuroscientists.
There is a steady tick-over of at least three or four spikes a
second
even in an area of the brain that seems to be doing nothing. The
temptation is to dismiss this activity as meaningless, just a
leakage
of current. But dynamicists say the spikes bouncing around the
brain's connections must be maintaining it at a certain level of
tone,
giving each new input something to disturb in the first place.
Going a step further, they argue, this background firing presumably
creates some meaning. But what? The brain stores memories as
patterns of connections between cells--new experiences prompt the
strengthening of old connections, or the growth of new ones. The
tick-over firing echoing around the brain could be a defocused
representation of everything you have ever learnt or known. When
the brain processes new information, it is not a matter of lighting
up
dark circuits but of driving a generalised, weakly defined state of
representation towards a specific one. The brain is always on; it
just needs tuning in.
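One way to picture driving a weakly defined state towards a specific one is
an attractor network: memories are written into the connections, the
background state is a noisy blend of them, and a faint cue nudges the
ongoing activity until one stored pattern wins. The Hopfield-style sketch
below is offered only as an illustration of the idea, not as a claim about
how the cortex actually stores memories.

import numpy as np

rng = np.random.default_rng(2)
n = 64

# Two stored "memories": patterns of +1/-1 activity written into the weights.
memories = np.sign(rng.standard_normal((2, n)))
weights = (memories.T @ memories) / n
np.fill_diagonal(weights, 0.0)

# The background state: a weak, noisy blend of everything the network knows.
state = np.sign(0.3 * memories[0] + 0.3 * memories[1] + rng.standard_normal(n))

# A faint cue for memory 0 does not light up a dark circuit; it biases the
# already-active network until it settles into that stored pattern.
cue = 0.2 * memories[0]
for _ in range(20):
    state = np.sign(weights @ state + cue)

overlap = (state @ memories[0]) / n
print(f"overlap with the cued memory: {overlap:.2f}")   # typically close to 1.0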
Hot spots
As the message of the dynamicists begins to sink in, neuroscientists
are having to think again about the way they do experiments and
analyse their data. The most obvious change, says Friston, is that
researchers must allow enough time to get an accurate fix on what a
cell is up to. Indeed, to truly understand a cell's firing pattern,
you
need to know how far along its feedback trajectory it has gone. At
the moment, neuroscientists tend to concentrate on a cell's first
reaction to a stimulus rather than waiting another tenth of a second
or so until the feedback has had long enough to focus what the cell
is saying.
What's more, in a dynamic scheme, cells apparently saying nothing
(that show no change in firing rate and therefore go unreported
when the time comes to write up a research paper) are still
important. "Rather than talking about a hunt for the neural code, we
should be talking about a hunt for the metric--the right kind of
spatiotemporal measure to give the full picture of how a cell's
response evolves," Friston says.
Friston is as good as his word when it comes to his own interest in
human brain scanning. The standard approach to scanner studies is
to look for brain hot spots, the bits of the brain that have to work
the hardest when a subject does some mental or physical task. Like
20th-century phrenologists, researchers look for the brain bump that
"does" hand movements or mental imagery. But if the brain really
works by evolving patterns of connections, then it is how areas of
the brain, even those that appear quite faint on a scan, join
together
over time that tells the true story.
As part of a two-day symposium on dynamical neuroscience at this
October's meeting of the Society for Neuroscience in New Orleans,
Friston attempted to prove that the distinction between mapping
brain hot spots and patterns of connections is not purely academic.
He reported an analysis of brain scan data collected by
magnetoencephalography, using a method of correlation that
highlights increases and decreases in activity in different parts of
the
brain that occur over the same period. It turned out that high
activity in an area at the front of the brain called the prefrontal
cortex, and low activity in an area towards the back called the
parietal cortex, are tightly coupled just when the volunteer is
deciding to make small hand movements. Usual methods of analysis
would have missed the link. What's more, says Friston, it took a
twentieth of a second or more for this coupling to appear, a clear
sign that the connection had to evolve.
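This is not Friston's actual analysis, but the underlying idea, which is to
look at how activity in two regions rises and falls over the same period
rather than at isolated hot spots, can be sketched as a simple correlation
between two regional time courses. The signals, noise levels and timings
below are entirely invented.

import numpy as np

# Two invented regional time courses sampled every 5 ms (synthetic data).
rng = np.random.default_rng(3)
t = np.arange(0, 2.0, 0.005)                   # 2 seconds of "recording"
task = (t > 1.0).astype(float)                 # the decision to move at t = 1 s

prefrontal = 0.8 * task + 0.2 * rng.standard_normal(t.size)   # activity rises
parietal = -0.6 * task + 0.2 * rng.standard_normal(t.size)    # activity falls

# A hot-spot analysis looks at each region alone; the dynamic question is how
# they co-vary over the same period. A strong negative correlation says the
# rise in one region is coupled to the fall in the other.
coupling = np.corrcoef(prefrontal, parietal)[0, 1]
print(f"prefrontal-parietal coupling: {coupling:.2f}")   # clearly negative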
For now, however, dynamicists like Friston and Kelso are keeping a
sense of perspective. They know that convincing mainstream
neurobiologists to stop looking for machine-like order in a
biological
organ that thrives on the creative energy of chaos and feedback is
going to take more than a few experiments and lots of enthusiasm.
As Kelso said after the New Orleans symposium: "If we are serious
about the brain as a self-organising system, then we need new tools,
new concepts, a new language. Even the way we measure the brain
has to be different." That process has started, and once it is
complete, the dynamicists say, neuroscience's golden age of
discovery will be ready to begin.