
No subject


Sun Apr 10 21:25:34 EST 2005


Here are some thoughts:

One general point is that I'm not entirely sure what is meant by the
global/local distinction. Certainly action at a distance can't take
place; something physical must happen to the cell/connection in
question in order for it to change. As I understand it, the
prototypical local learning rule is a Hebbian rule, where all the
information specifying plasticity is in the pre- and post-synaptic
cells (i.e. "local" to the connection), while a global learning rule
is mediated by something distal to the cell in question (i.e. a
neuromodulatory signal). But of course the signal must contact the
actual cell via diffusion of a chemical substance (e.g. dopamine). So
a different distinction might be how specific the signal is: in a
local rule like LTP the information acts only on the single
connection, while a modulatory signal could change all the
connections in an area by a similar amount. However, the effects of a
neuromodulator could in turn be modulated by the current state of the
connection - hence a global signal might act very differently at each
connection, which would make the global signal seem local. So I'm not
sure the distinction is clear-cut. Maybe it's better to consider a
continuum of physical distance of the signal to change and
specificity of the signal at individual connections.
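
To make the contrast concrete, here is a minimal sketch (a toy
illustration of my own in Python/NumPy, not anyone's published model):
a two-factor Hebbian update that uses only pre- and post-synaptic
activity, next to a three-factor update in which a single broadcast
modulator is gated by a locally stored eligibility trace.

import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    # Purely local two-factor rule: each weight changes based only on
    # the activity of its own pre- and post-synaptic cells.
    return w + lr * np.outer(post, pre)

def neuromodulated_update(w, eligibility, modulator, lr=0.01):
    # Three-factor rule: one broadcast scalar (e.g. a dopamine-like
    # signal) is identical for every synapse, but its effect is gated
    # by a locally stored eligibility trace, so the "global" signal
    # can end up acting differently at each connection.
    return w + lr * modulator * eligibility

rng = np.random.default_rng(0)
pre, post = rng.random(4), rng.random(3)
w = np.zeros((3, 4))
w_local = hebbian_update(w, pre, post)
eligibility = np.outer(post, pre)   # computable from local activity alone
w_global = neuromodulated_update(w, eligibility, modulator=1.0)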

A couple of specific comments follow:

> A) Does plasticity imply local learning?
>
> The physical changes that are observed in synapses/cells in
> experimental neuroscience when some kind of external stimulus is
> applied to the cells may not result at all from any specific
> "learning" at the cells. The cells might simply be responding to a
> "signal to change" - that is, to change by a specific amount in a
> specific direction. In animal brains, it is possible that the
> "actual" learning occurs in some other part(s) of the brain, say
> perhaps by a global learning mechanism. This global mechanism can
> then send "change signals" to the various cells it is using to
> learn a specific task. So it is possible that in these neuroscience
> experiments, the external stimulus generates signals for change
> similar to those of a global learning agent in the brain and that
> the changes are not due to "learning" at the cells themselves.
> Please note that scientific facts/phenomena like LTP/LTD or
> synaptic plasticity can probably be explained equally well by many
> theories of learning (e.g. local learning vs. global learning,
> etc.). However, the correctness of an explanation would have to be
> judged from its consistency with other behavioral and biological
> facts, not just "one single" biological phenomenon or fact.

I think it would be difficult to explain the actual phenomenon of
LTP/LTD as a response to some signal sent by a different part of the
brain, since a good amount of the evidence comes from in vitro work.
So clearly the "change signals" can't be coming from some distant
part of the brain - unless the slices contain the necessary machinery
for generating the change signal. Also, it's of course possible that
LTP/LTD local learning rules act in concert with global signals (as
you mention below), these global signals being sent by nonspecific
neuromodulators (an idea brought up plenty of times before). I'm not
sure about the differences in the LTP/LTD data collected in vivo
versus in vitro; I'm sure there are people out there studying it
carefully, and this could provide insight.

>
> B) "Pure" local learning does not explain a number of other
> activities that are part of the process of learning!!
>
> When learning is to take place by means of "local learning" in a
> network of cells, the network has to be designed prior to its
> training. Setting up the net before "local" learning can proceed
> implies that an external mechanism is involved in this part of the
> learning process. This "design" part of learning precedes actual
> training or learning by a collection of "local learners" whose only
> knowledge about anything is limited to the local learning law to
> use!

Of course, changing connection strengths seems to be the last phase
of the "learning/development" process. Correct numbers of cells need
to be generated, they have to get to their correct locations, proper
connections between subpopulations need to be established and
refined, and only at this point is there a substrate for "local"
learning. All of these can be affected to a certain extent by the
environment. For example, the number of cells in the spinal cord
innervating a peripheral target can be downregulated with limb bud
ablation; conversely, the final number can be upregulated with
supernumerary limb grafts. Another well-known example is the
development of ocular dominance columns. Here, physical connections
can be removed (in normal development), or new connections can be
established (evidence for this from the reverse suture experiments),
depending on the given environment. What would be quite interesting
would be if all these developmental phases were guided by similar
principles, but acting over different spatial and temporal scales,
and mediated by different carriers (e.g. chemical versus electrical
signals). Alas, if only I had a well-articulated, cogent principle in
hand with which to unify these disparate findings; my first Nobel
prize would be forthcoming. In lieu of this, we're stuck with my
ramblings.

>
> In order to learn properly and quickly, humans generally collect
> and store relevant information in their brains and then "think"
> about it (e.g. what problem features are relevant, problem
> complexity, etc.). So prior to any "local learning," there must be
> processes in the brain that examine this "body of
> information/facts" about a problem in order to design the
> appropriate network that would fit the problem complexity, select
> the problem features that are meaningful, etc. It would be very
> difficult to answer the questions "What size net?" and "What
> features to use?" without looking at the problem in great detail.
> A bunch of "pure" local learners, armed with their local learning
> laws, would have no clue about these issues of net design,
> generalization and feature selection.
>
> So, on the whole, there are a "number of activities" that need to
> be performed before any kind of "local learning" can take place.
> These aforementioned learning activities "cannot" be performed by a
> collection of "local learning" cells! There is more to the process
> of learning than simple local learning by individual cells. Many
> learning "decisions/tasks" must precede actual training by "local
> learners." A group of independent "local learners" simply cannot
> start learning and be able to reproduce the learning
> characteristics and processes of an "autonomous system" like the
> brain.
>
> Local learning, however, is still a feasible idea, but only within
> a general global learning context. A global learning mechanism
> would be the one that "guides" and "exploits" these local learners.
> However, it is also possible that the global mechanism actually
> does all of the computations (learning) and "simply sends signals"
> to the network cells for appropriate synaptic adjustment. Both of
> these possibilities seem logical: (a) a "pure" global mechanism
> that learns by itself and then sends signals to the cells to
> adjust, or (b) a global/local combination where the global
> mechanism performs certain tasks and then uses the local mechanism
> for training/learning.
>
> Note that the global learning mechanism may actually be implemented
> with a collection of local learners!!
>

Notwithstanding the last remark, the above paragraphs perhaps run the
risk of positing a little global homunculus that "does all the
computations" and simply "sends signals" to the cells. I might be
confused by the distinction between local and global learning. All we
have to work with are cells that change their properties based on
signals impinging upon them, be they chemical or electrical and
originating near or far from the synapse, so it seems that a "global"
learning mechanism *must* be implemented by local learners. (Again,
if by local you specifically mean LTP/LTD or something similar, then
I agree - other mechanisms are also at work.)


> The basic argument being made here is that there are many tasks in
> a "learning process" and that a set of "local learners" armed with
> their local learning laws is incapable of performing all of those
> tasks. So local learning can only exist in the context of global
> learning and thus is only "a part" of the total learning process.
>
> It will be much easier to develop a consistent learning theory
> using the global/local idea. The global/local idea perhaps will
> also give us a better handle on the processes that we call
> "developmental" and "evolutionary."

One last comment. I'm not sure that the "developmental" vs.
"learning" distinction is meaningful, either (I'm not hacking on your
statements above, Asim; I think this distinction is more or less a
tacit assumption in pretty much all neuroscience research). I read
these as roughly equivalent to "nature vs. nurture" or "genetics vs.
environment". I would claim that to say that any phenomenon is
controlled by "genetics" is a scientifically meaningless statement.
The claim that such-and-such a phenomenon is genetic is the modern
equivalent of saying "The thing is there because that's how God made
it". Genes don't code for behavioral or physical attributes per se;
they are simply strings of DNA which code for different proteins.
Phenotypes can only arise from the genetic "code" by a complex
interaction between cells and signals from their environment. Now
these signals can be generated by events outside the organism or
within the organism, and I would say that the distinction between
development and learning is better thought of as whether the signals
for change arise wholly within the organism or at least in part from
outside the organism. Any explanation of either learning or
development has to be couched in terms of what the relevant signals
are and how they affect the system in question.

anthony

============================================================
From:   Russell Anderson, Ph.D.
	Smith-Kettlewell Eye Research Institute
	anderson at skivs.ski.org

I read over the replies you received with interest.

1. In regards to Response #1 (j. Faith)

I am not sure how relevant canalization is to your essay, but I wrote
a paper on the topic a few years back:
  "Learning and Evolution: A Quantitative Genetics Approach"
   J. Theor. Biol. 175:89-101 (1995).
Incidentally, the phenomenon known as "canalization" was described
much earlier by Baldwin, Osborn, and Morgan (in 1896), and is more
generally known as the "Baldwin effect." If you're interested, I
could mail you a copy.

2. I take issue with the analogies used by Brendan McCane.
His analogy of insect colonies is confused or irrelevant:

First, the behavior of insects, for the purpose of this argument,
does not indicate any individual (local) learning. Hence, the analogy
is inappropriate.

Second, the "global" learning occurring in the case of insect
colonies operates at the level of natural selection acting on the
genes, transmitted by the surviving colonies to new founding queens.
In this sense, individual ants are genetically ballistic ("pure
developmental"). The genetics of insect colonies are well studied in
evolutionary biology, and he should be referred to any standard text
on the topic (Dawkins, Dennett, Wilson, etc.).

The analogy using computer science metaphors is likewise flawed or 
off-the-subject.

=============================================================
From:   Steven M. Kemp                |
	Department of Psychology      | email:  steve_kemp at unc.edu
	Davie Hall, CB# 3270          |
	University of North Carolina  |
	Chapel Hill, NC 27599-3270    |   fax: (919) 962-2537

I do not know if it is quite on point, but Larry Stein at the
University of California at Irvine has done fascinating work on a
very different type of neural plasticity called In-Vitro
Reinforcement (IVR). I have been working on neural networks whose
learning algorithm is based on his data and theory. I don't know
whether you would call those networks "local" or "global," but they
do have the interesting characteristic that all the units in the
network receive the same globally distributed binary reinforcement
signal. That is, feedback is not passed along the connections, but
distributed simultaneously and equally across the network, after the
fashion of nondirected dopamine release from the ventral tegmental
projections.
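
For what it's worth, here is a toy sketch of that property (purely
illustrative and not Stein's IVR model; the function name and the
perturbation scheme are my own assumptions): every weight change is
driven by a single broadcast binary reinforcement value rather than
by error passed along the connections.

import numpy as np

rng = np.random.default_rng(1)

def broadcast_reinforcement_step(w, x, target, lr=0.05, noise=0.1):
    # Illustrative only (not Stein's IVR model): the whole network is
    # evaluated once, and every weight receives the same +1/-1 signal.
    trial = w + rng.normal(scale=noise, size=w.shape)
    err_old = np.abs(np.tanh(w @ x) - target).sum()
    err_new = np.abs(np.tanh(trial @ x) - target).sum()
    r = 1.0 if err_new < err_old else -1.0   # one global binary feedback
    return w + lr * r * (trial - w)

w = rng.normal(size=(2, 3))
x, target = rng.random(3), np.array([0.2, -0.4])
for _ in range(200):
    w = broadcast_reinforcement_step(w, x, target)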

In any event, I will forward the guts of a recent proposal we have
written here to give you a taste of the issues involved. I will be
happy to provide more information on this research if you are
interested.

(Steven Kemp did mail me parts of a recent proposal. It is long, so 
I did not include it in this posting. Feel free to write to him or 
me for a copy of it.)

============================================================
From:	"K. Char" <kchar at elec.gla.ac.uk>

I have a few quick comments:

1. The answer to some parts of the discussions seems to lie in the
notion of a *SEQUENCE*. That is: global->local->(final) global;
clearly the initial global is not the same as the final global. Some
of the discussants seem to prefer the sequence: local->global. A
number of such possibilities exist.

2. The next question is: who dictates the sequence? Is it a global
mechanism or a local mechanism?

3. In the case of the bee, though it had an individual goal, how was
this goal arrived at?

4. In the context of neural networks (artificial or real): who 
dictates the node activation functions, the topology and the 
learning rules? Does every node find its own activation function?

5. Finally how do we form concepts?  Do the concepts evolve as a 
result of local interactions at the neuron  level or through the 
interaction  of micro-concepts at a global level which then trigger 
a local  mechanism?

6. Here the  next question could be: how did these micro-concepts
evolve in the very first place?

7. Is it possible that these  neural structures provide the *very 
motivation*  for the  formation of concepts at the global level in 
order to adapt these structures effectively? If so, does this 
motivation arise from the environment itself?

============================================================
Response # 1:

As you mention, neuroscience tends to equate network plasticity with
learning. Connectionists tend to do the same. However, this raises a
problem with biological systems because it conflates the processes of
development and learning. Even the smartest organism starts from an
egg, and develops for its entire lifespan - how do we distinguish
which changes are learnt, and which are due to development? No one
would argue that we *learn* to have a cortex, for instance, even
though it is due to massive embryological changes in the central
nervous system of the animal.

This isn't a problem with artificial nets, because they do not
usually have a true developmental process and so there can be no
confusion between the two; but it has been a long-standing problem in
the ethology literature, where learnt changes are contrasted with
"innate" developmental ones. A very interesting recent contribution
to this debate is Andre Ariew's "Innateness and Canalization", in
Philosophy of Science 63 (Proceedings), in which he identifies
non-learnt changes as being due to canalised processes. Canalization
was a concept developed by the biologist Waddington in the 1940s to
describe how many changes seem to have fixed end-goals that are
robust against changes in the environment.

The relationship between development and learning was also thoroughly
explored by Vygotsky (see collected works vol 1, pages 194-210).

I'd like to see what other sorts of responses you get,

Joe Faith <josephf at cogs.susx.ac.uk>
Evolutionary and Adaptive Systems Group,
School of Cognitive and Computing Sciences,
University of Sussex, UK.

=================================================================
Response # 2:

I fully agree with you that local learning is not the one and only
ultimate approach - even though it results in very good learning for
some domains.

I am currently writing a paper on the competitive learning paradigm.
I am proposing that this competition, which occurs e.g. among
neurons, should be called local competition. The network as a whole
gives a global common goal to these local competitors, and thus their
competition must be regarded as cooperation from a more global point
of view.

There is a nice paper by Kenton Lynne that integrates the ideas of
reinforcement and competition. When external evaluations are present,
they can serve as teaching values; if not, the neurons compete
locally.

@InProceedings{Lynne88,
  author =    {K.J.\ Lynne},
  title =     {Competitive Reinforcement Learning},
  booktitle = {Proceedings of the 5th International Conference on
               Machine Learning},
  year =      {1988},
  publisher = {Morgan Kaufmann},
  pages =     {188--199}
}
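
As a rough illustration of the idea described above (a sketch of
local competition with an optional external teaching value; this is
my own toy code, not Lynne's published algorithm):

import numpy as np

def competitive_step(W, x, teacher=None, lr=0.1):
    # If an external evaluation names a unit, it acts as the teaching
    # value; otherwise the units compete locally and the best-matching
    # unit adapts (winner-take-all).
    if teacher is not None:
        winner = teacher
    else:
        winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    W[winner] += lr * (x - W[winner])   # move the winner toward the input
    return W

rng = np.random.default_rng(2)
W = rng.random((4, 2))                  # 4 competing units, 2-d inputs
for x in rng.random((50, 2)):
    W = competitive_step(W, x)          # unsupervised: local competition only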
----------------------------------------------------------
Christoph Herrmann                     Visiting researcher
Hokkaido University
Meme Media Laboratory
Kita 13 Nishi 8, Kita-          Tel: +81 - 11 - 706 - 7253
Sapporo 060                     Fax: +81 - 11 - 706 - 7808
Japan                      Email: chris at meme.hokudai.ac.jp
http://aida.intellektik.informatik.th-darmstadt.de/~chris/
=============================================================

Response #3:

I've just read your list of questions on local vs. global learning
mechanisms. I think I'm sympathetic to the implications or
presuppositions of your questions but need to read them more
carefully later. Meanwhile, you might find very interesting a
two-part article on such a mechanism by Peter G. Burton in the 1990
volume of _Psychobiology_ 18(2), 119-161 & 162-194.

Steve Chandler					
<chandler at uidaho.edu>
===============================================================

Response #4:

A few years back, I wrote a review article on issues of local versus
global learning w.r.t. synaptic plasticity. (Unfortunately, it has
been "in press" for nearly 4 years.) Below is an abstract. I can
email the paper to you in TeX or postscript format, or mail you a
copy, if you're interested.

Russell Anderson
------------------------------------------------

"Biased Random-Walk Learning:
A Neurobiological Correlate to Trial-and-Error"
(In press: Progress in Neural Networks)

Russell W. Anderson
Smith-Kettlewell Eye Research Institute
2232 Webster Street
San Francisco, CA  94115
Office: (415) 561-1715
FAX:    (415) 561-1610
anderson at skivs.ski.org

Abstract:
Neural network models offer a theoretical testbed for the study of 
learning at the cellular level. The only experimentally verified 
learning rule, Hebb's rule, is extremely limited in its ability to 
train networks to perform complex tasks.
An identified cellular mechanism responsible for Hebbian-type 
long-term potentiation, the NMDA receptor, is highly versatile.  
Its function and efficacy are modulated by a wide variety of 
compounds and conditions and are likely to be directed by non-local 
phenomena. Furthermore, it has been demonstrated that NMDA 
receptors are not essential for some types of learning. We have 
shown that another neural network learning rule, the chemotaxis 
algorithm, is theoretically much more powerful than Hebb's rule and 
is consistent with experimental data. A biased random-walk in 
synaptic weight space is a learning rule immanent in nervous 
activity and may account for some types of learning -- notably the 
acquisition of skilled movement.
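
As a back-of-the-envelope illustration of a biased random walk in
weight space (my own toy sketch of the general idea, not the exact
algorithm analyzed in the paper): keep stepping in the current random
direction while the error drops, and "tumble" to a new random
direction when it does not, much like bacterial chemotaxis.

import numpy as np

rng = np.random.default_rng(3)

def loss(w, X, y):
    return np.mean((np.tanh(X @ w) - y) ** 2)

def chemotaxis_train(w, X, y, steps=500, step_size=0.05):
    # Biased random walk in synaptic weight space: keep the current
    # random direction while the error decreases ("run"), otherwise
    # draw a new random direction ("tumble").
    direction = rng.normal(size=w.shape)
    best = loss(w, X, y)
    for _ in range(steps):
        trial = w + step_size * direction
        trial_loss = loss(trial, X, y)
        if trial_loss < best:
            w, best = trial, trial_loss
        else:
            direction = rng.normal(size=w.shape)
    return w

X = rng.random((20, 3))
y = np.sin(X.sum(axis=1))
w = chemotaxis_train(rng.normal(size=3), X, y)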

==========================================================
Response #5:

Asim Roy typed ...
>
> B) "Pure" local learning does not explain a number of other
> activities that are part of the process of learning!!
....
>
> So, on the whole, there are a "number of activities" that need to
> be performed before any kind of "local learning" can take place.
> These aforementioned learning activities "cannot" be performed by a
> collection of "local learning" cells! There is more to the process
> of learning than simple local learning by individual cells. Many
> learning "decisions/tasks" must precede actual training by "local
> learners." A group of independent "local learners" simply cannot
> start learning and be able to reproduce the learning
> characteristics and processes of an "autonomous system" like the
> brain.

I cannot see how you can prove the above statement (particularly the
last sentence). Do you have any proof? By analogy, consider many
insect colonies (bees, ants, etc.). No one could claim that one of
the insects has a global view of what should happen in the colony.
Each insect has its own purpose and goes about that purpose without
knowing the global purpose of the colony. Yet an ants' nest does get
built, and the colony does survive. Similarly, it is difficult to
claim that evolution has a master plan; order just seems to develop
out of chaos.

I am not claiming that one type of learning (local or global) is
better than another, but I would like to see some evidence for your
somewhat outrageous claims.

> Note that the global learning mechanism may actually be implemented
> with a collection of local learners!!

You seem to contradict yourself here. You first say that local
learning cannot cope with many problems of learning, yet global
learning can. You then say that global learning can be implemented
using local learners. This is like saying that you can implement
things in C that cannot be implemented in assembly!! It may be more
convenient to implement it in C (or using global learning), but that
doesn't make it impossible for assembly.
-------------------------------------------------------------------
Brendan McCane, PhD.                Email: mccane at cs.otago.ac.nz
Comp.Sci. Dept., Otago University,  Phone: +64 3 479 8588.
Box 56, Dunedin, New Zealand.       There's only one catch - Catch 22.
===============================================================

Response #6:

In regards to arguments against global learning: I think no one
seriously questions this possibility, but many think that global
learning theories are currently non-verifiable/non-falsifiable. Part
of the point of my paper was that there ARE ways to investigate
non-local learning, but it requires changes in current experimental
protocols.

Anyway, good luck. I look forward to seeing your compilation.

Russell Anderson
2415 College Ave. #33
Berkeley, CA  94704
==============================================================

Response #7:

	I am sorry that it has taken so long for me to reply to your
inquiry about plasticity and local/global learning. As I mentioned in
my first note to you, I am sympathetic to the view that learning
involves some sort of overarching, global mechanism even though the
actual information storage may consist of distributed patterns of
local information. Because I am sympathetic to such a view, it is
very difficult for me to try to imagine and anticipate the problems
for such views. That's why I am glad to see that you are explicitly
trying to find people to point out possible problems; we need the
reality check.
	The Peter Burton articles that I have sent you describe
exactly the kind of mechanism implied by your first question: Does
plasticity imply local learning? Burton describes a neurological
mechanism by which local learning could emerge from a global signal.
Essentially he posits that whenever the new perceptual input being
attended to at any given moment differs sufficiently from the record
of previously recorded experiences to which that new input is being
compared, the difference triggers a global "proceed-to-store" signal.
This signal creates a neural "snapshot" (my term, not Burton's) of
the cortical activations at that moment, a global episodic memory
(subject to stimulus sampling effects, etc.). Burton goes on to
describe how discrete episodic memories could become associated with
one another so as to give rise to schematic representations of
percepts (personally I don't think that positing this abstraction
step is necessary, but Burton does it).
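
A toy rendering of that "proceed-to-store" idea as I read it (an
illustration only, not Burton's actual model; the threshold and the
distance measure are arbitrary assumptions): when the current pattern
differs enough from every stored episode, a global signal fires and
the whole activation pattern is stored as a new episodic memory.

import numpy as np

def proceed_to_store(activations, memory, threshold=0.5):
    # If the current activation pattern is close to some stored episode,
    # nothing happens; otherwise a single global signal triggers a
    # "snapshot" of the whole pattern into episodic memory.
    if memory:
        mismatch = min(np.linalg.norm(activations - m) for m in memory)
        if mismatch <= threshold:
            return False                  # familiar input: nothing stored
    memory.append(activations.copy())     # global "proceed-to-store" fires
    return True

rng = np.random.default_rng(4)
memory = []
stored = [proceed_to_store(rng.random(8), memory) for _ in range(10)]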
	As neuroscientists sometimes note, while it is widely assumed
that LTP/LTD are local learning mechanisms, the direct evidence for
such a hypothesis is pretty slim at best. Of course one of the most
serious problems with that view is that the changes don't last very
long and thus are not really good candidates for long-term (i.e.,
life-long) memory. Now, to my mind, one of the most important
possibilities overlooked in LTP studies (inherently so in all in
vitro preparations and, so far as I know - which is not very far
because this is not my field - in the in vivo preparations that I
have read about) is that LTP/D is either an artifact of the
experiment or some sort of short-term change which requires a global
signal to become consolidated into a long-term record. Burton
describes one such possible mechanism.
	Another motivation for some sort of global mechanism comes
from the so-called 'binding problem' addressed especially by the
Damasios, but others too. Somehow, somewhere, all the distributed
pieces of information about what an orange is, for example, have to
be tied together. A number of studies of different sorts have
demonstrated repeatedly that such information is distributed
throughout cortical areas.
	Burton distinguishes between "perceptual learning", requiring
no external teacher (either locally or globally), and "conceptual
learning", which may require the assistance of a 'teacher'. In his
model, though, both types of learning are activated by global
"proceed-to-learn" signals triggered in turn by the global summation
of local disparities between remembered episodes and current input.
	I'll just mention in closing that I am particularly
interested in the empirical adequacy of neuropsychological accounts
such as Burton's because I am very interested in "instance-based" or
"exemplar-based" models of learning. In particular, Royal Skousen's
_Analogical Modeling of Language_ (Kluwer, 1989) describes an
explicit, mathematical model for predicting new behavior on analogy
to instances stored in long-term memory. Burton's model suggests a
possible neurological basis for such behavior.

==============================================================
Response #8:

*******************************************************************
	 Fred Wolf               E-Mail: fred at chaos.uni-frankfurt.de
    Institut fuer Theor. Physik
      Robert-Mayer-Str. 8               Tel: 069/798-23674
    D-60 054 Frankfurt/Main 11          Fax: (49) 69/798-28354
	    Germany
*******************************************************************

Could you please point me to a few neuroBIOLOGICAL references that
justify your claim that
>
> A predominant belief in neuroscience is that synaptic plasticity
> and LTP/LTD imply local learning (in your sense).
>

I think many people appreciate that real learning implies the
concerted interplay of a lot of different brain systems, and that one
should not even attempt to explain it with "isolated local learners"
alone. See e.g. the series of review papers on memory in a recent
volume of PNAS 93 (1996) (http://www.pnas.org/).

Good luck with your general theory of global/local learning.

best wishes 
Fred Wolf
==============================================================

Response #9:

I have been working in neurocomputing for several years. I read your
arguments with interest. They certainly deserve further attention.
Perhaps some combination of global-local learning agents would be the
right choice.

- Vassilis G. Kaburlasos
Aristotle University of Thessaloniki, Greece

==============================================================
===============================================================

Original Memo:

A predominant belief in neuroscience is that synaptic plasticity 
and LTP/LTD imply local learning. It is a possibility, but it is 
not the only possibility. Here are some thoughts on some of the 
other possibilities (e.g. global learning mechanisms or a 
combination of global/local mechanisms) and some discussion on the 
problems associated with "pure" local learning. 

The local learning idea is a core idea that drives research in a
number of different fields. I welcome comments on the questions and
issues raised here.

This note is being sent to many listserves. I will collect all of 
the responses from different sources and redistribute them to all 
of the participating listserves. The last such discussion was very 
productive. It has led to the realization by some key researchers 
in the connectionist area that "memoryless" learning perhaps is not 
a very "valid" idea. That recognition by itself will lead to more 
robust and reliable learning algorithms in the future. Perhaps a 
more active debate on the local learning issue will help us resolve 
this issue too.

A) Does plasticity imply local learning? 

The physical changes that are observed in synapses/cells in
experimental neuroscience when some kind of external stimulus is
applied to the cells may not result at all from any specific
"learning" at the cells. The cells might simply be responding to a
"signal to change" - that is, to change by a specific amount in a
specific direction. In animal brains, it is possible that the
"actual" learning occurs in some other part(s) of the brain, say
perhaps by a global learning mechanism. This global mechanism can
then send "change signals" to the various cells it is using to learn
a specific task. So it is possible that in these neuroscience
experiments, the external stimulus generates signals for change
similar to those of a global learning agent in the brain and that the
changes are not due to "learning" at the cells themselves.

Please note that scientific facts and phenomena like LTP/LTD or
synaptic plasticity can probably be explained equally well by many
theories of learning (e.g. local learning vs. global learning, etc.).
However, the correctness of an explanation would have to be judged
from its consistency with other behavioral and biological facts, not
just "one single" biological phenomenon or fact.

B) "Pure" local learning does not explain a number of other 
"activities" that are part of the process of learning!! 

When learning is to take place by means of "local learning" in a
network of cells, the network has to be designed prior to its
training. Setting up the net before "local" learning can proceed
implies that an external mechanism is involved in this part of the
learning process. This "design" part of learning precedes actual
training or learning by a collection of "local learners" whose only
knowledge about anything is limited to the local learning law to use!
In addition, these "local learners" may have to be told what type of
local learning law to use, given that a variety of different types
can be used under different circumstances. Who is to "instruct and
set up" such local learners as to which type of learning law to use?
In addition, the "passing" of appropriate information to the
appropriate set of cells also has to be "coordinated" by some
external or global learning mechanism. This coordination cannot just
happen by itself, like magic. It has to be directed from some place
by some agent or mechanism.

In order to learn properly and quickly, humans generally collect and
store relevant information in their brains and then "think" about it
(e.g. what problem features are relevant, complexity of the problem,
etc.). So prior to any "local learning," there must be processes in
the brain that "examine" this "body of information/facts" about a
problem in order to design the appropriate network that would fit the
problem complexity, select the problem features that are meaningful,
etc. It would be very difficult to answer the questions "What size
net?" and "What features to use?" without looking at the problem
(body of information) in great detail. A bunch of "pure" local
learners, armed with their local learning laws, would have no clue
about these issues of net design, generalization and feature
selection.

So, on the whole, there are a "number of activities" that need to be
performed before any kind of "local learning" can take place. These
aforementioned learning activities "cannot" be performed by a
collection of "local learning" cells! There is more to the process of
learning than simple local learning by individual cells. Many
learning "decisions/tasks" must precede actual training by "local
learners." A group of independent "local learners" simply cannot
start learning and be able to reproduce the learning characteristics
and processes of an "autonomous system" like the brain.

Local learning or local computation, however, is still a feasible
idea, but only within a general global learning context. A global
learning mechanism would be the one that "guides" and "exploits"
these local learners or computational elements. However, it is also
possible that the global mechanism actually does all of the
computations (learning) and "simply sends signals" to the network of
cells for appropriate synaptic adjustment. Both of these
possibilities seem logical: (a) a "pure" global mechanism that learns
by itself and then sends signals to the cells to adjust, or (b) a
global/local combination where the global mechanism performs certain
tasks and then uses the local mechanism for training/learning.
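
To make possibility (b) concrete, here is a toy two-stage sketch
(entirely my own construction, offered only to make the division of
labor explicit, not a claim about how the brain does it): a "global"
stage that looks at the whole body of data to pick features before
any training, and a "local" stage in which each weight is adjusted by
a simple delta rule using only its own input and the unit's error.

import numpy as np

rng = np.random.default_rng(5)

def global_design(X):
    # Toy "global" stage: inspect the whole body of data to decide
    # which features to use before any local weight adjustment starts.
    return np.var(X, axis=0) > 0.01        # crude feature-selection mask

def local_training(X, y, keep, epochs=300, lr=0.1):
    # Toy "local" stage: a single layer of weights trained with the
    # delta rule on the features chosen by the global stage.
    Xs = X[:, keep]
    w = np.zeros(Xs.shape[1])
    for _ in range(epochs):
        err = Xs @ w - y
        w -= lr * Xs.T @ err / len(y)
    return w

X = rng.random((100, 6))
y = X[:, 0] - X[:, 1]
keep = global_design(X)                    # global design decisions first
w = local_training(X, y, keep)             # then local learning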

Thus note that the global learning mechanism may actually be 
implemented with a collection of local learners or computational 
elements!! However, certain "learning decisions" are made in the 
global sense and not by "pure" local learners.

The basic argument being made here is that there are many tasks in 
a "learning process" and that a set of "local learners" armed with 
their local learning laws is incapable of performing all of those 
tasks. So local learning can only exist in the context of global 
learning and thus is only "a part" of the total learning process. 

It will be much easier to develop a consistent learning theory 
using the global/local idea.  The global/local idea perhaps will 
also give us a better handle on the processes that we call 
"developmental" and "evolutionary." And it will, perhaps, allow us 
to better explain many of the puzzles and inconsistencies in our 
current body of discoveries about the brain. And, not the least, it 
will help us construct far better algorithms by removing the 
"unwarranted restrictions" imposed on us by the current ideas. Any 
comments on these ideas and possibilities are welcome.
	

Asim Roy
Arizona State University




