Synaptic modification rules?

Allen L. Barker alb at datafilter.com
Fri May 21 00:43:56 EST 2004


nettron2000 at aol.com wrote:
> "Allen L. Barker" <alb at datafilter.com> wrote in message news:<f3Yqc.15592$zO3.11430 at newsread2.news.atl.earthlink.net>...
> 
>>nettron2000 at aol.com wrote:
>>
>>>"Allen L. Barker" <alb at datafilter.com> wrote in message news:<1hfqc.17550$KE6.7993 at newsread3.news.atl.earthlink.net>...
>>>
>>>
>>>>nettron2000 at aol.com wrote:
>>>>
>>>>
>>>>>"Allen L. Barker" <alb at datafilter.com> wrote in message news:<10_pc.10522$zO3.1210 at newsread2.news.atl.earthlink.net>...
>>>>>
>>>>>
>>>>>
>>>>>>Matthew Kirkcaldie has provided a useful discussion from the
>>>>>>biological viewpoint, below.  From a theoretical perspective,
>>>>>>it is quite fascinating just what simple Hebbian networks
>>>>>>are capable of.  For a good and relatively easy-to-read
>>>>>>(given the requisite mathematical background) introduction
>>>>>>to such analyses I would strongly recommend Teuvo Kohonen's
>>>>>>_Self-Organization and Associative Memory_, Springer-Verlag,
>>>>>>1984.  (I think there is a more recent version available.)
>>>>>>Grossberg has some very good articles in that area, also,
>>>>>>and there is a particular article I'd like to recommend,
>>>>>>but I don't have that paper or reference at hand right
>>>>>>now.
>>>>>>
>>>>>>Matthew Kirkcaldie wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>>In article <ec29a509.0405161715.46916f1f at posting.google.com>,
>>>>>>>nettron2000 at aol.com wrote:
>>>>>>>
>>>>>>>
>>>>>>>>I've been recently reading about a synaptic modification rule discovered
>>>>>>>>by Donald Hebb (I'm assuming this is related to the Pavlovian
>>>>>>>>conditioning experiments?) in which a synapse is modified depending on
>>>>>>>>whether a pre-synaptic spike occurs before or after a post-synaptic
>>>>>>>>spike (still somewhat unclear about that one), but are there other
>>>>>>>>"rules" that govern synaptic modification?
>>>>>>>
>>>>>>>
>>>>>>>Hebbian learning isn't a rule - it was a concept Hebb thought up to 
>>>>>>>suggest how synapses might be changed according to the activity of the 
>>>>>>>cells sending and receiving them, in order that experience would shape 
>>>>>>>the connections between neurons.  The idea is that if two cells are usually 
>>>>>>>active at the same time, this activity would cause the synapses between 
>>>>>>>them to become stronger.  If their activity occurred at different times, 
>>>>>>>the connection would become weaker.  Conceptually, he showed that this 
>>>>>>>was enough to explain some kinds of behaviour and learning, so he 
>>>>>>>guessed that a process like this might operate in the nervous system, 
>>>>>>>without knowing what that process was.
>>>>>
>>>>>
>>>>>
>>>>> For clarity I'll post Hebb's concept (if you will) here:
>>>>>
>>>>>"When an axon of cell A is near enough to excite cell B and
>>>>>repeatedly or persistently takes part in firing it, some growth
>>>>>process or metabolic change takes place in one or both cells such that
>>>>>A's efficiency, as one of the cells firing B, is increased."
>>>>>
>>>>>Although this idea doesn't account for depression, how did Hebb guess
>>>>>this concept? I know there are other related concepts to this, such as
>>>>>anti-Hebbian and whatnot, but does anyone know of other "rules" (I
>>>>>use the term loosely) that can account for synaptic modification?
>>>>
>>>>In a modern analytical context, such rules are expressed as
>>>>differential equations.  I'm not enough of a historian of
>>>>neuroscience to guess at how Hebb came up with the concept.
>>>>There are many different synaptic modification rules that
>>>>one can consider.  I recommended the Kohonen book above
>>>>because he explicitly analyzes several different such rules.
>>>>Doing the math (and simulations) he shows that large systems
>>>>of neurons all operating by Hebb-like rules can give rise to
>>>>collective, "emergent" properties such as associative
>>>>memory.
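To make the flavor of such rules concrete, here is a minimal sketch of my
own (it is not code from Kohonen's book, and the pattern coding and
constants are arbitrary choices): a single unit whose weights follow a
discrete-time version of a Hebb-like rule with passive decay,
dw/dt = eta*x*y - alpha*w.  After repeated pairing of an input pattern with
postsynaptic firing, the weight vector acts like a matched filter, so even
a corrupted version of the stored pattern still drives the unit strongly.

    # Hebbian growth plus passive decay, written as a discrete update.
    # This is only an illustrative toy, not a biological model.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100                                    # number of presynaptic inputs
    eta, alpha = 0.05, 0.01                    # learning rate and decay rate
    w = np.zeros(n)                            # weights onto a single unit

    pattern = rng.choice([-1.0, 1.0], size=n)  # stored pattern (+/-1 activity)

    for _ in range(200):                       # repeated pairing episodes
        x = pattern                            # presynaptic activity
        y = 1.0                                # postsynaptic cell driven to fire
        w += eta * x * y - alpha * w           # Hebb-like growth plus decay

    noisy = pattern.copy()
    noisy[rng.random(n) < 0.2] *= -1           # corrupt ~20% of the entries
    print("stored pattern:   ", pattern @ w)
    print("noisy cue:        ", noisy @ w)
    print("unrelated pattern:", rng.choice([-1.0, 1.0], size=n) @ w)

The decay term keeps the weights from growing without bound; they settle at
(eta/alpha) times the stored pattern.  As I recall, decay or normalization
terms of this general sort show up in several of the rules Kohonen analyzes.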
>>>
>>>
>>>...and I'm not enough of a mathematician or programmer to thoroughly
>>>understand Kohonen networks. Maybe I'm asking too much, but I'd really
>>>like to know how these rules translate onto biological networks. Is
>>>there any clinical or experimental proof of these concepts that you or
>>>anyone can point out? In my o.p. I brought up Hebb's concept because
>>>it was the only one I knew anything about. After some exhaustive
>>>searching/reading (and a headache to boot) via Google, I found that
>>>modification takes place not exactly when spikes occur at the same
>>>instant in time but rather during a certain time window; don't quote me
>>>on this, but something like 50 ms or so? Unless, of course, one accepts
>>>50 ms as occurring at the same time.
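On the timing-window point: as I understand it, the spike-timing results
are usually summarized by an exponentially decaying window around each
pre/post pairing, with pre-before-post strengthening the synapse and
post-before-pre weakening it, and time constants on the order of tens of
milliseconds (don't hold me to exact numbers; they vary by preparation).
A toy sketch of that window function, with made-up constants:

    import numpy as np

    # dt_ms = t_post - t_pre in milliseconds.  The amplitudes and the
    # time constant are made-up illustrative values, not measurements.
    def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
        if dt_ms > 0:                                  # pre fired before post
            return a_plus * np.exp(-dt_ms / tau_ms)    # potentiation
        return -a_minus * np.exp(dt_ms / tau_ms)       # depression

    for dt in (-50, -20, -5, 5, 20, 50):
        print(dt, "ms ->", round(stdp_dw(dt), 5))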
>>>
>>
>>Let me first mention mathematical modeling in general.  You can
>>mathematically model a neuron and its synaptic connections with
>>other neurons to almost any desired level of precision that you
>>can figure out (from scientific experiments).  You could model
>>all the chemical processes, giving rise to the electrical spikes,
>>and so forth.  That's actually useful in some contexts.
>>
>>Another approach is to deal with simplified neurons.  The detailed
>>approach above is useful in some contexts, but for mathematical analysis
>>or even simulation studies of large networks of neurons it tends to
>>become intractable.  This is just like how simplifying assumptions are used
>>in physics.  If you actually had a complex mathematical model
>>like the one posited above you could presumably prove that under
>>certain conditions certain simplified models provide accurate
>>large-scale predictions.  Or you can just postulate some plausible
>>synaptic rule changes and analyze how large networks of such
>>neurons would function.  Both approaches can be useful.  From
>>an engineer's viewpoint, the brain "works" and so finding similar
>>sorts of systems which give rise to suggestive collective behavior
>>can be illuminating.  This is not to say that these are going to
>>be exact models, in the sense of the low-level model above.
>>But if the low-level model doesn't "work" when simulated, then it,
>>too, still needs some refinements.
>>
>>The neurons that Kohonen analyzes are typically simplified sigmoidal-
>>response neurons.  The simplifying assumption here is that for the
>>most part you can ignore the lower-level chemical reactions and spikes
>>and consider the signal essentially integrated over a time window.
>>This smooths out the process.  The synaptic weights are considered
>>linear and multiplicative.  Maybe think of it like taking a
>>single-cell recording from a couple of neurons, averaging over a time
>>window, and fitting to a simple electrical V=IR sort of model to find
>>dw/dt for the weight w.  That is, there *is* going to be some
>>averaged-out curve, subject to the assumptions.  (Hopfield has shown
>>that for spin-glass models of neurons the sigmoidal assumption is
>>equivalent to a mean-field approximation.)
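To make that last point concrete, here is a generic rate-model sketch of my
own (not Kohonen's actual equations; the constants and the input statistics
are arbitrary): the spikes are replaced by a smooth output y = sigmoid(w.x),
and the weights follow dw/dt = eta*x*y - alpha*w, integrated with crude
Euler steps.

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    rng = np.random.default_rng(1)
    n = 10
    eta, alpha, dt = 0.5, 0.1, 0.05           # learning rate, decay, time step
    w = 0.01 * rng.standard_normal(n)         # small random initial weights

    base = np.linspace(0.0, 1.0, n)           # "typical" averaged input rates
    for _ in range(5000):
        x = base + 0.1 * rng.standard_normal(n)   # noisy sample of input rates
        y = sigmoid(w @ x - 1.0)                  # smoothed output rate
        dw = eta * x * y - alpha * w              # Hebbian growth plus decay
        w += dt * dw                              # Euler integration step

    print("final weights:", np.round(w, 2))

Inputs that are reliably strong end up with proportionally larger weights,
which is the kind of averaged-out curve I had in mind above.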
>>
>>That is basically what Kohonen networks are.  The area of artificial
>>neural networks tends to split into those who care about biological
>>plausibility, and those who don't.  Kohonen actually has models in
>>both camps, from biologically inspired systems like topological maps
>>to practical pattern recognition algorithms.
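For the topological-map side, here is a bare-bones sketch in the Kohonen
style (my own toy, with arbitrary learning-rate and neighborhood schedules,
not the algorithm as Kohonen publishes it): a one-dimensional chain of
units, each holding a two-dimensional weight vector; for every random input
the best-matching unit and its neighbors along the chain are pulled toward
it.

    import numpy as np

    rng = np.random.default_rng(2)
    n_units = 20                               # a 1-D chain of map units
    w = rng.random((n_units, 2))               # each unit has a 2-D weight

    for t in range(2000):
        x = rng.random(2)                      # random point in the unit square
        winner = np.argmin(np.sum((w - x) ** 2, axis=1))   # best-matching unit

        sigma = 3.0 * (1.0 - t / 2000) + 0.5   # neighborhood width (shrinks)
        lr = 0.5 * (1.0 - t / 2000) + 0.01     # learning rate (shrinks)
        dist = np.arange(n_units) - winner     # distance along the chain
        h = np.exp(-(dist ** 2) / (2 * sigma ** 2))

        w += lr * h[:, None] * (x - w)         # pull winner and neighbors in

    print(np.round(w, 2))

After training, neighboring units in the chain hold nearby weight vectors,
which is the "topology-preserving" property in miniature.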
>>
>>It all comes down to the approach, and what you're interested in.
>>I'm far more interested in large-scale complex-systems analysis than
>>I am in the chemical pathways involved in low-level synaptic change.
>>Obviously many neuroscientists are heavily interested in such details.
>>That works out fine for me.  Many neuroscientists are also
>>interested in the solid, mathematical study and modeling of large
>>systems of neurons.
> 
>  Allen, I understand your viewpoint; it's one nearly parallel with my
> own, but since I'm a believer in a bottom-up rather than a top-down
> approach to neural network modelling, I wouldn't want to mistakenly
> throw out the baby with the bathwater. Would it be safe to ignore the
> experimentally verified chemical processes known to exist during
> synaptic modification? Some of the replies by knowledgeable neuralists
> here on this NG have been quite enlightening.

The replies have been interesting.  Again, "safe to ignore" is
not well-defined.  It depends on the focus and the goals of your
own research.  As a stated bottom-up believer you probably should
do a bit more research in that area, just to get a better feel for
what is there and what importance it may have for the research
you are interested in.

> 
>>Once you get a few million neurons interacting, like many-particle
>>physical systems, you can probably safely introduce a few simplifying
>>assumptions -- just to get anywhere.  Of course if you have the time
>>and a fast enough computer you can run simulations of even extremely
>>complicated mathematical neural models (perhaps to verify that
>>certain of the simplifying assumptions are reasonable).
> 
> 
>  Yes, exactly. But what would one base these assumptions on? The
> interplay we observe in nature, as Hebb and Pavlov did? It appears
> they have been proven somewhat correct.
>  Some have argued that a program is only as "smart" as the abilities
> of the programmer allow. Though some have disputed this, pointing out
> that ANNs learn things the programmer wasn't aware of or hadn't
> contemplated. I came across a prime example of this a while ago when
> reading a paper on perceptrons (lost the link, sorry), in which it was
> observed that the device could correctly identify tanks in a wooded
> area when shown pictures of the scene (a possible military
> application, I guess). When it was shown pictures of the same scene
> without tanks in them, the device again concluded that the pictures
> were similar. The observers were puzzled at first, but later found out
> that the similarity the perceptron had picked up on was that the pics
> with the tanks were taken on a sunny day and the pics without were
> also taken on a sunny day. It may appear humorous, but I think it
> shows the common-sense "rule" writ large. To us the similarity would
> be whether there were tanks or not; to a perceptron that doesn't have
> any rules to follow, anything can count as similar.
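That failure mode is easy to reproduce in miniature.  Here is a toy of my
own (nothing to do with the original tank study, whose details I don't
have): a perceptron is given a noisy "tank shape" cue and a "brightness"
cue, and in the training photos brightness separates the two classes
cleanly, so the learned separator leans on the lighting.  It classifies
the training photos perfectly and does far worse once the lighting
pattern is swapped.

    import numpy as np

    rng = np.random.default_rng(3)

    def make_photos(n, sunny_when_tank):
        tank = rng.integers(0, 2, size=n)                # 1 = tank present
        shape = tank + 0.8 * rng.standard_normal(n)      # noisy tank cue
        sunny = tank if sunny_when_tank else 1 - tank    # lighting pattern
        bright = 0.1 + 0.8 * sunny + 0.05 * rng.standard_normal(n)
        x = np.column_stack([shape, bright])
        y = 2 * tank - 1                                 # labels are +/-1
        return x, y

    def train_perceptron(x, y, epochs=50, lr=0.1):
        w, b = np.zeros(x.shape[1]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(x, y):
                if yi * (w @ xi + b) <= 0:               # misclassified point
                    w, b = w + lr * yi * xi, b + lr * yi
        return w, b

    def accuracy(w, b, x, y):
        return np.mean(np.sign(x @ w + b) == y)

    x_tr, y_tr = make_photos(200, sunny_when_tank=True)
    w, b = train_perceptron(x_tr, y_tr)
    print("weights [shape, brightness]:", np.round(w, 2))
    print("accuracy, original lighting:", accuracy(w, b, x_tr, y_tr))

    x_te, y_te = make_photos(200, sunny_when_tank=False)  # lighting swapped
    print("accuracy, lighting swapped: ", accuracy(w, b, x_te, y_te))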
> 
>>
>>> 
>>>
>>>
>>>>>>>The nearest known physiological processes to Hebbian learning are 
>>>>>>>long-term potentiation and long-term depression, which are effects on 
>>>>>>>synaptic strength caused by patterns of firing and the biochemical 
>>>>>>>processes which these patterns trigger.  LTP and LTD are studied very 
>>>>>>>widely around the world in all sorts of systems, and are understood 
>>>>>>>moderately well in terms of receptors moving to and from the synapse 
>>>>>>>according to activity.  There are all kinds of reviews of LTP and LTD 
>>>>>>>ranging from the conceptual to the severely technical - if you can 
>>>>>>>indicate what you'd like to know, myself and wiser heads here could make 
>>>>>>>a recommendation.
>>>>>>>
>>>>>>>As far as "rules" go, there are no rules, just consequences of 
>>>>>>>particular firing patterns for cells which have particular membrane 
>>>>>>>properties and biochemistry.  The people trying to understand these 
>>>>>>>processes give them names and descriptions, but they're for our 
>>>>>>>convenience - there's nothing in a neuron which says "well, conditions A 
>>>>>>>and B are met, so this synapse will be altered."  It's more like inputs 
>>>>>>>A and B trigger events inside the cell, and the interaction of those 
>>>>>>>events might cause side effects which modify the strength of the synapse.
>>>>>>>
>>>>>>>Recently a very interesting mechanism has begun to be unravelled, 
>>>>>>>whereby activity at a synapse can cause the synapse to "capture" the 
>>>>>>>connection by causing DNA to be transcribed in the nucleus to make RNA, 
>>>>>>>but this RNA only becomes new protein at the synapse which was active.  
>>>>>>>So that's like another "rule" in that specific patterns of events can 
>>>>>>>trigger it, such as the receipt of a puff of the transmitter serotonin 
>>>>>>>at the right time.  Other recent studies have looked at how signalling 
>>>>>>>between presynaptic and postsynaptic membrane can maintain the physical 
>>>>>>>structure, and the role that glia have in allowing the synapse to exist 
>>>>>>>instead of pushing in to separate the cells, and how long synapses 
>>>>>>>typically last (minutes? days? years? nobody knows for sure).
>>>>>>>
>>>>>>>Anyway - nobody really knows how all our synapses are made and 
>>>>>>>maintained.  But that's what makes it all interesting.
>>>>>>>
>>>>>>>    Cheers,
>>>>>>>
>>>>>>>       Matthew.


-- 
Mind Control: TT&P ==> http://www.datafilter.com/mc
Home page: http://www.datafilter.com/alb
Allen Barker



