Backpropagation as litmus test for mappings?

Bill Armstrong arms at cs.UAlberta.CA
Tue Feb 16 12:05:49 EST 1993

mitchm at (Mitchell Maltenfort) writes:

>	A grad student in my lab was looking at various algorithms to model
>optimal synergies of muscle activation, even though it's questionable whether
>the body is really optimal or just settles for "close enough."  He and I both
>lean towards the latter, so he is trying to show that the nervous system is
>not using any consistent optimization scheme by making comparisons between
>real data and model predictions.

>	Since this requires a "hunt-and-peck" approach, I suggested that he
>try using the real (i.e., experimental data) to try to train a backprop
>network to map force measurements to EMG (or vice-versa).  Since he just wants
>to show whether or not a consistent relation exists, and a backprop learning
>algorithm should converge to the relation if one exists, the (in)ability of a
>backprop network to create a mapping should be a litmus test, right?

>	If not, could you tell me why?  Thanks.

It sounds like a reasonable idea, except that backprop is known to have
trouble learning certain functions: it can stall in local minima or
converge very slowly.  So the failure of BP to show a relationship need
not mean that no relationship exists, just that BP couldn't find one.
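The litmus test being proposed can be sketched in a few lines: fit a small
backprop network to paired measurements and see whether its error drops well
below what noise alone would allow.  This is only an illustration with
synthetic stand-in data; the "force" and "emg" arrays, the network size, and
the learning rate are all hypothetical, not from the poster's experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for paired force/EMG measurements: assume the EMG
# signal is a smooth nonlinear function of force plus measurement noise.
force = rng.uniform(-1.0, 1.0, size=(200, 1))
emg = np.tanh(2.0 * force) + 0.05 * rng.normal(size=(200, 1))

# One-hidden-layer network trained by plain backpropagation (full batch).
n_hidden = 16
W1 = rng.normal(scale=0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden activations
    return h, h @ W2 + b2           # hidden layer, network output

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

_, pred0 = forward(force)
loss_before = mse(pred0, emg)

lr = 0.1
for _ in range(2000):
    h, pred = forward(force)
    err = (pred - emg) / len(force)          # dL/dpred for mean-squared error
    gW2 = h.T @ err                          # gradients for the output layer
    gb2 = err.sum(axis=0)
    gh = (err @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    gW1 = force.T @ gh                       # gradients for the hidden layer
    gb1 = gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(force)
loss_after = mse(pred1, emg)
```

If a consistent mapping exists, the training error should fall toward the
noise floor; if it stays high (and other architectures and learning rates do
no better), that is weak evidence against a consistent mapping, subject to
the caveat above that BP can simply fail to find one.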

We have had success at predicting the EMG signal from sensory nerve
signals in a cat using adaptive logic networks (ALNs).  So if BP
doesn't show a relationship, you might give ALNs a try.  I think you
would find the atree software easy to use (in pub/atre27.exe for
Windows or atree2.tar.Z for Unix).  It might pick up something BP
missed.


Prof. William W. Armstrong, Computing Science Dept.
University of Alberta; Edmonton, Alberta, Canada T6G 2H1
arms at   Tel (403) 492 2374, FAX 492 1071
