Memory (and SOMs)

Matt Jones jonesmat at
Tue Feb 19 13:09:10 EST 2002

"yan king yin" <y.k.y@(dont spam)> wrote in message news:<ZB2b8.467$B92.91629 at>...
> My books have just arrived and I have a better understanding of SOMs
> now =)  From what I read, SOMs extract the "principle components"
> of a set of data that is presented to the network repeatedly over a
> period of time (the training). 

Technically, "principal components" are orthogonal to each other. That
is, they are the principal axes of the hyperellipsoid along which the
training data are scattered. They are also the eigenvectors of the
covariance matrix of the training data vectors, and the corresponding
eigenvalues are the variance along each principal component direction.
Principal component analysis presupposes that the data vectors are
distributed normally (i.e., Gaussian) in whatever data space they
live in.
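The eigenvector/eigenvalue relationship above is easy to check numerically. Here's a minimal sketch (numpy only; the data and all parameter values are my own illustration, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D Gaussian data, scattered along a tilted ellipse.
data = rng.multivariate_normal(mean=[0.0, 0.0],
                               cov=[[3.0, 1.0], [1.0, 1.0]],
                               size=500)

cov = np.cov(data, rowvar=False)        # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh is for symmetric matrices

# Columns of eigvecs are the principal axes of the ellipse;
# eigvals are the variances along those axes.  The axes are
# orthogonal (in fact orthonormal) by construction:
ortho = np.allclose(eigvecs.T @ eigvecs, np.eye(2))
```

The key point for the discussion above: `eigvecs.T @ eigvecs` is the identity, i.e., principal components are always mutually orthogonal, whatever the data look like.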

The SOM, and ANNs in general, do not make this normality assumption.
So they do not necessarily extract the orthogonal principal
components. They can extract "main statistical features" (not a
technical term) or "regularities" from the data however, which are
somewhat like principal components. Within a certain regime, SOMs
-can- actually extract principal components, but that requires special
constraints. I think SOMs probably do something more similar to
"Independent Component Analysis", another statistical technique that
doesn't require the normality assumption.
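To make the contrast concrete, here is a minimal sketch of the standard Kohonen update rule for a 1-D SOM (numpy only; the map size, learning-rate schedule, and neighborhood width are arbitrary illustrative choices, not anything Kohonen prescribes):

```python
import numpy as np

rng = np.random.default_rng(1)
# Same style of correlated 2-D data as above.
data = rng.multivariate_normal([0.0, 0.0], [[3.0, 1.0], [1.0, 1.0]],
                               size=1000)

n_units = 10
weights = rng.normal(size=(n_units, 2))  # one codebook vector per map unit

for t, x in enumerate(data):
    frac = 1.0 - t / len(data)
    lr = 0.5 * frac                      # decaying learning rate
    sigma = 3.0 * frac + 0.5             # shrinking neighborhood radius
    # Winner: the unit whose codebook vector is closest to the input.
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Gaussian neighborhood over the 1-D map index, centered on the winner.
    h = np.exp(-((np.arange(n_units) - winner) ** 2) / (2 * sigma ** 2))
    # Pull the winner and its map-neighbors toward the input.
    weights += lr * h[:, None] * (x - weights)
```

After training, the codebook vectors tend to string themselves out along the directions where the data have the most spread, which is the sense in which the SOM picks up "main statistical features" without ever computing orthogonal axes.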



More information about the Neur-sci mailing list