Thanks very much for your post; I am very interested in your work, and I hope you
will tell me more about it. I believe we discovered some of the same principles because
knowledge has an objective structure. We can 'see' into knowledge space in
exactly the same way we see in real space or see into the future.
In article <4ljh53$4de at globe.indirect.com>, marty at indirect.com (Marty Stoneman) writes:
|> 1. We had the best results very explicitly differentiating "relevancy"
|> computations from "perception/prediction" computations; among other
|> things, this means that our entity can learn new "perception/prediction"
|> things that are not "relevant", including about somewhat-wildly-new input.
In my theory, perception/prediction is a function of the cortex, a result of
world-model building. Relevancy is a function of the basal ganglia, which integrate
drives from below and select certain behavior patterns/perceptions on that basis.
The basal ganglia assign only a top-level plan, which filters down to meet
bottom-up input coming up. At the bottom, perception (and therefore learning) is
heavily determined by input, so learning without relevancy can occur.
|> 2. We required servicing separately "cognitive" learning -- about
|> perception-prediction -- and "relevancy" learning -- about plans and
|> goals; and the way of servicing learning was quite different in each case.
What was the difference? It seems that one is bottom-up based and the other
top-down based. Do you know about the Helmholtz machine algorithm, which tries to
bring bottom-up recognition and top-down generation together in a single network?
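To make the bottom-up/top-down distinction concrete, here is a minimal sketch of the wake-sleep training scheme used by the Helmholtz machine: separate recognition (bottom-up) and generative (top-down) weights, with the "wake" phase training the generative side on recognized causes and the "sleep" phase training the recognition side on dreamed fantasies. This is a toy one-layer illustration, not the full algorithm; the network sizes, learning rate, and training data are arbitrary choices for the example.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(probs):
    # Draw a binary state from a vector of Bernoulli probabilities.
    return [1 if random.random() < p else 0 for p in probs]

N_VIS, N_HID, LR = 4, 3, 0.1

# Recognition (bottom-up) and generative (top-down) weights, plus a
# generative bias on the hidden (top) layer.
R = [[0.0] * N_VIS for _ in range(N_HID)]   # hidden <- visible
G = [[0.0] * N_HID for _ in range(N_VIS)]   # visible <- hidden
g_bias = [0.0] * N_HID

def up(v):
    """Recognition pass: P(h_j = 1 | v)."""
    return [sigmoid(sum(R[j][i] * v[i] for i in range(N_VIS)))
            for j in range(N_HID)]

def down(h):
    """Generative pass: P(v_i = 1 | h)."""
    return [sigmoid(sum(G[i][j] * h[j] for j in range(N_HID)))
            for i in range(N_VIS)]

def wake(v):
    # Wake phase: recognize v bottom-up, then nudge the generative
    # weights to reconstruct it top-down (simple delta rule).
    h = sample(up(v))
    p_v = down(h)
    for i in range(N_VIS):
        for j in range(N_HID):
            G[i][j] += LR * (v[i] - p_v[i]) * h[j]
    p_h = [sigmoid(b) for b in g_bias]
    for j in range(N_HID):
        g_bias[j] += LR * (h[j] - p_h[j])

def sleep():
    # Sleep phase: dream a fantasy top-down, then nudge the recognition
    # weights to infer the hidden cause of that fantasy.
    h = sample([sigmoid(b) for b in g_bias])
    v = sample(down(h))
    p_h = up(v)
    for j in range(N_HID):
        for i in range(N_VIS):
            R[j][i] += LR * (h[j] - p_h[j]) * v[i]

data = [[1, 1, 0, 0], [0, 0, 1, 1]]
for _ in range(2000):
    wake(random.choice(data))
    sleep()
```

The point of the sketch is the division of labor: the top-down pathway learns to generate the input, while the bottom-up pathway learns only to report what the top-down model would have used as a cause, which parallels a plan filtering down to meet input coming up.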
|> 3. We required forms of "knowledge representation" which appear to
|> explicitly shadow the forms of natural language and explicitly require the
|> kinds of similarities which humans talk about and learn from.
Yes. I believe the structure of our language (and our thoughts/perceptions) is a
consequence of the objective structure of knowledge and our brains.
|> One of our most fascinating results was that we found that the kinds of
|> computation required to do our "hardest" parts broke down to an identical
|> primitive computation -- which can be done in a massively parallel
|> arrangement -- and it might be interesting to acquaint "wetware" (and
|> neural net people?) with that to see if it helps them understand what their
|> "nets" are doing most of the time.
For sure. What is it?
|> [We are still looking for a conference or place to publish -- but we
|> should have a home page up shortly with more details]
Why not submit to NIPS?
|> So thanks in advance for your next installment -- not many throw their
|> overall thinking/models into cyberspace for review. I'll be happy to
|> have some email discussions in your areas if you do not object.
The reason I posted it was to get feedback like this. I would like to hear more.