How the Brain Works and More

Brian J Flanagan bflanagn at blue.weeg.uiowa.edu
Wed Apr 24 08:34:16 EST 1996


On 23 Apr 1996, Marty Stoneman wrote:

> Paul Bush (paul at phy.ucsf.edu) wrote:
> : Here is an essay that
> : describes how I have come to understand reality. First I shall
> : explain the brain, to show that the theory is grounded in reality. I
> : will then go on to show how this explanation leads us to an objective
> : framework for knowledge representation. This theory has interesting
> : philosophical implications.
> 
> : OK, let's begin. How does the brain work?
> 
> 	[ESSAY SNIPPED]
> 
> : Paul
> : Copyright Paul Bush 1996 all rights reserved etc etc.
> 
> Thanks for posting the essay on your brain model; I'm looking forward to
> your view of the "consequences".  Your posting gave me an opportunity to
> compare our Anthrobotics "intelligent-entity" models and software to your
> "wetware" model, and there are many areas where they are very, very
> similar.

BJ: Thanks to both of you. Whatever our differences, this thread has been 
quite stimulating. I would also enjoy more discussion re: the items to 
follow.


> 
> Since we have worked primarily with non-distributed processing, our ideas
> and models are necessarily more explicit than yours -- and I am wondering
> how explicit your "objective framework for knowledge representation" is.
> 
> Some areas in which we may differ somewhat might be discussed in email if
> you wish, for example:
> 
> 1.  We had the best results when we very explicitly differentiated
> "relevancy" computations from "perception/prediction" computations; among
> other things, this means that our entity can learn new
> "perception/prediction" material that is not currently "relevant",
> including material about wildly new input.
> 2.  We had to service "cognitive" learning -- about perception/prediction
> -- separately from "relevancy" learning -- about plans and goals -- and
> the way of servicing learning was quite different in each case.
> 3.  We required forms of "knowledge representation" that explicitly
> shadow the forms of natural language and require the kinds of
> similarities that humans talk about and learn from.
> 
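The separation in item 1 can be sketched minimally in code. The sketch below is purely illustrative -- the post gives no implementation, and every name in it (PerceptionPredictor, RelevancyJudge, and so on) is invented here -- but it shows a predictor that keeps learning from input regardless of whether a separate relevancy computation judges that input relevant:

```python
# Hypothetical sketch only: all names are invented; the post gives no code.
# Illustrates item 1 -- "perception/prediction" learning proceeding even
# for input that the separate "relevancy" computation ignores.

class PerceptionPredictor:
    """Learns to predict the next input, relevant or not."""
    def __init__(self):
        self.transitions = {}  # state -> {next_state: count}

    def learn(self, state, next_state):
        counts = self.transitions.setdefault(state, {})
        counts[next_state] = counts.get(next_state, 0) + 1

    def predict(self, state):
        counts = self.transitions.get(state)
        if not counts:
            return None  # wildly new input: no prediction yet
        return max(counts, key=counts.get)

class RelevancyJudge:
    """Separately computes whether an input matters to current goals."""
    def __init__(self, goals):
        self.goals = set(goals)

    def relevant(self, state):
        return state in self.goals

# The two computations run side by side: prediction is learned even
# for inputs the relevancy computation dismisses.
predictor = PerceptionPredictor()
judge = RelevancyJudge(goals={"food"})

stream = ["noise", "tone", "noise", "tone", "noise"]
for a, b in zip(stream, stream[1:]):
    predictor.learn(a, b)  # learned although judge.relevant(a) is False

print(predictor.predict("noise"))  # prints: tone
print(judge.relevant("noise"))     # prints: False
```

The point of the two separate classes is only the architectural split Marty describes: the learner that models input and the judge that scores goal-relevancy share no state.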
> One of our most fascinating results was that the kinds of computation
> required for our "hardest" parts broke down to an identical primitive
> computation -- one which can be done in a massively parallel arrangement
> -- and it might be interesting to acquaint "wetware" (and neural net?)
> people with that, to see if it helps them understand what their "nets"
> are doing most of the time.
> 
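The post never names this shared primitive, so the following is an assumption offered only as a point of comparison: the operation an artificial neural net spends most of its time on is a weighted sum (a multiply-accumulate), and because each unit's sum is independent of the others, a whole layer of them can run in exactly the kind of massively parallel arrangement described above.

```python
# Assumption: the post does not say what its primitive computation is.
# This sketch shows only the multiply-accumulate that artificial neural
# nets perform most of the time, for comparison.

def unit(weights, inputs):
    # One unit's primitive: a weighted sum of its inputs.
    return sum(w * x for w, x in zip(weights, inputs))

def layer(weight_rows, inputs):
    # A layer repeats the same primitive; each unit's sum is independent
    # of the others, so the calls could run in parallel.
    return [unit(row, inputs) for row in weight_rows]

inputs = [1, 2, -1]
weights = [[2, 1, 3],   # unit 1: 2*1 + 1*2 + 3*(-1) = 1
           [1, 0, 1]]   # unit 2: 1*1 + 0*2 + 1*(-1) = 0

print(layer(weights, inputs))  # prints: [1, 0]
```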
> [We are still looking for a conference or place to publish -- but we 
> should have a home page up shortly with more details]
> 
> So thanks in advance for your next installment -- not many people throw
> their overall thinking/models into cyberspace for review.  I'll be happy
> to have some email discussions about your areas if you do not object.
> 
> Cheers,
> 
> Marty Stoneman
> marty at indirect.com
> 
> 


