[Neuroscience] Re: Mind.forth: convergence of comp.arch and brain.arch

Peter F 19eimc_minus19 at ozemail.com.au
Fri Apr 28 12:16:11 EST 2006

"Sonia Lindsay" <sonialindsay at dsl.pipex.com> wrote in message 
news:mailman.763.1146233836.16885.neur-sci at net.bio.net...
>I have recently become interested in the concept of AI and particularly the
> interaction between human and AI cognitive functions.
> Can anyone help me with the following questions?
> Speech - Mind Module
> I was particularly interested in this section of the speech mind module:
> ..It is theoretically possible that the Speech motorium may contain
> dynamic muscle-activation speech-production engrams complementing
> or matching the phonemic memory-storage engrams of words recorded
> in the auditory memory channel. Through a process of continuous
> comparison and training, the mind may maintain its motor ability
> to speak and pronounce the phonetic words of its native language.
> Such a dual-storage schema means that the words of an utterance
> are not transferred directly from passive phonemic memory into
> active motor memory, but are reduplicated in speech motor memory
> on a ready-to-go basis as thoughts materialize in auditory memory
> and as the free will of volition decides whether or not to speak.

There is no such thing as a "free will of volition" that decides that we do 
one thing rather than another.

All our 'choosing' happens between the different "actentions" [an amalgam of 
"activity" and "attention", covering everything from the simplest 
muscle-involving and/or mental reflex to the most sophisticated and skilled 
theoretical or practical preoccupation] in our modular repertoire (a 
modularity that is, of course, neither very neat nor on the whole known) 
within the actention selection system (ASS, alternatively AS). Whichever 
actention we as individuals pour our vital/adaptive energy into thereby 
becomes a transient "dominant" (here a presumed, ideally identifiable, 
pattern of neuronal activity) within the ASS. This 'choosing' is the ongoing 
result of the relative weighting of these actention modules by: instincts 
(phylogenetically accumulated under the totality of evolutionary pressures), 
(epi)genetic imprints, different learnt modifications of the structures and 
functions (or "functure") of neurons (accumulated individually through 
lifetime experiences), and, metaphorically put, 'cheering' or 'booing' by 
concurrent environmental factors/features of influence.
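For what it is worth, that weighting scheme can be caricatured in a few lines 
of code. This is purely my own illustrative sketch, not anything from 
Mind.forth or an actual model of the brain: each "actention" module gets a 
score from the four weighting sources named above, and the highest-scoring 
module becomes the transient "dominant". All module names and numbers are 
made up.

```python
# Toy sketch (illustrative only): "actention" selection as a weighted
# winner-take-all competition. The four factors stand in for instincts,
# (epi)genetic imprints, learnt changes to neuronal "functure", and
# environmental 'cheering'/'booing'. Weights are arbitrary.

def select_dominant(modules):
    """Return the name of the actention module with the highest
    combined weighting, i.e. the transient 'dominant'."""
    totals = {name: sum(factors.values()) for name, factors in modules.items()}
    # Winner-take-all: the most strongly weighted actention wins.
    return max(totals, key=totals.get)

actentions = {
    "speak":  {"instinct": 0.2, "imprint": 0.1, "learnt": 0.6, "environment": 0.5},
    "listen": {"instinct": 0.4, "imprint": 0.2, "learnt": 0.3, "environment": 0.1},
    "flee":   {"instinct": 0.9, "imprint": 0.0, "learnt": 0.1, "environment": 0.0},
}

print(select_dominant(actentions))  # "speak" totals 1.4, the highest here
```

Of course the real point of the description above is that no homunculus 
inspects the scores; the "selection" just *is* the outcome of the weighting.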

> Is it actually possible to get a human brain to speak through an AI 
> device?
> And Is it actually possible for an AI device to pick up auditory memory 
> from
> a human brain before the humans conscious brain has actually decides to
> speak the words, and speak them for the brain?


> If so how this would be achieved?  Can it be achieved using fMRI scans of
> the human brain?

For one thing, an fMRI device's spatial and temporal resolution is far too 
crude for that.

> Image to Concept Visual Recognition
> Is it possible to get an AI device to actually see through human eyes. 
> That
> is can visual input from the human brain be channelled into an AI device 
> to
> produce what someone is actually seeing?

No again - also because there is the 'tiny' problem of achieving
the required wiring. :-)

> If so how this would be achieved?  Can it be achieved using fMRI scans of
> the human brain?

Same answer.

> Any assistance that you can give me in this matter will be greatly
> appreciated.

Although there are limits to what we can construct and find out, brave and 
undaunted surmising like yours is needed if we are to get as close to those 
ultimate limits as we possibly can! :-)
