[Neuroscience] Mind.forth: convergence of comp.arch and brain.arch

Sonia Lindsay sonialindsay at dsl.pipex.com
Fri Apr 28 04:23:37 EST 2006

I have recently become interested in the concept of AI and particularly the
interaction between human and AI cognitive functions.

Can anyone help me with the following questions?

Speech - Mind Module

I was particularly interested in this section of the speech mind module:
...It is theoretically possible that the Speech motorium may contain
dynamic muscle-activation speech-production engrams complementing
or matching the phonemic memory-storage engrams of words recorded
in the auditory memory channel. Through a process of continuous
comparison and training, the mind may maintain its motor ability
to speak and pronounce the phonetic words of its native language.
Such a dual-storage schema means that the words of an utterance
are not transferred directly from passive phonemic memory into
active motor memory, but are reduplicated in speech motor memory
on a ready-to-go basis as thoughts materialize in auditory memory
and as the free will of volition decides whether or not to speak...
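As a rough illustration of the dual-storage schema this passage describes, here is a minimal Python sketch. The class and method names are my own invention for illustration, not actual Mind.forth code: it simply mirrors each word from a passive phonemic store into a ready-to-go motor buffer, and a volition step decides whether the buffered words are uttered.

```python
class DualStoreSpeech:
    """Illustrative model (not Mind.forth itself): words are reduplicated
    from passive phonemic memory into an active speech-motor buffer,
    and a volition gate decides whether to speak them."""

    def __init__(self):
        self.phonemic_memory = []  # passive auditory-channel engrams
        self.motor_buffer = []     # ready-to-go motor engrams

    def hear_or_think(self, word):
        # A thought materializes in auditory memory...
        self.phonemic_memory.append(word)
        # ...and is reduplicated (copied, not transferred) into
        # speech motor memory on a ready-to-go basis.
        self.motor_buffer.append(word)

    def volition(self, speak):
        # Free will decides whether or not to utter the buffered words;
        # either way, the motor buffer is cleared afterwards.
        uttered = list(self.motor_buffer) if speak else []
        self.motor_buffer.clear()
        return uttered


mind = DualStoreSpeech()
for w in ["hello", "world"]:
    mind.hear_or_think(w)
print(mind.volition(speak=True))  # -> ['hello', 'world']
```

Note that under this schema the words exist in both stores before any decision to speak is made, which is what motivates the question below about intercepting them beforehand.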
Is it actually possible to get a human brain to speak through an AI device?
And is it actually possible for an AI device to pick up auditory memory from
a human brain before the human's conscious brain has actually decided to
speak the words, and to speak them for the brain?
If so, how would this be achieved? Can it be achieved using fMRI scans of
the human brain?

Image to Concept Visual Recognition

Is it possible to get an AI device to actually see through human eyes? That
is, can visual input from the human brain be channelled into an AI device to
reproduce what someone is actually seeing?
If so, how would this be achieved? Can it be achieved using fMRI scans of
the human brain?
Any assistance that you can give me in this matter will be greatly
appreciated.

More information about the Neur-sci mailing list
