Gregor Hoffleit <flight at mathi.uni-heidelberg.de> wrote in comp.ai.nat-lang:
> I'm looking for information about research regarding parallels in
> vision and language understanding. Does anybody know of projects
> dealing with this relation? [...]
A linguistic theory of mind available by Web search on "Mentifex"
uses visual engrams made of extracted features as memory "slices"
which cause recognized visual inputs to activate verbal concepts:
/^^^^^^^^^^^\ Vision > Consciousness < Audition /^^^^^^^^^^^\
/ inputs to \ associative________ / inputs to \
|visual/--------|-------\ memory / syntax \ |auditory memory|
|memory| recog-|nition | \________/<--|-------------\ |
| ___|___ | | flush-vector| spiral| _______ | |
| /image \ | __|___ ___V___ loop| /stored \ | |
| / percept \ | /deep \<-----/lexical\<---|--/ phonemes\| |
| \ engrams /<--|-->/concepts\--->/concepts \---|->\ of words/ |
| \_______/ | \________/ \_________/ | \_______/ |
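
To make the engram-to-concept link concrete, here is a minimal sketch
(Python, not Mentifex source code; the names Engram, VisualMemory, the
cosine match and the 0.9 threshold are all illustrative assumptions):
visual feature vectors are laid down as time-indexed memory slices, and
a new input activates the verbal concept tagged to the best-matching
stored engram.

    # Hypothetical sketch only -- not the Mentifex implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Engram:
        features: tuple   # extracted visual features for one memory slice
        concept: str      # verbal concept tagged to this engram

    @dataclass
    class VisualMemory:
        slices: list = field(default_factory=list)  # one engram per time slice

        def store(self, features, concept):
            # lay down a new engram together with its verbal concept
            self.slices.append(Engram(tuple(features), concept))

        def recognize(self, features, threshold=0.9):
            # return the concept of the best-matching stored engram, if any
            best, best_sim = None, threshold
            for engram in self.slices:
                sim = cosine(engram.features, features)
                if sim > best_sim:
                    best, best_sim = engram.concept, sim
            return best   # e.g. "dog": the verbal concept to activate

    def cosine(a, b):
        # similarity between two feature vectors
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    memory = VisualMemory()
    memory.store([0.9, 0.1, 0.4], "dog")          # engram stored with concept
    print(memory.recognize([0.88, 0.12, 0.41]))   # -> "dog" (concept activated)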