NASA develops 'mind-reading' system

Allen L. Barker alb at
Sat Mar 20 04:43:31 EST 2004


NASA figuring out ways to decipher silent speech
By Glennda Chui
Posted on Thu, Mar. 18, 2004
Mercury News

When you read silently to yourself, not even moving your lips, the
muscles of your throat, tongue and vocal cords twitch
imperceptibly. That faint movement can be detected and translated into
computer commands.

Scientists at NASA's Ames Research Center in Mountain View say they
have done just that. By sticking sensors under a person's chin and on
either side of the Adam's apple, they were able to pick up the nerve
signals that trigger tiny muscle movements and turn them into commands
that drive a model rover or perform a simple Web search.

Although the work is very preliminary, it could someday be used in
voice recognition systems and to help people communicate clearly in
noisy environments -- from space stations to air traffic control
towers. It could help people who have lost their ability to speak, or
allow someone to chat with a colleague across a conference table
without making a sound.

``This is not like we're reading thoughts. We're reading the results
of those thoughts,'' said Chuck Jorgensen, chief scientist for
neuroengineering at Ames.

The work is part of a five-year quest to find ways of controlling
machines and computers by tapping the signals given off by nerves as
muscles move, Jorgensen said. First the team came up with a way to
control and land an airliner using broad hand gestures; it's been
tested in a sophisticated flight simulator. Next they entered data
into a computer without a keyboard, by waving their fingers.

``The next step was, `Well, how fine a signal can we pick up?' ''
Jorgensen said.

The researchers took advantage of the fact that when most people read
silently, they move their tongues and the muscles in their throat in
small, subtle ways. It's as if ``you whisper, and you get quieter and
quieter and quieter, until finally you don't move your mouth at
all. You don't make a tone,'' Jorgensen said.

So far, the Ames team has trained a computer program to recognize a
few words and digits, such as ``stop'' or ``nine,'' when pronounced in
this silent, ``subvocal'' way. Now they hope to expand the program's
vocabulary and to develop sensors that read nerve signals through
clothing, rather than being stuck on the skin.


NASA develops 'mind-reading' system
16:50 18 March 04, NewScientist.com news service
Maggie McKee

A computer program that can read words before they are spoken, by
analysing nerve signals in our mouths and throats, has been developed
by NASA.

Preliminary results show the button-sized sensors, which attach under
the chin and on either side of the Adam's apple and pick up nerve
signals from the tongue, throat, and vocal cords, can indeed be used
to read minds.

"Biological signals arise when reading or speaking to oneself with or
without actual lip or facial movement," says Chuck Jorgensen, a
neuroengineer at NASA's Ames Research Center in Moffett Field,
California, in charge of the research.

The sensors have already been used to do simple web searches and may
one day help space-walking astronauts and people who cannot talk
communicate. The sensors could send commands to rovers on other
planets, help injured astronauts control machines, or aid the
handicapped.

In everyday life, they could even be used to communicate on the sly -
people could use them on crowded buses without being overheard, say
the NASA scientists.

Web search

For the first test of the sensors, scientists trained the software
program to recognise six words - including "go", "left" and "right" -
and 10 numbers. Participants hooked up to the sensors thought the
words to themselves and the software correctly picked up the signals
92 per cent of the time.
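The article does not describe NASA's actual classifier or features; as a purely illustrative sketch of small-vocabulary, isolated-word recognition, here is a nearest-centroid approach, where each silently "spoken" word is assumed to have been reduced to a fixed-length feature vector derived from the nerve signals (the words, dimensions, and values below are made up):

```python
import numpy as np

def train_centroids(examples):
    """examples: dict mapping word -> list of feature vectors.
    Returns one mean (centroid) vector per word."""
    return {word: np.mean(vecs, axis=0) for word, vecs in examples.items()}

def classify(centroids, vec):
    """Return the word whose training centroid is nearest to vec."""
    return min(centroids, key=lambda w: np.linalg.norm(centroids[w] - vec))

# Toy demonstration with made-up 3-dimensional features
examples = {
    "stop": [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])],
    "go":   [np.array([0.0, 1.0, 0.9]), np.array([0.1, 0.8, 1.0])],
}
centroids = train_centroids(examples)
print(classify(centroids, np.array([0.95, 0.15, 0.05])))  # prints "stop"
```

With only a handful of short words, even a classifier this simple can score well, which is exactly why Phil Green (quoted below) cautions that such results may not scale up.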

Then researchers put the letters of the alphabet into a matrix with
each column and row labelled with a single-digit number. In that way,
each letter was represented by a unique pair of number
co-ordinates. These were used to silently spell "NASA" into a web
search engine using the mind-reading program.
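The coordinate scheme described above can be sketched in a few lines. The articles do not give NASA's exact grid layout, so the 5-column arrangement and 1-based row/column labels here are assumptions; only the idea, each letter becoming a unique pair of single-digit numbers, is from the source:

```python
import string

# Lay the 26 letters out in a 5-column grid; rows and columns get
# single-digit labels starting at 1 (the exact layout is an assumption).
COLS = 5
grid = {}
for i, letter in enumerate(string.ascii_uppercase):
    row, col = divmod(i, COLS)
    grid[letter] = (row + 1, col + 1)

def spell(word):
    """Return the (row, col) digit pairs that silently spell a word."""
    return [grid[ch] for ch in word.upper()]

print(spell("NASA"))  # [(3, 4), (1, 1), (4, 4), (1, 1)]
```

Spelling a word then only requires the recognizer to pick out single digits, which keeps the vocabulary tiny while still allowing arbitrary text entry.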

"This proved we could browse the web without touching a keyboard,"
says Jorgensen.

Noisy settings

Phil Green, a computer scientist focusing on speech and hearing at the
University of Sheffield, UK, called the research "interesting and
novel" on hearing the news. "If you're not actually speaking but just
thinking about speaking then at least some of the messages still get
sent from the brain to the vocal tract," he says.

But he cautions that the preliminary tests may have been successful
because of the short lengths of the words, and he suggests the test be
repeated on many different people to check that the sensors work for
everyone.

The initial success "doesn't mean it will scale up", he told New
Scientist. "Small-vocabulary, isolated word recognition is a quite
different problem than conversational speech, not just in scale but in
kind."

He says conventional voice-recognition technology is more powerful
than the apparent results of these sensors, and that "the obvious
thing is to couple this with acoustics" to enhance communication in
noisy settings.
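One common way to "couple this with acoustics," as Green suggests, is late fusion: each recognizer scores every candidate word independently, and the per-word scores are combined with a weight. This sketch is an assumption about how such a coupling could work, not a description of any NASA or Sheffield system; the weights and probabilities are invented:

```python
import math

def fuse(acoustic_scores, sensor_scores, alpha=0.7):
    """Weighted sum of per-word log-scores from the two recognizers.
    alpha controls how much the acoustic recognizer is trusted."""
    return {w: alpha * acoustic_scores[w] + (1 - alpha) * sensor_scores[w]
            for w in acoustic_scores}

acoustic = {"stop": math.log(0.4), "go": math.log(0.6)}   # noisy audio
sensors  = {"stop": math.log(0.9), "go": math.log(0.1)}   # nerve signals
fused = fuse(acoustic, sensors)
print(max(fused, key=fused.get))  # prints "stop"
```

Here the audio alone would pick the wrong word, but the nerve-signal evidence tips the combined score the other way, which is the payoff of fusion in noisy settings.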

The NASA team is now working on sensors that will detect signals
through clothing.


NASA hears words not yet spoken	
Wed Mar 17, 6:28 PM ET

WASHINGTON (AFP) - NASA has developed a computer program that comes
close to reading thoughts not yet spoken, by analyzing nerve commands
to the throat.

It says the breakthrough holds promise for astronauts and the
handicapped.

"A person using the subvocal system thinks of phrases and talks to
himself so quietly it cannot be heard, but the tongue and vocal cords
do receive speech signals from the brain," said developer Chuck
Jorgensen, of NASA's Ames Research Center, Moffett Field, California.

Jorgensen's team found that sensors under the chin and on either side
of the Adam's apple pick up the brain's commands to the speech organs,
allowing the subauditory, or "silent," speech to be captured.

The team concluded that the method could be useful on space missions
or in other difficult working conditions, such as air traffic control
towers, and even to make current voice-recognition software more
robust.

"What is analyzed is silent, or subauditory, speech, such as when a
person silently reads or talks to himself," Jorgensen said.

"Biological signals arise when reading or speaking to oneself with or
without actual lip or facial movement."

On early trials, the program could recognize with 92 percent accuracy
six words and 10 numbers that the team repeated sub-vocally.

The first words were "stop," "go," "left," "right," "alpha," and
"omega."

Then, the inventors gave each letter of the alphabet a set of digital
coordinates.

"We took the alphabet and put it into a matrix -- like a calendar,"
Jorgensen said.

"We numbered the columns and rows and we could identify each letter
with a pair of single-digit numbers.

"So we silently spelled out 'NASA' and then submitted it to a
well-known Web search engine. We electronically numbered the Web pages
that came up as search results. We used the numbers again to choose
Web pages to examine. This proved we could browse the Web without
touching a keyboard."

The next trial will command a robot similar to the Rovers currently
exploring Mars.

"We can have the model Rover go left or right using silently 'spoken'
words," he said.

"A logical spin-off would be that handicapped persons could use this
system for a lot of things," he said, as well as persons wanting to
speak by telephone without being overheard.

To reach that goal, the team plans to build a dictionary of English
words recognizable by speech recognition software.

The equipment will need improved amplifiers to strengthen the
electrical nerve signals, which are now run through noise reduction
equipment before they can be analyzed.
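The article does not describe NASA's actual noise-reduction chain; as a purely illustrative stand-in for that stage, here is a moving-average smoother applied to a weak, noisy toy signal (the waveform, noise level, and window size are all invented for the example):

```python
import numpy as np

def moving_average(signal, window=5):
    """Smooth a 1-D signal by averaging over a sliding window."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# Toy signal: a slow waveform buried in random noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
smoothed = moving_average(noisy)

# The smoothed trace should sit closer to the clean waveform
print(np.mean((smoothed - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

A real system would likely use amplification and more sophisticated filtering tuned to the nerve signals' frequency band, but the principle, suppressing noise before pattern recognition, is the same.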

"The keys to this system are the sensors, the signal processing and
the pattern recognition, and that's where the scientific meat of what
we're doing resides," Jorgensen said.


Allen Barker

More information about the Neur-sci mailing list