hearing aids

Jeffrey Sirianni sirianni at uts.cc.utexas.edu
Wed Mar 22 09:31:48 EST 1995

In article <3kmct4$60u at news.rwth-aachen.de>, dak at kaa.informatik.rwth-aachen.de (David Kastrup) says:
>jochenw at messua.informatik.rwth-aachen.de (Jochen Wolters) writes:
>>dak at messua.informatik.rwth-aachen.de (David Kastrup) writes:

>>The MultiFocus has two microphones that
>>can be set either to "non-directional" for music, traffic, etc., or to
>>"directional" for person-to-person communication. It then focuses on the
>>sound source in front of the hearing-impaired listener.
>To call that mimicking the focusing of the brain is, of course, rather
>preposterous. This "focusing" is simply a directional microphone array
>arrangement. These things help, of course, but the advantages are strictly
>limited (you cannot gain more than 3dB SNR per doubling of the microphone
>number, except when you use directional microphones. Using them, the focus
>direction is fixed).
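The 3 dB-per-doubling figure quoted above is easy to check numerically: with uncorrelated noise at each microphone, averaging M mics adds the signal coherently (amplitude scales with M) while noise adds only in power (variance scales with M). A minimal sketch (all numbers illustrative, not from any real aid):

```python
# Averaging M microphones with independent noise gains ~3 dB SNR per
# doubling of M: signal adds coherently, noise adds in power.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
signal = np.sin(2 * np.pi * 0.02 * np.arange(n))  # stand-in source

def snr_db(num_mics):
    # Each mic sees the same signal plus independent unit-variance noise.
    mics = signal + rng.standard_normal((num_mics, n))
    avg = mics.mean(axis=0)  # delay-and-sum with zero steering delay
    noise_power = np.mean((avg - signal) ** 2)
    return 10 * np.log10(np.mean(signal ** 2) / noise_power)

for m in (1, 2, 4, 8):
    print(m, round(snr_db(m), 1))
```

Each doubling of the mic count shifts the printed SNR up by roughly 3 dB, which is why a two-mic array alone buys so little.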
I'm not sure what the MultiFocus does with the two microphone signals, but
I wonder if they are using the information to cross-correlate and remove
redundant components (i.e., noise).  I just read through the Brey, R.H. et al.
(1987) article, which used a two-microphone set-up with LMS (Least Mean
Squares) adaptive filtering.  I'm wondering whether the same approach has been
applied directly in a wearable aid?  Any info would be appreciated....

Article Ref: Brey, R.H. et al. (1987). Improvement in speech intelligibility in
	noise employing an adaptive filter with normal and hearing-impaired subjects.
	J. Rehab. Res. Devel. 24(4):75-86.
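For anyone unfamiliar with the scheme, here is a minimal sketch of two-microphone LMS adaptive noise cancellation along the lines Brey et al. describe: a primary mic picks up speech plus noise, a reference mic picks up correlated noise alone, and the LMS filter learns the noise path and subtracts it. All signal names and parameters below are illustrative, not taken from the paper:

```python
# Two-microphone LMS adaptive noise canceller (Widrow-style sketch).
# Assumption: the reference mic sees the noise source but (ideally) no speech.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
speech = np.sin(2 * np.pi * 0.01 * np.arange(n))  # stand-in for speech
noise = rng.standard_normal(n)                    # broadband noise source

# Primary mic: speech + acoustically filtered noise; reference mic: noise alone.
h = np.array([0.6, 0.3, 0.1])                     # assumed noise path to primary mic
primary = speech + np.convolve(noise, h)[:n]
reference = noise

# LMS filter: adaptively estimate the noise path, subtract its output.
taps, mu = 8, 0.01
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps - 1, n):
    x = reference[i - taps + 1:i + 1][::-1]  # most recent reference samples
    y = w @ x                                # estimated noise in primary channel
    e = primary[i] - y                       # error = cleaned output (speech estimate)
    w += 2 * mu * e * x                      # LMS weight update
    out[i] = e

# After convergence the output is much closer to the speech alone.
err_before = np.mean((primary[-1000:] - speech[-1000:]) ** 2)
err_after = np.mean((out[-1000:] - speech[-1000:]) ** 2)
print(round(err_before, 3), round(err_after, 3))
```

The catch for a wearable aid is that a clean noise-only reference is hard to get when both mics sit on the same ear-level case, which may be why the commercial aids stick to simple directionality.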

Jeff Sirianni     @(((<{
University of Texas at Austin
Communication Sciences and Disorders
CMA, 2nd Floor Clinic
Austin, TX  78712-1089
sirianni at uts.cc.utexas.edu
jgsaudio at aol.com
