Hearing Loss and Neurotechnology: New Approaches to Improve Speech Perception
Comprehending speech in real environments is the most profound daily challenge for listeners with hearing loss, and a significant one for many listeners with normal hearing as well. In our lab we aim to:
- understand the basic brain mechanisms that support comprehension,
- diagnose how and why different listeners fail to comprehend speech in noise, and
- treat hearing loss with assistive devices that cope with dynamic, acoustically cluttered scenes.
In one line of research, we aim to understand the basic brain mechanisms of speech perception in noise, including how selective attention, auditory-visual integration (watching a talker’s face), and perceptual “filling in” of degraded speech can improve understanding.
In a second area of research, we have developed a novel EEG diagnostic that provides a rapid, hierarchical view of the functional health of the auditory-speech system, from the ear to the cortex, including how different processing stages may interact. By combining natural speech acoustics with FM sweep chirps, this CHEECH (chirp-speech) approach can be embedded in any real-world perceptual, linguistic, or cognitive test. We are presently applying this engineered speech to older adults with hearing loss and, in collaboration with Dr. David Corina, to characterize auditory/speech development and auditory-visual plasticity in children with cochlear implants.
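The FM sweep chirps at the heart of this approach can be illustrated with a short signal-processing sketch. The code below generates a linear FM sweep; all parameters (sweep range, duration, sampling rate) are illustrative placeholders, not the lab's actual stimulus values, and the real CHEECH stimuli involve embedding such probes into natural speech.

```python
import numpy as np

def fm_chirp(f0, f1, duration, fs=16000):
    """Generate a linear FM sweep from f0 to f1 Hz over `duration` seconds."""
    t = np.arange(int(duration * fs)) / fs
    # Instantaneous frequency rises linearly from f0 to f1;
    # the phase is the integral of that frequency trajectory.
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration))
    return np.sin(phase)

# Example: a 10 ms upward sweep from 100 Hz to 8 kHz
probe = fm_chirp(100, 8000, 0.01)
```

Because a chirp sweeps rapidly across frequency, it drives a broad, temporally precise response along the auditory pathway, which is what makes it useful as a probe of processing from brainstem to cortex.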
A third set of projects in our lab aims to treat impaired comprehension with an assistive device that combines eye-gaze tracking with microphone-array beamforming to serve as an “attentional prosthesis”: wherever listeners look, they hear that sound source best. The system, developed in collaboration with Dr. Sanjay Joshi, is implemented on a mobile (Android) platform and incorporates virtual 3-D acoustic cues to improve real-world comprehension both for individuals with hearing loss and for normal-hearing listeners in ‘cocktail party’ settings.
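The gaze-steered beamforming idea can be sketched with the classic delay-and-sum technique: each microphone channel is delayed so that sound arriving from the gazed-at direction lines up in phase, then the channels are averaged, reinforcing that source and attenuating others. This is a minimal far-field sketch, not the lab's actual implementation; the function name, two-microphone geometry, and sample-aligned delays are all simplifying assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air

def delay_and_sum(signals, mic_positions, angle_deg, fs):
    """Steer a linear microphone array toward `angle_deg` (0 = broadside)
    by delaying each channel and summing.

    signals: (n_mics, n_samples) array of time-aligned recordings
    mic_positions: microphone x-coordinates in meters along the array axis
    """
    angle = np.deg2rad(angle_deg)
    n_mics, _ = signals.shape
    out = np.zeros(signals.shape[1])
    for sig, x in zip(signals, mic_positions):
        # Far-field plane-wave delay for this microphone, rounded to samples
        delay = int(round(x * np.sin(angle) / SPEED_OF_SOUND * fs))
        out += np.roll(sig, -delay)
    return out / n_mics
```

In a gaze-driven system, `angle_deg` would be updated continuously from the eye tracker, so the beam follows wherever the listener looks.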
The Miller Lab is located in the Center for Mind and Brain. The lab houses behavioral testing rooms, offices for researchers and visitors, and meeting space. A double-walled anechoic, electromagnetically shielded room contains our high-density (128-channel) EEG acquisition system. All audio/visual recording and presentation uses studio-grade equipment. Computing resources include a shared Linux cluster with the aggregate processing power of thirteen 64-bit, dual- or quad-core processors, 164 GB of memory, and 4.5 TB of disk storage.
For functional MRI, we have full access to three research-dedicated scanners at the UC Davis Imaging Research Center (http://ucdirc.ucdavis.edu/facilities/index.php).
Most of our scanning is performed on a Siemens 3T TRIO with an 8-channel phased-array head coil. Within the scanner, auditory stimuli are presented via an MR-compatible electrodynamic headphone system producing high-fidelity sound (http://www.mr-confon.de/en/index.html). Visual stimuli are presented with a three-chip DLP projector.
State-of-the-art hearing aids