CSL graduate student receives Microsoft grant for work on augmented listening tech

7/17/2018

Written by Katie Carr, Coordinated Science Lab

Like many people who wear hearing aids, Ryan M. Corey is frustrated by the poor performance of the devices in noisy situations. As a PhD student in CSL and Electrical and Computer Engineering, he decided to use his knowledge of signal processing and communication systems to help make them work better. Specifically, Corey is developing devices and algorithms that will enhance audio signals in noisy environments with many competing sound sources.

Ryan M. Corey
Corey, who has suffered from moderate hearing loss his whole life, has worn hearing aids since high school and is very familiar with the shortcomings of today’s technology.

“Hearing aids don’t work well with background noise,” Corey said. “Everyone in the field knows that, but there doesn’t appear to be a strong research focus on it now. There are some important gaps in theory that I’m hoping to fill.”

Corey was recently awarded a 2018 Microsoft Research Dissertation Grant, one of 11 recipients selected from nearly 200 applicants. The $25,000 grants were awarded to under-represented, late-stage PhD students to help promote diversity and unique perspectives in computing. Corey’s project, “Array Signal Processing for Augmented Listening,” will look at improving microphone array processing for real-time listening applications.

In addition to hearing aids, the outcomes of this research can also be applied to other augmented listening (AL) technologies, such as personal sound amplifiers, smart headphones, augmented reality headsets, and so-called “hearables,” which are wearable audio devices that modify sounds in real time to improve hearing or provide new auditory experiences. To work well in real-world environments, AL devices must track human movement, preserve the user’s spatial awareness, estimate the dynamics of multiple simultaneous sound sources, and maintain an imperceptible input-to-output delay. To address these challenges, Corey believes that we need new mathematical models and fundamentally different algorithms than those used today.

“Hearing aids today process all incoming sounds together, which can cause distortion and make it harder to hear in loud background noise,” Corey said. “I believe that this distortion is one of the biggest problems with hearing aids today, but it hasn’t gotten much attention from either researchers or industry.”

Today, most hearing aids use only two microphones. Corey suggests that hearing aids should work more like cell towers, which use dozens of antennas to let multiple users send data at the same time. He believes he can address hearing aid issues by using dozens, or even hundreds, of microphones together, which would allow the system to separate out individual sound sources and process them independently.
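The simplest illustration of this kind of array processing is delay-and-sum beamforming, which is not described in the article but is the textbook starting point for multi-microphone enhancement. The sketch below (a toy example with synthetic signals, not Corey's actual algorithm) time-aligns each microphone channel toward a target source and averages: the target adds up coherently across channels while uncorrelated noise partially cancels, and the benefit grows with the number of microphones.

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """mic_signals: (num_mics, num_samples) array of recordings.
    delays_samples: per-mic integer delays (in samples) that align
    each channel to the target source."""
    num_mics, num_samples = mic_signals.shape
    aligned = np.zeros((num_mics, num_samples))
    for m in range(num_mics):
        d = delays_samples[m]
        # Advance channel m by its known delay so the target lines up.
        aligned[m, : num_samples - d] = mic_signals[m, d:]
    # Coherent average: target reinforces, independent noise averages down.
    return aligned.mean(axis=0)

# Toy demo: one target signal arrives at 8 mics with different delays,
# plus independent noise on each channel.
rng = np.random.default_rng(0)
n = 1000
target = np.sin(2 * np.pi * 0.01 * np.arange(n))
delays = [0, 3, 5, 7, 2, 9, 4, 6]
mics = np.stack([np.roll(target, d) + 0.5 * rng.standard_normal(n)
                 for d in delays])
output = delay_and_sum(mics, delays)
```

With eight microphones the residual noise power drops by roughly a factor of eight relative to any single channel, which is the intuition behind scaling from two microphones to dozens or hundreds.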

Corey likens his approach to a music mixing board where a user could turn up or down individual real-world sound sources on an AL device, introduce new sources, and apply different processing effects to each source independently, all in real time. However, unlike mixing boards, AL devices don’t have access to individual source tracks, so they must perform source separation on the noisy recordings from the microphones. That separation step is difficult with only two microphones but should be easier with a large array.
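The mixing-board analogy above can be sketched in a few lines. This hypothetical example assumes the hard part, source separation, has already produced individual tracks; it only shows the remixing step, where each separated source gets its own listener-chosen gain before the tracks are summed back together.

```python
import numpy as np

def remix(separated_sources, gains):
    """separated_sources: (num_sources, num_samples) array of tracks
    recovered by a source-separation front end (assumed given here).
    gains: per-source amplification factors chosen by the listener."""
    gains = np.asarray(gains)[:, np.newaxis]
    # Scale each source independently, then mix them back down.
    return (gains * separated_sources).sum(axis=0)

# Example: boost a conversation partner, suppress background noise.
speech = np.ones(4)       # stand-in for a separated speech track
noise = np.full(4, 0.5)   # stand-in for a separated noise track
mix = remix(np.stack([speech, noise]), gains=[2.0, 0.1])
```

In a real augmented listening device, each source could also pass through its own processing chain (compression, equalization, spatialization) before the final mix, which is what distinguishes this per-source approach from processing all incoming sound together.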

“A lot of the hearing aid industry is focused on machine learning using one or two microphones and big data to do better noise reduction and increase intelligibility,” Corey said. “I’m not saying that approach is wrong, but maybe there are other approaches we can take, like spatial processing with big arrays. We didn’t have the processing power or the microphone technology to do that a few years ago, but now we do. Now it’s feasible to use this technology, but we need to know how.”

To figure out how to use large microphone arrays on humans, Corey has set up a new augmented listening laboratory in CSL. It resembles a fashion studio as much as a research lab: two full-size mannequins and several artificial heads model different wearable microphone array prototypes. Corey’s favorite so far is a sombrero that supports a ring of microphones along its brim.

To test his ideas, Corey has recruited a team of undergraduate engineering and design students. Last fall, Corey and his advisor, CSL and ECE Professor Andrew Singer, started an independent study course in the Technology Entrepreneurship Center (TEC) called “Innovation in Augmented Listening Technology.” The students conduct market research, develop prototypes, and collect and analyze data. The prototypes and data will help Corey determine how many microphones should be used and where they should be placed to provide the most benefit to listeners. The team hopes to make that data available for other researchers to use in their own studies and in competitions and challenges.

“There isn’t really any data on microphones worn on the body, so we want to collect that data using high-quality recording equipment,” Corey said.

The Microsoft Research Dissertation Grant funding will help Corey collect that data using more powerful recording equipment and more sophisticated prototypes. It will also allow him to travel to conferences and network with other researchers in the field. Corey hopes that the new funding will increase the impact of his dissertation and future research to help people hear better.

“Today’s hearing aids work poorly with noise and we don’t really understand why that is and how we can fix it,” Corey said. “That’s the impact I’d like to have: shining a light on that problem and showing how we can process different sounds independently in a hearing aid. If I can demonstrate that, it might change the way people think about hearing aids and how they design them.”

