CSL Researcher wins Best Student Paper Award for work on deformable microphone arrays
Trying to hold a conversation at a restaurant while the combination of ambient music, clanking dishes, and chatter from the adjacent table drowns out nearly every word seems to be a universal experience. While noisy environments are challenging for everyone, they are especially difficult for people with hearing loss, and, unfortunately, these noisy environments are where traditional hearing aids falter most. To address these deficits, recent CSL graduate and current postdoctoral fellow Ryan M. Corey, CSL professor Andrew Singer, and their team of students at the Illinois Augmented Listening Laboratory are developing new augmented listening technologies that could help hearing aids perform much better in noisy environments.
Unlike conventional hearing aids that have only a few microphones, these new listening devices have dozens of microphones spread across the body. These sensors can be used to perform spatial processing to separate sounds from different directions and make them louder or quieter. Microphone array technology, which is already widely used in audio devices like smart speakers and teleconferencing systems, can benefit not only traditional hearing aids but also “hearables” and augmented-reality headsets for normal-hearing users.
The group has already demonstrated large wearable microphone arrays that can reduce background noise in crowded situations. Until recently, however, most of their experiments were on mannequins. To use these devices on real humans, Corey and his team must deal with a new challenge: humans move!
“Our mannequin, Mike A. Ray, is very good at standing still for as long as we need to finish an experiment,” said Corey, “but real humans move constantly, even if they’re trying not to.”
This becomes an issue because digital signal processing filters known as beamformers rely on spatial information, measured as the relative time delays with which a sound arrives at the different microphones, to separate and enhance sounds. On a rigid device like a smart speaker, those delays do not change, so the beamformer is relatively easy to implement. With deformable microphone arrays, however, microphones at different locations on the body are constantly moving relative to one another, so those time delays are always changing.
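The idea behind that delay compensation can be illustrated with a minimal delay-and-sum beamformer sketch. This is not code from the paper; the function name, sample rate, and delay values are illustrative, and the tone frequency is chosen to fit an exact number of cycles in the window so the frequency-domain phase shift is exact.

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Align each microphone signal by its known arrival delay, then average.

    signals: (num_mics, num_samples) array of microphone recordings
    delays:  per-mic arrival delays in seconds (positive = arrives later)
    fs:      sample rate in Hz
    """
    num_mics, n = signals.shape
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n // 2 + 1, dtype=complex)
    for m in range(num_mics):
        # A delay of d seconds is a phase factor exp(-2j*pi*f*d) in the
        # frequency domain; multiplying by its conjugate undoes the delay.
        out += np.fft.rfft(signals[m]) * np.exp(2j * np.pi * freqs * delays[m])
    return np.fft.irfft(out / num_mics, n)

# Example: a tone reaching three mics at slightly different times.
# f = 437.5 Hz gives exactly 28 cycles in a 1024-sample window at 16 kHz.
fs, n, f = 16000, 1024, 437.5
t = np.arange(n) / fs
delays = np.array([0.0, 0.5e-3, 1.1e-3])
signals = np.stack([np.sin(2 * np.pi * f * (t - d)) for d in delays])
enhanced = delay_and_sum(signals, delays, fs)
```

If the assumed delays match the true ones, the aligned copies add coherently and the tone is recovered; when the wearer moves and the true delays drift away from the assumed ones, the copies add out of phase and the target signal is attenuated instead of enhanced, which is exactly the failure mode the paper studies.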
In their paper, “Motion-Tolerant Beamforming with Deformable Microphone Arrays,” Corey and Singer, the Fox Family Professor in Electrical and Computer Engineering, describe the impact of motion on microphone arrays and explore several ways to address the problem. The paper was recognized with a Best Student Paper Award at the 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) in New Paltz, New York.
The paper compared several possible approaches to the problem, including a conventional beamformer that simply ignores deformation and a powerful but cumbersome motion-tracking system. Corey suggested that the best approach might be a hybrid system that uses multiple small, rigid arrays as part of a larger wearable system. On their own, the small arrays would handle high-frequency sounds, whose short wavelengths make them most sensitive to motion; together, the overall deformable array would manage lower-frequency sounds, whose longer wavelengths are more forgiving of small changes in microphone position.
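A hybrid scheme along those lines can be sketched as follows. This is a simplified illustration, not the paper's method: plain averaging stands in for the actual beamformers, the signals are assumed to be delay-compensated already, and the crossover frequency and sub-array grouping are made up for the example.

```python
import numpy as np

def band_split(x, fs, fc):
    """Split a signal into low and high bands at crossover fc via FFT masking.

    The two masks partition the spectrum exactly, so low + high == x.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    low = np.fft.irfft(X * (freqs < fc), len(x))
    high = np.fft.irfft(X * (freqs >= fc), len(x))
    return low, high

def hybrid_beamform(signals, subarrays, fs, fc=1500.0):
    """Hypothetical hybrid: below fc, combine the whole deformable array;
    above fc, combine only within each small rigid sub-array, then mix.

    signals:   (num_mics, num_samples) delay-compensated recordings
    subarrays: list of index lists, one per rigid sub-array
    """
    lows = np.array([band_split(s, fs, fc)[0] for s in signals])
    highs = np.array([band_split(s, fs, fc)[1] for s in signals])
    low_out = lows.mean(axis=0)  # full array handles the low band
    high_out = np.mean([highs[idx].mean(axis=0)  # each rigid sub-array alone
                        for idx in subarrays], axis=0)
    return low_out + high_out

# Example: four mics in two rigid sub-arrays, all hearing the same tone.
fs, n = 16000, 1024
t = np.arange(n) / fs
s = np.sin(2 * np.pi * 300 * t)
signals = np.tile(s, (4, 1))
out = hybrid_beamform(signals, [[0, 1], [2, 3]], fs)
```

The design intuition is that within a rigid sub-array the inter-mic delays stay fixed even as the wearer moves, so short-wavelength (high-frequency) processing stays reliable there, while the body-spanning array contributes where its larger aperture helps and its motion hurts least.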
“This is a hard problem, and my goal with the paper was not to solve it but to understand why it’s so hard and to convince other researchers that it’s important to study,” Corey said. “That’s why I was so thrilled to win this award: it gives me hope that other researchers will help to develop new techniques for deformable microphone arrays that will ultimately lead to better hearing aids for people who need them.”
Corey’s WASPAA presentation included a video of him performing the “Mic-Array-Na” dance while wearing a dozen microphones. To see the video and learn more about the team’s work, head to the Augmented Listening Laboratory blog.