3/23/2020 | Written by Rebecca Gorski, CSL
Trying to hold a conversation at a restaurant while the combination of ambient music, clanking dishes, and chatter from the adjacent table drowns out nearly every word seems to be a universal experience. While noisy environments are challenging for everyone, they are especially difficult for people with hearing loss, and, unfortunately, they are precisely where traditional hearing aids falter most. To address these shortcomings, recent CSL graduate and current postdoctoral fellow Ryan M. Corey, CSL professor Andrew Singer, and their team of students at the Illinois Augmented Listening Laboratory are developing new augmented listening technologies that could help hearing aids perform much better in noisy environments.
The group has already demonstrated large wearable microphone arrays that can reduce background noise in crowded situations. Until recently, however, most of their experiments were on mannequins. To use these devices on real humans, Corey and his team must deal with a new challenge: humans move!
“Our mannequin, Mike A. Ray, is very good at standing still for as long as we need to finish an experiment,” said Corey, “but real humans move constantly, even if they’re trying not to.”
This becomes an issue because digital signal processing filters known as beamformers rely on spatial information, measured as the time delays between a sound’s arrival at different microphones, to separate and enhance sounds. On a rigid device like a smart speaker, those delays never change, so the beamformer is relatively easy to implement. With deformable microphone arrays, however, microphones at different locations on the body are constantly moving relative to one another, so those time delays are always changing.

In their paper, “Motion-Tolerant Beamforming with Deformable Microphone Arrays,” Corey and Singer, the Fox Family Professor in Electrical and Computer Engineering, describe the impact of motion on microphone arrays and explore several ways to address the problem. The paper was recognized with a Best Student Paper Award at the 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) in New Paltz, New York.
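To make the challenge concrete, here is a minimal delay-and-sum beamformer sketch in Python. It is an illustration, not code from the paper; the function name, the signal layout, and the assumption that the per-microphone delays are known are all ours. The sketch time-aligns each channel with a frequency-domain phase shift and averages the results. If the wearer moves and the true delays drift away from the assumed ones, the channels no longer add up coherently and the noise reduction degrades.

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Align each microphone channel by its assumed arrival delay, then average.

    signals: (num_mics, num_samples) array of synchronized recordings
    delays:  per-microphone arrival delays in seconds, relative to a reference
    fs:      sampling rate in Hz
    """
    num_mics, num_samples = signals.shape
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)  # frequency bins in Hz
    total = np.zeros(len(freqs), dtype=complex)
    for m in range(num_mics):
        spectrum = np.fft.rfft(signals[m])
        # Compensate each channel's delay with a frequency-domain phase shift.
        # If the wearer moves, the true delay drifts, this phase becomes wrong,
        # and the channels no longer combine coherently.
        total += spectrum * np.exp(2j * np.pi * freqs * delays[m])
    return np.fft.irfft(total / num_mics, n=num_samples)
```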
The paper compared several possible approaches to the problem, including a conventional beamformer that simply ignores deformation and a powerful but cumbersome motion-tracking system. Corey suggested that the best approach might be a hybrid system that uses multiple small, rigid arrays as part of a larger wearable system. Each small array would handle high-frequency sounds on its own, while the deformable array as a whole would handle lower-frequency sounds, whose longer wavelengths make them far more forgiving of motion-induced delay errors.
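The article does not spell out an implementation, but the hybrid idea can be sketched as a band split. In the Python sketch below, everything is an illustrative placeholder rather than the authors’ method: the crossover frequency, the subarray index lists, the two beamformer callables, and the simple averaging of subarray outputs are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def hybrid_beamform(signals, subarrays, crossover_hz, fs,
                    rigid_beamformer, deformable_beamformer):
    """Band-split sketch: rigid subarrays take the highs, the full array the lows.

    signals:      (num_mics, num_samples) recordings from the whole wearable array
    subarrays:    list of index arrays, one per small rigid subarray
    crossover_hz: frequency dividing the two processing regimes (assumed)
    rigid_beamformer, deformable_beamformer: placeholder callables that each
        map a multichannel array to a single enhanced channel
    """
    low_sos = butter(4, crossover_hz, btype="low", fs=fs, output="sos")
    high_sos = butter(4, crossover_hz, btype="high", fs=fs, output="sos")

    # Low band: the whole deformable array, where delay errors matter less
    low_out = deformable_beamformer(sosfilt(low_sos, signals, axis=-1))

    # High band: each rigid subarray beamforms independently, since its
    # internal geometry never deforms; outputs combined by a simple average
    high_parts = [rigid_beamformer(sosfilt(high_sos, signals[idx], axis=-1))
                  for idx in subarrays]
    high_out = np.mean(high_parts, axis=0)

    return low_out + high_out
```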
Corey’s WASPAA presentation included a video of him performing the “Mic-Array-Na” dance while wearing a dozen microphones. To see the video and learn more about the team’s work, head to the Augmented Listening Laboratory blog.