Sept. 7, 2017
Research at Michigan State University is helping hearing aid devices sort through unwelcome noises
For hearing aid users, a conversation in a busy restaurant often includes a menu of unwelcome sounds. Turn the hearing aid up and the unfiltered environment becomes a symphony of unwanted voices and annoying disturbances.

A team of electrical engineers and computer scientists at Michigan State University has put together a machine learning-based solution that tackles this common headache in real time.
The solution, proposed by Mi Zhang, assistant professor of electrical and computer engineering, along with PhD student Xiao Zeng, research associate Kai Cao, and undergraduate student Haochen Sun, won third place in the National Science Foundation (NSF) Hearables Challenge.
Their solution, “SharpEar: Real-Time Speech Enhancement in Noisy Environments,” was recognized nationally and will be presented at the 2017 ACM UbiComp conference in Maui, Hawaii, Sept. 11-15.
“Among the challenges for people with a hearing impairment is understanding conversation in noisy environments, like restaurants,” Zhang said. “Background noises make following a conversation difficult. The problem is that existing hearing aids are essentially amplifiers that amplify all the sounds – the ones you want and also the ones you don’t want.”
The solution, Zhang said, is to develop a smart hearing aid device that can enhance the clarity of conversation and remove unwanted background sounds.
Xiao Zeng is lead researcher on the project. He has worked and reworked algorithms looking for the right answer.
“The biggest challenge is filtering the sound quickly – in real time. That means processing sound very fast – in about 10 milliseconds. Otherwise, the sound and the speaker’s moving lips don’t sync, which tends to cause dizziness or drowsiness in hearing aid users,” Zeng explained.
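That budget is tight: at a typical speech sampling rate, 10 milliseconds of audio is only a few hundred samples, and each frame must be enhanced before the next one arrives. The sketch below illustrates the constraint; the 16 kHz rate and the simple spectral-gate stand-in for the team’s model are assumptions, not SharpEar’s actual design.

```python
import numpy as np

SAMPLE_RATE = 16_000      # Hz; assumed, typical for speech processing
LATENCY_BUDGET_S = 0.010  # the 10-millisecond bound Zeng describes
FRAME_SAMPLES = int(SAMPLE_RATE * LATENCY_BUDGET_S)  # 160 samples per frame

def process_stream(frames, enhance):
    """Apply an enhancement function frame by frame.

    To stay real-time, `enhance` must return within the 10 ms budget;
    otherwise the output audio lags the speaker's lips.
    """
    for frame in frames:
        yield enhance(frame)

def spectral_gate(frame, floor=0.01):
    """Toy stand-in for a learned model: zero out weak frequency bins."""
    spectrum = np.fft.rfft(frame)
    mask = np.abs(spectrum) > floor * np.abs(spectrum).max()
    return np.fft.irfft(spectrum * mask, n=len(frame))
```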
“Human voices are very close in frequency, which complicates our task. We are having success with a machine learning-based approach and are hoping to work out the subtle frequency differences,” Cao said. “If we can do that, your cell phone should also be able to enhance voices and mitigate other noises.”
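The article does not detail the team’s model, but a common machine-learning formulation of speech enhancement is to predict a time-frequency mask that keeps voice-dominated frequency bins and suppresses noise-dominated ones. The sketch below shows the widely used “ideal ratio mask” training target; it illustrates the general technique, not necessarily SharpEar’s method.

```python
import numpy as np

def ideal_ratio_mask(clean_spectrum, noise_spectrum, eps=1e-8):
    """Per-bin ratio of speech power to total power, in [0, 1].

    Computed from clean/noisy training pairs; a model then learns
    to predict this mask from the noisy input alone.
    """
    clean_power = np.abs(clean_spectrum) ** 2
    noise_power = np.abs(noise_spectrum) ** 2
    return clean_power / (clean_power + noise_power + eps)

def apply_mask(noisy_spectrum, mask):
    """Scale each frequency bin by the predicted mask; the result
    is inverse-transformed back into an enhanced waveform."""
    return noisy_spectrum * mask
```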

Sun, the undergraduate member of the team, is working on miniaturizing the smart hearing aid’s hardware. So far, Sun said, the solution fits in the palm of his hand rather than inside an earbud. He is majoring in electrical and computer engineering.
Zhang added, “The problem given in the NSF Hearables Challenge is very challenging. There is a lot of processing to be done within 10 milliseconds – and thus a lot of computational power needed. Squeezing that computational power into miniature hardware is also not trivial. We’re still developing the solution, but we feel we have a road map for the work ahead.”
ACM UbiComp 2017
The Association for Computing Machinery (ACM) hosts the annual ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp). UbiComp presentations cover the design, development and deployment of ubiquitous and pervasive computing technologies and the understanding of how these technologies affect human experiences.