The system, which is still a prototype, connects off-the-shelf noise-canceling headphones to a smartphone app. The microphones embedded in the headphones, normally used to cancel out noise, are repurposed to detect the sounds in the world around the wearer. That audio is fed to a neural network running on the smartphone, which boosts or suppresses certain sounds in real time according to the user's preferences. The system was developed by researchers at the University of Washington, who presented the research at the ACM Symposium on User Interface Software and Technology (UIST) last week.
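The real-time loop described above can be illustrated with a minimal sketch. Everything here is hypothetical: the class names, the `apply_preferences` function, and the gain-based boost/suppress logic are illustrative stand-ins, not the researchers' actual neural-network pipeline.

```python
# Hypothetical sketch of a "semantic hearing" loop: a classifier labels
# each audio frame, then per-class gains from the user's preferences are
# applied to boost (gain > 1) or suppress (gain < 1) that frame.
# In the real system a trained neural network does the detection and
# separation on the phone; here detection is simply passed in as labels.

def apply_preferences(frame, detected, prefs):
    """Scale an audio frame by the user's per-class gains.

    frame:    list of audio samples for one short time window
    detected: set of sound-class labels found in this frame
    prefs:    dict mapping class label -> gain
              (e.g. 0.0 silences a class, 2.0 boosts it)
    """
    gain = 1.0
    for label in detected:
        gain *= prefs.get(label, 1.0)  # unlisted classes pass through
    return [sample * gain for sample in frame]

# Example: the user wants sirens boosted and traffic noise removed.
prefs = {"siren": 2.0, "traffic": 0.0}
frame = [0.1, -0.2, 0.3]

boosted = apply_preferences(frame, {"siren"}, prefs)
silenced = apply_preferences(frame, {"traffic"}, prefs)
```

In a deployed version, this per-frame scaling would have to run within a few milliseconds so the processed audio stays in sync with what the wearer sees, which is why the network runs on the phone rather than in the cloud.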

The team trained the network on thousands of audio samples from online data sets and sounds collected from various noisy environments. Then they taught it to recognize 20 everyday sounds, such as a thunderstorm, a toilet flushing, or glass breaking.

The system was tested on nine participants, who wandered around offices, parks, and streets. The researchers found that it performed well at muffling and boosting sounds, even in situations it hadn't been trained for. However, it struggled somewhat to separate human speech from background music, especially rap music.

Mimicking human ability

Researchers have long tried to solve the “cocktail party problem”—that is, to get a computer to focus on a single voice in a crowded room, as humans are able to do. This new method represents a significant step forward and demonstrates the technology’s potential, says Marc Delcroix, a senior research scientist at NTT Communication Science Laboratories, Kyoto, who studies speech enhancement and recognition and was not involved in the project. 
