Abstract
We propose and analyze the use of an explicit time-context window in neural network-based spectral-masking speech enhancement to exploit signal-context dependencies between neighboring frames. In particular, we concentrate on soft masking with the loss computed on the time-frequency representation of the reconstructed speech. We show that applying a time-context windowing function at both the input and the output of the neural network improves soft-mask estimation by combining multiple estimates taken from different contexts. The proposed approach is applied only as a post-optimization at inference time, requiring no additional layers or special training of the neural network model. Our results show that the method consistently improves both the intelligibility and the signal quality of the denoised speech, as demonstrated for two classes of convolution-based speech enhancement models. Importantly, the proposed method requires only a negligible (<2%) increase in the number of model parameters, while increasing the number of operations in a non-prohibitive manner, making it suitable for hardware-constrained applications.
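The core idea of combining multiple soft-mask estimates from different time contexts can be illustrated with a minimal sketch. The code below assumes a hypothetical `mask_model` callable that maps a fixed-length chunk of spectrogram frames to a soft mask; the sliding-window overlap-averaging shown here is a simplified stand-in for the paper's windowing functions, not the authors' exact procedure.

```python
import numpy as np

def overlap_average_masks(spectrogram, mask_model, context=5):
    """Combine soft-mask estimates taken from overlapping time-context
    windows by averaging them per frame (illustrative sketch only).

    spectrogram: (freq_bins, frames) magnitude spectrogram
    mask_model:  callable mapping a (freq_bins, context) chunk to a
                 (freq_bins, context) soft mask in [0, 1]
    """
    F, T = spectrogram.shape
    acc = np.zeros((F, T))      # accumulated mask estimates
    counts = np.zeros(T)        # number of estimates covering each frame
    for start in range(T - context + 1):
        chunk = spectrogram[:, start:start + context]
        mask = mask_model(chunk)                 # one estimate per context
        acc[:, start:start + context] += mask
        counts[start:start + context] += 1
    return acc / counts         # per-frame average over all contexts
```

Because each frame receives several estimates (one from every window that covers it), frame-level estimation errors tend to average out, at the cost of running the model once per window position.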
| Field | Value |
|---|---|
| Original language | English |
| Article number | 10721436 |
| Pages (from-to) | 154843-154852 |
| Number of pages | 10 |
| Journal | IEEE Access |
| Volume | 12 |
| DOIs | |
| Publication status | Published - 18 Oct 2024 |
Funding
This work was supported by the Robust AI for SafE (radar) signal processing (RAISE) collaboration framework between Eindhoven University of Technology and NXP Semiconductors, including a Privaat-Publieke Samenwerkingen-toeslag (PPS) supplement from the Dutch Ministry of Economic Affairs and Climate Policy.
Keywords
- audio processing
- neural networks
- noise reduction
- spectral masking
- speech enhancement