Abstract
In psychoacoustics, considerable advances have recently been made in developing computational models that predict the discriminability of two sounds while taking spectro-temporal masking effects into account. These models operate as artificial observers, making predictions about the discriminability of arbitrary signals [e.g., Dau et al., J. Acoust. Soc. Am. 99, 3615-3622, 1996]. Such models can therefore be applied in the context of a perceptual audio coder. A drawback, however, is their computational complexity, especially since the model must evaluate each quantization option separately. In this contribution, a model is introduced and evaluated that is a computationally lighter version of the Dau model while maintaining its essential spectro-temporal masking predictions. Listening test results in a transform-coder setting show that the proposed model outperforms both a conventional, purely spectral masking model and the original model proposed by Dau.
Original language | English |
---|---|
Title of host publication | Proceedings of the 124th Convention of the Audio Engineering Society, May 17-20, 2008, Amsterdam |
Place of Publication | New York |
Publisher | AES |
Pages | Paper nr. 7336 |
Publication status | Published - 2008 |
Event | 124th Convention of the Audio Engineering Society (AES 2008 Convention), Amsterdam, Netherlands |
Duration | 17 May 2008 → 20 May 2008 |
Conference number | 124 |
Conference
Conference | 124th Convention of the Audio Engineering Society (AES 2008 Convention) |
---|---|
Abbreviated title | AES 2008 Convention |
Country/Territory | Netherlands |
City | Amsterdam |
Period | 17/05/08 → 20/05/08 |