A new perceptual model for audio coding based on spectro-temporal masking

A.G. Kohlrausch, J.G.H. Koppens, A.W.J. Oomen, S.L.J.D.E. van de Par

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

In psychoacoustics, considerable advances have been made recently in developing computational models that can predict the discriminability of two sounds, taking into account spectro-temporal masking effects. These models operate as artificial observers by making predictions about the discriminability of arbitrary signals [e.g. Dau et al., J. Acoust. Soc. Am. 99, 3615-3622, 1996]. Therefore, such models can be applied in the context of a perceptual audio coder. A drawback, however, is the computational complexity of such advanced models, especially because the model needs to evaluate each quantization option separately. In this contribution a model is introduced and evaluated that is a computationally lighter version of the Dau model but maintains its essential spectro-temporal masking predictions. Listening test results in a transform coder setting show that the proposed model outperforms a conventional purely spectral masking model and the original model proposed by Dau.
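The abstract describes the perceptual model acting as an "artificial observer" inside a coder: for every quantization option, the model predicts whether the resulting distortion would be discriminable from the original, which is why per-option evaluation dominates the computational cost. The following is a minimal sketch of that evaluation loop, not the paper's actual model: `toy_model` (a bare magnitude spectrum) is a hypothetical stand-in for the Dau-style internal representation, and `pick_quantizer`, `detectability`, and the threshold parameter are illustrative names, not from the source.

```python
import numpy as np

def toy_model(signal):
    # Placeholder "internal representation": the magnitude spectrum.
    # A real spectro-temporal model would instead apply an auditory
    # filterbank, envelope extraction, and adaptation stages.
    return np.abs(np.fft.rfft(signal))

def detectability(original, coded, model):
    # Artificial-observer stand-in: predicted discriminability is taken
    # here as the distance between the two internal representations.
    return np.linalg.norm(model(original) - model(coded))

def pick_quantizer(frame, step_sizes, threshold, model=toy_model):
    # Evaluate each quantization option separately (the costly step the
    # abstract mentions) and keep the coarsest one whose distortion the
    # model predicts to be below the detection threshold.
    best = min(step_sizes)                    # fall back to finest option
    for step in sorted(step_sizes):           # fine -> coarse
        coded = np.round(frame / step) * step
        if detectability(frame, coded, model) <= threshold:
            best = step                       # coarser, still "inaudible"
    return best
```

Because `detectability` is re-run once per candidate step size, a cheaper model directly reduces the cost of this inner loop, which is the motivation the abstract gives for the lighter variant of the Dau model.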
Original language: English
Title of host publication: Proceedings of the 124th Convention of the Audio Engineering Society, May 17-20, 2008, Amsterdam
Place of Publication: New York
Publisher: AES
Pages: paper nr. 7336
Publication status: Published - 2008
Event: 124th Convention of the Audio Engineering Society (AES 2008 Convention) - Amsterdam, Netherlands
Duration: 17 May 2008 - 20 May 2008
Conference number: 124

Conference

Conference: 124th Convention of the Audio Engineering Society (AES 2008 Convention)
Abbreviated title: AES 2008 Convention
Country/Territory: Netherlands
City: Amsterdam
Period: 17/05/08 - 20/05/08
