Acoustic scene classification from few examples

Ivan Bocharov, Tjalling Tjalkens, Bert de Vries

Research output: Conference contribution (chapter in book/report/conference proceeding), academic, peer-reviewed


Abstract

In order to personalize the behavior of hearing aid devices in different acoustic environments, we need to develop personalized acoustic scene classifiers. Since we cannot afford to burden an individual hearing aid user with the task of collecting a large acoustic database, we aim instead to train a scene classifier on just one (or at most a few) in-situ recorded acoustic waveforms of a few seconds duration per scene. In this paper we develop such a "one-shot" personalized scene classifier, based on a hidden semi-Markov model. The presented classifier consistently outperforms a more classical dynamic time warping nearest-neighbor classifier, and correctly classifies acoustic scenes about twice as well as a (random) chance classifier after training on just one recording of 10 seconds duration per scene.
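The abstract describes the classification scheme only at a high level: fit one generative sequence model per scene on a single short recording, then assign a new recording to the scene whose model explains it best. Below is a minimal sketch of that idea, not the authors' implementation. It substitutes a plain Gaussian HMM (from hmmlearn) for the paper's hidden semi-Markov model, assumes MFCC features computed with librosa, and uses hypothetical file names; all of these are assumptions for illustration only.

```python
# Minimal sketch (NOT the authors' method): one-shot scene classification by
# fitting one generative sequence model per scene on a single recording and
# classifying new recordings by maximum log-likelihood. The paper uses a hidden
# semi-Markov model; a plain GaussianHMM is used here only as a stand-in.
import librosa
import numpy as np
from hmmlearn.hmm import GaussianHMM

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load a waveform and return an (n_frames, n_mfcc) MFCC feature matrix."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_one_shot(scene_to_path, n_states=5):
    """Fit one model per scene from a single training recording each."""
    models = {}
    for scene, path in scene_to_path.items():
        X = mfcc_features(path)
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X)  # one ~10 s recording per scene, as in the one-shot setting
        models[scene] = m
    return models

def classify(models, path):
    """Return the scene whose model assigns the highest log-likelihood."""
    X = mfcc_features(path)
    return max(models, key=lambda s: models[s].score(X))

# Example usage (hypothetical file names):
# models = train_one_shot({"street": "street.wav", "cafe": "cafe.wav"})
# print(classify(models, "unknown.wav"))
```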

Original language: English
Title of host publication: 2018 26th European Signal Processing Conference, EUSIPCO 2018
Place of publication: Piscataway
Publisher: Institute of Electrical and Electronics Engineers
Pages: 862-866
Number of pages: 5
Volume: 2018-September
ISBN (Electronic): 9789082797015
ISBN (Print): 978-90-827970-1-5
DOIs
Publication status: Published - Sept 2018
Event: 26th European Signal Processing Conference, EUSIPCO 2018 - Rome, Italy
Duration: 3 Sept 2018 - 7 Sept 2018

Conference

Conference: 26th European Signal Processing Conference, EUSIPCO 2018
Abbreviated title: EUSIPCO 2018
Country/Territory: Italy
City: Rome
Period: 3/09/18 - 7/09/18
