AIDA: An Active Inference-based Design Agent for Audio Processing Algorithms

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

In this paper we present AIDA, an active inference-based agent that iteratively designs a personalized audio processing algorithm through situated interactions with a human client. The target application of AIDA is to propose, on the spot, the most interesting alternative values for the tuning parameters of a hearing aid (HA) algorithm whenever an HA client is not satisfied with their HA performance. AIDA interprets the search for the "most interesting alternative" as a problem of optimal (acoustic) context-aware Bayesian trial design. In computational terms, AIDA is realized as an active inference-based agent with an Expected Free Energy criterion for trial design. This type of architecture is inspired by neuro-economic models of efficient (Bayesian) trial design in brains and implies that AIDA comprises generative probabilistic models for acoustic signals and user responses. We propose a novel generative model for acoustic signals as a sum of time-varying auto-regressive filters and a user response model based on a Gaussian Process classifier. The full AIDA agent has been implemented as a factor graph representation of the generative model, and all tasks (parameter learning, acoustic context classification, trial design, etc.) are realized by variational message passing on this factor graph. All verification and validation experiments and demonstrations are freely accessible at our GitHub repository.
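The abstract models the acoustic signal as a sum of time-varying auto-regressive (TVAR) filters. The sketch below is a minimal, illustrative simulation of that additive TVAR structure in Python; the model orders, the random-walk coefficient dynamics, the stability guard, and the names (simulate_tvar, speech_like, noise_like) are assumptions made for illustration. The paper's actual implementation is a factor graph with variational message passing, available in the authors' GitHub repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tvar(n_samples, order, coef_drift=1e-3, noise_std=0.1):
    """Simulate one TVAR source whose AR coefficients follow a slow random walk."""
    a = 0.1 * rng.standard_normal(order)  # initial AR coefficients (illustrative)
    x = np.zeros(n_samples)
    for t in range(order, n_samples):
        a = a + coef_drift * rng.standard_normal(order)  # slowly drifting coefficients
        l1 = np.sum(np.abs(a))
        if l1 > 0.95:                                    # crude stability guard (assumption)
            a *= 0.95 / l1
        # AR recursion: newest sample first, matching the coefficient ordering above.
        x[t] = a @ x[t - order:t][::-1] + noise_std * rng.standard_normal()
    return x

n = 16_000
speech_like = simulate_tvar(n, order=8, noise_std=0.2)   # hypothetical "target" source
noise_like = simulate_tvar(n, order=2, noise_std=0.05)   # hypothetical "background" source
observed = speech_like + noise_like                      # additive TVAR mixture, as in the abstract
print(observed[:5])
```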
Original language: English
Article number: 842477
Number of pages: 20
Journal: Frontiers in Signal Processing
Volume: 2
DOIs
Publication status: Published - 7 Mar 2022
