Automatic impact sound generation for using in non-visual interfaces

A. Darvishi, E. Munteanu, V. Guggiana, H. Schauer, M. Motavalli, G.W.M. Rauterberg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

This paper describes work in progress on the automatic generation of "impact sounds" based on purely physical modelling. These sounds can be used as non-speech audio presentations of objects and as interaction mechanisms in non-visual interfaces. Different approaches to synthesizing impact sounds, the process of recording impact sounds, and the analysis of impact sounds are introduced. A physical model describing the impact sounds of spherical objects hitting flat plates or beams is presented. Some examples of impact sounds generated by the physical model, and a comparison of the spectra of real recorded sounds with those generated via physical modelling, are discussed. The objective of this research project is to develop a concept, methods and a prototype for an audio framework. This audio framework will describe sounds on a highly abstract semantic level. Every sound is to be described as the result of one or several interactions between one or several objects at a certain place and in a certain environment.
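The abstract does not reproduce the paper's actual physical model, but the general approach it describes, deriving an audible impact sound from a physical description of a struck object, is commonly realized as modal synthesis: the sound is a sum of exponentially damped sinusoids whose frequencies and decay rates would, in a full physical model, follow from the geometry and material of the plate or beam. The sketch below is only an illustration of that idea; the function name and the example mode values are hypothetical and not taken from the paper.

```python
import math

def impact_sound(modes, duration=0.5, sample_rate=44100):
    """Synthesize an impact sound as a sum of exponentially damped
    sinusoids (modal synthesis).

    `modes` is a list of (frequency_hz, amplitude, decay_rate) triples.
    Here they are supplied by hand; a physical model would derive them
    from the struck object's shape, material and the impact parameters.
    """
    n = int(duration * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, a, d in modes)
        samples.append(s)
    return samples

# Hypothetical modes for a small struck plate (not from the paper):
modes = [(440.0, 1.0, 8.0), (1120.0, 0.5, 14.0), (2310.0, 0.25, 25.0)]
sound = impact_sound(modes)
```

Writing `sound` to a WAV file (after normalization) yields a short, bell-like "clink"; raising the decay rates shortens the ring, which is one way such a model can convey a harder or more damped material.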
Original language: English
Title of host publication: Assets ’94: the First Annual ACM Conference on Assistive Technologies: October 31 - November 1, 1994, Los Angeles, California
Place of publication: New York, NY, USA
Publisher: Association for Computing Machinery, Inc.
Pages: 100-106
ISBN (Print): 0897916492
Publication status: Published - 1994
