Synthesizing non-speech sound to support blind and visually impaired computer users

A. Darvishi, V. Guggiana, E. Munteanu, H. Schauer, M. Motavalli, G.W.M. Rauterberg

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


    Abstract

    This paper describes work in progress on the automatic generation of "impact sounds" based on physical modelling. These sounds can be used as non-speech audio presentations of objects and as interaction mechanisms in non-visual interfaces. In particular, we present the complete physical model for the impact sounds of spherical objects hitting flat plates or beams. The results of analysing some examples of recorded (digitised) impact sounds, and their comparison with theoretical predictions, are also discussed. These results will serve as input for the next phases of our audio framework project. The objective of this research project (a joint project of the University of Zurich and the Swiss Federal Institute of Technology) is to develop a concept, methods and a prototype for an audio framework. This audio framework shall describe sounds on a highly abstract semantic level. Every sound is to be described as the result of one or several interactions between one or several objects at a certain place and in a certain environment.
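    The abstract's physical modelling of impact sounds is commonly realised as modal synthesis: a struck plate rings at a set of vibrational modes, each an exponentially damped sinusoid. The sketch below illustrates that general idea; the mode data (`plate_modes`) and the function name are hypothetical and are not taken from the paper's own model.

    ```python
    import math

    def impact_sound(modes, sample_rate=44100, duration=0.5):
        """Synthesize an impact sound as a sum of exponentially damped
        sinusoids, one per vibrational mode of the struck object.
        `modes` is a list of (frequency_hz, amplitude, decay_rate) tuples."""
        n = int(sample_rate * duration)
        samples = []
        for i in range(n):
            t = i / sample_rate
            s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                    for f, a, d in modes)
            samples.append(s)
        return samples

    # Hypothetical modal data for a small plate struck by a sphere:
    # each mode decays faster at higher frequency, as in real plates.
    plate_modes = [(440.0, 1.0, 8.0), (1120.0, 0.5, 14.0), (2350.0, 0.25, 25.0)]
    sound = impact_sound(plate_modes)
    ```

    Varying the modal frequencies and decay rates with the plate's material and geometry is what lets such a model convey object properties non-visually.
    
    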
    Original language: English
    Title of host publication: ICCHP: International Conference on Computers for Handicapped Persons: proceedings, 4th, Vienna, Austria, September 14-16, 1994
    Editors: W.L. Zagler, G. Busby, R. Wagner
    Place of publication: Berlin
    Publisher: Springer
    Pages: 385-393
    ISBN (Print): 3540584765
    Publication status: Published - 1994

    Publication series

    Name: Lecture Notes in Computer Science
    Volume: 860
    ISSN (Print): 0302-9743

