Modeling multimodal human-computer interaction

  • Z. Obrenovic
  • D. Starcevic

    Research output: Contribution to journal › Article › Academic › peer-review

    97 Citations (Scopus)
    594 Downloads (Pure)

    Abstract

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range of software engineers. Multimodal interaction is part of everyday human discourse: we speak, move, gesture, and shift our gaze in an effective flow of communication. Recent initiatives such as perceptual and attentive user interfaces put these natural human behaviors at the center of human-computer interaction (HCI). We've designed a generic modeling framework for specifying multimodal HCI using the Object Management Group's Unified Modeling Language. Because it's a well-known and widely supported standard - computer science departments typically cover it in undergraduate courses, and many books, training courses, and tools support it - UML makes it easier for software engineers unfamiliar with multimodal research to apply HCI knowledge, resulting in broader and more practical effects. Standardization provides a significant driving force for further progress because it codifies best practices, enables and encourages reuse, and facilitates interworking between complementary tools.
    Original language: English
    Pages (from-to): 65-72
    Number of pages: 8
    Journal: Computer
    Volume: 37
    Issue number: 9
    Publication status: Published - 2004
