Visual priming to improve keyword detection in free speech dialogue

C. Qu, W.P. Brinkman, P. Wiggers, I.E.J. Heynderickx

Research output: Contribution to conference › Abstract › Academic

1 Citation (Scopus)


Motivation - Talking out loud with synthetic characters in a virtual world is currently considered as a treatment for patients with social phobia. Using keyword detection instead of full speech recognition makes the system more robust. It is therefore important to increase the chance that users utter specific keywords during their conversation.

Research approach - A two-by-two experiment in which participants (n = 20) were asked to answer a number of open questions. Prior to the session, participants watched either priming videos or unrelated videos. Furthermore, during the session they could see either priming pictures or unrelated pictures on a whiteboard behind the person who asked the questions.

Findings/Design - Initial results suggest that participants mention specific keywords in their answers more often when they see priming pictures or videos rather than unrelated pictures or videos.

Research limitations/Implications - If visual priming in the background can increase the chance that people use specific keywords in their discussion with a dialogue partner, it might be possible to create dialogues in a virtual environment which users perceive as natural.

Take-away message - Visual priming might be able to steer people's answers in a dialogue.
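The robustness argument rests on keyword spotting: the system only needs to notice whether any target word occurs in an utterance, not transcribe the whole answer. A minimal sketch of that idea, assuming the speech has already been transcribed to text; the function name and keyword list are illustrative and not taken from the study:

```python
import re

def detect_keywords(utterance, keywords):
    """Return the target keywords that occur in a free-speech utterance.

    Whole-word, case-insensitive matching on a transcript: only the
    presence of a few target words matters, so small recognition errors
    elsewhere in the utterance do not affect the result.
    """
    tokens = set(re.findall(r"[a-z']+", utterance.lower()))
    return [kw for kw in keywords if kw.lower() in tokens]

# Example: an answer to an open question after visual priming.
answer = "Well, I took the train to the beach and we had ice cream."
print(detect_keywords(answer, ["beach", "train", "museum"]))
# → ['beach', 'train']
```

Visual priming then raises the probability that a participant's free answer contains one of the listed keywords at all, which is what makes this simple detector workable in a dialogue.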
Original language: English
Number of pages: 2
Publication status: Published - 2010
Externally published: Yes
Event: 28th European Conference on Cognitive Ergonomics (ECCE 2010) - Delft, Netherlands
Duration: 25 Aug 2010 – 27 Aug 2010
Conference number: 28


Conference: 28th European Conference on Cognitive Ergonomics (ECCE 2010)
Abbreviated title: ECCE 2010


