Task Classification Model for Visual Fixation, Exploration, and Search

Ayush Kumar, Anjul Tyagi, Michael Burch, Daniel Weiskopf, Klaus Mueller

Research output: Contribution to journal › Article › Academic

Abstract

Yarbus' claim that an observer's task can be decoded from their eye movements has received mixed reactions. In this paper, we support the hypothesis that the task can indeed be decoded. We conducted an exploratory analysis of the dataset by projecting features and data points into scatter plots to visualize the nuanced properties of each task. Following this analysis, we eliminated highly correlated features before training an SVM and an AdaBoost classifier to predict the tasks from the filtered eye movement data. We achieve an accuracy of 95.4% on this task classification problem and thus support the hypothesis that task classification is possible from a user's eye movement data.
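The abstract does not include code, but the pipeline it describes (removing highly correlated features, then training SVM and AdaBoost classifiers) can be illustrated with a minimal scikit-learn sketch. The correlation threshold, synthetic data, and feature dimensions below are assumptions for illustration only; the paper's actual eye-movement features and hyperparameters are in the arXiv preprint.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def drop_correlated_features(X, threshold=0.9):
    """Drop one feature from each pair whose absolute Pearson
    correlation exceeds the threshold (threshold is assumed)."""
    corr = np.corrcoef(X, rowvar=False)
    n = corr.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if keep[i] and keep[j] and abs(corr[i, j]) > threshold:
                keep[j] = False
    return X[:, keep], keep


# Hypothetical stand-in data: rows are trials, columns are
# eye-movement features (e.g., fixation durations, saccade
# amplitudes); y holds the task label (0 = fixation,
# 1 = exploration, 2 = search).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = rng.integers(0, 3, size=300)

X_filtered, _ = drop_correlated_features(X)

# Evaluate both classifiers the abstract mentions with 5-fold CV.
for name, clf in [
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
    ("AdaBoost", AdaBoostClassifier(n_estimators=100)),
]:
    scores = cross_val_score(clf, X_filtered, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

With the random data above, accuracy hovers near chance; the 95.4% reported in the paper comes from the real eye-movement dataset and the authors' feature set, which this sketch does not reproduce.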
Original language: English
Article number: 1907.12635
Journal: arXiv.org, e-Print Archive, Mathematics
Volume: 2019
Publication status: Published - 29 Jul 2019

Keywords

  • Machine Learning
