Abstract
The evaluation of visualization methods or designs often relies on user studies. Apart from the difficulties involved in designing the study itself, the mechanisms for drawing sound conclusions from it are often unclear. In this work, we review and summarize common statistical techniques for validating a claim, i.e., hypothesis testing, in the scenarios that typically arise in user studies in visualization. Usually, the number of participants is small and the mean and variance of the underlying distribution are unknown, so we focus on techniques that are adequate under these limitations. Our aim in this paper is to clarify the goals and limitations of hypothesis testing from a user study perspective, which we believe is of interest to the visualization community. We give an overview of the most common mistakes made when testing a hypothesis that can lead to erroneous claims, and we present strategies to avoid them.
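To illustrate the small-sample, unknown-variance setting the abstract describes, the sketch below applies Welch's two-sample t-test, one common choice under exactly these constraints, to hypothetical task-completion times for two visualization designs. The data values and variable names are invented for illustration and do not come from the paper.

```python
# Minimal sketch (assumed scenario): comparing task-completion times for
# two visualization designs in a small user study. Welch's t-test is used
# because it does not assume equal (or known) variances in the two groups.
from scipy import stats

# Hypothetical measurements in seconds, n = 6 participants per condition.
times_design_a = [12.1, 9.8, 11.4, 13.0, 10.6, 12.7]
times_design_b = [10.2, 8.9, 9.5, 11.1, 9.0, 10.4]

# equal_var=False selects Welch's variant of the independent t-test.
t_stat, p_value = stats.ttest_ind(times_design_a, times_design_b,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Welch's variant is chosen here because it does not require the group variances to be equal, which matches the abstract's caveat that the variance of the distribution is not known.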
Original language | English |
---|---|
Title of host publication | EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3) |
Editors | Kai Lawonn, Noeska Smit, Douglas Cunningham |
Place of Publication | New York |
Publisher | Association for Computing Machinery, Inc |
Pages | 25-28 |
Number of pages | 4 |
ISBN (Electronic) | 978-3-03868-041-3 |
DOIs | |
Publication status | Published - 1 Jun 2017 |
Event | Workshop on Reproducibility, Verification, and Validation in Visualization, Brno, Czech Republic. Duration: 12 Jun 2017 → 13 Jun 2017 |
Conference
Conference | Workshop on Reproducibility, Verification, and Validation in Visualization |
---|---|
Country/Territory | Czech Republic |
City | Brno |
Period | 12/06/17 → 13/06/17 |