Investigating the Efficacy of Crowdsourcing on Evaluating Visual Decision Supporting System

Abstract

Crowdsourcing has recently become a popular substitute for time-consuming and expensive human-subject studies, but its application is generally limited to simple, short-term experimental tasks, such as testing visual perception. The goal of this study is to test whether crowdsourcing is applicable to more complicated user studies. Thus, we replicated a controlled lab study of decision-making tasks with different sorting techniques using crowdsourcing. A total of 98 participants were recruited via the Amazon Mechanical Turk service and participated in the study remotely through web interfaces. Experimental results indicate that the performance measures from our crowdsourcing experiment were not exactly equivalent to those from the lab experiment. However, we identified potential sources of these problems, which could be addressed to make crowdsourcing experiments more viable.

Publication
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Date