While traditional usability testing methods can be both time-consuming and expensive, tools for automated usability evaluation tend to oversimplify the problem by supporting only certain evaluation criteria, settings, tasks and scenarios. CrowdStudy is a general web toolkit that combines support for automated usability testing with crowdsourcing in order to facilitate large-scale online user testing. CrowdStudy builds on existing crowdsourcing techniques for recruiting workers and guiding them through complex tasks, but implements mechanisms specifically designed for usability studies that allow testers to control user sampling and conduct evaluations for particular contexts of use. The toolkit provides support for context-aware data collection and analysis based on an extensible set of metrics, as well as tools for managing, reviewing and analysing the collected data.
[Figure: architecture and main components of CrowdStudy]
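To give a flavour of the extensible metrics and context-controlled sampling described above, the following TypeScript sketch shows how a custom usability metric and a worker context filter might look. All identifiers here are hypothetical and invented purely for illustration; they do not reflect the actual CrowdStudy API, which is described in the papers below.

// Hypothetical sketch only: these types and names are not part of the
// real CrowdStudy API; they merely illustrate the concepts of an
// extensible metric and context-aware user sampling.

// Low-level interaction events collected during a task.
interface InteractionEvent {
  type: string;        // e.g. "click", "scroll", "keypress"
  timestamp: number;   // milliseconds since task start
  target: string;      // CSS selector of the affected element
}

// A usability metric reduces an event stream to a single score.
interface UsabilityMetric {
  name: string;
  compute(events: InteractionEvent[]): number;
}

// Example metric: mean time between consecutive clicks,
// a rough proxy for hesitation during a task.
const interClickTime: UsabilityMetric = {
  name: "inter-click-time",
  compute(events) {
    const clicks = events.filter(e => e.type === "click");
    if (clicks.length < 2) return 0;
    let total = 0;
    for (let i = 1; i < clicks.length; i++) {
      total += clicks[i].timestamp - clicks[i - 1].timestamp;
    }
    return total / (clicks.length - 1);
  },
};

// Context filters restrict which crowd workers qualify for a study,
// e.g. by browser, screen size or locale.
interface WorkerContext {
  browser: string;
  screenWidth: number;
  locale: string;
}

type ContextFilter = (ctx: WorkerContext) => boolean;

// Sample only workers on tablet-sized screens (hypothetical criterion).
const tabletUsers: ContextFilter = ctx =>
  ctx.screenWidth >= 600 && ctx.screenWidth <= 1280;

// A study configuration bundles task pages, metrics and sampling rules.
const study = {
  tasks: ["https://example.org/task1"],
  metrics: [interClickTime],
  filters: [tabletUsers],
};

In such a design, new metrics and filters can be added without changing the toolkit core, which is the kind of extensibility the paragraph above refers to.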
Our EICS 2013 full paper demonstrates several features of CrowdStudy in two different scenarios and discusses the benefits and tradeoffs of crowdsourced evaluation.
The main components of CrowdStudy and the user evaluations we have conducted with the help of the toolkit are described in the following paper:
Michael Nebeling, Maximilian Speicher and Moira C. Norrie:
CrowdStudy: General Toolkit for Crowdsourced Evaluation of Web Interfaces
Proc. 5th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS 2013), London, England, June 2013.
Slides | Demo
A first version of CrowdStudy was described in our ICWE 2012 demo paper:
Michael Nebeling, Maximilian Speicher, Michael Grossniklaus and Moira C. Norrie:
Crowdsourced Web Site Evaluation with CrowdStudy
Proc. 12th Intl. Conf. on Web Engineering (ICWE 2012 Demos), Berlin, Germany, July 2012.
Should you have any questions or comments, please feel free to contact Michael Nebeling.