CrowdStudy – Crowdsourced Evaluation of Web Interfaces

While traditional usability testing methods can be both time-consuming and expensive, tools for automated usability evaluation tend to oversimplify the problem by limiting themselves to certain evaluation criteria, settings, tasks and scenarios. CrowdStudy is a general web toolkit that combines support for automated usability testing with crowdsourcing to facilitate large-scale online user testing. CrowdStudy builds on existing crowdsourcing techniques for recruiting workers and guiding them through complex tasks, but implements mechanisms specifically designed for usability studies, allowing testers to control user sampling and conduct evaluations for particular contexts of use. The toolkit provides context-aware data collection and analysis based on an extensible set of metrics, as well as tools for managing, reviewing and analysing the collected data.
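To make the notion of context-aware data collection with an extensible set of metrics more concrete, the following is a minimal sketch of the kind of client-side instrumentation such a toolkit relies on. All names here (UsabilityLogger, Metric, timeOnTask, clickCount) are hypothetical illustrations, not CrowdStudy's actual API; the sketch only shows how pluggable metrics and context capture can fit together.

```ts
// Hypothetical sketch; none of these names come from CrowdStudy itself.

// An event logged on the page under test.
interface LoggedEvent {
  type: string;      // e.g. "click", "keydown"
  target: string;    // tag name of the event target
  timestamp: number; // ms since logging started
}

// A usability metric maps the event log to a single value.
interface Metric {
  name: string;
  compute(events: LoggedEvent[]): number;
}

// Example metric: time from first to last event, a rough proxy
// for task-completion time.
const timeOnTask: Metric = {
  name: "timeOnTask",
  compute: (events) =>
    events.length > 0
      ? events[events.length - 1].timestamp - events[0].timestamp
      : 0,
};

// Example metric: number of clicks, a rough proxy for interaction effort.
const clickCount: Metric = {
  name: "clickCount",
  compute: (events) => events.filter((e) => e.type === "click").length,
};

// Minimal logger: records events on the tested page and bundles the
// computed metric values with the context of use for later analysis.
class UsabilityLogger {
  private events: LoggedEvent[] = [];
  private start = performance.now();

  constructor(private metrics: Metric[]) {
    for (const type of ["click", "keydown"]) {
      document.addEventListener(type, (e) => this.record(type, e));
    }
  }

  private record(type: string, e: Event): void {
    this.events.push({
      type,
      target: (e.target as Element)?.tagName ?? "unknown",
      timestamp: performance.now() - this.start,
    });
  }

  report(): Record<string, unknown> {
    const values: Record<string, number> = {};
    for (const m of this.metrics) values[m.name] = m.compute(this.events);
    return {
      context: {
        userAgent: navigator.userAgent,
        viewport: { width: window.innerWidth, height: window.innerHeight },
      },
      metrics: values,
    };
  }
}

// Usage: instrument a page under test with two metrics.
const logger = new UsabilityLogger([timeOnTask, clickCount]);
// ...after the participant finishes the task:
// console.log(JSON.stringify(logger.report()));
```

Treating each metric as a pure function over the event log is one way to keep the metric set extensible: new evaluation criteria can be added without changing the logging code.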

Figure: The architecture and main components of CrowdStudy.

A first version of CrowdStudy was demonstrated at ICWE 2012.

Our EICS 2013 full paper presents several of CrowdStudy's features in two different scenarios and discusses the benefits and trade-offs of crowdsourced evaluation.

Publications and Supplementary Materials

Contact

Should you have any questions or comments, please feel free to contact Michael Nebeling.