This paper discusses several technical challenges in using crowdsourcing for distributed correction interfaces. The specific scenario under investigation is the implementation of a crowdsourced adaptive optical music recognition (OMR) system within the Single Interface for Music Score Searching and Analysis (SIMSSA) project. We envisage distributing correction tasks beyond a single workstation to potentially thousands of users around the globe. This will produce human-checked transcriptions, as well as significant quantities of human-provided ground-truth data, which may be re-integrated into an adaptive recognition process, allowing an OMR system to "learn" from its mistakes. Drawing on existing crowdsourcing approaches and user interfaces in music (e.g., the Bodleian Libraries) and non-music (e.g., CAPTCHAs) applications, this project aims to develop a scientific understanding of what makes crowdsourcing work, how to entice, engage, and reward contributors, and how to evaluate their reliability. While the results will be assessed against the specific needs of SIMSSA, such knowledge can benefit a variety of musicological investigations that rely on labour-intensive methods.