Paper A-3

A Front End for Adaptive Online Listening Tests

Johan Pauwels (Queen Mary University of London)*; Simon Dixon (Queen Mary University of London); Joshua D. Reiss (Queen Mary University of London)

Abstract

A number of tools to create online listening tests are currently available. They provide an integrated platform consisting of a user-facing front end and a back end that collects responses. Such platforms offer an out-of-the-box solution for setting up static listening tests, in which questions and audio stimuli remain unchanged and user-independent. In this paper, we detail the changes we made to the webMUSHRA platform to convert it into a front end for adaptive online listening tests. Among the more advanced workflows that can be built around this front end are session management to resume interrupted listening tests, server-based sampling of stimuli to enforce a certain distribution over all participants, and follow-up questions based on previous responses. The back ends required for such workflows need substantial customisation based on the exact listening test specification, and are therefore deemed out of scope for this project. Consequently, the proposed front end is not meant as a replacement for the existing webMUSHRA platform, but as a starting point for creating custom listening tests. Nonetheless, a fair number of the proposed changes are also beneficial for the creation of static listening tests.
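To illustrate the kind of server-based stimulus sampling mentioned above, the sketch below shows one way a custom back end might keep responses evenly distributed across stimuli: each participant is served the least-answered stimulus they have not yet heard. All names and the balancing strategy are illustrative assumptions, not part of webMUSHRA or the authors' implementation.

```python
import random
from collections import Counter

def sample_stimulus(stimuli, response_counts, already_seen):
    """Return the least-answered stimulus the participant has not seen yet.

    Hypothetical helper: one simple policy for enforcing an even
    distribution of responses over all participants.
    """
    candidates = [s for s in stimuli if s not in already_seen]
    if not candidates:
        return None  # participant has completed every stimulus
    fewest = min(response_counts[s] for s in candidates)
    # Break ties randomly among the least-answered candidates
    return random.choice([s for s in candidates if response_counts[s] == fewest])

stimuli = ["mix_a.wav", "mix_b.wav", "mix_c.wav"]
counts = Counter({"mix_a.wav": 5, "mix_b.wav": 2, "mix_c.wav": 2})
next_stimulus = sample_stimulus(stimuli, counts, already_seen={"mix_c.wav"})
# next_stimulus is "mix_b.wav": the least-answered stimulus not yet seen
```

In a real deployment the response counts would live in the back end's database and be updated as submissions arrive, which is exactly the per-test customisation that falls outside the scope of the front end described here.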