Conducting Mission-Critical Voice Experiments with Automated Speech Recognition and Crowdsourcing
Jan Janak, Kahlil Dozier, Lauren Berny, Liang Hu, Dan Rubenstein, Charles Jennings, Henning Schulzrinne
Published: 2025/9/17
Abstract
Mission-critical voice (MCV) communications systems have been a critical tool for the public safety community for over eight decades. Public safety users expect MCV systems to operate reliably and consistently, particularly in challenging conditions. Because of these expectations, the Public Safety Communications Research (PSCR) Division of the National Institute of Standards and Technology (NIST) has been interested in correlating impairments in MCV communication systems with public safety users' quality of experience (QoE). Previous research has studied MCV voice quality and intelligibility in controlled environments, but such research has been limited by the challenges inherent in emulating real-world environmental conditions. There is also the question of which metric best reflects QoE. This paper describes our efforts to develop the methodology and tools for human-subject experiments with MCV and illustrates their use in human-subject experiments in emulated real-world environments. The tools include a testbed for emulating real-world MCV systems and an automated speech recognition (ASR) robot that approximates human subjects in transcription tasks. We evaluate QoE through a Levenshtein distance-based metric, arguing that it is a suitable proxy for comprehension and, therefore, for QoE. We conducted human-subject studies with Amazon MTurk volunteers to understand the influence of selected system parameters and impairments on human-subject performance and end-user QoE. We also compare the performance of several ASR system configurations with human-subject performance. We find that humans generally perform better than ASR in accuracy-related MCV tasks and that the codec significantly influences both end-user QoE and ASR performance.
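To make the metric concrete, the sketch below shows one plausible way a Levenshtein distance-based transcription score could be computed. It is an illustrative assumption, not the paper's exact definition: it operates at the word level and normalizes the edit distance by the reference length, so 1.0 indicates a perfect transcription.

```python
# Minimal sketch of a Levenshtein-distance-based transcription score.
# Assumptions (not taken from the paper): word-level edits, distance
# normalized by the reference length, case-insensitive comparison.

def levenshtein(ref_tokens, hyp_tokens):
    """Classic dynamic-programming edit distance over token sequences."""
    m, n = len(ref_tokens), len(hyp_tokens)
    prev = list(range(n + 1))  # distances for the empty-reference prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ref_tokens[i - 1] == hyp_tokens[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution / match
        prev = curr
    return prev[n]

def transcription_score(reference, hypothesis):
    """Normalized similarity in [0, 1]: 1 - edit_distance / reference_length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    if not ref:
        return 1.0 if not hyp else 0.0
    return max(0.0, 1.0 - levenshtein(ref, hyp) / len(ref))

if __name__ == "__main__":
    # Hypothetical MCV-style transmission and a transcription of it.
    ref = "engine seven respond to structure fire at 120 main street"
    hyp = "engine eleven respond to structure fire at main street"
    print(f"score = {transcription_score(ref, hyp):.2f}")  # -> score = 0.80
```

The same score can be applied to human and ASR transcriptions alike, which is what allows the two to be compared on a common scale.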