Heard-Text Recall and Listening Effort under Irrelevant Speech and Pseudo-Speech in Virtual Reality


Cosima A. Ermert, Sabine Janina Schlittmeier, Andrea Bönsch, Torsten Wolfgang Kuhlen, Janina Fels
to be published in: Acta Acustica united with Acustica

Introduction: Verbal communication depends on a listener’s ability to accurately comprehend and recall information conveyed in a conversation. The heard-text recall (HTR) paradigm can be used in a dual-task design to assess both memory performance and listening effort. In contrast to traditional tasks such as serial recall, this paradigm uses running speech to simulate a conversation between two talkers. It thereby allows for talker visualization in virtual reality (VR), providing co-verbal visual cues such as lip movements, turn-taking cues, and gaze behavior. While this paradigm has been investigated under pink noise, the impact of more realistic irrelevant stimuli, such as speech, which, unlike noise, contains temporal fluctuations and meaning, remains unexplored. Methods: In this study (N = 24), the HTR task was administered in an immersive VR environment under three background conditions: silence, pseudo-speech, and speech. A vibrotactile secondary task was administered to quantify listening effort. Results: The results indicate an effect of irrelevant speech on memory and speech comprehension as well as on secondary-task performance, with a stronger impact of speech relative to pseudo-speech. Discussion: The study validates the sensitivity of the HTR paradigm in a dual-task design to background speech stimuli and highlights the relevance of linguistic interference-by-process for listening effort, speech comprehension, and memory.




