Abstract
As a neurophysiological response to threat or adverse conditions, stress can affect cognition, emotion and behaviour, with potentially detrimental effects on health under sustained exposure. Since the affective content of speech is inherently modulated by an individual's physical and mental state, a substantial body of research has been devoted to the study of paralinguistic correlates of stress-inducing task load. Historically, voice stress analysis (VSA) has been conducted using conventional digital signal processing (DSP) techniques. Despite the development of modern methods based on deep neural networks (DNNs), accurately detecting stress in speech remains difficult due to the wide variety of stressors and the considerable variability in individual stress perception. To address this, we introduce a set of five datasets for task load detection in speech. The voice recordings were collected while either cognitive or physical stress was induced in a cohort of volunteers, comprising more than a hundred speakers in total. We used the datasets to design and evaluate a novel self-supervised audio representation that combines the effectiveness of handcrafted (DSP-based) features with the expressive power of data-driven DNN representations. Notably, the proposed approach outperformed both extensive handcrafted feature sets and recent DNN-based audio representation learning approaches.
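To give a concrete sense of what "handcrafted (DSP-based) features" means in this context, the sketch below computes two classic frame-wise descriptors, RMS energy and zero-crossing rate, with plain numpy. This is an illustrative example only; the abstract does not specify the paper's actual feature set, so the function name, frame parameters, and choice of descriptors are assumptions.

```python
import numpy as np

def handcrafted_features(signal, frame_len=512, hop=256):
    """Frame-wise RMS energy and zero-crossing rate: two classic
    DSP-based descriptors commonly used in paralinguistic analysis.
    Illustrative only -- not the feature set from the paper."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # RMS energy: a simple proxy for loudness.
        rms = np.sqrt(np.mean(frame ** 2))
        # Zero-crossing rate: each sign change contributes |diff| = 2.
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        frames.append((rms, zcr))
    return np.array(frames)  # shape: (num_frames, 2)

# Usage: one second of a synthetic 220 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
feats = handcrafted_features(np.sin(2 * np.pi * 220 * t))
```

In a hybrid representation of the kind the abstract describes, such frame-level DSP features would sit alongside learned DNN embeddings rather than replace them.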
Original language | Undefined/Unknown |
---|---|
Publication date | 30 Mar 2022 |
Publication status | Published - 30 Mar 2022 |
Externally published | Yes |
Event | Interspeech 2022, Incheon, Korea, Republic of; Duration: 18 Sep 2022 → 22 Sep 2022 |
Conference
Conference | Interspeech 2022 |
---|---|
Country/Territory | Korea, Republic of |
City | Incheon |
Period | 18/09/2022 → 22/09/2022 |