SERAB: A multi-lingual benchmark for speech emotion recognition

Neil Alexandre Scheidwasser, Mikolaj Kegler, Pierre Beckmann, Milos Cernak

Research output: Contribution to conference (Paper, Research)


Abstract

Recent developments in speech emotion recognition (SER) often leverage deep neural networks (DNNs). Comparing and benchmarking different DNN models can often be tedious due to the use of different datasets and evaluation protocols. To facilitate the process, here, we present the Speech Emotion Recognition Adaptation Benchmark (SERAB), a framework for evaluating the performance and generalization capacity of different approaches for utterance-level SER. The benchmark is composed of nine datasets for SER in six languages. Since the datasets have different sizes and numbers of emotional classes, the proposed setup is particularly suitable for estimating the generalization capacity of pre-trained DNN-based feature extractors. We used the proposed framework to evaluate a selection of standard hand-crafted feature sets and state-of-the-art DNN representations. The results highlight that using only a subset of the data included in SERAB can result in biased evaluation, while compliance with the proposed protocol can circumvent this issue.
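The abstract describes an evaluation protocol in which a frozen, pre-trained feature extractor produces one embedding per utterance and a lightweight classifier is then trained and scored separately on each SERAB dataset. The sketch below is only a rough illustration of that idea, not the authors' implementation: the dataset names and sizes, the extract_embedding stub, and the choice of logistic regression are hypothetical stand-ins, and synthetic arrays replace the real SERAB data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def extract_embedding(waveform: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen pre-trained encoder (e.g. a DNN feature extractor).
    Here it simply returns random 128-dimensional features for illustration."""
    return rng.normal(size=128)

# Hypothetical datasets: (name, number of utterances, number of emotion classes).
datasets = [("dataset_a", 400, 4), ("dataset_b", 600, 7), ("dataset_c", 300, 5)]

scores = {}
for name, n_utterances, n_classes in datasets:
    # Synthetic stand-ins for raw utterances and their emotion labels.
    waveforms = [rng.normal(size=16000) for _ in range(n_utterances)]
    labels = rng.integers(0, n_classes, size=n_utterances)

    # Utterance-level embeddings from the frozen extractor.
    X = np.stack([extract_embedding(w) for w in waveforms])

    # Train a lightweight classifier on top of the frozen features.
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, clf.predict(X_te))

for name, acc in scores.items():
    print(f"{name}: accuracy = {acc:.3f}")
print(f"average across datasets: {np.mean(list(scores.values())):.3f}")

Aggregating per-dataset scores in this way reflects the abstract's point that reporting results on only a subset of the datasets can bias the comparison of feature extractors.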
Original language: English
Publication date: 7 Oct 2021
Publication status: Published - 7 Oct 2021
Externally published: Yes
Event: 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Virtual, Online, Singapore
Duration: 23 May 2022 – 27 May 2022

Conference

Conference: 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
Country/Territory: Singapore
City: Virtual, Online
Period: 23/05/2022 – 27/05/2022
Sponsors: Chinese and Oriental Languages Information Processing Society (COLIPS), Singapore Exhibition and Convention Bureau, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), The Institute of Electrical and Electronics Engineers Signal Processing Society

Keywords

  • cs.SD
  • cs.AI
  • cs.LG
  • eess.AS
