TY - UNPB
T1 - Efficient Speech Quality Assessment using Self-supervised Framewise Embeddings
AU - El Hajal, Karl
AU - Wu, Zihan
AU - Scheidwasser-Clow, Neil
AU - Elbanna, Gasser
AU - Cernak, Milos
N1 - Accepted at ICASSP 2023
PY - 2022
Y1 - 2022
AB - Automatic speech quality assessment is essential for audio researchers, developers, speech and language pathologists, and system quality engineers. The current state-of-the-art systems are based on framewise speech features (hand-engineered or learnable) combined with time dependency modeling. This paper proposes an efficient system with results comparable to the best performing model in the ConferencingSpeech 2022 challenge. Our proposed system is characterized by a smaller number of parameters (40-60x), fewer FLOPS (100x), lower memory consumption (10-15x), and lower latency (30x). Speech quality practitioners can therefore iterate much faster, deploy the system on resource-limited hardware, and, overall, the proposed system contributes to sustainable machine learning. The paper also concludes that framewise embeddings outperform utterance-level embeddings and that multi-task training with acoustic conditions modeling does not degrade speech quality prediction while providing better interpretation.
KW - eess.AS
KW - cs.AI
KW - cs.LG
KW - cs.SD
DO - 10.48550/arXiv.2211.06646
M3 - Working paper
ER -