Multimodal Detection and Classification of Head Movements in Face-to-Face Conversations: Exploring Models, Features and Their Interaction

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

In this work we perform multimodal detection and classification
of head movements from face-to-face video conversation data.
We experiment with different models and feature sets and provide
insight into the effect of individual features, as well as into how
their interactions can enhance a head movement classifier.
The features used include nose, neck and mid-hip position
coordinates and their derivatives, together with acoustic features,
namely the intensity and pitch of the speaker in focus. Results
show that when input features are sufficiently processed by
interacting with each other, a linear classifier can reach
performance similar to that of a more complex non-linear neural
model with several hidden layers. Our best models achieve
state-of-the-art performance in the detection task, measured by
macro-averaged F1 score.
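The abstract's central claim, that a linear classifier can match a deeper non-linear model once the input features interact with each other, can be illustrated with an explicit interaction expansion. The sketch below is not code from the paper; it simply shows one common way such interactions are constructed, appending all pairwise products of the base features (here standing in for position coordinates, their derivatives, intensity and pitch) so that a linear model can exploit them. Names and values are illustrative.

```python
import numpy as np

def add_interactions(X):
    """Append all pairwise products x_i * x_j (i < j) to the feature
    matrix, turning d base features into d + d*(d-1)/2 columns.
    A linear classifier trained on the expanded matrix can then model
    multiplicative feature interactions."""
    n, d = X.shape
    cross = [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
    return np.hstack([X] + [c[:, None] for c in cross])

# Toy input: two frames, three hypothetical base features
# (e.g. a position coordinate, its derivative, an acoustic value).
X = np.array([[1.0, 2.0, 3.0],
              [0.5, 1.0, 2.0]])
X_int = add_interactions(X)
# 3 base features + 3 pairwise products -> 6 columns per frame
```

The expanded matrix would then be fed to an ordinary linear classifier (e.g. logistic regression); the non-linearity lives entirely in the feature construction rather than in the model.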
Original language: English
Title of host publication: Gesture and Speech in Interaction (GESPIN 2023)
Place of publication: Nijmegen
Publisher: Max Planck Institute for Psycholinguistics
Publication date: 2023
Publication status: Published - 2023
