Abstract
While recommender systems with multi-modal item representations (image, audio, and text) have been widely explored, learning recommendations from multi-modal user interactions (e.g., clicks and speech) remains an open problem. We study multi-modal user interactions in a setting where users engage with a service provider through multiple channels (website and call center). In such settings, incomplete modalities occur naturally, since not all users interact through all available channels. To address these challenges, we publish a real-world dataset that enables progress in this under-researched area. We further present and benchmark several methods for leveraging multi-modal user interactions for item recommendation, and propose a novel approach that handles missing modalities by mapping user interactions to a common feature space. Our analysis reveals important interactions between the modalities and shows that a frequently occurring modality can enhance learning from a less frequent one.
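The common-feature-space idea from the abstract can be sketched roughly as follows. All names, dimensions, and the simple averaging rule below are illustrative assumptions, not the paper's actual model: each interaction modality (e.g., website clicks, call-center speech) gets its own projection into a shared space, so a user missing one modality still receives a valid embedding from the channels they did use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality projections: clicks are 8-dim, speech
# features are 16-dim; both map into a shared 4-dim user space.
W_click = rng.standard_normal((8, 4))
W_speech = rng.standard_normal((16, 4))

def user_embedding(click_feats=None, speech_feats=None):
    """Project whichever modalities are observed into the common
    feature space and average them, so incomplete modalities do
    not block embedding a user."""
    parts = []
    if click_feats is not None:
        parts.append(click_feats @ W_click)
    if speech_feats is not None:
        parts.append(speech_feats @ W_speech)
    if not parts:
        raise ValueError("user has no observed modality")
    return np.mean(parts, axis=0)

# A web-only user, a call-center-only user, and a user seen on both
# channels all land in the same 4-dimensional space.
web_only = user_embedding(click_feats=rng.standard_normal(8))
call_only = user_embedding(speech_feats=rng.standard_normal(16))
both = user_embedding(rng.standard_normal(8), rng.standard_normal(16))
assert web_only.shape == call_only.shape == both.shape == (4,)
```

In practice the projections would be learned encoders trained on the recommendation objective; the averaging here only illustrates how a shared space makes users with different observed channels comparable.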
Original language | English |
---|---|
Title of host publication | SIGIR 2024 - Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval |
Number of pages | 10 |
Publisher | Association for Computing Machinery, Inc. |
Publication date | 2024 |
Pages | 709-718 |
ISBN (Electronic) | 9798400704314 |
DOIs | |
Publication status | Published - 2024 |
Event | 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2024 - Washington, United States. Duration: 14 Jul 2024 → 18 Jul 2024 |
Conference
Conference | 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2024 |
---|---|
Country/Territory | United States |
City | Washington |
Period | 14/07/2024 → 18/07/2024 |
Sponsor | ACM SIGIR |
Bibliographical note
Publisher Copyright: © 2024 Owner/Author.
Keywords
- missing modalities
- multi-modal user interactions
- recommender system