Visual Prediction Improves Zero-Shot Cross-Modal Machine Translation

Tosho Hirasawa, Emanuele Bugliarello, Desmond Elliott, Mamoru Komachi

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer review


Abstract

Multimodal machine translation (MMT) systems have been successfully developed in recent years for a few language pairs. However, training such models usually requires tuples of a source language text, target language text, and images. Obtaining these data involves expensive human annotations, making it difficult to develop models for unseen text-only language pairs. In this work, we propose the task of zero-shot cross-modal machine translation aiming to transfer multimodal knowledge from an existing multimodal parallel corpus into a new translation direction. We also introduce a novel MMT model with a visual prediction network to learn visual features grounded on multimodal parallel data and provide pseudo-features for text-only language pairs. With this training paradigm, our MMT model outperforms its text-only counterpart. In our extensive analyses, we show that (i) the selection of visual features is important, and (ii) training on image-aware translations and being grounded on a similar language pair are mandatory. Our code is available at https://github.com/toshohirasawa/zeroshot-crossmodal-mt.
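The abstract describes a visual prediction network that learns visual features from a multimodal parallel corpus and supplies pseudo-features when translating text-only language pairs. The sketch below illustrates that idea in PyTorch; it is a hypothetical reconstruction, not the authors' implementation (see the linked repository for that). The module structure, the 512-dimensional text states, the 2048-dimensional image features, the mean-pooling over encoder states, and the MSE objective are all assumptions made for illustration.

```python
# Minimal, hypothetical sketch of a visual prediction network for zero-shot
# cross-modal MT. NOT the authors' code: dimensions, pooling, and the loss
# are illustrative assumptions.
import torch
import torch.nn as nn


class VisualPredictionNetwork(nn.Module):
    """Maps source-text encoder states to a pseudo image-feature vector."""

    def __init__(self, d_text: int = 512, d_visual: int = 2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(d_text, d_text),
            nn.ReLU(),
            nn.Linear(d_text, d_visual),
        )

    def forward(self, enc_states: torch.Tensor, src_mask: torch.Tensor) -> torch.Tensor:
        # enc_states: (batch, src_len, d_text); src_mask: (batch, src_len), 1 for real tokens.
        mask = src_mask.unsqueeze(-1).float()
        pooled = (enc_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
        return self.proj(pooled)  # (batch, d_visual) pseudo visual feature


if __name__ == "__main__":
    vpn = VisualPredictionNetwork()
    enc = torch.randn(2, 10, 512)   # dummy encoder states
    mask = torch.ones(2, 10)        # dummy source mask
    gold = torch.randn(2, 2048)     # gold image features (multimodal pairs only)

    pred = vpn(enc, mask)
    # On the multimodal parallel corpus the predictor is trained against gold
    # image features; for text-only pairs, `pred` would be fed to the MMT
    # decoder in place of real visual features.
    loss = nn.functional.mse_loss(pred, gold)
    print(loss.item())
```

In this reading, the same translation model consumes either real image features (when training on the multimodal corpus) or the predicted pseudo-features (for text-only directions), which is what allows the multimodal knowledge to transfer zero-shot.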

Original language: English
Title: Proceedings of the 8th Conference on Machine Translation, WMT 2023
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2023
Pages: 520-533
ISBN (electronic): 9798891760417
DOI
Status: Published - 2023
Event: 8th Conference on Machine Translation, WMT 2023 - Singapore, Singapore
Duration: 6 Dec 2023 - 7 Dec 2023

Conference

Conference: 8th Conference on Machine Translation, WMT 2023
Country/Territory: Singapore
City: Singapore
Period: 06/12/2023 - 07/12/2023

Bibliographic note

Publisher Copyright:
© 2023 Association for Computational Linguistics.
