Abstract
This work shows that competitive translation results can be obtained in a constrained setting by incorporating the latest advances in memory and compute optimization. We train and evaluate large multilingual translation models on a single GPU for at most 100 hours and come within 4-5 BLEU points of the top submission on the leaderboard. We also benchmark standard baselines on the PMI corpus and rediscover well-known shortcomings of translation systems and metrics.
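The abstract credits "the latest advances in memory and compute optimization" without naming them here. As a purely illustrative sketch in that spirit (not the authors' actual recipe), the following PyTorch snippet shows two standard techniques for fitting large-model training onto a single GPU: mixed-precision training and gradient accumulation. The toy model, data, and hyperparameters are all placeholders.

```python
# Hypothetical sketch, not from the paper: mixed precision + gradient
# accumulation, two common single-GPU memory/compute optimizations.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy encoder-decoder standing in for a large multilingual translation model.
model = nn.Transformer(d_model=256, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2).to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

accum_steps = 8  # emulate a large effective batch on limited GPU memory

for step in range(16):
    # Random placeholder tensors of shape (seq_len, batch, d_model).
    src = torch.randn(10, 4, 256, device=device)
    tgt = torch.randn(10, 4, 256, device=device)
    # Autocast runs the forward pass in half precision where it is safe.
    with torch.autocast(device_type=device, dtype=torch.float16,
                        enabled=(device == "cuda")):
        out = model(src, tgt)
        loss = criterion(out, tgt) / accum_steps  # average over micro-batches
    scaler.scale(loss).backward()  # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)   # unscale gradients, then optimizer step
        scaler.update()
        optimizer.zero_grad(set_to_none=True)
```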
Original language | English
---|---
Title | Proceedings of the 8th Workshop on Asian Translation (WAT2021)
Publisher | Association for Computational Linguistics
Publication date | 2021
Pages | 205-211
DOI |
Status | Published - 2021
Event | 8th Workshop on Asian Translation (WAT2021) - Online. Duration: 5 Aug 2021 → 6 Aug 2021
Conference

Conference | 8th Workshop on Asian Translation (WAT2021)
---|---
City | Online
Period | 05/08/2021 → 06/08/2021