Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models

Phillip Rust*, Anders Søgaard

*Corresponding author for this work

Publication: Contribution to journal › Conference article › Research › peer review


Abstract

Language models such as mBERT, XLM-R, and BLOOM aim to achieve multilingual generalization or compression to facilitate transfer to a large number of (potentially unseen) languages. However, these models should ideally also be private, linguistically fair, and transparent, by relating their predictions to training data. Can these requirements be simultaneously satisfied? We show that multilingual compression and linguistic fairness are compatible with differential privacy, but that differential privacy is at odds with training data influence sparsity, an objective for transparency. We further present a series of experiments on two common NLP tasks and evaluate multilingual compression and training data influence sparsity under different privacy guarantees, exploring these trade-offs in more detail. Our results suggest that we need to develop ways to jointly optimize for these objectives in order to find practical trade-offs.
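As background for the privacy guarantees discussed in the abstract (this is the standard definition from the differential privacy literature, not a formulation taken from the paper itself): a randomized training mechanism $\mathcal{M}$ is $(\varepsilon, \delta)$-differentially private if, for all pairs of datasets $D, D'$ differing in a single record and all measurable sets of outcomes $S$,

$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \,\Pr[\mathcal{M}(D') \in S] + \delta.$$

Smaller $\varepsilon$ and $\delta$ correspond to a stronger guarantee; in practice, such guarantees for language models are commonly obtained with DP-SGD, which clips per-example gradients and adds calibrated Gaussian noise during training.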

Original language: English
Journal: Proceedings of Machine Learning Research
Volume: 202
Pages (from-to): 29354-29387
ISSN: 2640-3498
Status: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, USA
Duration: 23 Jul 2023 - 29 Jul 2023

Conference

Conference: 40th International Conference on Machine Learning, ICML 2023
Country/Territory: USA
City: Honolulu
Period: 23/07/2023 - 29/07/2023

Bibliographical note

Funding Information:
We thank the anonymous reviewers and members of the CoAStaL group for their helpful feedback and suggestions. Phillip Rust is funded by the Novo Nordisk Foundation (grant NNF 20SA0066568).

Publisher Copyright:
© 2023 Proceedings of Machine Learning Research. All rights reserved.
