Abstract
Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning. Cryptographic techniques such as secure aggregation are used to aggregate contributions, like a model update, from all users. A robust technique for making such aggregates differentially private is to exploit *infinite divisibility* of the Laplace distribution, namely, that a Laplace distribution can be expressed as a sum of i.i.d. noise shares from a Gamma distribution, one share added by each user. However, Laplace noise is known to have suboptimal error in the low privacy regime for ε-differential privacy, where ε > 1 is a large constant. In this paper we present the first infinitely divisible noise distribution for real-valued data that achieves ε-differential privacy and has expected error that decreases exponentially with ε.
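For intuition, here is a minimal sketch (not from the paper, and not its new noise distribution) of the distributed Laplace mechanism the abstract refers to: each of n users adds the difference of two independent Gamma(1/n, b) noise shares, and by infinite divisibility the shares sum to Exp(b) − Exp(b), i.e. Laplace(0, b) noise. The number of users, sensitivity, and ε below are illustrative assumptions.

```python
# Sketch: distributed Laplace noise via Gamma shares (illustrative only).
# Each user adds Gamma(1/n, b) - Gamma(1/n, b); summing over n users gives
# Exp(b) - Exp(b), which is exactly Laplace(0, b).
import numpy as np

rng = np.random.default_rng(0)

n_users = 100          # assumed number of participants
sensitivity = 1.0      # assumed L1 sensitivity of the aggregate
eps = 0.5              # assumed privacy parameter epsilon
b = sensitivity / eps  # Laplace scale required for eps-DP of the noisy sum

def noise_share(rng, n, b, size=None):
    """One user's additive share of the Laplace noise."""
    return rng.gamma(1.0 / n, b, size) - rng.gamma(1.0 / n, b, size)

# Empirical check that the aggregated shares match Laplace(0, b):
trials = 5000
shares = noise_share(rng, n_users, b, size=(trials, n_users))
totals = shares.sum(axis=1)               # total noise added across all users
print("aggregated std:", totals.std())    # should be close to sqrt(2) * b
print("Laplace std:   ", np.sqrt(2) * b)
```

In a secure-aggregation protocol each user would add their share to their clipped contribution before encryption, so the server only ever sees the Laplace-perturbed sum.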
Original language | English |
---|---|
Title | Proceedings of The 33rd International Conference on Algorithmic Learning Theory |
Publisher | PMLR |
Publication date | 2022 |
Pages | 881-909 |
Status | Published - 2022 |
Event | 33rd International Conference on Algorithmic Learning Theory (ALT 2022) - Paris, France. Duration: 29 Mar 2022 → 1 Apr 2022 |
Conference
Conference | 33rd International Conference on Algorithmic Learning Theory (ALT 2022) |
---|---|
Country/Territory | France |
City | Paris |
Period | 29/03/2022 → 01/04/2022 |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Volume | 167 |
ISSN | 2640-3498 |