Abstract: Federated learning (FL) enables training on massive amounts of data while preserving privacy, owing to its decentralized structure. Stochastic gradient descent (SGD) is
commonly used for FL due to its good empirical performance, but sensitive user
information can still be inferred from weight updates shared during FL
iterations. We consider Gaussian mechanisms to preserve local differential
privacy (LDP) of user data in the FL model with SGD. We prove trade-offs among
user privacy, global utility, and transmission rate by defining appropriate
metrics for FL with LDP. In contrast to existing results, the query sensitivity
used in LDP is treated as a variable, and a tighter privacy accounting method
is applied. The proposed utility bound allows for heterogeneous parameters
across all users. Our bounds characterize how much utility decreases
and transmission rate increases if a stronger privacy regime is targeted.
Furthermore, given a target privacy level, our results guarantee significantly
larger utility and a smaller transmission rate compared with existing privacy
accounting methods.
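The core primitive referenced above, a Gaussian mechanism applied to clipped local updates, can be sketched as follows. This is a minimal illustration assuming per-update L2 clipping with the clipping threshold playing the role of the (variable) query sensitivity; the function and parameter names are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_ldp_update(grad, clip_norm, noise_multiplier, rng):
    """Clip a local gradient to bound its L2 sensitivity, then add
    Gaussian noise whose scale is calibrated to that sensitivity."""
    norm = np.linalg.norm(grad)
    # Scale the gradient down so its L2 norm is at most clip_norm.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise standard deviation scales with the chosen sensitivity.
    sigma = noise_multiplier * clip_norm
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

rng = np.random.default_rng(0)
grad = np.array([3.0, 4.0])  # L2 norm 5, so it will be clipped
noisy = gaussian_ldp_update(grad, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

Treating `clip_norm` as a tunable variable, rather than a fixed constant, is what allows the sensitivity to be optimized jointly with the privacy accounting.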