An automatic differentiation system for the age of differential privacy
- URL: http://arxiv.org/abs/2109.10573v1
- Date: Wed, 22 Sep 2021 08:07:42 GMT
- Authors: Dmitrii Usynin, Alexander Ziller, Moritz Knolle, Daniel Rueckert,
Georgios Kaissis
- Abstract summary: Tritium is an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Tritium, an automatic differentiation-based sensitivity analysis
framework for differentially private (DP) machine learning (ML). Optimal noise
calibration in this setting requires efficient Jacobian matrix computations and
tight bounds on the L2-sensitivity. Our framework achieves these objectives by
relying on a functional analysis-based method for sensitivity tracking, which
we briefly outline. This approach interoperates naturally and seamlessly with
static graph-based automatic differentiation, which enables order-of-magnitude
improvements in compilation times compared to previous work. Moreover, we
demonstrate that optimising the sensitivity of the entire computational graph
at once yields substantially tighter estimates of the true sensitivity compared
to interval bound propagation techniques. Our work naturally befits recent
developments in DP such as individual privacy accounting, aiming to offer
improved privacy-utility trade-offs, and represents a step towards the
integration of accessible machine learning tooling with advanced privacy
accounting systems.
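To make the quantities in the abstract concrete, below is a minimal PyTorch sketch that measures per-sample gradient norms (the rows of the loss Jacobian) and calibrates Gaussian-mechanism noise to the largest one. This is only an empirical illustration of the quantity Tritium bounds; the framework itself derives tight bounds analytically via static-graph sensitivity tracking, and the model, data, and privacy parameters below are assumptions for illustration.

```python
# Illustrative sketch only, not Tritium's API: estimate an empirical
# L2-sensitivity for a per-sample gradient query, then calibrate Gaussian
# noise to it with the classical Gaussian-mechanism formula.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)            # toy model (assumption)
loss_fn = torch.nn.MSELoss()

def per_sample_grad_norms(xs, ys):
    """L2 norm of each per-sample gradient, i.e. each row of the Jacobian
    of the per-sample losses with respect to the flattened parameters."""
    norms = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        flat = torch.cat([p.grad.flatten() for p in model.parameters()])
        norms.append(flat.norm().item())
    return norms

xs, ys = torch.randn(8, 4), torch.randn(8, 1)
sensitivity = max(per_sample_grad_norms(xs, ys))   # empirical, not a proof

# Gaussian mechanism: sigma scales linearly with sensitivity for fixed (eps, delta).
eps, delta = 1.0, 1e-5
sigma = sensitivity * (2 * torch.log(torch.tensor(1.25 / delta))).sqrt() / eps
print(f"empirical L2-sensitivity ~ {sensitivity:.3f}, noise sigma ~ {sigma:.3f}")
```

Note the linear coupling between the sensitivity bound and the noise scale: this is why the tighter whole-graph sensitivity estimates claimed in the abstract translate directly into better privacy-utility trade-offs.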
Related papers
- FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, a federated client-side adaptive optimization algorithm designed to accelerate and stabilize FL training.
We demonstrate that FedCAda outperforms state-of-the-art methods in adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z)
- Online Sensitivity Optimization in Differentially Private Learning [8.12606646175019]
We present a novel approach to dynamically optimize the clipping threshold.
We treat this threshold as an additional learnable parameter, establishing a clean relationship between the threshold and the cost function.
Our method is thoroughly assessed against alternative fixed and adaptive strategies across diverse datasets, tasks, model dimensions, and privacy levels.
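As a hedged sketch of this idea (the surrogate cost and all names below are assumptions, not the paper's exact formulation): exposing the threshold to autograd through a soft clipping operation lets gradient descent balance clipping bias against threshold-proportional DP noise.

```python
# Sketch: learn the clipping threshold C by gradient descent on a surrogate
# cost. Soft clipping keeps C inside the autograd graph.
import torch

C = torch.tensor(1.0, requires_grad=True)          # learnable threshold
opt_C = torch.optim.SGD([C], lr=0.05)
grad_norms = torch.tensor([0.4, 2.3, 0.9, 5.1])    # toy per-sample norms

for _ in range(100):
    scale = torch.clamp(C / grad_norms, max=1.0)   # min(1, C/||g||), differentiable in C
    clipped = grad_norms * scale
    clipping_bias = ((grad_norms - clipped) ** 2).mean()  # information lost to clipping
    noise_cost = 0.1 * C ** 2                      # DP noise std grows with C
    loss = clipping_bias + noise_cost
    opt_C.zero_grad()
    loss.backward()
    opt_C.step()

print(f"learned threshold C ~ {C.item():.2f}")
```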
arXiv Detail & Related papers (2023-10-02T00:30:49Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It achieves a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round of federated learning.
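A minimal sketch of the kind of mechanism described, under stated assumptions (the function and scales below are illustrative, not the paper's framework): each parameter gets its own distortion scale, which could vary per client and per round.

```python
# Sketch: parameter-wise Gaussian distortion of a client's model update.
# A larger scale on a parameter means more privacy, less utility there.
import torch

def distort_update(update: dict, noise_scales: dict) -> dict:
    return {name: tensor + noise_scales[name] * torch.randn_like(tensor)
            for name, tensor in update.items()}

update = {"w": torch.randn(3, 3), "b": torch.randn(3)}
scales = {"w": 0.1, "b": 0.05}          # tunable per parameter, client, and round
protected = distort_update(update, scales)
```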
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Efficient Sensor Placement from Regression with Sparse Gaussian Processes in Continuous and Discrete Spaces [3.729242965449096]
The sensor placement (SP) problem commonly arises when monitoring correlated phenomena.
We present a novel formulation of the SP problem based on a variational approximation that can be optimized using gradient descent.
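As a rough sketch of gradient-based placement (this is not the paper's variational sparse-GP objective; the kernel, proxy objective, and domain are assumptions): continuous sensor locations can be optimized by gradient descent on a differentiable information proxy, here the log-determinant of an RBF kernel matrix.

```python
# Sketch: place sensors by gradient ascent on log det K(X, X), a
# differentiable diversity proxy, over continuous locations in [0, 1]^2.
import torch

def rbf(X, lengthscale=0.5, jitter=1e-4):
    d2 = torch.cdist(X, X) ** 2
    return torch.exp(-d2 / (2 * lengthscale ** 2)) + jitter * torch.eye(len(X))

X = torch.rand(5, 2, requires_grad=True)     # 5 sensor locations
opt = torch.optim.Adam([X], lr=0.05)
for _ in range(200):
    loss = -torch.logdet(rbf(X))             # maximize the log-determinant
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        X.clamp_(0.0, 1.0)                   # keep sensors inside the domain
```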
arXiv Detail & Related papers (2023-02-28T19:10:12Z)
- Differentially Private Learning with Per-Sample Adaptive Clipping [8.401653565794353]
We propose a Differentially Private Per-Sample Adaptive Clipping (DP-PSAC) algorithm based on a non-monotonic adaptive weight function.
We show that DP-PSAC outperforms or matches state-of-the-art methods on multiple mainstream vision and language tasks.
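A hedged sketch of where such a weight plugs in (the exact non-monotonic weight function is in the paper; the version below is an illustrative stand-in, chosen so that every weighted gradient keeps its norm below C):

```python
# Sketch: per-sample adaptive clipping. A smooth weight replaces the hard
# min(1, C/||g||) factor; each weighted gradient has norm below C, so the
# Gaussian noise can still be calibrated to C.
import torch

def adaptive_weights(norms: torch.Tensor, C: float, r: float = 0.01):
    # Stand-in weight: ~C/||g|| for large norms, damped for tiny norms.
    return C / (norms + r / (norms + r))

def privatize(per_sample_grads: torch.Tensor, C: float, sigma: float):
    w = adaptive_weights(per_sample_grads.norm(dim=1), C)
    clipped_sum = (per_sample_grads * w.unsqueeze(1)).sum(dim=0)
    noise = sigma * C * torch.randn_like(clipped_sum)
    return (clipped_sum + noise) / per_sample_grads.shape[0]

grads = torch.randn(16, 10)                 # toy batch of per-sample gradients
noisy_mean_grad = privatize(grads, C=1.0, sigma=1.0)
```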
arXiv Detail & Related papers (2022-12-01T07:26:49Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
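For reference, a minimal sketch of the base protocol being stabilized, semi-gradient Q-learning with linear function approximation; the paper's exploration variant adds machinery omitted here, and the environment and feature map below are toy assumptions.

```python
# Sketch: Q(s, a) ~ phi(s, a) @ w, updated by the semi-gradient Q-learning rule.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, d = 5, 2, 4
phi = rng.standard_normal((n_states, n_actions, d))  # toy feature map
w = np.zeros(d)
alpha, gamma, eps = 0.1, 0.9, 0.1

s = 0
for _ in range(1000):
    # epsilon-greedy action selection
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(phi[s] @ w))
    s_next = int(rng.integers(n_states))             # toy random transition
    r = float(s_next == n_states - 1)                # toy reward
    td_error = r + gamma * np.max(phi[s_next] @ w) - phi[s, a] @ w
    w += alpha * td_error * phi[s, a]                # semi-gradient update
    s = s_next
```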
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Bring Your Own Algorithm for Optimal Differentially Private Stochastic Minimax Optimization [44.52870407321633]
The holy grail in these settings is to guarantee the optimal trade-off between privacy and the excess population loss.
We provide a general framework for solving differentially private minimax optimization (DP-SMO) problems.
Our framework is inspired by the recently proposed Phased-ERM method [20] for nonsmooth differentially private stochastic convex optimization (DP-SCO).
arXiv Detail & Related papers (2022-06-01T10:03:20Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary compositions of differentiable functions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in general data-processing settings.
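The core idea can be sketched in a few lines (names and structure are illustrative, not the paper's hybrid AD system): track a Lipschitz bound alongside each value and multiply bounds under composition, since Lip(f . g) <= Lip(f) * Lip(g); the final bound caps the L2-sensitivity of the whole composition.

```python
# Sketch: value-and-Lipschitz tracking through composed functions.
import math
from dataclasses import dataclass

@dataclass
class Tracked:
    value: float
    lipschitz: float   # bound on output movement per unit input movement

def compose(x: Tracked, fn, fn_lipschitz: float) -> Tracked:
    """Apply fn and multiply the Lipschitz bounds."""
    return Tracked(fn(x.value), x.lipschitz * fn_lipschitz)

x = Tracked(value=0.3, lipschitz=1.0)       # the input itself (identity map)
h = compose(x, lambda v: 2.0 * v, 2.0)      # scaling by 2 is 2-Lipschitz
y = compose(h, math.tanh, 1.0)              # tanh is 1-Lipschitz
print(y.value, y.lipschitz)                 # sensitivity bound for the chain: 2.0
```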
arXiv Detail & Related papers (2021-07-09T07:19:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.