On Mitigating the Utility-Loss in Differentially Private Learning: A new
Perspective by a Geometrically Inspired Kernel Approach
- URL: http://arxiv.org/abs/2304.01300v4
- Date: Wed, 7 Feb 2024 12:20:13 GMT
- Title: On Mitigating the Utility-Loss in Differentially Private Learning: A new
Perspective by a Geometrically Inspired Kernel Approach
- Authors: Mohit Kumar, Bernhard A. Moser, Lukas Fischer
- Abstract summary: This paper introduces a geometrically inspired kernel-based approach to mitigate the accuracy-loss issue in classification.
A representation of the affine hull of given data points is learned in Reproducing Kernel Hilbert Spaces (RKHS)
The effectiveness of the approach is demonstrated through experiments on MNIST dataset, Freiburg groceries dataset, and a real biomedical dataset.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy-utility tradeoff remains one of the fundamental issues of
differentially private machine learning. This paper introduces a geometrically
inspired kernel-based approach to mitigate the accuracy-loss issue in
classification. In this approach, a representation of the affine hull of given
data points is learned in Reproducing Kernel Hilbert Spaces (RKHS). This leads
to a novel distance measure that hides privacy-sensitive information about
individual data points and improves the privacy-utility tradeoff via
significantly reducing the risk of membership inference attacks. The
effectiveness of the approach is demonstrated through experiments on MNIST
dataset, Freiburg groceries dataset, and a real biomedical dataset. It is
verified that the approach remains computationally practical. The application
of the approach to federated learning is considered and it is observed that the
accuracy-loss due to data being distributed is either marginal or not
significantly high.
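As a rough illustration of the kernel machinery the abstract describes (not the paper's exact formulation), the squared RKHS distance from a query point to the affine hull of a dataset can be computed entirely through kernel evaluations. The RBF kernel choice, the ridge term, and all function names below are assumptions made for this sketch:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of A and the rows of B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def affine_hull_distance(X, z, gamma=1.0, ridge=1e-8):
    """Squared RKHS distance from phi(z) to the affine hull of {phi(x_i)}.

    Minimizes ||phi(z) - sum_i a_i phi(x_i)||^2 subject to sum_i a_i = 1,
    solved via the KKT linear system of the equality-constrained problem.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma) + ridge * np.eye(n)  # ridge for stability
    kz = rbf_kernel(X, z[None, :], gamma).ravel()
    kzz = 1.0  # k(z, z) = 1 for the RBF kernel
    # KKT system: [[K, 1], [1^T, 0]] [a; lam] = [kz; 1]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = K
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    sol = np.linalg.solve(A, np.append(kz, 1.0))
    a = sol[:n]
    # Expand ||phi(z) - sum_i a_i phi(x_i)||^2 via the kernel trick.
    return kzz - 2 * kz @ a + a @ K @ a
```

A training point lies (approximately) in the affine hull, so its distance is near zero, while a distant query yields a large distance; intuitively, such a hull-level distance depends on the dataset as a whole rather than on any single point, which is the kind of aggregation the abstract credits with reducing membership-inference risk.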
Related papers
- Initialization Matters: Privacy-Utility Analysis of Overparameterized
Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z) - Locally Differentially Private Gradient Tracking for Distributed Online
Learning over Directed Graphs [2.1271873498506038]
We propose a locally differentially private gradient tracking based distributed online learning algorithm.
We prove that the proposed algorithm converges in mean square to the exact optimal solution while ensuring rigorous local differential privacy.
arXiv Detail & Related papers (2023-10-24T18:15:25Z) - Locally Differentially Private Distributed Online Learning with Guaranteed Optimality [1.800614371653704]
This paper proposes an approach that ensures both differential privacy and learning accuracy in distributed online learning.
While ensuring a diminishing expected instantaneous regret, the approach can simultaneously ensure a finite cumulative privacy budget.
To the best of our knowledge, this is the first algorithm that successfully ensures both rigorous local differential privacy and learning accuracy.
arXiv Detail & Related papers (2023-06-25T02:05:34Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
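For context on how such privacy-preserving gradient methods typically work, the widely used DP-SGD recipe (per-example gradient clipping followed by Gaussian noise, in the style popularized by Abadi et al., not necessarily this paper's specific algorithm) can be sketched for logistic regression; all names and parameters here are illustrative assumptions:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step for logistic regression: clip each per-example
    gradient to norm `clip`, average, and add Gaussian noise whose
    standard deviation scales with noise_mult * clip / batch_size."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    p = 1.0 / (1.0 + np.exp(-(X @ w)))              # predicted probabilities
    grads = (p - y)[:, None] * X                     # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)    # clip each gradient
    g = grads.mean(axis=0)
    g = g + rng.normal(0.0, noise_mult * clip / n, size=g.shape)
    return w - lr * g
```

The clipping bounds each example's influence (the sensitivity), and the added noise converts that bound into a differential-privacy guarantee; the tension the blurb describes is that larger noise multipliers strengthen privacy but degrade predictive accuracy.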
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Gromov-Wasserstein Discrepancy with Local Differential Privacy for
Distributed Structural Graphs [7.4398547397969494]
We propose a privacy-preserving framework to analyze the GW discrepancy of node embedding learned locally from graph neural networks.
Our experiments show that, with strong privacy protections guaranteed by the $\varepsilon$-LDP algorithm, the proposed framework not only preserves privacy in graph learning but also presents a noised structural metric under GW distance.
arXiv Detail & Related papers (2022-02-01T23:32:33Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z) - Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows inference of private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
arXiv Detail & Related papers (2020-10-23T10:35:35Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances state-of-the-art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.