Differentially Private Sliced Inverse Regression: Minimax Optimality and Algorithm
- URL: http://arxiv.org/abs/2401.08150v1
- Date: Tue, 16 Jan 2024 06:47:43 GMT
- Title: Differentially Private Sliced Inverse Regression: Minimax Optimality and Algorithm
- Authors: Xintao Xia, Linjun Zhang, Zhanrui Cai
- Abstract summary: We propose optimally differentially private algorithms designed to address privacy concerns in the context of sufficient dimension reduction.
We develop differentially private algorithms that achieve the minimax lower bounds up to logarithmic factors.
As a natural extension, we can readily offer analogous lower and upper bounds for differentially private sparse principal component analysis.
- Score: 16.14032140601778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy preservation has become a critical concern in high-dimensional data
analysis due to the growing prevalence of data-driven applications. Proposed by
Li (1991), sliced inverse regression has emerged as a widely utilized
statistical technique for reducing covariate dimensionality while maintaining
sufficient statistical information. In this paper, we propose optimally
differentially private algorithms specifically designed to address privacy
concerns in the context of sufficient dimension reduction. We proceed to
establish lower bounds for differentially private sliced inverse regression in
both the low and high-dimensional settings. Moreover, we develop differentially
private algorithms that achieve the minimax lower bounds up to logarithmic
factors. Through a combination of simulations and real data analysis, we
illustrate the efficacy of these differentially private algorithms in
safeguarding privacy while preserving vital information within the reduced
dimension space. As a natural extension, we can readily offer analogous lower
and upper bounds for differentially private sparse principal component
analysis, a topic that may also be of potential interest to the statistical and
machine learning community.
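For intuition about the object being privatized, below is a minimal sketch of sliced inverse regression with a Gaussian-mechanism perturbation of the slice-mean matrix. This is an illustrative assumption, not the paper's minimax-optimal procedure: the `epsilon`, `delta`, `n_slices`, and `sensitivity` values are hypothetical placeholders, and the authors calibrate noise differently.

```python
import numpy as np

def dp_sir(X, y, n_slices=10, d=1, epsilon=1.0, delta=1e-5, sensitivity=1.0):
    """Sliced inverse regression with Gaussian-mechanism output perturbation.

    Schematic sketch only: `sensitivity` stands in for a real bound on how
    much one sample can move the slice-mean matrix; the paper's optimal
    algorithms calibrate noise differently.
    """
    n, p = X.shape
    # Whiten the covariates so Cov(Z) is approximately the identity.
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(axis=0)) @ np.linalg.inv(L.T)
    # Slice the response and accumulate Cov(E[Z | y]) from slice means.
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Gaussian mechanism; symmetrized noise keeps M symmetric.
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    E = np.random.normal(0.0, sigma, (p, p))
    M += (E + E.T) / 2
    # Top-d eigenvectors span the estimated dimension-reduction subspace.
    _, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -d:]
```

The returned directions live in the whitened coordinates; a full implementation would map them back through the covariance and, in the high-dimensional regime, impose the sparsity structure the paper analyzes.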
Related papers
- Linear-Time User-Level DP-SCO via Robust Statistics [55.350093142673316]
User-level differentially private stochastic convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications.
Current methods, such as those based on differentially private stochastic gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility.
We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges; a toy sketch of the trimmed-mean idea follows this entry.
arXiv Detail & Related papers (2025-02-13T02:05:45Z)
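As a rough illustration of the robust-statistics idea in the entry above, the sketch below aggregates per-user statistics with a coordinate-wise trimmed mean before adding Gaussian noise. The trim fraction, clipping range, and noise scale are hypothetical placeholders, not the paper's calibrated, linear-time construction.

```python
import numpy as np

def trimmed_mean_aggregate(user_stats, trim_frac=0.1, epsilon=1.0,
                           delta=1e-5, clip=1.0):
    """Coordinate-wise trimmed mean of per-user statistics plus Gaussian noise.

    Schematic only: this just shows the shape of 'trim outliers, then
    privatize'; the paper's algorithm uses carefully analyzed robust
    estimators with matching privacy accounting.
    """
    G = np.clip(np.asarray(user_stats), -clip, clip)  # bound each user's influence
    m = G.shape[0]
    k = int(trim_frac * m)
    G_sorted = np.sort(G, axis=0)              # sort each coordinate across users
    trimmed = G_sorted[k:m - k].mean(axis=0)   # drop k smallest/largest per coordinate
    # Placeholder noise calibration for the trimmed mean's sensitivity.
    sigma = 2 * clip * np.sqrt(2 * np.log(1.25 / delta)) / ((m - 2 * k) * epsilon)
    return trimmed + np.random.normal(0.0, sigma, trimmed.shape)
```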
- Differentially Private Random Feature Model [52.468511541184895]
We produce a differentially private random feature model for privacy-preserving kernel machines.
We show that the method preserves privacy and derive a generalization error bound; a hedged random-features sketch follows this entry.
arXiv Detail & Related papers (2024-12-06T05:31:08Z)
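To make the entry above concrete, here is a minimal sketch of random Fourier features for an RBF kernel with output perturbation of the ridge weights. The paper's actual construction and privacy analysis differ, and `sensitivity`, `epsilon`, and `delta` are assumed placeholders.

```python
import numpy as np

def dp_random_feature_regression(X, y, D=200, gamma=1.0, lam=1e-2,
                                 epsilon=1.0, delta=1e-5, sensitivity=1.0):
    """Random Fourier features approximating exp(-gamma * ||x - x'||^2),
    followed by ridge regression and Gaussian noise on the released weights.

    Schematic: `sensitivity` stands in for a real bound on how much one
    record can move the weight vector.
    """
    n, p = X.shape
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, np.sqrt(2 * gamma), (p, D))   # spectral measure of the RBF kernel
    b = rng.uniform(0.0, 2 * np.pi, D)
    Phi = np.sqrt(2.0 / D) * np.cos(X @ W + b)        # random feature map
    w = np.linalg.solve(Phi.T @ Phi + lam * n * np.eye(D), Phi.T @ y)
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    w_priv = w + rng.normal(0.0, sigma, D)            # privatize the weights
    predict = lambda Xnew: (np.sqrt(2.0 / D) * np.cos(Xnew @ W + b)) @ w_priv
    return w_priv, predict
```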
- The Data Minimization Principle in Machine Learning [61.17813282782266]
Data minimization aims to reduce the amount of data collected, processed or retained.
It has been endorsed by various global data protection regulations.
However, its practical implementation remains a challenge due to the lack of a rigorous formulation.
arXiv Detail & Related papers (2024-05-29T19:40:27Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training; a schematic form of such a bound follows this entry.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
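To indicate the shape of such a result for the entry above: for noisy gradient descent with step size $\eta$ and Gaussian noise of variance $\sigma^2$, a KL privacy bound between models trained on neighboring datasets might schematically look as follows. This display is only an assumed caricature; the paper supplies the exact constants, conditions, and the role of initialization.

```latex
% Assumed schematic only; see the paper for precise constants and conditions.
\[
\mathrm{KL}\left(\theta_{1:T} \,\middle\|\, \theta'_{1:T}\right)
\;\lesssim\; \frac{\eta}{2\sigma^{2}}
\sum_{t=1}^{T} \mathbb{E}\left\| \nabla_{\theta}\, \ell(\theta_t; z) \right\|_{2}^{2}
\]
```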
- Differentially private sliced inverse regression in the federated paradigm [3.539008590223188]
We extend sliced inverse regression (SIR) to decentralized data, prioritizing privacy and communication efficiency.
Our approach, federated sliced inverse regression (FSIR), enables multiple clients to collaboratively estimate the sufficient dimension reduction subspace; a toy sketch of the pattern follows this entry.
arXiv Detail & Related papers (2023-06-10T00:32:39Z)
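A toy sketch of the collaborative pattern described in the entry above: each client computes a local slice-mean matrix and the server averages them before extracting the subspace. The helper names are hypothetical, and FSIR's actual protocol adds the privacy safeguards and communication-efficient messages the paper develops.

```python
import numpy as np

def local_sir_matrix(X, y, n_slices=5):
    """One client's estimate of Cov(E[X | y]) from centered data (toy version;
    a real client would also standardize by the covariance)."""
    n, p = X.shape
    Z = X - X.mean(axis=0)
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    return M

def federated_sir(clients, d=1):
    """Server side: average client matrices, then take the top-d eigenvectors.
    Hypothetical sketch; FSIR privatizes the client messages."""
    M = np.mean([local_sir_matrix(X, y) for X, y in clients], axis=0)
    _, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -d:]
```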
- Score Attack: A Lower Bound Technique for Optimal Differentially Private Learning [8.760651633031342]
We propose a novel approach called the score attack, which provides a lower bound on the differential-privacy-constrained minimax risk of parameter estimation.
For a range of statistical problems, it lower bounds the minimax risk of estimating unknown model parameters under differential privacy, optimally up to a logarithmic factor; a schematic of the attack statistic follows this entry.
arXiv Detail & Related papers (2023-03-13T14:26:27Z)
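To convey the flavor of the technique in the entry above: for a model with density $f_{\theta}(z)$ and a private estimator $M$, the attack traces a sample $z_i$ through a score-based statistic, roughly of the following assumed form (the paper gives the precise construction and conditions).

```latex
% Assumed schematic of a tracing statistic built from the score function.
\[
A\bigl(z_i, M(Z)\bigr) \;=\;
\Bigl\langle\, M(Z) - \theta,\; \nabla_{\theta} \log f_{\theta}(z_i) \Bigr\rangle
\]
```

Differential privacy forces this statistic to have similar expectations whether $z_i$ is inside or outside the dataset, and accumulating that tension across samples yields the minimax lower bound.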
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper develops machine learning algorithms that ensure good predictive performance while preserving privacy, a question of both practical and theoretical importance; a minimal DP-SGD step is sketched after this entry.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
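For reference alongside the entry above, a minimal per-example DP-SGD step (clip, average, add Gaussian noise). The constants are placeholders; calibrating how small the noise can be while keeping the guarantees is the point of the paper's low-noise analysis.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, sigma=1.0):
    """One DP-SGD update: clip each example's gradient to L2 norm `clip`,
    average, and add Gaussian noise scaled to the clipping bound.
    `sigma` is a placeholder noise multiplier; mapping it to (epsilon, delta)
    requires a privacy accountant."""
    G = np.asarray(per_example_grads)
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    G = G * np.minimum(1.0, clip / np.maximum(norms, 1e-12))  # per-example clipping
    g = G.mean(axis=0) + np.random.normal(0.0, sigma * clip / len(G), w.shape)
    return w - lr * g
```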
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Designing Differentially Private Estimators in High Dimensions [0.0]
We study differentially private mean estimation in a high-dimensional setting.
Recent work in high-dimensional robust statistics has identified computationally tractable mean estimation algorithms; the classic clip-and-noise baseline is sketched after this entry.
arXiv Detail & Related papers (2020-06-02T21:17:30Z)
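As a baseline for the entry above, the classic clip-and-noise Gaussian-mechanism mean estimator, whose error grows with dimension; the radius `R` and noise calibration are illustrative assumptions, not the paper's high-dimensional estimator.

```python
import numpy as np

def dp_mean(X, R=1.0, epsilon=1.0, delta=1e-5):
    """Gaussian-mechanism mean: project each sample into the L2 ball of
    radius R, average, and add noise proportional to the sensitivity 2R/n.
    Illustrative baseline only."""
    n, p = X.shape
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xc = X * np.minimum(1.0, R / np.maximum(norms, 1e-12))  # project into the ball
    sigma = (2 * R / n) * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return Xc.mean(axis=0) + np.random.normal(0.0, sigma, p)
```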