Private Linear Regression with Differential Privacy and PAC Privacy
- URL: http://arxiv.org/abs/2412.02578v1
- Date: Tue, 03 Dec 2024 17:04:14 GMT
- Title: Private Linear Regression with Differential Privacy and PAC Privacy
- Authors: Hillary Yang
- Abstract summary: Most existing privacy-preserving linear regression methods rely on the well-established framework of differential privacy, while PAC Privacy has not yet been explored in this context. We compare linear regression models trained with differential privacy and PAC privacy across three real-world datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Linear regression is a fundamental tool for statistical analysis, which has motivated the development of linear regression methods that satisfy provable privacy guarantees so that the learned model reveals little about any one data point used to construct it. Most existing privacy-preserving linear regression methods rely on the well-established framework of differential privacy, while the newly proposed PAC Privacy has not yet been explored in this context. In this paper, we systematically compare linear regression models trained with differential privacy and PAC privacy across three real-world datasets, observing several key findings that impact the performance of privacy-preserving linear regression.
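As a rough illustration of the differential-privacy side of this comparison, the following is a minimal sketch of output perturbation for ridge regression via the Gaussian mechanism. The function name, clipping norm, and sensitivity bound are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def dp_ridge_regression(X, y, epsilon, delta, clip=1.0, reg=1e-2, rng=None):
    """Output-perturbation sketch: fit ridge regression on clipped rows,
    then add Gaussian noise calibrated to an assumed l2-sensitivity."""
    rng = np.random.default_rng(rng)
    # Clip each feature row to norm <= clip so no single record has
    # unbounded influence (the clip value is an illustrative assumption).
    norms = np.maximum(np.linalg.norm(X, axis=1) / clip, 1.0)
    Xc = X / norms[:, None]
    n, d = Xc.shape
    # Ridge solution: (X^T X + n*reg*I)^{-1} X^T y
    w = np.linalg.solve(Xc.T @ Xc + reg * n * np.eye(d), Xc.T @ y)
    # Gaussian mechanism with the standard (epsilon, delta) calibration;
    # the sensitivity bound below is a rough strongly-convex-loss estimate,
    # valid only under bounded features and labels.
    sensitivity = 2 * clip / (reg * n)
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return w + rng.normal(0.0, sigma, size=d)
```

Because the noise scale shrinks as 1/n, larger datasets pay a smaller accuracy cost at the same privacy level, which is one of the trade-offs such comparisons typically measure.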
Related papers
- Differentially Private Inference for Longitudinal Linear Regression [9.16331221881594]
We develop a comprehensive framework for estimation and inference in longitudinal linear regression under user-level DP. For inference, we develop a privatized estimator that is automatically heteroskedasticity- and autocorrelation-consistent. These results provide the first unified framework for practical user-level DP estimation and inference.
arXiv Detail & Related papers (2026-01-15T17:47:02Z)
- Private Sketches for Linear Regression [2.959915525479836]
We release differentially private sketches for the problems of least squares and absolute deviations regression. The availability of these private sketches facilitates the application of commonly available solvers for regression.
arXiv Detail & Related papers (2025-11-10T18:22:40Z)
- Differentially Private Two-Stage Gradient Descent for Instrumental Variable Regression [22.733602577854825]
We study instrumental variable regression (IVaR) under differential privacy constraints. We propose a noisy two-stage gradient descent algorithm that ensures differential privacy by injecting carefully calibrated noise into the gradient updates.
arXiv Detail & Related papers (2025-09-26T18:02:58Z)
- High-Dimensional Differentially Private Quantile Regression: Distributed Estimation and Statistical Inference [0.26784722398800515]
We propose a differentially private quantile regression method for high-dimensional data in a distributed setting. We develop a differentially private estimation algorithm with iterative updates, ensuring near-optimal statistical accuracy and formal privacy guarantees.
arXiv Detail & Related papers (2025-08-07T09:47:44Z)
- How Private is Your Attention? Bridging Privacy with In-Context Learning [4.093582834469802]
In-context learning (ICL), the ability of transformer-based models to perform new tasks from examples provided at inference time, has emerged as a hallmark of modern language models.
We propose a differentially private pretraining algorithm for linear attention heads and present the first theoretical analysis of the privacy-accuracy trade-off for ICL in linear regression.
arXiv Detail & Related papers (2025-04-22T16:05:26Z)
- Multi-Objective Optimization for Privacy-Utility Balance in Differentially Private Federated Learning [12.278668095136098]
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data.
We propose an adaptive clipping mechanism that dynamically adjusts the clipping norm using a multi-objective optimization framework.
Our results show that adaptive clipping consistently outperforms fixed-clipping baselines, achieving improved accuracy under the same privacy constraints.
arXiv Detail & Related papers (2025-03-27T04:57:05Z)
- Linear-Time User-Level DP-SCO via Robust Statistics [55.350093142673316]
User-level differentially private convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications.
Current methods, such as those based on differentially private gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility.
We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges.
arXiv Detail & Related papers (2025-02-13T02:05:45Z)
- Differentially Private Random Feature Model [52.468511541184895]
We produce a differentially private random feature model for privacy-preserving kernel machines.
We show that our method preserves privacy and derive a generalization error bound for the method.
arXiv Detail & Related papers (2024-12-06T05:31:08Z)
- Differentially Private Deep Model-Based Reinforcement Learning [47.651861502104715]
We introduce PriMORL, a model-based RL algorithm with formal differential privacy guarantees.
PriMORL learns an ensemble of trajectory-level DP models of the environment from offline data.
arXiv Detail & Related papers (2024-02-08T10:05:11Z)
- Differentially Private Sliced Inverse Regression: Minimax Optimality and Algorithm [16.14032140601778]
We propose optimally differentially private algorithms designed to address privacy concerns in the context of sufficient dimension reduction.
We develop differentially private algorithms that achieve the minimax lower bounds up to logarithmic factors.
As a natural extension, we can readily offer analogous lower and upper bounds for differentially private sparse principal component analysis.
arXiv Detail & Related papers (2024-01-16T06:47:43Z)
- Improving the Privacy and Practicality of Objective Perturbation for Differentially Private Linear Learners [21.162924003105484]
This paper revamps the objective perturbation mechanism with tighter privacy analyses and new computational tools.
DP-SGD requires a non-trivial privacy overhead and a computational complexity which might be extravagant for simple models such as linear and logistic regression.
arXiv Detail & Related papers (2023-12-31T20:32:30Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- On the utility and protection of optimization with differential privacy and classic regularization techniques [9.413131350284083]
We study the effectiveness of the differentially private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices with regularization techniques.
We discuss differential privacy's flaws and limits and empirically demonstrate the often superior privacy-preserving properties of dropout and l2-regularization.
arXiv Detail & Related papers (2022-09-07T14:10:21Z)
- Analyzing the Differentially Private Theil-Sen Estimator for Simple Linear Regression [0.9208007322096533]
We provide a rigorous, finite-sample analysis of DPTheilSen's privacy and accuracy properties.
We show how to produce differentially private confidence intervals to accompany its point estimates.
arXiv Detail & Related papers (2022-07-27T04:38:37Z)
- PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning [1.8692254863855962]
We propose a new framework of data synthesis using deep generative models in a differentially private manner.
Within our framework, sensitive data are sanitized with rigorous privacy guarantees in a one-shot fashion.
Our proposal has theoretical guarantees of performance, and empirical evaluations on multiple datasets show that our approach outperforms other methods at reasonable levels of privacy.
arXiv Detail & Related papers (2021-06-08T18:00:01Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
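The gradient clipping and noise addition steps studied in the last entry above can be sketched as a single DP-SGD-style update. This is a minimal illustration; the function name and parameters are assumptions, not drawn from any of the listed papers:

```python
import numpy as np

def dp_sgd_step(w, per_sample_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD update: clip each per-example gradient to l2 norm
    <= clip_norm, sum, add Gaussian noise with std noise_mult * clip_norm,
    average, and take a gradient step."""
    norms = np.linalg.norm(per_sample_grads, axis=1)
    # Scale down only gradients whose norm exceeds the clipping threshold.
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * factors[:, None]
    n = per_sample_grads.shape[0]
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    g = (clipped.sum(axis=0) + noise) / n
    return w - lr * g
```

Clipping bounds each example's contribution (which is what makes the noise calibration possible), but it also biases the averaged gradient, which is one mechanism behind the robustness and utility effects these papers investigate.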
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.