Differentially Private Inference for Longitudinal Linear Regression
- URL: http://arxiv.org/abs/2601.10626v1
- Date: Thu, 15 Jan 2026 17:47:02 GMT
- Title: Differentially Private Inference for Longitudinal Linear Regression
- Authors: Getoar Sopa, Marco Avella Medina, Cynthia Rush
- Abstract summary: We develop a comprehensive framework for estimation and inference in longitudinal linear regression under user-level DP. For inference, we develop a privatized estimator that is automatically heteroskedasticity- and autocorrelation-consistent. These results provide the first unified framework for practical user-level DP estimation and inference.
- Score: 9.16331221881594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differential Privacy (DP) provides a rigorous framework for releasing statistics while protecting individual information present in a dataset. Although substantial progress has been made on differentially private linear regression, existing methods almost exclusively address the item-level DP setting, where each user contributes a single observation. Many scientific and economic applications instead involve longitudinal or panel data, in which each user contributes multiple dependent observations. In these settings, item-level DP offers inadequate protection, and user-level DP - shielding an individual's entire trajectory - is the appropriate privacy notion. We develop a comprehensive framework for estimation and inference in longitudinal linear regression under user-level DP. We propose a user-level private regression estimator based on aggregating local regressions, and we establish finite-sample guarantees and asymptotic normality under short-range dependence. For inference, we develop a privatized, bias-corrected covariance estimator that is automatically heteroskedasticity- and autocorrelation-consistent. These results provide the first unified framework for practical user-level DP estimation and inference in longitudinal linear regression under dependence, with strong theoretical guarantees and promising empirical performance.
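The core idea of the abstract — fit a regression per user, bound each user's influence, then release a noisy aggregate — can be illustrated with a minimal sketch. This is not the paper's exact estimator (which adds bias correction and a privatized covariance estimator for inference); the clipping norm, the Gaussian-mechanism calibration, and all parameter names below are illustrative assumptions.

```python
import numpy as np

def user_level_dp_regression(user_X, user_y, clip_norm, epsilon, delta, rng=None):
    """Toy user-level DP regression: fit a local OLS per user, clip each
    user's coefficient vector to `clip_norm` so one trajectory has bounded
    influence, average the clipped fits, and add Gaussian noise calibrated
    to user-level (replace-one-user) sensitivity."""
    rng = np.random.default_rng(rng)
    betas = []
    for X, y in zip(user_X, user_y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # local fit on one user
        norm = np.linalg.norm(beta)
        if norm > clip_norm:                          # bound user influence
            beta = beta * (clip_norm / norm)
        betas.append(beta)
    n = len(betas)
    mean = np.mean(betas, axis=0)
    # Replacing one user's clipped vector moves the average by at most
    # 2 * clip_norm / n; standard Gaussian-mechanism noise scale:
    sigma = (2 * clip_norm / n) * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

With many users and a generous privacy budget, the released coefficients stay close to the population coefficients, since each local fit is an unbiased estimate and the added noise shrinks at rate 1/n.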
Related papers
- P-GenRM: Personalized Generative Reward Model with Test-time User-based Scaling [66.55381105691818]
We propose P-GenRM, the first Personalized Generative Reward Model with test-time user-based scaling. P-GenRM transforms preference signals into structured evaluation chains that derive adaptive personas and scoring rubrics. It further clusters users into User Prototypes and introduces a dual-granularity scaling mechanism.
arXiv Detail & Related papers (2026-02-12T16:07:22Z) - Differential privacy with dependent data [1.8835490533310795]
We show that Winsorized mean estimators can be used under dependence for bounded data. We formalize dependence via log-Sobolev inequalities on the joint unbounded observations. Our work constitutes a first step towards a systematic study of Differential Privacy (DP) for dependent data.
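The Winsorized mean mentioned above can be sketched in its simplest DP form: clamp observations to known bounds so a single record has bounded influence, then add Laplace noise scaled to the resulting sensitivity. This toy version assumes bounds are given a priori and records are treated independently; the cited paper's contribution is precisely the extension to dependent (and unbounded) data via log-Sobolev conditions, which this sketch does not capture.

```python
import numpy as np

def dp_winsorized_mean(x, lower, upper, epsilon, rng=None):
    """Toy DP Winsorized mean: clamp to [lower, upper], then apply the
    Laplace mechanism with sensitivity (upper - lower) / n."""
    rng = np.random.default_rng(rng)
    x = np.clip(np.asarray(x, dtype=float), lower, upper)
    sensitivity = (upper - lower) / x.size  # one record moves the mean this much
    return x.mean() + rng.laplace(0.0, sensitivity / epsilon)
```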
arXiv Detail & Related papers (2025-11-23T18:56:40Z) - High-Dimensional Differentially Private Quantile Regression: Distributed Estimation and Statistical Inference [0.26784722398800515]
We propose a differentially private quantile regression method for high-dimensional data in a distributed setting. We develop a differentially private estimation algorithm with iterative updates, ensuring near-optimal statistical accuracy and formal privacy guarantees.
arXiv Detail & Related papers (2025-08-07T09:47:44Z) - Machine Learning with Privacy for Protected Attributes [56.44253915927481]
We refine the definition of differential privacy (DP) to create a more general and flexible framework that we call feature differential privacy (FDP). Our definition is simulation-based and allows for both addition/removal and replacement variants of privacy, and can handle arbitrary separation of protected and non-protected features. We apply our framework to various machine learning tasks and show that it can significantly improve the utility of DP-trained models when public features are available.
arXiv Detail & Related papers (2025-06-24T17:53:28Z) - Linear-Time User-Level DP-SCO via Robust Statistics [55.350093142673316]
User-level differentially private convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications. Current methods, such as those based on differentially private gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility. We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges.
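The robust aggregation step named in that summary — a coordinate-wise trimmed mean over per-user statistics — can be sketched as follows. This shows only the non-private aggregation; the cited algorithm additionally calibrates noise to the trimmed mean's reduced sensitivity, and the `trim_frac` parameter here is an illustrative assumption.

```python
import numpy as np

def trimmed_mean(rows, trim_frac=0.1):
    """Coordinate-wise trimmed mean: sort each coordinate across users,
    drop the top and bottom `trim_frac` fraction, and average the rest.
    Outlying users are discarded rather than dominating the aggregate."""
    rows = np.sort(np.asarray(rows, dtype=float), axis=0)
    k = int(trim_frac * rows.shape[0])
    kept = rows[k:rows.shape[0] - k] if k > 0 else rows
    return kept.mean(axis=0)
```

Compared with a plain average, the trimmed mean's output moves far less when one user's contribution is replaced, which is what lets a DP mechanism add less noise for the same privacy guarantee.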
arXiv Detail & Related papers (2025-02-13T02:05:45Z) - Private Linear Regression with Differential Privacy and PAC Privacy [0.0]
Most existing privacy-preserving linear regression methods rely on the well-established framework of differential privacy. PAC Privacy has not yet been explored in this context. We compare linear regression models trained with differential privacy and PAC privacy across three real-world datasets.
arXiv Detail & Related papers (2024-12-03T17:04:14Z) - Geometry-Aware Instrumental Variable Regression [56.16884466478886]
We propose a transport-based IV estimator that takes into account the geometry of the data manifold through data-derivative information.
We provide a simple plug-and-play implementation of our method that performs on par with related estimators in standard settings.
arXiv Detail & Related papers (2024-05-19T17:49:33Z) - Differentially Private Statistical Inference through $\beta$-Divergence One Posterior Sampling [2.8544822698499255]
We propose a posterior sampling scheme from a generalised posterior targeting the minimisation of the $\beta$-divergence between the model and the data generating process.
This provides private estimation that is generally applicable without requiring changes to the underlying model.
We show that $\beta$D-Bayes produces more precise inference estimation for the same privacy guarantees.
arXiv Detail & Related papers (2023-07-11T12:00:15Z) - Semantic Self-adaptation: Enhancing Generalization with a Single Sample [45.111358665370524]
We propose a self-adaptive approach for semantic segmentation.
It fine-tunes the parameters of convolutional layers to the input image using consistency regularization.
Our empirical study suggests that self-adaptation may complement the established practice of model regularization at training time.
arXiv Detail & Related papers (2022-08-10T12:29:01Z) - Differentially Private Estimation via Statistical Depth [0.0]
Two notions of statistical depth are used to motivate new approximate DP location and regression estimators.
To avoid requiring that users specify a priori bounds on the estimates and/or the observations, variants of these DP mechanisms are described.
arXiv Detail & Related papers (2022-07-26T01:59:07Z) - Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z) - Post-Contextual-Bandit Inference [57.88785630755165]
Contextual bandit algorithms are increasingly replacing non-adaptive A/B tests in e-commerce, healthcare, and policymaking.
They can both improve outcomes for study participants and increase the chance of identifying good or even best policies.
To support credible inference on novel interventions at the end of the study, we still want to construct valid confidence intervals on average treatment effects, subgroup effects, or value of new policies.
arXiv Detail & Related papers (2021-06-01T12:01:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.