Privacy-Aware Sequential Learning
- URL: http://arxiv.org/abs/2502.19525v5
- Date: Mon, 29 Sep 2025 18:31:43 GMT
- Title: Privacy-Aware Sequential Learning
- Authors: Yuxin Liu, M. Amin Rahimian
- Abstract summary: In vaccination registries, individuals act after observing others, and the resulting public records can expose private information. We study privacy-preserving sequential learning, where agents add endogenous noise to their reported actions to conceal private signals. Our results show how privacy reshapes information dynamics and inform the design of platforms and policies.
- Score: 1.9573768098158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In settings like vaccination registries, individuals act after observing others, and the resulting public records can expose private information. We study privacy-preserving sequential learning, where agents add endogenous noise to their reported actions to conceal private signals. Efficient social learning relies on information flow, seemingly in conflict with privacy. Surprisingly, with continuous signals and a fixed privacy budget $(\epsilon)$, the optimal randomization strategy balances privacy and accuracy, accelerating learning to $\Theta_{\epsilon}(\log n)$, faster than the nonprivate $\Theta(\sqrt{\log n})$ rate. In the nonprivate baseline, the expected time to the first correct action and the number of incorrect actions diverge; under privacy with sufficiently small $\epsilon$, both are finite. Privacy helps because, under the false state, agents more often receive signals contradicting the majority; randomization then asymmetrically amplifies the log-likelihood ratio, enhancing aggregation. In heterogeneous populations, an order-optimal $\Theta(\sqrt{n})$ rate is achievable when a subset of agents have low privacy budgets. With binary signals, however, privacy reduces informativeness and impairs learning relative to the nonprivate baseline, though the dependence on $\epsilon$ is nonmonotone. Our results show how privacy reshapes information dynamics and inform the design of platforms and policies.
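The abstract's core mechanism, agents randomizing their reported actions under a fixed privacy budget $\epsilon$, can be illustrated with a minimal sketch. The paper's optimal strategy is derived for continuous signals; the `randomized_report` helper below instead uses plain binary randomized response, and the signal accuracy of 0.75 is an arbitrary illustrative choice, not a value from the paper.

```python
import math
import random

def randomized_report(action: int, epsilon: float, rng: random.Random) -> int:
    """epsilon-DP randomized response: keep the true binary action with
    probability e^eps / (1 + e^eps), otherwise report the flipped action."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return action if rng.random() < p_truth else 1 - action

def simulate_reports(theta: int, n: int, epsilon: float, seed: int = 0) -> list:
    """Toy sequential setting: each agent draws a private signal that equals
    the true state theta with probability 0.75, acts on it, and publishes
    only a privatized version of that action."""
    rng = random.Random(seed)
    reports = []
    for _ in range(n):
        signal = theta if rng.random() < 0.75 else 1 - theta
        reports.append(randomized_report(signal, epsilon, rng))
    return reports
```

With a large budget the reports track the private signals almost exactly, while as $\epsilon \to 0$ each report carries essentially no information about the signal; the paper's result is that an optimally chosen randomization in between can actually accelerate aggregate learning.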
Related papers
- Keeping a Secret Requires a Good Memory: Space Lower-Bounds for Private Algorithms [67.94856074923571]
This paper introduces a novel proof technique based on a multi-player communication game. We show that winning this communication game requires transmitting information proportional to the number of over-active users. We show that this communication-theoretic technique generalizes to broad classes of problems, yielding lower bounds for private medians, quantiles, and max-select.
arXiv Detail & Related papers (2026-02-12T17:49:07Z) - Privacy-Utility Tradeoffs in Quantum Information Processing [13.088625380700933]
We study optimal tradeoffs for both generic and application-specific utility metrics when privacy is quantified by $(\varepsilon,\delta)$-quantum local differential privacy. We derive a lower bound on the number of samples required to achieve a fixed accuracy guarantee with high probability. We conclude by initiating the study of private classical shadows, which promise useful applications for private learning tasks.
arXiv Detail & Related papers (2026-02-11T04:21:45Z) - Differentially Private Distributed Inference [2.4401219403555814]
Healthcare centers collaborating on clinical trials must balance knowledge sharing with safeguarding sensitive patient data. We address this challenge by using differential privacy (DP) to control information leakage. Agents update belief statistics via log-linear rules, and DP noise provides plausible deniability and rigorous performance guarantees.
arXiv Detail & Related papers (2024-02-13T01:38:01Z) - Private Fine-tuning of Large Language Models with Zeroth-order Optimization [51.19403058739522]
Differentially private stochastic gradient descent (DP-SGD) allows models to be trained in a privacy-preserving manner. We introduce DP-ZO, a private fine-tuning framework for large language models, built by privatizing zeroth-order optimization methods.
arXiv Detail & Related papers (2024-01-09T03:53:59Z) - Differentially Private Secure Multiplication: Hiding Information in the Rubble of Noise [7.110450972801578]
We consider the problem of private distributed multi-party multiplication. It is well-established that Shamir secret-sharing coding strategies can enable perfect information-theoretic privacy in distributed computation.
arXiv Detail & Related papers (2023-09-28T02:13:13Z) - Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
arXiv Detail & Related papers (2023-09-06T17:53:31Z) - Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze them under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
arXiv Detail & Related papers (2022-10-24T23:50:12Z) - Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it faces challenges when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Improved Regret for Differentially Private Exploration in Linear MDP [31.567811502343552]
We study privacy-preserving exploration in sequential decision-making for environments that rely on sensitive data such as medical records.
We provide a private algorithm with an improved regret rate with an optimal dependence of $O(\sqrt{K})$ on the number of episodes.
arXiv Detail & Related papers (2022-02-02T21:32:09Z) - Quantifying identifiability to choose and audit $\epsilon$ in differentially private deep learning [15.294433619347082]
To use differential privacy in machine learning, data scientists must choose privacy parameters $(\epsilon,\delta)$.
We transform $(\epsilon,\delta)$ into a bound on the Bayesian posterior belief of the adversary assumed by differential privacy regarding the presence of any record in the training dataset.
We formulate an implementation of this differential privacy adversary that allows data scientists to audit model training and compute empirical identifiability scores and empirical $(\epsilon,\delta)$.
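For intuition, the transformation from a privacy parameter to a posterior-belief bound can be sketched for the pure-$\epsilon$ case. The paper works with the full $(\epsilon,\delta)$ guarantee; this simplified `posterior_bound` helper is only an illustration of the standard fact that pure $\epsilon$-DP caps the likelihood ratio of any output at $e^{\epsilon}$, so the adversary's posterior odds are at most $e^{\epsilon}$ times its prior odds.

```python
import math

def posterior_bound(prior: float, epsilon: float) -> float:
    """Upper bound on an adversary's posterior belief that a target record
    is in the training set, given a prior belief and pure epsilon-DP:
    posterior odds <= e^eps * prior odds, then convert odds back to a
    probability."""
    odds = prior / (1.0 - prior)
    bounded_odds = math.exp(epsilon) * odds
    return bounded_odds / (1.0 + bounded_odds)
```

For example, a uniform prior of 0.5 and $\epsilon = \ln 3$ give a posterior bound of 0.75: the adversary can at most triple its prior odds.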
arXiv Detail & Related papers (2021-03-04T09:35:58Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z) - Connecting Robust Shuffle Privacy and Pan-Privacy [11.367579037903734]
In the shuffle model of differential privacy, data-holding users send randomized messages to a secure shuffler, and the shuffler permutes the messages.
In the pan-private model, an algorithm processes a stream of data while maintaining an internal state that is differentially private with regard to the stream data.
Our results focus on robustly shuffle private protocols, whose privacy guarantees are not greatly affected by malicious users.
arXiv Detail & Related papers (2020-04-20T17:58:14Z)
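The shuffle model described in the last entry (local randomization followed by a trusted permutation step) can be sketched as a toy protocol. The function name and the use of binary randomized response here are illustrative assumptions, not the protocols studied in the paper.

```python
import math
import random

def shuffle_mechanism(bits, epsilon_local, rng):
    """Toy shuffle model: each user privatizes its bit locally with
    epsilon_local-DP randomized response, then a trusted shuffler
    uniformly permutes the messages, hiding which user sent which."""
    p_truth = math.exp(epsilon_local) / (1.0 + math.exp(epsilon_local))
    messages = [b if rng.random() < p_truth else 1 - b for b in bits]
    rng.shuffle(messages)  # the shuffler's only job: break the user-message link
    return messages
```

The analyzer sees only the multiset of messages, so amplification-by-shuffling results let each user spend a much smaller local budget than a purely local protocol would require for the same central guarantee.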
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.