One-bit Submission for Locally Private Quasi-MLE: Its Asymptotic
Normality and Limitation
- URL: http://arxiv.org/abs/2202.07194v1
- Date: Tue, 15 Feb 2022 05:04:59 GMT
- Title: One-bit Submission for Locally Private Quasi-MLE: Its Asymptotic
Normality and Limitation
- Authors: Hajime Ono, Kazuhiro Minami, Hideitsu Hino
- Abstract summary: Local differential privacy (LDP) is an information-theoretic privacy definition suitable for statistical surveys that involve an untrusted data curator.
The existing method to build an LDP QMLE is difficult to implement for a large-scale survey system in the real world due to long waiting times, expensive communication costs, and the boundedness assumption on the derivative of the log-likelihood function.
We provide an alternative LDP protocol without those issues, which is potentially much more easily deployable to a large-scale survey.
- Score: 3.050919759387985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Local differential privacy~(LDP) is an information-theoretic privacy
definition suitable for statistical surveys that involve an untrusted data
curator. An LDP version of the quasi-maximum likelihood estimator~(QMLE) has
been developed, but the existing method to build an LDP QMLE is difficult to
implement for a large-scale survey system in the real world due to long waiting
times, expensive communication costs, and the boundedness assumption on the
derivative of the log-likelihood function. We provide an alternative LDP
protocol without those issues, which is potentially much more easily deployable
to a large-scale survey. We also provide sufficient conditions for the
consistency and asymptotic normality of our protocol, as well as its
limitations. Our protocol is less burdensome for the users, and its theoretical
guarantees cover more realistic cases than those for the existing method.
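The core idea of one-bit submission under local differential privacy can be illustrated with the standard randomized-response mechanism: each user reports a single privatized bit, and the curator debiases the aggregate. This is a generic ε-LDP sketch for intuition only, not the paper's exact QMLE protocol; the function names and the debiasing step are illustrative assumptions.

```python
import math
import random

def one_bit_rr(bit: int, epsilon: float) -> int:
    """Randomized response on one bit: keep the true bit with probability
    e^eps / (1 + e^eps), otherwise flip it. This satisfies epsilon-LDP,
    since the two report probabilities differ by at most a factor e^eps."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_keep else 1 - bit

def debiased_mean(reports, epsilon: float) -> float:
    """Unbiased estimate of the mean of the users' true bits.
    E[raw] = p * mu + (1 - p) * (1 - mu); invert this linear map."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    raw = sum(reports) / len(reports)
    return (raw - (1.0 - p)) / (2.0 * p - 1.0)

# Example: 10,000 users, each holding a bit with true mean 0.3.
random.seed(0)
eps = 1.0
true_bits = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
reports = [one_bit_rr(b, eps) for b in true_bits]
est = debiased_mean(reports, eps)  # close to 0.3, noisier for smaller eps
```

Each user sends exactly one bit regardless of the data dimension, which is what keeps the communication cost low; the price is extra estimator variance that grows as ε shrinks.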
Related papers
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capturing and ensuring the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z) - API Is Enough: Conformal Prediction for Large Language Models Without Logit-Access [5.922444371605447]
This study aims to address the pervasive challenge of quantifying uncertainty in large language models (LLMs) without logit-access.
Existing Conformal Prediction (CP) methods for LLMs typically assume access to the logits, which are unavailable for some API-only LLMs.
We introduce a novel CP method that (1) is tailored for API-only LLMs without logit-access; (2) minimizes the size of prediction sets; and (3) ensures a statistical guarantee of the user-defined coverage.
arXiv Detail & Related papers (2024-03-02T14:14:45Z) - Provable Membership Inference Privacy [31.08016816475564]
Differential privacy (DP) has emerged as one canonical standard for provable privacy.
We propose a novel privacy notion, membership inference privacy (MIP), to address these challenges.
We show that MIP can be achieved using less randomness than is required for guaranteeing DP, leading to a smaller drop in utility.
arXiv Detail & Related papers (2022-11-12T06:13:00Z) - Nearly Optimal Latent State Decoding in Block MDPs [74.51224067640717]
In episodic Block MDPs, the decision maker has access to rich observations or contexts generated from a small number of latent states.
We are first interested in estimating the latent state decoding function based on data generated under a fixed behavior policy.
We then study the problem of learning near-optimal policies in the reward-free framework.
arXiv Detail & Related papers (2022-08-17T18:49:53Z) - Connect the Dots: Tighter Discrete Approximations of Privacy Loss
Distributions [49.726408540784334]
A key question in PLD-based accounting is how to approximate any (potentially continuous) PLD with a PLD over any specified discrete support.
We show that our pessimistic estimate is the best possible among all pessimistic estimates.
arXiv Detail & Related papers (2022-07-10T04:25:02Z) - PAC Statistical Model Checking of Mean Payoff in Discrete- and
Continuous-Time MDP [0.34410212782758043]
We provide the first algorithm to compute mean payoff probably approximately correctly in unknown MDP.
We do not require any knowledge of the state space, only a lower bound on the minimum transition probability.
In addition to providing probably approximately correct (PAC) bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks.
arXiv Detail & Related papers (2022-06-03T09:13:27Z) - Under-Approximating Expected Total Rewards in POMDPs [68.8204255655161]
We consider the optimal expected total reward to reach a goal state in a partially observable Markov decision process (POMDP).
We use mixed-integer linear programming (MILP) to find such minimal probability shifts and experimentally show that our techniques scale quite well.
arXiv Detail & Related papers (2022-01-21T16:43:03Z) - RL for Latent MDPs: Regret Guarantees and a Lower Bound [74.41782017817808]
We consider the regret problem for reinforcement learning in latent Markov Decision Processes (LMDPs).
In an LMDP, an MDP is randomly drawn from a set of $M$ possible MDPs at the beginning of the interaction, but the identity of the chosen MDP is not revealed to the agent.
We show that the key link is a notion of separation between the MDP system dynamics.
arXiv Detail & Related papers (2021-02-09T16:49:58Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of
Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z) - Estimating Sparse Discrete Distributions Under Local Privacy and
Communication Constraints [46.944178305032146]
We consider the problem of estimating sparse discrete distributions under local differential privacy (LDP) and communication constraints.
We characterize the sample complexity for sparse estimation under LDP constraints up to a constant factor and the sample complexity under communication constraints up to a logarithmic factor.
arXiv Detail & Related papers (2020-10-30T20:06:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.