Robust Testing and Estimation under Manipulation Attacks
- URL: http://arxiv.org/abs/2104.10740v1
- Date: Wed, 21 Apr 2021 19:49:49 GMT
- Title: Robust Testing and Estimation under Manipulation Attacks
- Authors: Jayadev Acharya, Ziteng Sun, Huanyu Zhang
- Abstract summary: We study robust testing and estimation of discrete distributions in the strong contamination model.
We consider both the "centralized setting" and the "distributed setting with information constraints".
- Score: 32.95545820578349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study robust testing and estimation of discrete distributions in the
strong contamination model. We consider both the "centralized setting" and the
"distributed setting with information constraints" including communication and
local privacy (LDP) constraints. Our technique relates the strength of
manipulation attacks to the earth-mover distance, using Hamming distance as the
metric between messages (samples) from the users. In the centralized setting, we
provide optimal error bounds for both learning and testing. Our lower bounds
under local information constraints build on the recent lower bound methods in
distributed inference. In the communication constrained setting, we develop
novel algorithms based on random hashing and an $\ell_1/\ell_1$ isometry.
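The communication-constrained algorithms are described as building on random hashing and an $\ell_1/\ell_1$ isometry. The toy simulation below illustrates only the general flavor of random hashing, not the paper's actual algorithm: users are split into groups, each group shares a random hash compressing a sample from a size-$k$ domain into a few bits, and a server recovers the distribution from the stacked hashed frequencies. The least-squares recovery step, the domain size, and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

k = 20          # domain size
ell = 2         # bits per message, so M = 2**ell possible messages
n_groups = 16   # number of independent shared hash functions
n_per = 2000    # users per group

# Hypothetical ground-truth distribution over [k].
p = rng.dirichlet(np.ones(k))

# Each group j shares a random hash h_j: [k] -> [M], encoded as a 0/1
# matrix block with entry (m, x) = 1 iff h_j(x) = m.
M = 2 ** ell
H = np.zeros((n_groups * M, k))
q_hat = np.zeros(n_groups * M)
for j in range(n_groups):
    h = rng.integers(0, M, size=k)  # random hash for group j
    for x in range(k):
        H[j * M + h[x], x] = 1.0
    # Users in group j draw samples from p and send only h(x): ell bits.
    samples = rng.choice(k, size=n_per, p=p)
    counts = np.bincount(h[samples], minlength=M)
    q_hat[j * M:(j + 1) * M] = counts / n_per

# Server-side recovery from the hashed marginals: least squares,
# then a crude projection onto the simplex (clip and renormalize).
p_hat, *_ = np.linalg.lstsq(H, q_hat, rcond=None)
p_hat = np.clip(p_hat, 0.0, None)
p_hat /= p_hat.sum()

tv = 0.5 * np.abs(p - p_hat).sum()
print(f"TV(p, p_hat) = {tv:.3f}")
```

With enough independent hashes the stacked system is full column rank with high probability, so the hashed frequencies pin down the distribution even though each user communicates only `ell` bits.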
Related papers
- Sketches-based join size estimation under local differential privacy [3.0945730947183203]
Join size estimation on sensitive data poses a risk of privacy leakage.
Local differential privacy (LDP) is a solution to preserve privacy while collecting sensitive data.
We introduce a novel algorithm called LDPJoinSketch for sketch-based join size estimation under LDP.
arXiv Detail & Related papers (2024-05-19T01:21:54Z)
- Distributed Multi-Task Learning for Stochastic Bandits with Context Distribution and Stage-wise Constraints [0.0]
We propose a distributed upper confidence bound (UCB) algorithm, related-UCB.
Our algorithm constructs a pruned action set during each round to ensure the constraints are met.
We empirically validate the performance of our algorithm on synthetic data and real-world MovieLens-100K data.
arXiv Detail & Related papers (2024-01-21T18:43:55Z) - Sparse Feature Selection Makes Batch Reinforcement Learning More Sample
Efficient [62.24615324523435]
This paper provides a statistical analysis of high-dimensional batch Reinforcement Learning (RL) using sparse linear function approximation.
When there is a large number of candidate features, our result sheds light on the fact that sparsity-aware methods can make batch RL more sample efficient.
arXiv Detail & Related papers (2020-11-08T16:48:02Z) - Estimating Sparse Discrete Distributions Under Local Privacy and
Communication Constraints [46.944178305032146]
We consider the problem of estimating sparse discrete distributions under local differential privacy (LDP) and communication constraints.
We characterize the sample complexity for sparse estimation under LDP constraints up to a constant factor and the sample complexity under communication constraints up to a logarithmic factor.
arXiv Detail & Related papers (2020-10-30T20:06:35Z) - Unified lower bounds for interactive high-dimensional estimation under
information constraints [40.339506154827106]
We provide a unified framework enabling us to derive a variety of (tight) minimax lower bounds for different parametric families of distributions.
Our lower bound framework is versatile and yields "plug-and-play" bounds that are widely applicable to a large range of estimation problems.
arXiv Detail & Related papers (2020-10-13T17:25:19Z) - Learning Calibrated Uncertainties for Domain Shift: A Distributionally
Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable
Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
arXiv Detail & Related papers (2020-03-26T22:10:04Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z) - Minimax optimal goodness-of-fit testing for densities and multinomials
under a local differential privacy constraint [3.265773263570237]
We consider the consequences of local differential privacy constraints on goodness-of-fit testing.
We present a test that is adaptive to the smoothness parameter of the unknown density and remains minimax optimal up to a logarithmic factor.
arXiv Detail & Related papers (2020-02-11T08:41:05Z)
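Goodness-of-fit testing under LDP, as in the entry above, can be illustrated with the standard k-ary randomized response mechanism and a simple plug-in statistic. The sketch below is a generic illustration, not the adaptive test from that paper: each user privatizes a sample, the server debiases the observed message frequencies, and a squared-L2 distance to the uniform null separates uniform data from a hypothetical perturbed alternative. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

k = 10        # domain size
n = 50_000    # number of users
eps = 1.0     # LDP parameter

# k-ary randomized response: keep the true symbol with probability
# p_keep = e^eps / (e^eps + k - 1), else report another symbol uniformly.
p_keep = np.exp(eps) / (np.exp(eps) + k - 1)

def privatize(x):
    keep = rng.random(x.shape) < p_keep
    other = (x + rng.integers(1, k, size=x.shape)) % k  # uniform over the rest
    return np.where(keep, x, other)

def debiased_freqs(msgs):
    # E[f_obs] = p_keep * p + (1 - p_keep) / (k - 1) * (1 - p), solved for p.
    f_obs = np.bincount(msgs, minlength=k) / len(msgs)
    q = (1 - p_keep) / (k - 1)
    return (f_obs - q) / (p_keep - q)

# Null: exactly uniform samples.
x_null = rng.integers(0, k, size=n)
f_null = debiased_freqs(privatize(x_null))

# Alternative: uniform perturbed on two symbols (hypothetical example).
p_alt = np.full(k, 1.0 / k)
p_alt[0] += 0.08
p_alt[1] -= 0.08
x_alt = rng.choice(k, size=n, p=p_alt)
f_alt = debiased_freqs(privatize(x_alt))

# Plug-in test statistic: squared L2 distance to the uniform distribution.
stat_null = np.sum((f_null - 1.0 / k) ** 2)
stat_alt = np.sum((f_alt - 1.0 / k) ** 2)
print(f"stat under null: {stat_null:.5f}, under alternative: {stat_alt:.5f}")
```

The debiasing step shrinks by the factor `p_keep - q`, so privacy inflates the noise in the statistic; this is the mechanism behind the degraded sample complexity of testing under LDP constraints.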
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.