Unified lower bounds for interactive high-dimensional estimation under
information constraints
- URL: http://arxiv.org/abs/2010.06562v5
- Date: Thu, 26 Aug 2021 17:57:31 GMT
- Title: Unified lower bounds for interactive high-dimensional estimation under
information constraints
- Authors: Jayadev Acharya, Clément L. Canonne, Ziteng Sun, and Himanshu Tyagi
- Abstract summary: We provide a unified framework enabling us to derive a variety of (tight) minimax lower bounds for different parametric families of distributions.
Our lower bound framework is versatile and yields "plug-and-play" bounds that are widely applicable to a large range of estimation problems.
- Score: 40.339506154827106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the task of distributed parameter estimation using interactive
protocols subject to local information constraints such as bandwidth
limitations, local differential privacy, and restricted measurements. We
provide a unified framework enabling us to derive a variety of (tight) minimax
lower bounds for different parametric families of distributions, both
continuous and discrete, under any $\ell_p$ loss. Our lower bound framework is
versatile and yields "plug-and-play" bounds that are widely applicable to a
large range of estimation problems. In particular, our approach recovers bounds
obtained using data processing inequalities and Cramér--Rao bounds, two
alternative approaches for proving lower bounds in our setting of interest.
Further, for the families considered, we complement our lower bounds with
matching upper bounds.
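As a toy illustration (not taken from the paper), the simplest information-constrained setting is one-bit-per-sample estimation of a Bernoulli parameter, where each user's single-bit message already satisfies the bandwidth constraint; the function name and setup below are illustrative only:

```python
import random

def simulate_one_bit_estimation(p, n, seed=0):
    """Each of n users holds one Bernoulli(p) sample and sends a single
    bit (the sample itself); the server averages the received bits."""
    rng = random.Random(seed)
    bits = [1 if rng.random() < p else 0 for _ in range(n)]
    return sum(bits) / n

# The error of this one-bit protocol shrinks like 1/sqrt(n), matching the
# unconstrained rate for a single Bernoulli parameter; the paper's framework
# quantifies how such rates degrade for high-dimensional families when each
# user's communication (or privacy) budget is limited.
estimate = simulate_one_bit_estimation(p=0.3, n=100_000)
```

In high dimensions with a budget of b bits per user, lower bounds of the kind the paper derives show that the estimation error inflates compared with this unconstrained-looking one-dimensional case.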
Related papers
- Learning to Explore with Lagrangians for Bandits under Unknown Linear Constraints [8.784438985280094]
We study pure exploration problems in multi-armed bandits with unknown linear constraints.
First, we propose a Lagrangian relaxation of the sample complexity lower bound for pure exploration under constraints.
Second, we leverage the Lagrangian lower bound and convexity properties to propose two computationally efficient extensions of Track-and-Stop and Gamified Explorer, namely LATS and LAGEX.
arXiv Detail & Related papers (2024-10-24T15:26:14Z)
- Assouad, Fano, and Le Cam with Interaction: A Unifying Lower Bound Framework and Characterization for Bandit Learnability [71.82666334363174]
We develop a unifying framework for information-theoretic lower bounds in statistical estimation and interactive decision making.
We propose a new lower bound approach called the interactive Fano method.
arXiv Detail & Related papers (2024-10-07T15:14:58Z)
- Regret Lower Bounds in Multi-agent Multi-armed Bandit [14.822625665220068]
Work on multi-armed bandits has largely focused on methods with provable upper bounds on regret.
We provide the first comprehensive study on regret lower bounds across different settings.
arXiv Detail & Related papers (2023-08-15T21:20:24Z)
- Budget-Constrained Bounds for Mini-Batch Estimation of Optimal Transport [35.440243358517066]
We introduce novel families of upper and lower bounds for the Optimal Transport problem constructed by aggregating solutions of mini-batch OT problems.
The upper bound family contains traditional mini-batch averaging at one extreme and a tight bound found by optimal coupling of mini-batches at the other.
Through various experiments, we explore the trade-off between computational budget and bound tightness and show the usefulness of these bounds in computer vision applications.
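As a loose illustration of the upper-bound idea (a simplification, not the paper's construction): stitching together optimal couplings of aligned mini-batches yields a feasible coupling for the full problem, so the averaged mini-batch cost upper-bounds the true OT cost. The brute-force `ot_cost` helper, the fixed batch partition, and the 1-D squared-distance cost below are illustrative assumptions:

```python
import itertools
import math

def ot_cost(xs, ys):
    """Exact optimal transport cost between two equal-size 1-D point sets
    with uniform weights and squared-distance cost, by brute force over
    matchings (only viable for the tiny batches used here)."""
    n = len(xs)
    best = math.inf
    for perm in itertools.permutations(range(n)):
        cost = sum((xs[i] - ys[perm[i]]) ** 2 for i in range(n)) / n
        best = min(best, cost)
    return best

def minibatch_upper_bound(xs, ys, batch_size):
    """Average OT cost over aligned mini-batches. Concatenating each batch's
    optimal matching gives one feasible (not necessarily optimal) matching
    for the full problem, so this average upper-bounds ot_cost(xs, ys)."""
    total, num_batches = 0.0, 0
    for s in range(0, len(xs), batch_size):
        total += ot_cost(xs[s:s + batch_size], ys[s:s + batch_size])
        num_batches += 1
    return total / num_batches
```

Different partitions of the data into batches give different feasible couplings, which is one way to see why the paper's family of upper bounds ranges from plain mini-batch averaging to tighter optimally-coupled variants.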
arXiv Detail & Related papers (2022-10-24T22:12:17Z)
- Estimating Sparse Discrete Distributions Under Local Privacy and Communication Constraints [46.944178305032146]
We consider the problem of estimating sparse discrete distributions under local differential privacy (LDP) and communication constraints.
We characterize the sample complexity for sparse estimation under LDP constraints up to a constant factor and the sample complexity under communication constraints up to a logarithmic factor.
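For intuition (a standard textbook mechanism, not the paper's estimator), k-ary randomized response is a classical epsilon-LDP building block for discrete distribution estimation; the helper names below are illustrative:

```python
import math
import random

def randomized_response(x, k, epsilon, rng):
    """k-ary randomized response: report the true symbol x (in 0..k-1) with
    probability e^eps / (e^eps + k - 1), otherwise a uniformly random other
    symbol. This satisfies epsilon-local differential privacy."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return x
    other = rng.randrange(k - 1)          # uniform among the k - 1 others
    return other if other < x else other + 1

def estimate_histogram(reports, k, epsilon):
    """Unbiased histogram estimate from privatized reports: inverts
    E[freq_j] = q + (p - q) * pi_j, where p and q are the probabilities of
    reporting the true symbol and each other symbol, respectively."""
    e = math.exp(epsilon)
    p, q = e / (e + k - 1), 1.0 / (e + k - 1)
    n = len(reports)
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    return [(c / n - q) / (p - q) for c in counts]
```

For sparse distributions over a large alphabet, mechanisms of this generic kind are suboptimal, which is where the sample-complexity characterizations in the paper come in.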
arXiv Detail & Related papers (2020-10-30T20:06:35Z)
- Learning, compression, and leakage: Minimising classification error via meta-universal compression principles [87.054014983402]
A promising family of compression techniques for learning scenarios is normalised maximum likelihood (NML) coding.
Here we consider an NML-based decision strategy for supervised classification problems, and show that it attains PAC learning when applied to a wide variety of models.
We show that the misclassification rate of our method is upper bounded by the maximal leakage, a recently proposed metric to quantify the potential of data leakage in privacy-sensitive scenarios.
arXiv Detail & Related papers (2020-10-14T20:03:58Z)
- On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization [55.937424268654645]
We consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm.
We provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability.
arXiv Detail & Related papers (2020-08-20T03:48:14Z)
- Relative Deviation Margin Bounds [55.22251993239944]
We give two types of learning bounds, both distribution-dependent and valid for general families, in terms of the Rademacher complexity.
We derive distribution-dependent generalization bounds for unbounded loss functions under the assumption of a finite moment.
arXiv Detail & Related papers (2020-06-26T12:37:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.