Over-the-Air Statistical Estimation
- URL: http://arxiv.org/abs/2103.04014v1
- Date: Sat, 6 Mar 2021 03:07:22 GMT
- Title: Over-the-Air Statistical Estimation
- Authors: Chuan-Zheng Lee, Leighton Pate Barnes and Ayfer Ozgur
- Abstract summary: We study schemes and lower bounds for distributed minimax statistical estimation over a Gaussian multiple-access channel (MAC) under squared error loss.
We show that estimation schemes that leverage the physical layer offer a drastic reduction in estimation error over digital schemes relying on a physical-layer abstraction.
- Score: 4.082216579462796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study schemes and lower bounds for distributed minimax statistical
estimation over a Gaussian multiple-access channel (MAC) under squared error
loss, in a framework combining statistical estimation and wireless
communication. First, we develop "analog" joint estimation-communication
schemes that exploit the superposition property of the Gaussian MAC and we
characterize their risk in terms of the number of nodes and dimension of the
parameter space. Then, we derive information-theoretic lower bounds on the
minimax risk of any estimation scheme restricted to communicate the samples
over a given number of uses of the channel and show that the risk achieved by
our proposed schemes is within a logarithmic factor of these lower bounds. We
compare both achievability and lower bound results to previous "digital" lower
bounds, where nodes transmit errorless bits at the Shannon capacity of the MAC,
showing that estimation schemes that leverage the physical layer offer a
drastic reduction in estimation error over digital schemes relying on a
physical-layer abstraction.
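The central mechanism in the abstract — nodes transmitting analog functions of their samples simultaneously so that the Gaussian MAC's superposition itself performs the aggregation — can be illustrated with a toy simulation. The sketch below is a minimal illustration, assuming a scalar Gaussian location model, a single channel use, unit channel gains, a per-node power budget P, channel noise power N0, and a known second-moment bound B; these names and scaling choices are illustrative only and do not reproduce the schemes or constants analyzed in the paper.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n nodes each observe X_i ~ N(theta, sigma^2); a fusion center must
# estimate theta from a single use of a Gaussian MAC whose output is
# Y = sum_i U_i + Z, with Z ~ N(0, N0) and a per-node power constraint E[U_i^2] <= P.
n, theta, sigma2 = 100, 0.7, 1.0   # theta is unknown to the estimator
P, N0 = 1.0, 1.0                   # power budget and channel noise power (illustrative)
B = 2.0                            # assumed known bound on E[X_i^2] (illustrative)

def analog_ota_estimate(samples, P, N0, B, rng):
    """Analog over-the-air scheme: every node transmits a scaled copy of its own
    sample in the same channel use, the MAC adds them up, and the receiver
    rescales the noisy superposition into an estimate of the mean."""
    alpha = np.sqrt(P / B)                                      # meets E[(alpha*X_i)^2] <= P
    y = alpha * samples.sum() + rng.normal(scale=np.sqrt(N0))   # superposition + channel noise
    return y / (alpha * len(samples))

samples = rng.normal(theta, np.sqrt(sigma2), size=n)
print(f"over-the-air estimate : {analog_ota_estimate(samples, P, N0, B, rng):.4f}")
print(f"noiseless sample mean : {samples.mean():.4f}")
```
In this toy model the channel noise enters only once while the n transmissions add coherently, so the channel-induced error decays like N0 / (alpha^2 n^2), faster than the statistical error sigma^2 / n; this is the qualitative advantage over digital, bit-based schemes that the paper quantifies with its own achievability schemes and lower bounds.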
Related papers
- High-probability minimax lower bounds [2.5993680263955947]
We introduce the notion of a minimax quantile, and seek to articulate its dependence on the quantile level.
We develop high-probability variants of the classical Le Cam and Fano methods, as well as a technique to convert local minimax risk lower bounds to lower bounds on minimax quantiles.
arXiv Detail & Related papers (2024-06-19T11:15:01Z)
- Optimal Estimation and Computational Limit of Low-rank Gaussian Mixtures [12.868722327487752]
We propose a low-rank Gaussian mixture model (LrMM) assuming each matrix-valued observation has a planted low-rank structure.
We prove the minimax optimality of a maximum likelihood estimator which, in general, is computationally infeasible.
Our results reveal multiple phase transitions in the minimax error rates and the statistical-to-computational gap.
arXiv Detail & Related papers (2022-01-22T12:43:25Z)
- On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error [65.51757376525798]
The existence of adversarial perturbations has opened an interesting research line on provable robustness.
However, no provable results have been presented to estimate and bound the error committed.
This paper proposes two lightweight strategies to find the minimal adversarial perturbation.
The obtained results show that the proposed strategies approximate the theoretical distance and robustness for samples close to the classification boundary, leading to provable guarantees against any adversarial attack.
arXiv Detail & Related papers (2022-01-04T16:40:03Z)
- Keep it Tighter -- A Story on Analytical Mean Embeddings [0.6445605125467574]
Kernel techniques are among the most popular and flexible approaches in data science.
The mean embedding gives rise to a divergence measure referred to as the maximum mean discrepancy (MMD).
In this paper we focus on the problem of MMD estimation when the mean embedding of one of the underlying distributions is available analytically (see the sketch after this list).
arXiv Detail & Related papers (2021-10-15T21:29:27Z)
- Non asymptotic estimation lower bounds for LTI state space models with Cramér-Rao and van Trees [1.14219428942199]
We study the estimation problem for linear time-invariant (LTI) state-space models with Gaussian excitation of unknown covariance.
We provide non-asymptotic lower bounds for the expected estimation error and the mean square estimation risk of the least squares estimator.
Our results extend and improve existing lower bounds to lower bounds in expectation on the mean square estimation risk.
arXiv Detail & Related papers (2021-09-17T15:00:25Z)
- Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral [58.434753643798224]
Divergence frontiers have been proposed as an evaluation framework for generative models.
We establish non-asymptotic bounds on the sample complexity of the plug-in estimator of divergence frontiers.
We also augment the divergence frontier framework by investigating the statistical performance of smoothed distribution estimators.
arXiv Detail & Related papers (2021-06-15T06:26:25Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Learning Minimax Estimators via Online Learning [55.92459567732491]
We consider the problem of designing minimax estimators for estimating parameters of a probability distribution.
We construct an algorithm for finding a mixed-strategy Nash equilibrium.
arXiv Detail & Related papers (2020-06-19T22:49:42Z)
- Robust Density Estimation under Besov IPM Losses [10.079698681921672]
We study minimax convergence rates of nonparametric density estimation in the Huber contamination model.
We show that a re-scaled thresholding wavelet series estimator achieves minimax optimal convergence rates under a wide variety of losses.
arXiv Detail & Related papers (2020-04-18T11:30:35Z)
- Distribution Approximation and Statistical Estimation Guarantees of Generative Adversarial Networks [82.61546580149427]
Generative Adversarial Networks (GANs) have achieved a great success in unsupervised learning.
This paper provides approximation and statistical guarantees of GANs for the estimation of data distributions with densities in a Hölder space.
arXiv Detail & Related papers (2020-02-10T16:47:57Z)
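The "Keep it Tighter" entry above considers MMD estimation when one distribution's kernel mean embedding is known in closed form. The sketch below makes that setting concrete, assuming a Gaussian RBF kernel and a standard Gaussian reference Q = N(0, I_d), for which the embedding integrals have simple analytic expressions; it illustrates the general setting only and is not the estimator or the bounds developed in that paper.
```python
import numpy as np

def mmd2_analytic_gaussian(X, bandwidth=1.0):
    """Plug-in estimate of MMD^2(P, Q) from samples X ~ P when Q = N(0, I_d)
    and k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)), so the kernel mean
    embedding of Q is available analytically (standard Gaussian integrals)."""
    n, d = X.shape
    s2 = bandwidth**2

    # Term 1: (1/n^2) * sum_{i,j} k(x_i, x_j), computed from pairwise squared distances.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    term_xx = np.exp(-d2 / (2.0 * s2)).mean()

    # Term 2: (2/n) * sum_i E_{y~Q}[k(x_i, y)], using the closed-form embedding at each x_i.
    emb = (s2 / (s2 + 1.0)) ** (d / 2.0) * np.exp(-sq / (2.0 * (s2 + 1.0)))
    term_xq = 2.0 * emb.mean()

    # Term 3: E_{y,y'~Q}[k(y, y')], also closed form for a standard Gaussian Q.
    term_qq = (s2 / (s2 + 2.0)) ** (d / 2.0)

    return term_xx - term_xq + term_qq

rng = np.random.default_rng(1)
X = rng.normal(loc=0.5, scale=1.0, size=(500, 3))   # P is shifted away from N(0, I)
print(f"estimated MMD^2 vs N(0, I): {mmd2_analytic_gaussian(X):.4f}")
```
Because the E_Q[k(x, .)] and E_{Q,Q}[k] terms are evaluated exactly rather than from samples of Q, only the P-samples contribute estimation error, which is the appeal of analytical mean embeddings in this setting.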