The Vector Poisson Channel: On the Linearity of the Conditional Mean
Estimator
- URL: http://arxiv.org/abs/2003.08967v1
- Date: Thu, 19 Mar 2020 18:21:33 GMT
- Title: The Vector Poisson Channel: On the Linearity of the Conditional Mean
Estimator
- Authors: Alex Dytso, Michael Fauss, and H. Vincent Poor
- Abstract summary: This work studies properties of the conditional mean estimator in vector Poisson noise.
The first result shows that the only prior distribution inducing a linear conditional mean estimator is a product gamma distribution, and that the estimator cannot be linear when the dark current parameter of the Poisson noise is non-zero.
The second result produces a quantitative refinement of the first result.
- Score: 82.5577471797883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work studies properties of the conditional mean estimator in vector
Poisson noise. The main emphasis is to study conditions on prior distributions
that induce linearity of the conditional mean estimator. The paper consists of
two main results. The first result shows that the only distribution that
induces the linearity of the conditional mean estimator is a product gamma
distribution. Moreover, it is shown that the conditional mean estimator cannot
be linear when the dark current parameter of the Poisson noise is non-zero. The
second result produces a quantitative refinement of the first result.
Specifically, it is shown that if the conditional mean estimator is close to
linear in a mean squared error sense, then the prior distribution must be close
to a product gamma distribution in terms of their characteristic functions.
Finally, the results are compared to their Gaussian counterparts.
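The gamma-Poisson conjugacy behind the first result can be checked numerically in a scalar instance: with a Gamma(shape k, rate b) prior and Y | X = x ~ Poisson(a*x + dark), the conditional mean is exactly (k + y)/(b + a) when the dark current is zero, and visibly curved once it is positive. A minimal sketch, with illustrative parameter values not taken from the paper:

```python
import numpy as np

# Hypothetical parameters: X ~ Gamma(shape=k, rate=b), Y | X=x ~ Poisson(a*x + dark).
k, b, a = 2.0, 3.0, 1.5

x = np.linspace(1e-8, 50.0, 200001)          # integration grid for the prior support
log_prior = (k - 1) * np.log(x) - b * x       # unnormalized Gamma(k, rate=b) log-density

def cond_mean(y, dark=0.0):
    """E[X | Y=y] by direct numerical integration of Bayes' rule."""
    rate = a * x + dark
    log_like = y * np.log(rate) - rate        # Poisson pmf up to the 1/y! constant
    log_post = log_prior + log_like
    w = np.exp(log_post - np.max(log_post))   # unnormalized posterior weights
    return float(np.sum(x * w) / np.sum(w))   # normalization cancels in the ratio

ys = np.arange(0, 10)
m0 = np.array([cond_mean(y) for y in ys])             # zero dark current
m1 = np.array([cond_mean(y, dark=1.0) for y in ys])   # positive dark current

# First result: with zero dark current the estimator matches the closed
# form (k + y) / (b + a) from gamma-Poisson conjugacy, i.e. it is linear in y.
print(np.max(np.abs(m0 - (k + ys) / (b + a))))

# Second differences measure curvature: near zero for dark = 0, but clearly
# nonzero for dark = 1, so the estimator cannot remain linear.
print(np.max(np.abs(np.diff(m0, 2))), np.max(np.abs(np.diff(m1, 2))))
```

The printed deviation from (k + y)/(b + a) stays at quadrature-noise level for zero dark current, while the second differences under positive dark current are clearly nonzero, mirroring the paper's claim that a non-zero dark current rules out linearity.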
Related papers
- $L^1$ Estimation: On the Optimality of Linear Estimators [64.76492306585168]
This work shows that the only prior distribution on $X$ that induces linearity in the conditional median is Gaussian.
In particular, it is demonstrated that if the conditional distribution $P_{X|Y=y}$ is symmetric for all $y$, then $X$ must follow a Gaussian distribution.
arXiv Detail & Related papers (2023-09-17T01:45:13Z)
- Robust Gaussian Process Regression with Huber Likelihood [2.7184224088243365]
We propose a robust process model in the Gaussian process framework with the likelihood of observed data expressed as the Huber probability distribution.
The proposed model employs weights based on projection statistics to scale residuals and bound the influence of vertical outliers and bad leverage points on the latent function estimates.
arXiv Detail & Related papers (2023-01-19T02:59:33Z)
- Off-the-grid prediction and testing for linear combination of translated features [2.774897240515734]
We consider a model where a signal (discrete or continuous) is observed with an additive Gaussian noise process.
We extend previous prediction results for off-the-grid estimators by taking into account that the scale parameter may vary.
We propose a procedure to test whether the features of the observed signal belong to a given finite collection.
arXiv Detail & Related papers (2022-12-02T13:48:45Z)
- Off-policy estimation of linear functionals: Non-asymptotic theory for semi-parametric efficiency [59.48096489854697]
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures.
We prove non-asymptotic upper bounds on the mean-squared error of such procedures.
We establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds.
arXiv Detail & Related papers (2022-09-26T23:50:55Z)
- Efficient Truncated Linear Regression with Unknown Noise Variance [26.870279729431328]
We provide the first computationally and statistically efficient estimators for truncated linear regression when the noise variance is unknown.
Our estimator is based on an efficient implementation of Projected Gradient Descent on the negative log-likelihood of the truncated sample.
arXiv Detail & Related papers (2022-08-25T12:17:37Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends this framework to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Bayesian Model Averaging for Causality Estimation and its Approximation based on Gaussian Scale Mixture Distributions [0.0]
We first show from a Bayesian perspective that it is Bayes optimal to weight (average) the causal effects estimated under each model.
We develop an approximation to the Bayes optimal estimator by using Gaussian scale mixture distributions.
arXiv Detail & Related papers (2021-03-15T08:07:58Z)
- Sharper Sub-Weibull Concentrations: Non-asymptotic Bai-Yin's Theorem [0.0]
Non-asymptotic concentration inequalities play an essential role in the finite-sample theory of machine learning and statistics.
We obtain a sharper concentration inequality, with explicit constants, for sums of independent sub-Weibull random variables.
As an application, we derive the $\ell$-error under sparse structures, which is a new result for negative binomial regression.
arXiv Detail & Related papers (2021-02-04T07:16:27Z)
- Sequential prediction under log-loss and misspecification [47.66467420098395]
We consider the question of sequential prediction under the log-loss in terms of cumulative regret.
We show that the cumulative regrets in the well-specified and misspecified cases coincide asymptotically.
We provide an $o(1)$ characterization of the distribution-free or PAC regret.
arXiv Detail & Related papers (2021-01-29T20:28:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.