On High dimensional Poisson models with measurement error: hypothesis
testing for nonlinear nonconvex optimization
- URL: http://arxiv.org/abs/2301.00139v1
- Date: Sat, 31 Dec 2022 06:58:42 GMT
- Title: On High dimensional Poisson models with measurement error: hypothesis
testing for nonlinear nonconvex optimization
- Authors: Fei Jiang, Yeqing Zhou, Jianxuan Liu, Yanyuan Ma
- Abstract summary: We study estimation and testing in a Poisson regression model with high dimensional noisy covariates, which has wide applications in analyzing noisy data.
We propose to estimate the regression parameter by minimizing a penalized target function.
The proposed method is applied to the Alzheimer's Disease Neuroimaging Initiative study.
- Score: 13.369004892264146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study estimation and testing in the Poisson regression model with noisy
high dimensional covariates, which has wide applications in analyzing noisy big
data. Correcting for the estimation bias due to the covariate noise leads to a
non-convex target function to minimize. Treating the high dimensional issue
further leads us to augment an amenable penalty term to the target function. We
propose to estimate the regression parameter through minimizing the penalized
target function. We derive the L1 and L2 convergence rates of the estimator and
prove the variable selection consistency. We further establish the asymptotic
normality of any subset of the parameters, where the subset can have infinitely
many components as long as its cardinality grows sufficiently slow. We develop
Wald and score tests based on the asymptotic normality of the estimator, which
permits testing of linear functions of the members of the subset. We examine
the finite sample performance of the proposed tests by extensive simulation.
Finally, the proposed method is successfully applied to the Alzheimer's Disease
Neuroimaging Initiative study, which motivated this work initially.
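The estimation strategy in the abstract (a bias-corrected, hence nonconvex, loss plus an amenable penalty) can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes additive Gaussian covariate noise with known covariance Sigma, uses a standard Nakamura-type corrected Poisson loss exp(w'b - b'Sb/2) - y * w'b, and minimizes it plus an L1 penalty by proximal gradient descent; all function names are invented for illustration.

```python
import numpy as np

def corrected_loss_grad(beta, W, y, Sigma):
    """Nonconvex corrected Poisson loss for noisy covariates W = X + U,
    U ~ N(0, Sigma), and its gradient.  The term exp(w'b - b'Sb/2) is an
    unbiased proxy for exp(x'b) under the Gaussian-noise assumption."""
    eta = W @ beta - 0.5 * beta @ Sigma @ beta
    mu = np.exp(eta)
    loss = np.mean(mu - y * (W @ beta))
    grad = (mu[:, None] * (W - Sigma @ beta)).mean(0) - (y[:, None] * W).mean(0)
    return loss, grad

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fit_penalized(W, y, Sigma, lam=0.05, step=0.05, iters=500):
    """Proximal gradient descent on corrected loss + lam * ||beta||_1."""
    beta = np.zeros(W.shape[1])
    for _ in range(iters):
        _, g = corrected_loss_grad(beta, W, y, Sigma)
        beta = soft_threshold(beta - step * g, step * lam)
    return beta
```

The L1 proximal step (soft-thresholding) produces exact zeros, which is what makes variable selection possible despite the nonconvex data-fitting term.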
Related papers
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - Risk-Sensitive Diffusion for Perturbation-Robust Optimization [58.68233326265417]
We show that noisy samples incur another objective function, rather than the one with score function, which will wrongly optimize the model.
We introduce the risk-sensitive SDE, a type of stochastic differential equation (SDE) parameterized by the risk vector.
We prove that zero instability measure is only achievable in the case where noisy samples are caused by Gaussian perturbation.
arXiv Detail & Related papers (2024-02-03T08:41:51Z) - Max-affine regression via first-order methods [7.12511675782289]
The max-affine model ubiquitously arises in applications in signal processing and statistics.
We present a non-asymptotic convergence analysis of gradient descent (GD) and mini-batch stochastic gradient descent (SGD) for max-affine regression.
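As a toy illustration of the max-affine model y ≈ max_j (a_j'x + b_j) and its fitting by first-order methods, the sketch below runs plain (sub)gradient descent, routing each sample's squared-error gradient to its currently active affine piece. It is a hedged simplification (random initialization, fixed step size, no mini-batching), not the paper's analyzed algorithm; all names are invented.

```python
import numpy as np

def maxaffine_predict(X, A, b):
    """Prediction of the max-affine model: max over k pieces a_j'x + b_j."""
    return (X @ A.T + b).max(axis=1)

def fit_maxaffine(X, y, k=3, lr=0.1, iters=2000, seed=0):
    """Subgradient descent on squared loss; each sample updates only the
    affine piece that attains the max for it at the current iterate."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    A = rng.normal(scale=0.5, size=(k, d))
    b = rng.normal(scale=0.5, size=k)
    for _ in range(iters):
        scores = X @ A.T + b              # (n, k) piece values
        j = scores.argmax(axis=1)         # active piece per sample
        r = scores[np.arange(n), j] - y   # residuals
        for m in range(k):
            mask = j == m
            if mask.any():
                A[m] -= lr * (r[mask][:, None] * X[mask]).mean(0)
                b[m] -= lr * r[mask].mean()
    return A, b
```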
arXiv Detail & Related papers (2023-08-15T23:46:44Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Off-policy estimation of linear functionals: Non-asymptotic theory for
semi-parametric efficiency [59.48096489854697]
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures.
We prove non-asymptotic upper bounds on the mean-squared error of such procedures.
We establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds.
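Linear-functional estimation from observational data of this kind is commonly attacked with augmented inverse-propensity-weighting (AIPW). The snippet below is a generic textbook-style sketch of such an estimator under a known propensity score, not the specific procedure analyzed in the paper; names are illustrative.

```python
import numpy as np

def aipw_mean(y, a, mu_hat, pi):
    """AIPW estimate of the linear functional E[Y(1)]: outcome-model
    prediction mu_hat(x) plus an inverse-propensity-weighted correction
    using the observed outcomes of treated units (a = 1)."""
    return np.mean(mu_hat + a / pi * (y - mu_hat))
```

The estimator is doubly robust: it remains consistent if either the outcome model mu_hat or the propensity pi is correct, which is the structural property behind the semiparametric-efficiency results these papers study.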
arXiv Detail & Related papers (2022-09-26T23:50:55Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions are placed on the parameters through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Minimax Estimation of Conditional Moment Models [40.95498063465325]
We introduce a min-max criterion function, under which the estimation problem can be thought of as solving a zero-sum game.
We analyze the statistical estimation rate of the resulting estimator for arbitrary hypothesis spaces.
We show how our modified mean squared error rate, combined with conditions that bound the ill-posedness of the inverse problem, leads to mean squared error rates.
arXiv Detail & Related papers (2020-06-12T14:02:38Z) - On the Estimation of Derivatives Using Plug-in Kernel Ridge Regression
Estimators [4.392844455327199]
We propose a simple plug-in kernel ridge regression (KRR) estimator in nonparametric regression.
We provide a non-asymptotic analysis to study the behavior of the proposed estimator in a unified manner.
The proposed estimator achieves the optimal rate of convergence with the same choice of tuning parameter for any order of derivatives.
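A plug-in KRR derivative estimator simply differentiates the fitted kernel expansion with respect to the input. The 1-D Gaussian-kernel sketch below is a hedged illustration with hand-picked bandwidth and ridge parameters, not the paper's tuned estimator; the key point is that one set of dual coefficients serves both the function and its derivative.

```python
import numpy as np

def krr_deriv(X, y, lam=1e-4, h=0.3):
    """Fit 1-D kernel ridge regression with a Gaussian kernel and return
    plug-in estimators of the function f and its derivative f'.
    f_hat(x) = k(x, X) @ alpha, so f_hat'(x) = d/dx k(x, X) @ alpha."""
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / h) ** 2)
    alpha = np.linalg.solve(K + len(X) * lam * np.eye(len(X)), y)

    def f(x):
        k = np.exp(-0.5 * ((x[:, None] - X[None, :]) / h) ** 2)
        return k @ alpha

    def fprime(x):
        diff = x[:, None] - X[None, :]
        k = np.exp(-0.5 * (diff / h) ** 2)
        return (-diff / h ** 2 * k) @ alpha   # derivative of the kernel

    return f, fprime
```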
arXiv Detail & Related papers (2020-06-02T02:32:39Z) - SUMO: Unbiased Estimation of Log Marginal Probability for Latent
Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
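The randomized-truncation idea behind this estimator can be illustrated in isolation: a Russian-roulette estimator truncates an infinite series at a random geometric index and reweights each surviving term by its inclusion probability, yielding an unbiased single-sample estimate of the full sum. This standalone sketch (invented names, generic series) shows the mechanism rather than the paper's log-marginal-likelihood estimator.

```python
import numpy as np

def russian_roulette(delta, p=0.5, rng=None):
    """Unbiased estimate of sum_{k>=1} delta(k): truncate at K ~ Geometric(p)
    and divide term k by its survival probability P(K >= k) = (1-p)**(k-1)."""
    rng = rng or np.random.default_rng()
    K = rng.geometric(p)                  # P(K = k) = p * (1-p)**(k-1)
    return sum(delta(k) / (1 - p) ** (k - 1) for k in range(1, K + 1))
```

Unbiasedness follows from swapping the sum and the expectation: term k appears with probability (1-p)^(k-1) and is divided by exactly that probability. The trade-off (as in the paper) is between expected compute, controlled by p, and the estimator's variance.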
arXiv Detail & Related papers (2020-04-01T11:49:30Z) - Online stochastic gradient descent on non-convex losses from
high-dimensional inference [2.2344764434954256]
Stochastic gradient descent (SGD) is a popular algorithm for optimization problems in high-dimensional tasks.
In this paper we produce an estimator of non-trivial correlation from data.
We illustrate our approach by applying it to a set of tasks such as phase retrieval and estimation for generalized linear models.
arXiv Detail & Related papers (2020-03-23T17:34:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.