Near-Efficient and Non-Asymptotic Multiway Inference
- URL: http://arxiv.org/abs/2511.05368v1
- Date: Fri, 07 Nov 2025 15:54:31 GMT
- Title: Near-Efficient and Non-Asymptotic Multiway Inference
- Authors: Oscar López, Arvind Prasadan, Carlos Llosa-Vite, Richard B. Lehoucq, Daniel M. Dunlavy
- Abstract summary: We establish non-asymptotic efficiency guarantees for tensor decomposition-based inference in count data models. A rank-constrained maximum-likelihood estimator achieves multiway analysis with variance matching the Cramér-Rao Lower Bound. For higher ranks, we illustrate that our multiway estimator may not attain the CRLB; nevertheless, CP-based parametric inference remains nearly minimax optimal.
- Score: 3.131740922192115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We establish non-asymptotic efficiency guarantees for tensor decomposition-based inference in count data models. Under a Poisson framework, we consider two related goals: (i) parametric inference, the estimation of the full distributional parameter tensor, and (ii) multiway analysis, the recovery of its canonical polyadic (CP) decomposition factors. Our main result shows that in the rank-one setting, a rank-constrained maximum-likelihood estimator achieves multiway analysis with variance matching the Cramér-Rao Lower Bound (CRLB) up to absolute constants and logarithmic factors. This provides a general framework for studying "near-efficient" multiway estimators in finite-sample settings. For higher ranks, we illustrate that our multiway estimator may not attain the CRLB; nevertheless, CP-based parametric inference remains nearly minimax optimal, with error bounds that improve on prior work by offering more favorable dependence on the CP rank. Numerical experiments corroborate near-efficiency in the rank-one case and highlight the efficiency gap in higher-rank scenarios.
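The rank-one Poisson setting admits a classical closed-form maximum-likelihood estimator built from the marginal sums of the count tensor. The sketch below is a minimal illustration of that setting, not the paper's estimator: the tensor sizes, factor ranges, and Monte Carlo setup are all hypothetical, and it only contrasts the rank-one MLE's mean-squared error with that of the unconstrained MLE (the raw counts).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rank-one ground truth: lambda_ijk = a_i * b_j * c_k
a = rng.uniform(1.0, 3.0, size=4)
b = rng.uniform(1.0, 3.0, size=5)
c = rng.uniform(1.0, 3.0, size=6)
lam = np.einsum("i,j,k->ijk", a, b, c)

def rank_one_poisson_mle(Y):
    """Closed-form rank-one Poisson MLE of the mean tensor:
    lambda_hat_ijk = Y_i.. * Y_.j. * Y_..k / T^2, with T = sum(Y)."""
    T = Y.sum()
    si = Y.sum(axis=(1, 2))
    sj = Y.sum(axis=(0, 2))
    sk = Y.sum(axis=(0, 1))
    return np.einsum("i,j,k->ijk", si, sj, sk) / T**2

# Monte Carlo: MSE of the rank-one MLE vs. the unconstrained MLE (Y itself)
n_rep = 200
mse_cp, mse_raw = 0.0, 0.0
for _ in range(n_rep):
    Y = rng.poisson(lam)
    mse_cp += np.mean((rank_one_poisson_mle(Y) - lam) ** 2)
    mse_raw += np.mean((Y - lam) ** 2)
print(mse_cp / n_rep, mse_raw / n_rep)
```

The rank constraint pools information across slices, so its entrywise error is far below the raw-count variance; the paper's contribution is showing non-asymptotically that this gain essentially saturates the CRLB in the rank-one case.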
Related papers
- On the Optimal Construction of Unbiased Gradient Estimators for Zeroth-Order Optimization [57.179679246370114]
A potential limitation of existing methods is the bias inherent in most perturbation estimators unless a stepsize is proposed. We propose a novel family of unbiased gradient scaling estimators that eliminate bias while maintaining favorable construction.
arXiv Detail & Related papers (2025-10-22T18:25:43Z) - Optimal Nuisance Function Tuning for Estimating a Doubly Robust Functional under Proportional Asymptotics [9.86496801565209]
We evaluate three existing ECC estimators and two sample splitting strategies for estimating the required nuisance functions. We show that our bias correction strategy yields $\sqrt{n}$-consistent estimators across different sample splitting strategies and estimator choices. Our analysis reveals that prediction-optimal tuning parameters (i.e., those that optimally estimate the nuisance functions) may not lead to the lowest variance of the ECC estimator.
arXiv Detail & Related papers (2025-09-29T21:46:14Z) - Gaussian Approximation and Multiplier Bootstrap for Stochastic Gradient Descent [14.19520637866741]
We establish the non-asymptotic validity of the multiplier bootstrap procedure for constructing confidence sets. We derive approximation rates in convex distance of order up to $1/\sqrt{n}$.
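The multiplier bootstrap idea is to reweight centered observations with i.i.d. mean-zero multipliers and read off quantiles of the resulting fluctuations. The sketch below applies it to a plain sample mean rather than SGD iterates, so the data-generating constants and the Gaussian-multiplier choice are illustrative assumptions, not the cited paper's construction.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(1.0, 2.0, size=500)   # hypothetical data; true mean = 1.0
theta_hat = x.mean()

# Multiplier bootstrap: perturb centered residuals with i.i.d. N(0,1) weights
B = 2000
w = rng.standard_normal((B, x.size))
boot = (w * (x - theta_hat)).mean(axis=1)   # B bootstrap fluctuations around 0

# 95% confidence interval from bootstrap quantiles
lo, hi = theta_hat + np.quantile(boot, [0.025, 0.975])
print(lo, hi)
```

Because only the cheap multiplier draws are resampled (the data and the point estimate are fixed), the procedure suits online settings where re-running the estimator per bootstrap replicate would be costly.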
arXiv Detail & Related papers (2025-02-10T17:49:05Z) - Pareto-frontier Entropy Search with Variational Lower Bound Maximization [6.926467730065948]
We consider an approximation of the truncated distribution by using a mixture distribution consisting of two possible approximate truncations. Since the optimal balance of the mixture is unknown beforehand, we propose optimizing the mixture coefficient through the variational lower bound framework. Our empirical evaluation demonstrates the effectiveness of the proposed method, particularly when the number of objective functions is large.
arXiv Detail & Related papers (2025-01-31T12:03:17Z) - Statistical Inference for Temporal Difference Learning with Linear Function Approximation [55.80276145563105]
We investigate the statistical properties of Temporal Difference learning with Polyak-Ruppert averaging. We make three theoretical contributions that improve upon the current state-of-the-art results.
arXiv Detail & Related papers (2024-10-21T15:34:44Z) - Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate. We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z) - A Correlation-induced Finite Difference Estimator [6.054123928890574]
We first provide a sample-driven method via the bootstrap technique to estimate the optimal perturbation, and then propose an efficient FD estimator based on correlated samples at the estimated optimal perturbation.
Numerical results confirm the efficiency of our estimators and align well with the theory presented, especially in scenarios with small sample sizes.
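The variance benefit of correlated samples in finite-difference (FD) estimation comes from common random numbers: driving both function evaluations with the same noise lets it cancel in the difference. The sketch below is a toy demonstration under assumed names and constants (`noisy_f`, the quadratic objective, the perturbation `h`), not the paper's bootstrap-tuned estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_f(x, u):
    # Hypothetical stochastic oracle: f(x) = x^2, noise driven by u
    return x**2 + 0.5 * u

x0, h, n = 1.0, 0.1, 2000

# Independent noise at x0 + h and x0 - h: the noise does not cancel
u1, u2 = rng.standard_normal(n), rng.standard_normal(n)
g_indep = (noisy_f(x0 + h, u1) - noisy_f(x0 - h, u2)) / (2 * h)

# Common random numbers: the same u drives both evaluations, noise cancels
u = rng.standard_normal(n)
g_corr = (noisy_f(x0 + h, u) - noisy_f(x0 - h, u)) / (2 * h)

print(g_indep.std(), g_corr.std())
```

In this additive-noise toy the cancellation is exact, so every correlated FD sample equals the deterministic central difference $2 x_0$; in practice the noise is only partially shared and the variance reduction, while substantial, is not total.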
arXiv Detail & Related papers (2024-05-09T09:27:18Z) - Online Bootstrap Inference with Nonconvex Stochastic Gradient Descent Estimator [0.0]
In this paper, we investigate the theoretical properties of stochastic gradient descent (SGD) for statistical inference in the context of nonconvex problems.
We propose two inferential procedures for loss functions that may contain multiple local minima.
arXiv Detail & Related papers (2023-06-03T22:08:10Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Off-policy estimation of linear functionals: Non-asymptotic theory for
semi-parametric efficiency [59.48096489854697]
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures.
We prove non-asymptotic upper bounds on the mean-squared error of such procedures.
We establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds.
arXiv Detail & Related papers (2022-09-26T23:50:55Z) - A New Central Limit Theorem for the Augmented IPW Estimator: Variance
Inflation, Cross-Fit Covariance and Beyond [0.9172870611255595]
Augmented inverse probability weighting (AIPW) with cross-fitting is a popular choice in practice.
We study this cross-fit AIPW estimator under well-specified outcome regression and propensity score models in a high-dimensional regime.
Our work utilizes a novel interplay between three distinct tools--approximate message passing theory, the theory of deterministic equivalents, and the leave-one-out approach.
arXiv Detail & Related papers (2022-05-20T14:17:53Z) - Online Statistical Inference for Stochastic Optimization via
Kiefer-Wolfowitz Methods [8.890430804063705]
We first present the asymptotic distribution of Polyak-Ruppert-averaging-type Kiefer-Wolfowitz (AKW) estimators.
The distributional result reflects the trade-off between statistical efficiency and function query complexity.
arXiv Detail & Related papers (2021-02-05T19:22:41Z) - Finite Sample Analysis of Minimax Offline Reinforcement Learning:
Completeness, Fast Rates and First-Order Efficiency [83.02999769628593]
We offer a theoretical characterization of off-policy evaluation (OPE) in reinforcement learning.
We show that the minimax approach enables us to achieve a fast rate of convergence for weights and quality functions.
We present the first finite-sample result with first-order efficiency in non-tabular environments.
arXiv Detail & Related papers (2021-02-05T03:20:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.