Adaptive Estimation and Inference in Conditional Moment Models via the Discrepancy Principle
- URL: http://arxiv.org/abs/2603.01337v1
- Date: Mon, 02 Mar 2026 00:23:20 GMT
- Title: Adaptive Estimation and Inference in Conditional Moment Models via the Discrepancy Principle
- Authors: Jiyuan Tan, Vasilis Syrgkanis
- Abstract summary: We study adaptive estimation and inference in ill-posed linear inverse problems defined by conditional moment restrictions.
Existing regularized estimators such as Regularized DeepIV (RDIV) require prior knowledge of the smoothness of the nuisance function.
- Score: 17.447741518678374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study adaptive estimation and inference in ill-posed linear inverse problems defined by conditional moment restrictions. Existing regularized estimators such as Regularized DeepIV (RDIV) require prior knowledge of the smoothness of the nuisance function, typically encoded by a β-source condition, in order to tune their regularization parameters. In practice, this smoothness is unknown, and misspecified hyperparameters can lead to suboptimal convergence rates or instability. We introduce a discrepancy-principle-based framework for adaptive hyperparameter selection that automatically balances bias and variance without relying on the unknown smoothness parameter. Our framework applies to both RDIV (Li et al. [2024]) and the Tikhonov Regularized Adversarial Estimator (TRAE) (Bennett et al. [2023a]) and achieves the same convergence rates in both weak and strong metrics. Building on this, we construct a fully adaptive doubly robust estimator for linear functionals that attains the optimal rate of the better-conditioned of the primal and dual problems, providing a practical, theoretically grounded approach to adaptive inference in ill-posed econometric models.
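The paper's selection rule builds on the classical discrepancy principle from inverse problems. As a point of reference, here is a minimal sketch of Morozov's discrepancy principle for plain Tikhonov regularization on a toy linear inverse problem; it is not the paper's RDIV/TRAE procedure, and the noise level delta, safety factor tau, and grid ratio q are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem: exponentially decaying singular values.
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(0.9 ** np.arange(n)) @ V.T
x_true = V[:, :10] @ rng.standard_normal(10)       # "smooth" truth
noise = rng.standard_normal(n)
delta = 1e-3                                        # noise level ||b - A x_true||
b = A @ x_true + delta * noise / np.linalg.norm(noise)

def tikhonov(A, b, lam):
    """Tikhonov solution x_lam = argmin ||Ax - b||^2 + lam * ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def discrepancy_principle(A, b, delta, tau=1.5, lam=1.0, q=0.7, max_iter=100):
    """Morozov's rule: shrink lam geometrically and stop the first time the
    residual norm drops to tau * delta, i.e. as soon as the fit is no better
    than the noise allows. No knowledge of the source condition is needed."""
    for _ in range(max_iter):
        x = tikhonov(A, b, lam)
        if np.linalg.norm(A @ x - b) <= tau * delta:
            return lam, x
        lam *= q
    return lam, x

lam_star, x_hat = discrepancy_principle(A, b, delta)
print(lam_star, np.linalg.norm(x_hat - x_true))
```

The key point mirrors the abstract: the stopping rule uses only the computable residual and the noise level, never the unknown source-condition smoothness β.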
Related papers
- Adaptive Benign Overfitting (ABO): Overparameterized RLS for Online Learning in Non-stationary Time-series [0.0]
ABO is highly accurate (comparable to baseline kernel methods) while achieving speed improvements of between 20 and 40 percent.
Results provide a unified view linking adaptive filtering, kernel approximation, and benign overfitting within a stable online learning framework.
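For orientation, the recursive least squares (RLS) building block that ABO overparameterizes can be sketched in a few lines; this is the textbook exponentially-weighted RLS filter with a forgetting factor, not ABO itself, and all hyperparameters are illustrative:

```python
import numpy as np

class RLS:
    """Recursive least squares with forgetting factor lam (0 < lam <= 1).

    Maintains weights w and inverse covariance P; each update costs O(d^2),
    and the exponential forgetting lets the fit track non-stationary data.
    """
    def __init__(self, dim, lam=0.99, delta=100.0):
        self.w = np.zeros(dim)
        self.P = delta * np.eye(dim)          # large init => weak prior
        self.lam = lam

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)          # gain vector
        e = y - self.w @ x                    # a-priori prediction error
        self.w += k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

# Track a slowly drifting linear target online.
rng = np.random.default_rng(0)
model, w_true = RLS(dim=5), rng.standard_normal(5)
for t in range(1000):
    w_true += 0.01 * rng.standard_normal(5)   # slow drift (non-stationarity)
    x = rng.standard_normal(5)
    model.update(x, w_true @ x + 0.1 * rng.standard_normal())
```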
arXiv Detail & Related papers (2026-01-29T15:58:01Z)
- Rectified Robust Policy Optimization for Model-Uncertain Constrained Reinforcement Learning without Strong Duality [53.525547349715595]
We propose a novel primal-only algorithm called Rectified Robust Policy Optimization (RRPO).
RRPO operates directly on the primal problem without relying on dual formulations.
We show convergence to an approximately optimal feasible policy with complexity matching the best-known lower bound.
arXiv Detail & Related papers (2025-08-24T16:59:38Z)
- Principled Input-Output-Conditioned Post-Hoc Uncertainty Estimation for Regression Networks [1.4671424999873808]
Uncertainty is critical in safety-sensitive applications but is often omitted from off-the-shelf neural networks due to adverse effects on predictive performance.
We propose a theoretically grounded framework for post-hoc uncertainty estimation in regression tasks by fitting an auxiliary model to both original inputs and frozen model outputs.
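A minimal sketch of that recipe, under the common assumption of heteroscedastic Gaussian noise: train and freeze the point predictor, then fit an auxiliary model on the concatenation of inputs and frozen outputs to predict squared residuals. The models and data here are illustrative stand-ins, not the authors' exact framework:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(4000, 1))
y = np.sin(X[:, 0]) + 0.3 * np.abs(X[:, 0]) * rng.standard_normal(4000)  # heteroscedastic noise

X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, random_state=0)

# 1) Train and freeze the main point predictor.
f = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# 2) Fit an auxiliary model on [inputs, frozen outputs] to predict squared
#    residuals, i.e. a post-hoc conditional-variance estimate.
pred_cal = f.predict(X_cal)
Z_cal = np.column_stack([X_cal, pred_cal])
var_model = GradientBoostingRegressor(random_state=0).fit(Z_cal, (y_cal - pred_cal) ** 2)

# Predictive mean and (clipped) standard deviation for new points.
X_new = np.linspace(-3, 3, 5).reshape(-1, 1)
mu = f.predict(X_new)
sigma = np.sqrt(np.clip(var_model.predict(np.column_stack([X_new, mu])), 1e-8, None))
print(mu, sigma)
```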
arXiv Detail & Related papers (2025-06-01T09:13:27Z)
- Norm-Bounded Low-Rank Adaptation [11.263496225606126]
We propose norm-bounded low-rank adaptation (NB-LoRA) for parameter-efficient fine-tuning.
NB-LoRA is a novel parameterization of low-rank weight adaptations that admits explicit bounds on each singular value of the adaptation matrix.
Experiments show that NB-LoRA can avoid model forgetting at little cost to adaptation performance.
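One way to see what such a bound buys: the singular values of the adaptation matrix ΔW = BA control how far the adapted weights can move. The sketch below enforces the bound by post-hoc SVD clipping; note that NB-LoRA instead builds the bound into the parameterization itself, so this is only an illustration:

```python
import numpy as np

def clip_singular_values(B, A, smax):
    """Project the low-rank update Delta_W = B @ A so that every singular
    value is at most smax, returning refactored low-rank factors.

    NB-LoRA *parameterizes* the factors so the bound holds by construction;
    this post-hoc projection is just an illustration of the constraint.
    """
    U, s, Vt = np.linalg.svd(B @ A, full_matrices=False)
    s = np.minimum(s, smax)
    r = B.shape[1]
    U, s, Vt = U[:, :r], s[:r], Vt[:r]          # keep the original rank r
    return U * np.sqrt(s), np.sqrt(s)[:, None] * Vt

rng = np.random.default_rng(0)
B, A = rng.standard_normal((512, 8)), rng.standard_normal((8, 512))
B2, A2 = clip_singular_values(B, A, smax=1.0)
assert np.linalg.svd(B2 @ A2, compute_uv=False).max() <= 1.0 + 1e-8
```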
arXiv Detail & Related papers (2025-01-31T11:24:57Z)
- Semiparametric Double Reinforcement Learning with Applications to Long-Term Causal Inference [33.14076284663493]
Long-term causal effects must be estimated from short-term data.
MDPs provide a natural framework for capturing such long-term dynamics.
Nonparametric implementations require strong intertemporal overlap assumptions.
We introduce a novel plug-in estimator based on isotonic Bellman calibration.
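The isotonic-calibration ingredient, in its plain (non-Bellman) form, is a monotone least-squares re-fit of outcomes on predictions; scikit-learn's IsotonicRegression gives a short illustration (a sketch of the building block only, not the paper's estimator):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
# Pretend `pred` are raw value estimates and `y` the observed targets.
pred = rng.uniform(0, 1, 2000)
y = pred ** 2 + 0.05 * rng.standard_normal(2000)    # model is miscalibrated

# Isotonic calibration: monotone least-squares fit of y on pred. Within each
# level set of the fitted step function, the calibrated prediction equals the
# mean observed outcome, which is the calibration property being exploited.
iso = IsotonicRegression(out_of_bounds="clip").fit(pred, y)
calibrated = iso.predict(pred)
```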
arXiv Detail & Related papers (2025-01-12T20:35:28Z)
- Adaptive Conformal Inference by Betting [51.272991377903274]
We consider the problem of adaptive conformal inference without any assumptions about the data generating process.
Existing approaches for adaptive conformal inference are based on optimizing the pinball loss using variants of online gradient descent.
We propose a different approach for adaptive conformal inference that leverages parameter-free online convex optimization techniques.
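For context, the online-gradient-descent baseline referenced here (Gibbs-and-Candes-style adaptive conformal inference) is a one-line update of the target miscoverage level; the paper's contribution is to replace the hand-tuned learning rate gamma with a parameter-free, betting-based scheme. A sketch of the baseline, with illustrative names:

```python
import numpy as np

def adaptive_conformal(scores, alpha=0.1, gamma=0.01):
    """Baseline adaptive conformal inference via online gradient descent on
    the pinball loss: raise the quantile level after a miss, lower it after
    a cover. scores[t] is the conformity score of example t.
    """
    alpha_t, errs, past = alpha, [], []
    for s in scores:
        # Current threshold: empirical (1 - alpha_t) quantile of past scores.
        q = np.quantile(past, min(max(1 - alpha_t, 0.0), 1.0)) if past else np.inf
        err = float(s > q)                  # 1 if the prediction set missed
        alpha_t += gamma * (alpha - err)    # OGD step on the pinball loss
        errs.append(err)
        past.append(s)
    return np.mean(errs)                    # realized miscoverage, approx alpha

rng = np.random.default_rng(0)
print(adaptive_conformal(rng.standard_normal(5000) ** 2))
```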
arXiv Detail & Related papers (2024-12-26T18:42:08Z)
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
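A single-worker sketch of the error-feedback mechanism with a normalized step (the paper analyzes distributed, momentum-augmented versions under $(L_0,L_1)$-smoothness; the compressor, step sizes, and objective below are illustrative):

```python
import numpy as np

def top_k(v, k):
    """Biased top-k sparsifier: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_normalized_step(w, grad, err, lr=0.1, k=2):
    """One step of error feedback with a *normalized* update: compress
    grad + residual, remember what the compressor dropped, and move along
    the normalized compressed direction (normalization permits larger steps)."""
    g = grad(w) + err
    g_hat = top_k(g, k)
    err = g - g_hat                          # error-feedback residual
    norm = np.linalg.norm(g_hat)
    if norm > 0:
        w = w - lr * g_hat / norm
    return w, err

# Minimize ||w||^2 / 2 with compressed, normalized steps.
w, err = np.ones(10), np.zeros(10)
for t in range(200):
    w, err = ef_normalized_step(w, lambda u: u, err, lr=0.5 / (t + 1) ** 0.5, k=2)
print(np.linalg.norm(w))
```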
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.272991377903274]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
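A minimal instance of the construction for a Gaussian mean with unit variance: form a running likelihood ratio whose numerator uses a predictable plug-in estimator (here the running mean), so that under the null it is a nonnegative martingale and Ville's inequality gives time-uniform coverage. The setup is an illustrative special case, not the paper's general method:

```python
import numpy as np

def lr_confidence_sequence(xs, grid, alpha=0.05):
    """Confidence sequence for a Gaussian mean (sigma = 1).

    For each candidate theta0 on `grid`, track the running log likelihood
    ratio log prod_t N(x_t; mu_{t-1}, 1) / N(x_t; theta0, 1), where mu_{t-1}
    is the running mean of strictly past data (a predictable plug-in). Under
    theta0 the ratio is a nonnegative martingale with mean 1, so Ville's
    inequality makes {theta0 : LR_t < 1/alpha for all t} a valid (1 - alpha)
    confidence sequence.
    """
    log_lr = np.zeros_like(grid)
    alive = np.ones_like(grid, dtype=bool)
    mu, t = 0.0, 0
    for x in xs:
        log_lr += 0.5 * ((x - grid) ** 2 - (x - mu) ** 2)
        alive &= log_lr < np.log(1.0 / alpha)    # running intersection
        t += 1
        mu += (x - mu) / t                       # predictable: past data only
    return grid[alive]                           # confidence set at final time

rng = np.random.default_rng(0)
grid = np.linspace(-1, 2, 601)
print(lr_confidence_sequence(rng.standard_normal(500) + 0.5, grid))
```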
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD setting.
We observe the double descent phenomenon both theoretically and empirically.
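The empirical side of double descent is easy to reproduce with random features and min-norm (ridgeless) least squares: test error peaks near the interpolation threshold p ≈ n and descends again as p grows. This toy uses a closed-form solver rather than the paper's SGD setting, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 10
X = rng.standard_normal((n, d))
w = rng.standard_normal(d)
y = X @ w + 0.5 * rng.standard_normal(n)
X_te = rng.standard_normal((2000, d))
y_te = X_te @ w

def rf_test_error(p):
    """Min-norm least squares on p random ReLU features."""
    W = rng.standard_normal((d, p)) / np.sqrt(d)
    phi, phi_te = np.maximum(X @ W, 0), np.maximum(X_te @ W, 0)
    beta = np.linalg.lstsq(phi, y, rcond=None)[0]   # min-norm solution
    return np.mean((phi_te @ beta - y_te) ** 2)

# Error typically peaks near p = n (interpolation threshold), then falls.
for p in [20, 50, 90, 100, 110, 200, 1000]:
    print(p, rf_test_error(p))
```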
arXiv Detail & Related papers (2021-10-13T17:47:39Z)
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root-Lasso-type estimators.
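The canonical pivotal example is the square-root (scaled) Lasso: its penalty level lam0 ~ sqrt(2 log p / n) involves no noise level. A sketch via the standard alternating scaled-Lasso iteration, with scikit-learn's Lasso as the inner solver (illustrative; the paper treats smoothed and multitask variants):

```python
import numpy as np
from sklearn.linear_model import Lasso

def scaled_lasso(X, y, lam0=None, n_iter=20):
    """Square-root/scaled Lasso: alternate between estimating sigma from the
    residuals and solving a Lasso with penalty lam0 * sigma. The choice
    lam0 ~ sqrt(2 log p / n) is pivotal: it does not depend on sigma.
    """
    n, p = X.shape
    lam0 = lam0 or np.sqrt(2.0 * np.log(p) / n)
    sigma = np.std(y)
    for _ in range(n_iter):
        fit = Lasso(alpha=lam0 * sigma, fit_intercept=False).fit(X, y)
        sigma_new = np.linalg.norm(y - fit.predict(X)) / np.sqrt(n)
        if abs(sigma_new - sigma) < 1e-8:
            break
        sigma = sigma_new
    return fit.coef_, sigma

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))
beta = np.zeros(500)
beta[:5] = 1.0
y = X @ beta + 0.5 * rng.standard_normal(200)
coef, sigma_hat = scaled_lasso(X, y)
print(np.flatnonzero(coef), sigma_hat)   # should recover ~{0..4}, sigma near 0.5
```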
arXiv Detail & Related papers (2020-01-15T16:11:04Z)