Source Condition Double Robust Inference on Functionals of Inverse
Problems
- URL: http://arxiv.org/abs/2307.13793v1
- Date: Tue, 25 Jul 2023 19:54:46 GMT
- Title: Source Condition Double Robust Inference on Functionals of Inverse
Problems
- Authors: Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis
Syrgkanis, Masatoshi Uehara
- Abstract summary: We consider estimation of parameters defined as linear functionals of solutions to linear inverse problems.
We provide the first source condition double robust inference method.
- Score: 71.42652863687117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider estimation of parameters defined as linear functionals of
solutions to linear inverse problems. Any such parameter admits a doubly robust
representation that depends on the solution to a dual linear inverse problem,
where the dual solution can be thought of as a generalization of the inverse
propensity function. We provide the first source condition double robust
inference method that ensures asymptotic normality around the parameter of
interest as long as either the primal or the dual inverse problem is
sufficiently well-posed, without knowledge of which inverse problem is the more
well-posed one. Our result is enabled by novel guarantees for iterated Tikhonov
regularized adversarial estimators for linear inverse problems, over general
hypothesis spaces, which are developments of independent interest.
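The abstract's key technical ingredient, iterated Tikhonov regularization for a linear inverse problem, can be illustrated in a simple finite-dimensional setting. The sketch below is a generic illustration, not the paper's iterated Tikhonov regularized adversarial estimator over general hypothesis spaces; the matrix A, regularization level, and iteration count are illustrative assumptions.

```python
import numpy as np

def iterated_tikhonov(A, b, alpha=1.0, n_iter=5):
    """Iterated Tikhonov regularization for the linear inverse problem A x = b.

    Each step solves min_x ||A x - b||^2 + alpha * ||x - x_prev||^2,
    whose closed form is x = (A^T A + alpha I)^{-1} (A^T b + alpha x_prev).
    Iterating reduces the regularization bias relative to a single
    Tikhonov step at the same alpha.
    """
    n = A.shape[1]
    gram = A.T @ A + alpha * np.eye(n)
    x = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(gram, A.T @ b + alpha * x)
    return x

# Illustrative example: a mildly ill-conditioned linear system with noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10)) @ np.diag(np.logspace(0, -3, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = iterated_tikhonov(A, b, alpha=0.1, n_iter=10)
```

Each iteration contracts toward the least-squares solution by a factor of alpha / (sigma_i^2 + alpha) along each singular direction, so more iterations trade regularization bias for variance; the paper's contribution concerns such estimators in the statistical, general-hypothesis-space setting.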
Related papers
- Half-VAE: An Encoder-Free VAE to Bypass Explicit Inverse Mapping [5.212606755867746]
Inference and inverse problems are closely related concepts, both fundamentally involving the deduction of unknown causes or parameters from observed data.
This study explores the potential of VAEs for solving inverse problems, such as Independent Component Analysis (ICA).
Unlike other VAE-based ICA methods, this approach discards the encoder in the VAE architecture, directly setting the latent variables as trainable parameters.
arXiv Detail & Related papers (2024-09-06T09:11:15Z)
- Benign overfitting in Fixed Dimension via Physics-Informed Learning with Smooth Inductive Bias [8.668428992331808]
We develop a Sobolev norm learning curve for kernel ridge(less) regression when addressing (elliptical) linear inverse problems.
Our results show that the PDE operators in the inverse problem can stabilize the variance and even lead to benign overfitting for fixed-dimensional problems.
arXiv Detail & Related papers (2024-06-13T14:54:30Z)
- Randomized algorithms and PAC bounds for inverse reinforcement learning in continuous spaces [47.907236421762626]
This work studies discrete-time discounted Markov decision processes with continuous state and action spaces.
We first consider the case in which we have access to the entire expert policy and characterize the set of solutions to the inverse problem.
arXiv Detail & Related papers (2024-05-24T12:53:07Z)
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z)
- Variational Gaussian Processes For Linear Inverse Problems [0.0]
In inverse problems the parameter or signal of interest is observed only indirectly, as an image of a given map, and the observations are typically corrupted with noise.
Bayes offers a natural way to regularize these problems via the prior distribution and provides a probabilistic solution, quantifying the remaining uncertainty in the problem.
We consider a collection of inverse problems including the heat equation, Volterra operator and Radon transform and inducing variable methods based on population and empirical spectral features.
arXiv Detail & Related papers (2023-11-01T17:10:38Z)
- Parallel Diffusion Models of Operator and Image for Blind Inverse Problems [34.280463095974795]
Diffusion model-based inverse problem solvers have demonstrated state-of-the-art performance in cases where the forward operator is known.
We show that we can indeed solve a family of blind inverse problems by constructing another diffusion prior for the forward operator.
arXiv Detail & Related papers (2022-11-19T10:36:32Z)
- Lifting the Convex Conjugate in Lagrangian Relaxations: A Tractable Approach for Continuous Markov Random Fields [53.31927549039624]
We show that a piecewise discretization preserves contrast better than existing discretization approaches.
We apply this theory to the problem of matching two images.
arXiv Detail & Related papers (2021-07-13T12:31:06Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Solving Inverse Problems with a Flow-based Noise Model [100.18560761392692]
We study image inverse problems with a normalizing flow prior.
Our formulation views the solution as the maximum a posteriori estimate of the image conditioned on the measurements.
We empirically validate the efficacy of our method on various inverse problems, including compressed sensing with quantized measurements and denoising with highly structured noise patterns.
arXiv Detail & Related papers (2020-03-18T08:33:49Z)
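To make the setting of the "Benign Overfitting of Constant-Stepsize SGD for Linear Regression" entry above concrete, here is a minimal sketch of unregularized single-sample SGD with a constant step size, compared against the ordinary least squares solution. The dimensions, step size, and epoch count are illustrative assumptions, not values from that paper.

```python
import numpy as np

def sgd_linear_regression(X, y, step=0.01, n_epochs=50, seed=0):
    """Unregularized single-sample SGD with a constant step size."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            # gradient of 0.5 * (x_i . w - y_i)^2 with respect to w
            w -= step * (X[i] @ w - y[i]) * X[i]
    return w

# Illustrative comparison against ordinary least squares (OLS).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.1 * rng.standard_normal(200)
w_sgd = sgd_linear_regression(X, y)
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With a constant step size, SGD does not converge exactly to the OLS minimizer but fluctuates in a neighborhood whose size scales with the step size and noise level; that implicit averaging acts as the algorithmic regularization the entry refers to.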
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.