Conditional expectation using compactification operators
- URL: http://arxiv.org/abs/2306.10592v4
- Date: Mon, 8 Jan 2024 15:49:33 GMT
- Title: Conditional expectation using compactification operators
- Authors: Suddhasattwa Das
- Abstract summary: This paper describes an operator theoretic approach to estimating the conditional expectation.
Kernel integral operators are used as a compactification tool, to set up the estimation problem as a linear inverse problem in a reproducing kernel Hilbert space.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The separate tasks of denoising, least squares expectation, and manifold
learning can often be posed in a common setting of finding the conditional
expectations arising from a product of two random variables. This paper focuses
on this more general problem and describes an operator theoretic approach to
estimating the conditional expectation. Kernel integral operators are used as a
compactification tool, to set up the estimation problem as a linear inverse
problem in a reproducing kernel Hilbert space. This equation is shown to have
solutions that allow numerical approximation, thus guaranteeing the convergence
of data-driven implementations. The overall technique is easy to implement, and
its successful application to some real-world problems is also shown.
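As a rough illustration of the abstract's RKHS viewpoint, the sketch below estimates a conditional expectation from samples by solving a regularized kernel linear system (kernel ridge regression with a Gaussian kernel). This is only a minimal stand-in for the general idea of posing the estimation as a linear inverse problem in an RKHS; the bandwidth and regularization values are placeholders, and it is not the paper's exact compactification construction.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix between row-sample arrays A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def conditional_expectation(X, Y, X_query, bandwidth=0.5, reg=1e-3):
    """Estimate E[Y | X = x] at the query points by solving the
    regularized linear system (K + n*reg*I) alpha = Y in the RKHS."""
    n = len(X)
    K = gaussian_kernel(X, X, bandwidth)
    alpha = np.linalg.solve(K + n * reg * np.eye(n), Y)
    return gaussian_kernel(X_query, X, bandwidth) @ alpha

# Toy example: Y = sin(X) + noise, so E[Y | X = x] should be close to sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
xq = np.array([[0.0], [1.5]])
print(conditional_expectation(X, Y, xq))  # approximately sin(0) and sin(1.5)
```

The regularization term plays the role of making the inverse problem well-posed, which is the property the paper's compactification argument is concerned with in far greater generality.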
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Variance Reduced Halpern Iteration for Finite-Sum Monotone Inclusions [18.086061048484616]
We study finite-sum monotone inclusion problems, which model broad classes of equilibrium problems.
Our main contributions are variants of the classical Halpern iteration that employ variance reduction to obtain improved complexity guarantees.
We argue that, up to poly-logarithmic factors, this complexity is unimprovable in the monotone Lipschitz setting.
arXiv Detail & Related papers (2023-10-04T17:24:45Z)
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
arXiv Detail & Related papers (2022-01-21T02:46:57Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
- Variance-Reduced Splitting Schemes for Monotone Stochastic Generalized Equations [0.0]
We consider monotone inclusion problems where the operators may be expectation-valued.
A direct application of splitting schemes is complicated by the need to resolve problems with expectation-valued maps at each step.
We propose an avenue for addressing uncertainty in the mapping: Variance-reduced modified forward-backward splitting scheme.
arXiv Detail & Related papers (2020-08-26T02:33:27Z)
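For context on the splitting schemes mentioned in the last entry, the sketch below shows the classical deterministic forward-backward splitting iteration x_{k+1} = prox_{γg}(x_k - γ F(x_k)) for a monotone inclusion 0 ∈ F(x) + ∂g(x). The toy problem (a positive-definite linear map F plus an ℓ1 penalty), the step size, and the iteration count are illustrative choices; the cited paper concerns the harder expectation-valued, variance-reduced setting, which this sketch does not implement.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(F, prox, x0, step, iters=500):
    # Forward (explicit) step on the single-valued monotone map F,
    # backward (proximal) step on the possibly nonsmooth part g.
    x = x0.copy()
    for _ in range(iters):
        x = prox(x - step * F(x), step)
    return x

# Toy monotone inclusion 0 in (Ax - b) + lam * d||x||_1 with A positive definite.
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 5))
A = M.T @ M / 20 + 0.5 * np.eye(5)   # positive definite, hence strongly monotone
b = rng.standard_normal(5)
lam = 0.1
x_star = forward_backward(lambda x: A @ x - b,
                          lambda v, t: soft_threshold(v, lam * t),
                          np.zeros(5), step=0.3)
print(x_star)  # approximate solution; x_star is a fixed point of the iteration
```

The step size must be small relative to the Lipschitz constant of F for the iteration to converge; the variance-reduced schemes in the entry above replace the exact evaluation of F with increasingly accurate sample averages.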
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.