An Alternate View on Optimal Filtering in an RKHS
- URL: http://arxiv.org/abs/2312.12318v1
- Date: Tue, 19 Dec 2023 16:43:17 GMT
- Title: An Alternate View on Optimal Filtering in an RKHS
- Authors: Benjamin Colburn, Jose C. Principe, Luis G. Sanchez Giraldo
- Abstract summary: Kernel Adaptive Filtering (KAF) algorithms are mathematically principled methods which search for a function in a Reproducing Kernel Hilbert Space.
They are plagued by a linear relationship between the number of training samples and model size, hampering their use on the very large data sets common in today's data-saturated world.
We describe a novel view of optimal filtering which may provide a route towards solutions in an RKHS which do not necessarily have this linear growth in model size.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kernel Adaptive Filtering (KAF) algorithms are mathematically principled methods which search for a function in a Reproducing Kernel Hilbert Space. While they work well for tasks such as time series prediction and system identification, they are plagued by a linear relationship between the number of training samples and model size, hampering their use on the very large data sets common in today's data-saturated world. Previous methods try to solve this issue by sparsification. We describe a novel view of optimal filtering which may provide a route towards solutions in an RKHS which do not necessarily have this linear growth in model size. We do this by defining an RKHS in which the time structure of a stochastic process is still present. Using correntropy [11], an extension of the idea of a covariance function, we create a time-based functional which describes some potentially nonlinear desired mapping function. This form of solution may provide a fruitful line of research for creating more efficient representations of functionals in an RKHS, while theoretically providing computational complexity on the test set similar to that of the Wiener solution.
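To make the growth problem concrete, below is a minimal sketch (my illustration, not the paper's proposed method) of kernel least-mean-squares (KLMS), a standard KAF algorithm whose stored dictionary grows by one center per training sample, together with a plug-in sample estimator of correntropy; the Gaussian kernel width and step size are illustrative choices.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two input vectors."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Kernel least-mean-squares: the dictionary grows with every sample."""
    def __init__(self, step_size=0.5, sigma=1.0):
        self.eta, self.sigma = step_size, sigma
        self.centers, self.weights = [], []

    def predict(self, x):
        return sum(w * gaussian_kernel(c, x, self.sigma)
                   for c, w in zip(self.centers, self.weights))

    def update(self, x, d):
        e = d - self.predict(x)      # instantaneous prediction error
        self.centers.append(x)       # model size grows by one per sample
        self.weights.append(self.eta * e)
        return e

def correntropy(x, y, sigma=1.0):
    """Sample estimate of correntropy V(X, Y) = E[kernel(X - Y)]."""
    x, y = np.asarray(x), np.asarray(y)
    return np.mean(np.exp(-(x - y) ** 2 / (2.0 * sigma ** 2)))

# Usage: one-step-ahead prediction of a noisy sine; note the linear model growth.
t = np.arange(300)
s = np.sin(0.05 * t) + 0.05 * np.random.default_rng(0).normal(size=300)
f = KLMS(step_size=0.5, sigma=0.7)
for k in range(5, 300):
    f.update(s[k - 5:k], s[k])
print("stored centers:", len(f.centers))   # 295: one per training sample
```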
Related papers
- Highly Adaptive Ridge [84.38107748875144]
We propose a regression method that achieves an $n^{-2/3}$ dimension-free $L^2$ convergence rate in the class of right-continuous functions with square-integrable sectional derivatives.
HAR is exactly kernel ridge regression with a specific data-adaptive kernel based on a saturated zero-order tensor-product spline basis expansion.
We demonstrate empirical performance better than state-of-the-art algorithms for small datasets in particular.
arXiv Detail & Related papers (2024-10-03T17:06:06Z)
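For context, HAR's backbone is plain kernel ridge regression; the sketch below uses a generic Gaussian kernel in place of HAR's data-adaptive spline-basis kernel, and the regularization value is an illustrative choice, not the paper's.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Stand-in kernel; HAR instead uses a data-adaptive spline-basis kernel."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def krr_fit(X, y, lam=1e-2):
    """Dual coefficients alpha = (K + lam I)^{-1} y of kernel ridge regression."""
    K = np.array([[rbf(xi, xj) for xj in X] for xi in X])
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new):
    """Predict f(x) = sum_i alpha_i k(x, x_i) at new points."""
    return np.array([sum(a * rbf(x, xi) for a, xi in zip(alpha, X_train))
                     for x in X_new])

# Usage on a toy 1-D problem.
X = np.linspace(0, 1, 40).reshape(-1, 1)
y = np.sin(6 * X[:, 0])
alpha = krr_fit(X, y)
print(krr_predict(X, alpha, X[:3]))
```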
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
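The payoff of a low-rank-plus-diagonal posterior is cheap linear algebra; below is a minimal sketch of that ingredient alone (a standard Woodbury-identity solve, my illustration rather than the paper's full EKF update), where the covariance is Sigma = diag(d) + W W^T with rank r << p.

```python
import numpy as np

def lowrank_solve(d, W, b):
    """Solve (diag(d) + W @ W.T) x = b in O(p r^2) via the Woodbury identity."""
    Dinv_b = b / d                        # diag(d)^{-1} b
    Dinv_W = W / d[:, None]               # diag(d)^{-1} W
    capacitance = np.eye(W.shape[1]) + W.T @ Dinv_W   # small r x r system
    return Dinv_b - Dinv_W @ np.linalg.solve(capacitance, W.T @ Dinv_b)

# Check against a dense solve on a small example.
rng = np.random.default_rng(0)
p, r = 200, 5
d = rng.uniform(0.5, 2.0, p)
W = rng.normal(size=(p, r))
b = rng.normal(size=p)
x = lowrank_solve(d, W, b)
assert np.allclose((np.diag(d) + W @ W.T) @ x, b)
```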
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Online and lightweight kernel-based approximated policy iteration for dynamic p-norm linear adaptive filtering [8.319127681936815]
This paper introduces a solution to the problem of dynamically (online) selecting the "optimal" p-norm to combat outliers in linear adaptive filtering.
The proposed framework is built on kernel-based reinforcement learning (KBRL).
arXiv Detail & Related papers (2022-10-21T06:29:01Z)
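For reference, the fixed-p workhorse such a framework builds on is the least-mean-p-power (LMP) filter; one LMP step is sketched below (the online choice of p via KBRL, which is the paper's actual contribution, is not reproduced, and the step size and p value are illustrative).

```python
import numpy as np

def lmp_update(w, x, d, p=1.5, mu=0.01):
    """One stochastic-gradient step on |d - w.x|^p; p near 1 damps outliers."""
    e = d - w @ x                                   # a-priori error
    return w + mu * p * np.abs(e) ** (p - 1) * np.sign(e) * x

# Usage: identify a 4-tap filter under heavy-tailed noise.
rng = np.random.default_rng(0)
w_true, w = np.array([0.5, -0.3, 0.2, 0.1]), np.zeros(4)
for _ in range(2000):
    x = rng.normal(size=4)
    w = lmp_update(w, x, w_true @ x + 0.01 * rng.standard_t(df=2))
print(np.round(w, 2))
```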
- Closed-Form Diffeomorphic Transformations for Time Series Alignment [0.0]
We present a closed-form expression for the ODE solution and its gradient under continuous piecewise-affine velocity functions.
Results show significant improvements both in terms of efficiency and accuracy.
arXiv Detail & Related papers (2022-06-16T12:02:12Z)
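The closed form is easiest to see on a single cell of a continuous piecewise-affine (CPA) velocity field, where dx/dt = a x + b integrates exactly; the one-cell 1-D sketch below is my illustration, not the paper's full transformation machinery.

```python
import numpy as np

def affine_flow(x0, a, b, t):
    """Exact solution of dx/dt = a*x + b with x(0) = x0 (a -> 0 handled)."""
    if np.isclose(a, 0.0):
        return x0 + b * t          # limiting case: constant velocity b
    return x0 * np.exp(a * t) + (b / a) * (np.exp(a * t) - 1.0)

# A full diffeomorphism chains such solutions cell by cell along the trajectory.
print(affine_flow(0.2, a=-1.0, b=0.5, t=1.0))
```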
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z)
- Unsupervised Reservoir Computing for Solving Ordinary Differential Equations [1.6371837018687636]
We introduce unsupervised reservoir computing (RC), an echo-state recurrent neural network capable of discovering approximate solutions that satisfy ordinary differential equations (ODEs).
We use Bayesian optimization to efficiently discover optimal sets in a high-dimensional hyperparameter space and numerically show that one set is robust and can be used to solve an ODE for different initial conditions and time ranges.
arXiv Detail & Related papers (2021-08-25T18:16:42Z)
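The network family involved is the echo-state reservoir, sketched below; the unsupervised part (training the linear readout to minimize the ODE residual rather than a target signal) and the Bayesian hyperparameter search are omitted, and all sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))    # input weights (fixed)
W = rng.normal(size=(n_res, n_res))                  # recurrent weights (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # set spectral radius to 0.9

def reservoir_step(h, u, leak=0.3):
    """Leaky-integrator echo-state update; only a linear readout is ever trained."""
    return (1 - leak) * h + leak * np.tanh(W @ h + W_in @ u)

# Drive the reservoir with time as the input, as in physics-informed setups.
h = np.zeros(n_res)
for t in np.linspace(0.0, 1.0, 50):
    h = reservoir_step(h, np.array([t]))
print(h[:5])
```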
- The SKIM-FA Kernel: High-Dimensional Variable Selection and Nonlinear Interaction Discovery in Linear Time [26.11563787525079]
We show how a kernel trick can reduce computation with suitable Bayesian models to O(# covariates) time for both variable selection and estimation.
Our approach outperforms existing methods used for large, high-dimensional datasets.
arXiv Detail & Related papers (2021-06-23T13:53:36Z)
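The flavor of the linear-time trick shows up in the classic ANOVA-kernel identity, where a p-term product implicitly sums over all 2^p interaction subsets at O(p) cost; the sketch below is this generic identity, not the exact SKIM-FA kernel.

```python
from itertools import combinations
import numpy as np

def anova_style_kernel(x, y):
    """prod_j (1 + x_j y_j) equals the sum over all 2^p subsets S of
    prod_{j in S} x_j y_j, evaluated in O(p) rather than O(2^p)."""
    return np.prod(1.0 + np.asarray(x) * np.asarray(y))

# Sanity check against the brute-force subset sum for p = 3.
x, y = np.array([0.2, -1.0, 0.5]), np.array([1.0, 0.3, -0.7])
brute = sum(np.prod([x[j] * y[j] for j in S])
            for r in range(4) for S in combinations(range(3), r))
assert np.isclose(anova_style_kernel(x, y), brute)
```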
- Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting [60.98700344526674]
Low-complexity models such as linear function representation play a pivotal role in enabling sample-efficient reinforcement learning.
In this paper, we investigate a new sampling protocol, which draws samples in an online/exploratory fashion but allows one to backtrack and revisit previous states in a controlled and infrequent manner.
We develop an algorithm tailored to this setting, achieving a sample complexity that scales polynomially with the feature dimension, the horizon, and the inverse sub-optimality gap, but not the size of the state/action space.
arXiv Detail & Related papers (2021-05-17T17:22:07Z)
- Semi-analytic approximate stability selection for correlated data in generalized linear models [3.42658286826597]
We propose a novel approximate inference algorithm that can conduct Stability Selection without repeated fitting.
The algorithm is based on the replica method of statistical mechanics and vector approximate message passing of information theory.
Numerical experiments indicate that the algorithm exhibits fast convergence and high approximation accuracy for both synthetic and real-world data.
arXiv Detail & Related papers (2020-03-19T10:43:12Z)