Mathematical foundations of stable RKHSs
- URL: http://arxiv.org/abs/2005.02971v1
- Date: Wed, 6 May 2020 17:25:23 GMT
- Title: Mathematical foundations of stable RKHSs
- Authors: Mauro Bisiacco and Gianluigi Pillonetto
- Abstract summary: Reproducing kernel Hilbert spaces (RKHSs) are key spaces for machine learning that are also becoming popular for linear system identification.
In this paper we provide new structural properties of stable RKHSs.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reproducing kernel Hilbert spaces (RKHSs) are key spaces for machine learning
that are also becoming popular for linear system identification. In particular,
the so-called stable RKHSs can be used to model absolutely summable impulse
responses. In combination with, e.g., regularized least squares, they can then be
used to reconstruct dynamic systems from input-output data. In this paper we
provide new structural properties of stable RKHSs. The relation between stable
kernels and other fundamental classes, like those containing absolutely
summable or finite-trace kernels, is elucidated. These insights are then
brought into the feature space context. First, it is proved that any stable
kernel admits feature maps induced by a basis of orthogonal eigenvectors in $\ell^2$.
The exact connection with classical system identification approaches that
exploit such functions to model impulse responses is also provided.
Then, the necessary and sufficient stability condition for RKHSs designed by
specifying kernel eigenvectors and eigenvalues is obtained. Overall, our new
results provide novel mathematical foundations of stable RKHSs with impact on
stability tests, impulse response modeling and computational efficiency of
regularized schemes for linear system identification.
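To make the regularized least-squares scheme mentioned above concrete, here is a minimal, hypothetical sketch in Python. It uses the well-known TC (tuned/correlated) kernel K[i, j] = rho**max(i, j) as one standard example of a stable kernel; the kernel choice and the values of rho, gamma, n, and N are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

# Hypothetical sketch: impulse response estimation with a stable kernel.
# The paper studies stable RKHSs abstractly; this only illustrates the
# standard regularized least-squares setup it refers to. The TC kernel
# below is one well-known stable kernel, not the paper's construction.

rng = np.random.default_rng(0)

n = 50          # impulse response coefficients to estimate (assumption)
N = 200         # number of input-output samples (assumption)
rho = 0.9       # decay rate of the TC kernel (assumption)
gamma = 0.1     # regularization parameter (assumption)

# True (absolutely summable) impulse response and simulated data
g_true = 0.8 ** np.arange(n) * np.sin(0.5 * np.arange(n))
u = rng.standard_normal(N)                       # input signal
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0  # Toeplitz regression matrix
                 for k in range(n)] for t in range(N)])
y = Phi @ g_true + 0.1 * rng.standard_normal(N)  # noisy output

# TC ("tuned/correlated") kernel matrix: K[i, j] = rho ** max(i, j)
idx = np.arange(n)
K = rho ** np.maximum.outer(idx, idx)

# Regularized least squares in the RKHS of K, in representer-theorem form:
#   g_hat = K Phi^T (Phi K Phi^T + gamma I)^{-1} y
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(N), y)

print("relative error:", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```

The closed form in the solve step is the standard representer-theorem solution of the kernel regularized least-squares problem; the stable kernel encodes the prior that the impulse response decays.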
Related papers
- Convolutional Filtering with RKHS Algebras [110.06688302593349]
We develop a theory of convolutional signal processing and neural networks for Reproducing Kernel Hilbert Spaces (RKHSs).
We show that any RKHS allows the formal definition of multiple algebraic convolutional models.
We present a set of numerical experiments on real data in which wireless coverage is predicted from measurements captured by unmanned aerial vehicles.
arXiv Detail & Related papers (2024-11-02T18:53:44Z)
- Absolute integrability of Mercer kernels is only sufficient for RKHS stability [1.52292571922932]
Reproducing kernel Hilbert spaces (RKHSs) are special Hilbert spaces in one-to-one correspondence with positive definite maps called kernels.
Stable RKHSs contain only absolutely integrable impulse responses over the positive real line.
arXiv Detail & Related papers (2023-05-02T13:35:48Z)
- On the stability test for reproducing kernel Hilbert spaces [1.52292571922932]
Reproducing kernel Hilbert spaces (RKHSs) are special Hilbert spaces where all the evaluation functionals are linear and bounded.
We show that the stability test can be reduced to the study of the kernel operator over test functions which assume only the values $\pm 1$.
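For context, here is a sketch of the stability test this summary refers to, written in notation assumed standard in this literature rather than quoted from the paper:

```latex
% Sketch in assumed notation: K is a positive definite kernel on N x N and
% H_K its RKHS; stability means H_K \subset \ell^1. The classical operator
% test known in this literature is
\[
  \mathcal{H}_K \subset \ell^1
  \;\Longleftrightarrow\;
  \sum_{t=1}^{\infty} \Bigl| \sum_{s=1}^{\infty} K(t,s)\, u(s) \Bigr| < \infty
  \quad \text{for every } u \in \ell^{\infty}.
\]
% Per the summary above, the paper shows it suffices to check test functions
% with u(s) \in \{-1, +1\}.
```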
arXiv Detail & Related papers (2023-05-01T14:40:23Z)
- Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects, e.g., identifiability and properties of statistical estimation, are still obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
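For intuition, a standard sampling argument (an illustrative sketch, not necessarily the paper's sufficient condition) shows why identifiability from equally-spaced samples is delicate:

```latex
% For x'(t) = A x(t) observed every \Delta seconds along one trajectory,
\[
  x(k\Delta) \;=\; \bigl(e^{A\Delta}\bigr)^{k}\, x(0), \qquad k = 0, 1, 2, \dots
\]
% The samples constrain A only through M = e^{A\Delta}, so identifiability
% requires the matrix logarithm of M to be unique within the model class;
% eigenvalues of A that differ by integer multiples of 2\pi i/\Delta produce
% the same M (aliasing).
```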
arXiv Detail & Related papers (2022-10-12T06:46:38Z)
- Ensemble forecasts in reproducing kernel Hilbert space family [0.0]
A methodological framework for ensemble-based estimation and simulation of high dimensional dynamical systems is proposed.
To that end, the dynamical system is embedded in a family of reproducing kernel Hilbert spaces (RKHS) with kernel functions driven by the dynamics.
arXiv Detail & Related papers (2022-07-29T12:57:50Z)
- KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using a feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z)
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- Recurrent Equilibrium Networks: Flexible Dynamic Models with Guaranteed Stability and Robustness [3.2872586139884623]
This paper introduces recurrent equilibrium networks (RENs) for applications in machine learning, system identification and control.
RENs are parameterized directly by a vector in $\mathbb{R}^N$, i.e. stability and robustness are ensured without parameter constraints.
The paper also presents applications in data-driven nonlinear observer design and control with stability guarantees.
arXiv Detail & Related papers (2021-04-13T05:09:41Z)
- Sinkhorn Natural Gradient for Generative Models [125.89871274202439]
We propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately with a complexity that scales logarithmically in the desired accuracy.
In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of our method.
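Schematically, and in assumed notation rather than the paper's own, the SiNG step can be read as a natural-gradient update with the SIM as the metric:

```latex
% Natural-gradient reading of the update (assumed notation, not verbatim from
% the paper): \theta are generator parameters, F the training objective,
% \eta a step size, and SIM the Sinkhorn information matrix.
\[
  \theta_{t+1} \;=\; \theta_t \;-\; \eta\,\bigl[\mathrm{SIM}(\theta_t)\bigr]^{-1}
  \nabla_{\theta} F(\theta_t)
\]
% The SIM plays the role the Fisher information matrix plays in standard
% natural gradient, with the geometry induced by the Sinkhorn divergence.
```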
arXiv Detail & Related papers (2020-11-09T02:51:17Z)
- A Convex Parameterization of Robust Recurrent Neural Networks [3.2872586139884623]
Recurrent neural networks (RNNs) are a class of nonlinear dynamical systems often used to model sequence-to-sequence maps.
We formulate convex sets of RNNs with stability and robustness guarantees.
arXiv Detail & Related papers (2020-04-11T03:12:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.