Biomechanical surrogate modelling using stabilized vectorial greedy
kernel methods
- URL: http://arxiv.org/abs/2004.12670v2
- Date: Tue, 28 Apr 2020 07:25:55 GMT
- Title: Biomechanical surrogate modelling using stabilized vectorial greedy
kernel methods
- Authors: Bernard Haasdonk and Tizian Wenzel and Gabriele Santin and Syn Schmitt
- Abstract summary: Greedy kernel approximation algorithms are successful techniques for sparse and accurate data-based modelling and function approximation.
We introduce the so-called $\gamma$-restricted VKOGA, comment on its analytical properties and present a numerical evaluation on data from a clinically relevant application, the modelling of the human spine.
- Score: 0.2580765958706853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Greedy kernel approximation algorithms are successful techniques for sparse
and accurate data-based modelling and function approximation. Based on a recent
idea of stabilization of such algorithms in the scalar output case, we here
consider the vectorial extension built on VKOGA. We introduce the so-called
$\gamma$-restricted VKOGA, comment on analytical properties and present
numerical evaluation on data from a clinically relevant application, the
modelling of the human spine. The experiments show that the new stabilized
algorithms result in improved accuracy and stability over the non-stabilized
algorithms.
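As a rough illustration of the greedy kernel approximation idea behind VKOGA, the following sketch implements a vectorial f-greedy selection loop with a Gaussian kernel: at each step the data site with the largest vectorial residual norm is added as a center and the interpolant is refit. The kernel choice, the jitter term, and all function and parameter names are illustrative assumptions; in particular, the $\gamma$-restriction and stabilization proposed in the paper are not implemented here.

```python
import numpy as np

def gaussian_kernel(X, Y, eps=10.0):
    # Gaussian (RBF) kernel matrix k(x, y) = exp(-eps * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

def f_greedy_vkoga(X, F, kernel, n_max=20, tol=1e-8):
    """Minimal vectorial f-greedy kernel interpolation sketch.

    X : (n, d) array of data sites, F : (n, q) array of vectorial targets.
    Returns the selected center indices and the interpolation coefficients.
    """
    selected = []
    coef = np.zeros((0, F.shape[1]))
    residual = F.copy()
    for _ in range(n_max):
        # f-greedy rule: pick the site with the largest residual norm
        norms = np.linalg.norm(residual, axis=1)
        i = int(np.argmax(norms))
        if norms[i] < tol:
            break
        selected.append(i)
        Xc = X[selected]
        # Refit the interpolant on the selected centers (small jitter
        # on the diagonal guards against ill-conditioning)
        K = kernel(Xc, Xc)
        coef = np.linalg.solve(K + 1e-10 * np.eye(len(selected)), F[selected])
        residual = F - kernel(X, Xc) @ coef
    return np.array(selected, dtype=int), coef
```

In this sketch the greedy loop produces a sparse surrogate: only the selected centers enter the final kernel expansion, which is what makes such models cheap to evaluate in applications like the spine model discussed in the abstract.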
Related papers
- The ODE Method for Stochastic Approximation and Reinforcement Learning with Markovian Noise [17.493808856903303]
One fundamental challenge in analyzing an approximation algorithm is to establish its stability.
We extend the celebrated Borkar-Meyn theorem for stability from the martingale difference noise setting to the Markovian noise setting.
arXiv Detail & Related papers (2024-01-15T17:20:17Z) - Efficient Computation of Sparse and Robust Maximum Association Estimators [0.4588028371034406]
Robust statistical estimators offer empirical precision but are often computationally challenging in high-dimensional sparse settings.
Modern association estimator techniques are adapted to handle outliers while retaining robustness and computational efficiency.
arXiv Detail & Related papers (2023-11-29T11:57:50Z) - Finite-Sample Bounds for Adaptive Inverse Reinforcement Learning using Passive Langevin Dynamics [13.440621354486906]
This paper provides a finite-sample analysis of a passive stochastic gradient Langevin dynamics (PSGLD) algorithm.
Adaptive IRL aims to estimate the cost function of a forward learner performing a gradient algorithm.
arXiv Detail & Related papers (2023-04-18T16:39:51Z) - Numerically Stable Sparse Gaussian Processes via Minimum Separation
using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
arXiv Detail & Related papers (2022-10-14T15:20:17Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem in machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - A Closed Loop Gradient Descent Algorithm applied to Rosenbrock's
function [0.0]
We introduce a novel adaptive technique for a gradient system which finds application as a gradient descent algorithm for unconstrained optimization with inertial damping.
Using Lyapunov stability analysis, we also demonstrate the performance of the continuous-time version of the algorithm.
arXiv Detail & Related papers (2021-08-29T17:25:24Z) - Scalable Variational Gaussian Processes via Harmonic Kernel
Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Gaussian Process-based Min-norm Stabilizing Controller for
Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that this resulting optimization problem is convex, and we call it Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP)
arXiv Detail & Related papers (2020-11-14T01:27:32Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level, and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z) - SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for
Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
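The SLEIPNIR entry above scales GP regression by approximating the kernel with quadrature Fourier features. As a simpler, hedged illustration of the same general idea, the sketch below uses the classic random Fourier feature construction for a Gaussian kernel, where frequencies are sampled from the kernel's spectral density (Bochner's theorem) rather than placed by quadrature as in the paper; the function name and all parameters are assumptions for illustration.

```python
import numpy as np

def random_fourier_features(X, n_features=500, lengthscale=1.0, seed=0):
    """Map X (n, d) to a feature matrix Z (n, D) such that
    Z @ Z.T approximates the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the RBF kernel's spectral density
    W = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))
    # Random phase shifts, uniform on [0, 2*pi)
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

The approximation error of the random variant decays only as $O(1/\sqrt{D})$ in the number of features $D$; the point of quadrature-based schemes such as the one in the paper is to replace this Monte Carlo rate with deterministic, exponentially fast decaying bounds.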
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.