A Variational Infinite Mixture for Probabilistic Inverse Dynamics
Learning
- URL: http://arxiv.org/abs/2011.05217v3
- Date: Tue, 30 Mar 2021 08:51:07 GMT
- Title: A Variational Infinite Mixture for Probabilistic Inverse Dynamics
Learning
- Authors: Hany Abdulsamad, Peter Nickl, Pascal Klink, Jan Peters
- Abstract summary: We develop an efficient variational Bayes inference technique for infinite mixtures of probabilistic local models.
We highlight the model's power in combining data-driven adaptation, fast prediction and the ability to deal with discontinuous functions and heteroscedastic noise.
We use the learned models for online dynamics control of a Barrett-WAM manipulator, significantly improving the trajectory tracking performance.
- Score: 34.90240171916858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probabilistic regression techniques in control and robotics applications have
to fulfill different criteria of data-driven adaptability, computational
efficiency, scalability to high dimensions, and the capacity to deal with
different modalities in the data. Classical regressors usually fulfill only a
subset of these properties. In this work, we extend seminal work on Bayesian
nonparametric mixtures and derive an efficient variational Bayes inference
technique for infinite mixtures of probabilistic local polynomial models with
well-calibrated certainty quantification. We highlight the model's power in
combining data-driven complexity adaptation, fast prediction and the ability to
deal with discontinuous functions and heteroscedastic noise. We benchmark this
technique on a range of large real inverse dynamics datasets, showing that the
infinite mixture formulation is competitive with classical Local Learning
methods and regularizes model complexity by adapting the number of components
based on data and without relying on heuristics. Moreover, to showcase the
practicality of the approach, we use the learned models for online inverse
dynamics control of a Barrett-WAM manipulator, significantly improving the
trajectory tracking performance.
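As a concrete illustration of how such a model predicts (a minimal 1-D sketch with hand-fixed component parameters, not the authors' implementation), each component pairs a Gaussian gate over inputs with a linear-Gaussian expert; the gated mixture yields input-dependent, heteroscedastic uncertainty, and discontinuities are handled by switching between components:

```python
import numpy as np

def stick_breaking(v):
    """Convert stick-breaking fractions v_k into mixture weights."""
    v = np.asarray(v)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

def predict(x, gates, experts, v):
    """Predictive mean/variance at scalar input x.

    gates:   list of (mu_k, var_k) Gaussian input regions
    experts: list of (w_k, b_k, noise_var_k) local linear models
    v:       stick-breaking fractions defining the mixture weights
    """
    prior_w = stick_breaking(v)
    # Responsibility of each component: prior weight times gate likelihood.
    lik = np.array([np.exp(-0.5 * (x - m) ** 2 / s) / np.sqrt(2 * np.pi * s)
                    for m, s in gates])
    r = prior_w * lik
    r /= r.sum()
    means = np.array([w * x + b for w, b, _ in experts])
    noise = np.array([nv for _, _, nv in experts])
    mean = np.sum(r * means)
    # Law of total variance: within-expert noise + between-expert spread.
    var = np.sum(r * (noise + means ** 2)) - mean ** 2
    return mean, var
```

In the paper itself, the component parameters and the stick-breaking fractions are inferred with variational Bayes rather than fixed by hand, which is what adapts the number of effective components to the data.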
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled stochastic differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for L², L∞, and risk metrics, with learning rates adaptive to the coefficients' regularity.
Our method is available as an open-source Python library.
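To make the estimation problem concrete, here is a simplified 1-D, uncontrolled stand-in (not the paper's method or its library): drift and diffusion can be recovered from a sampled path via binned Euler-Maruyama conditional moments.

```python
import numpy as np

def estimate_drift_diffusion(x, dt, n_bins=30):
    """Binned conditional-moment estimates of drift b(x) and diffusion s(x)^2."""
    dx = np.diff(x)
    xs = x[:-1]
    edges = np.linspace(xs.min(), xs.max(), n_bins + 1)
    idx = np.clip(np.digitize(xs, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    drift = np.full(n_bins, np.nan)
    diff2 = np.full(n_bins, np.nan)
    for k in range(n_bins):
        m = idx == k
        if m.sum() < 5:                                    # skip sparse bins
            continue
        drift[k] = dx[m].mean() / dt                       # E[dX | x] / dt
        diff2[k] = ((dx[m] - dx[m].mean()) ** 2).mean() / dt  # Var[dX | x] / dt
    return centers, drift, diff2

# Example: an Ornstein-Uhlenbeck path with b(x) = -x and s(x) = 0.5.
rng = np.random.default_rng(0)
dt, n = 1e-3, 100_000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = x[t] - x[t] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

centers, b_hat, s2_hat = estimate_drift_diffusion(x, dt)
```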
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization.
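A minimal sketch of that regularizer, with the aggregation details being my own assumption rather than the paper's code: base model predictions are randomly dropped before the learned aggregation, so the ensembler cannot over-rely on any single base model.

```python
import numpy as np

def dropped_ensemble(base_preds, agg_weights, p_drop=0.3, train=True, rng=None):
    """Aggregate base model predictions, randomly dropping models in training.

    base_preds:  (n_models, batch) array of base model predictions
    agg_weights: (n_models,) learned aggregation weights
    """
    if rng is None:
        rng = np.random.default_rng()
    w = agg_weights
    if train:
        keep = rng.random(base_preds.shape[0]) > p_drop
        if not keep.any():                      # always keep at least one model
            keep[rng.integers(base_preds.shape[0])] = True
        base_preds = base_preds[keep]
        w = w[keep]
    w = w / w.sum()                             # renormalize after dropping
    return w @ base_preds
```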
arXiv Detail & Related papers (2024-10-06T15:25:39Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
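For orientation, a simplified sketch of one propagate-weight-resample step of a particle filter follows (my construction, not the paper's algorithm; Online VSMC additionally takes stochastic gradient steps on the variational objective to adapt parameters and the proposal, which is omitted here).

```python
import numpy as np

def smc_step(particles, logw, y, propagate, loglik, rng):
    """One propagate-weight-resample step of a particle filter."""
    n = particles.shape[0]
    particles = propagate(particles, rng)      # sample from the proposal
    logw = logw + loglik(y, particles)         # incremental importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)                 # effective sample size
    if ess < n / 2:                            # resample when weights degenerate
        idx = rng.choice(n, size=n, p=w)
        particles, logw = particles[idx], np.zeros(n)
    return particles, logw
```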
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Parallel and Limited Data Voice Conversion Using Stochastic Variational
Deep Kernel Learning [2.5782420501870296]
This paper proposes a voice conversion method that works with limited data.
It is based on stochastic variational deep kernel learning (SVDKL),
which makes it possible to estimate non-smooth and more complex functions.
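A compact sketch of the deep-kernel idea, for illustration only: the feature network below is fixed and the GP is exact, whereas SVDKL learns both jointly with stochastic variational inference.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_features(X, W1, W2):
    """A small feature network g(x); SVDKL would learn these weights."""
    return np.tanh(np.tanh(X @ W1) @ W2)

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

# GP regression with a deep kernel k(x, x') = rbf(g(x), g(x')).
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sign(X[:, 0]) + 0.1 * rng.standard_normal(50)   # non-smooth target
W1, W2 = rng.standard_normal((1, 16)), rng.standard_normal((16, 4))
F = mlp_features(X, W1, W2)
K = rbf(F, F) + 1e-2 * np.eye(50)                      # kernel + noise term
alpha = np.linalg.solve(K, y)

X_test = np.linspace(-3, 3, 7)[:, None]
mu = rbf(mlp_features(X_test, W1, W2), F) @ alpha      # predictive mean
```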
arXiv Detail & Related papers (2023-09-08T16:32:47Z) - Variational Hierarchical Mixtures for Probabilistic Learning of Inverse
Dynamics [20.953728061894044]
Well-calibrated probabilistic regression models are a crucial learning component in robotics applications as datasets grow rapidly and tasks become more complex.
We consider a probabilistic hierarchical modeling paradigm that combines the benefits of both worlds to deliver computationally efficient representations with inherent complexity regularization.
We derive two efficient variational inference techniques to learn these representations and highlight the advantages of hierarchical infinite local regression models.
arXiv Detail & Related papers (2022-11-02T13:54:07Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
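A generic sketch of the iterative column-wise loop such frameworks build on (HyperImpute's automatic per-column model selection is replaced here by a fixed linear model, purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def iterative_impute(X, n_iters=10):
    """Iteratively re-impute each column from the others. X uses np.nan for gaps."""
    X = X.copy()
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])   # mean warm start
    for _ in range(n_iters):
        for j in range(X.shape[1]):
            miss_j = missing[:, j]
            if not miss_j.any():
                continue
            others = np.delete(X, j, axis=1)
            # Fit on rows where column j is observed, predict where it is missing.
            model = LinearRegression().fit(others[~miss_j], X[~miss_j, j])
            X[miss_j, j] = model.predict(others[miss_j])
    return X
```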
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Capturing Actionable Dynamics with Structured Latent Ordinary
Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
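A schematic sketch of the structural idea as I read it from this summary (not the authors' architecture): partition the latent state into blocks and let each system input drive only its own block, so input effects remain separated in latent space.

```python
import numpy as np

def structured_latent_ode_step(z, u, A_blocks, B_blocks, dt=0.01):
    """Forward-Euler step of a latent ODE with one latent block per input.

    z: concatenated latent blocks; u: vector of system inputs;
    A_blocks/B_blocks: per-block dynamics and input-loading parameters.
    """
    out, offset = [], 0
    for A, B, uk in zip(A_blocks, B_blocks, u):
        d = A.shape[0]
        zk = z[offset:offset + d]
        out.append(zk + dt * (A @ zk + B * uk))  # input k drives only block k
        offset += d
    return np.concatenate(out)
```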
arXiv Detail & Related papers (2022-02-25T20:00:56Z) - Compositional Modeling of Nonlinear Dynamical Systems with ODE-based
Random Features [0.0]
We present a novel, domain-agnostic approach to modeling nonlinear dynamical systems.
We use compositions of physics-informed random features, derived from ordinary differential equations.
We find that our approach achieves comparable performance to a number of other probabilistic models on benchmark regression tasks.
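A hedged sketch of the idea: here each random feature is a damped sinusoid, i.e. a closed-form solution of a random second-order linear ODE, and a regularized linear model is fit on top; the paper's actual feature construction and compositions differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 200
w = rng.uniform(0.5, 10.0, n_features)        # random natural frequencies
zeta = rng.uniform(0.05, 0.5, n_features)     # random damping ratios
phase = rng.uniform(0.0, 2 * np.pi, n_features)

def features(t):
    """Each column solves a random ODE z'' + 2*zeta*w*z' + w^2*z = 0."""
    t = np.asarray(t)[:, None]
    return np.exp(-zeta * w * t) * np.cos(w * np.sqrt(1 - zeta ** 2) * t + phase)

# Ridge regression on the features, i.e. MAP weights under a Gaussian prior.
t_train = np.linspace(0.0, 5.0, 100)
y_train = np.sin(3 * t_train) * np.exp(-0.3 * t_train)
Phi = features(t_train)
A = Phi.T @ Phi + 1e-2 * np.eye(n_features)
w_map = np.linalg.solve(A, Phi.T @ y_train)
```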
arXiv Detail & Related papers (2021-06-10T17:55:13Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but its precise role in this success is still unclear.
We show that heavy-tailed fluctuations commonly arise in model parameters as discrete multiplicative noise due to minibatch variance.
A detailed analysis describes how key factors, including step size, batch size, and the data, shape this behavior, and experiments show similar results across state-of-the-art neural network models.
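A toy illustration of the mechanism (my example, not the paper's experiments): a linear recursion with multiplicative noise develops power-law tails whenever the multiplier occasionally exceeds one, even though all noise sources are Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chains, n_steps = 10_000, 2_000
x = np.zeros(n_chains)
for _ in range(n_steps):
    a = 1.0 + 0.5 * rng.standard_normal(n_chains)   # multiplicative noise
    b = 0.1 * rng.standard_normal(n_chains)         # additive noise
    x = a * x + b                                   # Kesten-type recursion

# The stationary law is heavy-tailed: extreme quantiles of |x| are far
# larger than a Gaussian fit to the bulk would predict.
print(np.quantile(np.abs(x), [0.5, 0.9, 0.99, 0.999]))
```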
arXiv Detail & Related papers (2020-06-11T09:58:01Z) - Variational Model-based Policy Optimization [34.80171122943031]
Model-based reinforcement learning (RL) algorithms allow us to combine model-generated data with those collected from interaction with the real system in order to alleviate the data efficiency problem in RL.
We propose an objective function that is a variational lower bound of a log-likelihood, used to jointly learn and improve the model and policy.
Our experiments on a number of continuous control tasks show that, despite being more complex, our model-based (E-step) algorithm, called variational model-based policy optimization (VMBPO), is more sample-efficient and robust to hyperparameter tuning than its model-free counterpart.
arXiv Detail & Related papers (2020-06-09T18:30:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.