Constrained Equation Learner Networks for Precision-Preserving
Extrapolation of Robotic Skills
- URL: http://arxiv.org/abs/2311.02475v1
- Date: Sat, 4 Nov 2023 18:16:18 GMT
- Title: Constrained Equation Learner Networks for Precision-Preserving
Extrapolation of Robotic Skills
- Authors: Hector Perez-Villeda, Justus Piater, and Matteo Saveriano
- Abstract summary: This paper presents a novel supervised learning framework that addresses the trajectory adaptation problem in Programming by Demonstrations.
We exploit Equation Learner Networks to learn a set of analytical expressions and use them as basis functions.
Our approach addresses three main difficulties in adapting robotic trajectories: 1) minimizing the distortion of the trajectory for new adaptations; 2) preserving the precision of the adaptations; and 3) dealing with the lack of intuition about the structure of basis functions.
- Score: 6.144680854063937
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Programming by Demonstration, the robot learns novel skills from human
demonstrations. After learning, the robot should be able not only to reproduce
the skill, but also to generalize it to shifted domains without collecting new
training data. Adaptation to similar domains has been investigated in the
literature; however, an open problem is how to adapt learned skills to
different conditions that are outside of the data distribution and, more
importantly, how to preserve the precision of the desired adaptations. This paper
presents a novel supervised learning framework called Constrained Equation
Learner Networks that addresses the trajectory adaptation problem in
Programming by Demonstrations from a constrained regression perspective. While
conventional approaches for constrained regression use one kind of basis
function, e.g., Gaussian, we exploit Equation Learner Networks to learn a set
of analytical expressions and use them as basis functions. These basis
functions are learned from demonstration with the objective to minimize
deviations from the training data while imposing constraints that represent the
desired adaptations, like new initial or final points or maintaining the
trajectory within given bounds. Our approach addresses three main difficulties
in adapting robotic trajectories: 1) minimizing the distortion of the
trajectory for new adaptations; 2) preserving the precision of the adaptations;
and 3) dealing with the lack of intuition about the structure of basis
functions. We validate our approach both in simulation and in real experiments
in a set of robotic tasks that require adaptation due to changes in the
environment, and we compare obtained results with two existing approaches.
The performed experiments show that Constrained Equation Learner Networks
outperform state-of-the-art approaches by increasing the generalization and
adaptability of robotic skills.
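The abstract casts trajectory adaptation as constrained regression over learned basis functions. The sketch below illustrates only that general idea: an ordinary least-squares fit of a demonstrated trajectory with hard equality constraints at new start and end points, solved through the KKT system. The `basis` function, the toy trajectory, and all numerical values are hypothetical placeholders; in the paper the basis functions are analytical expressions learned from demonstrations by an Equation Learner network, not the fixed set used here.
```python
# Minimal, hypothetical sketch of constrained regression for trajectory adaptation.
# NOTE: the paper learns its basis functions with an Equation Learner (EQL) network;
# here a fixed analytical basis stands in for those learned expressions.
import numpy as np

def basis(t):
    """Stand-in analytical basis functions evaluated at times t (shape: [len(t), 5])."""
    return np.stack([np.ones_like(t), t, t**2,
                     np.sin(np.pi * t), np.cos(np.pi * t)], axis=1)

def fit_constrained(t, y, t_c, y_c):
    """Fit weights w minimizing ||Phi(t) w - y||^2 subject to Phi(t_c) w = y_c.

    The equality-constrained least-squares problem is solved via its KKT system:
        [2 Phi^T Phi  A^T] [w     ]   [2 Phi^T y]
        [A            0  ] [lambda] = [y_c      ]
    """
    Phi, A = basis(t), basis(t_c)
    n, m = Phi.shape[1], A.shape[0]
    K = np.block([[2.0 * Phi.T @ Phi, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * Phi.T @ y, y_c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]   # basis weights; the trailing entries are Lagrange multipliers

# Toy demonstrated trajectory and new start/end points the adaptation must hit exactly.
t = np.linspace(0.0, 1.0, 100)
y_demo = 0.5 * np.sin(np.pi * t) + 0.1 * t
w = fit_constrained(t, y_demo, t_c=np.array([0.0, 1.0]), y_c=np.array([0.2, 0.4]))
y_adapted = basis(t) @ w   # stays close to the demonstration while meeting the constraints
```
Bound constraints of the kind mentioned in the abstract (keeping the trajectory within given limits) would turn this equality-constrained fit into an inequality-constrained quadratic program, solvable in the same spirit with a generic QP solver.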
Related papers
- DeepONet as a Multi-Operator Extrapolation Model: Distributed Pretraining with Physics-Informed Fine-Tuning [6.635683993472882]
We propose a novel fine-tuning method to achieve multi-operator learning.
Our approach uses distributed learning to integrate data from various operators during pre-training, while physics-informed methods enable zero-shot fine-tuning.
arXiv Detail & Related papers (2024-11-11T18:58:46Z) - PETScML: Second-order solvers for training regression problems in Scientific Machine Learning [0.22499166814992438]
In recent years, scientific machine learning has emerged as a data-driven analysis tool.
We introduce software built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc) to bridge the gap between deep-learning software and conventional machine-learning techniques.
arXiv Detail & Related papers (2024-03-18T18:59:42Z) - In-Context Convergence of Transformers [63.04956160537308]
We study the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent.
For data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process.
arXiv Detail & Related papers (2023-10-08T17:55:33Z) - Continual Learning with Pretrained Backbones by Tuning in the Input Space [44.97953547553997]
The intrinsic difficulty in adapting deep learning models to non-stationary environments limits the applicability of neural networks to real-world tasks.
We propose a novel strategy to make the fine-tuning procedure more effective by not updating the pre-trained part of the network and by learning not only the usual classification head but also a set of newly introduced learnable parameters.
arXiv Detail & Related papers (2023-06-05T15:11:59Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z) - Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z) - Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks [27.44348371795822]
We develop a statistical minimax framework to characterize the limits of transfer learning.
We derive a lower-bound for the target generalization error achievable by any algorithm as a function of the number of labeled source and target data.
arXiv Detail & Related papers (2020-06-16T22:49:26Z) - Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.