Learning to Transfer Dynamic Models of Underactuated Soft Robotic Hands
- URL: http://arxiv.org/abs/2005.10418v1
- Date: Thu, 21 May 2020 01:46:59 GMT
- Title: Learning to Transfer Dynamic Models of Underactuated Soft Robotic Hands
- Authors: Liam Schramm, Avishai Sintov, and Abdeslam Boularias
- Abstract summary: Transfer learning is a popular approach to bypassing data limitations in one domain by leveraging data from another domain.
We show that in some situations naive fine-tuning can lead to significantly worse performance than simply using the transferred model without adaptation.
We derive an upper bound on the Lyapunov exponent of a trained transition model, and demonstrate two approaches that make use of this insight.
- Score: 15.481728234509227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning is a popular approach to bypassing data limitations in one
domain by leveraging data from another domain. This is especially useful in
robotics, as it allows practitioners to reduce data collection with physical
robots, which can be time-consuming and cause wear and tear. The most common
way of doing this with neural networks is to take an existing neural network
and simply continue training it on new data. However, we show that in some
situations this can lead to significantly worse performance than simply using
the transferred model without adaptation. We find that a major cause of these
problems is that models trained on small amounts of data can have chaotic or
divergent behavior in some regions. We derive an upper bound on the Lyapunov
exponent of a trained transition model, and demonstrate two approaches that
make use of this insight. Both show significant improvement over traditional
fine-tuning. Experiments performed on real underactuated soft robotic hands
clearly demonstrate the capability to transfer a dynamic model from one hand to
another.
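The bound itself is easy to state for a feed-forward transition model: with 1-Lipschitz activations (ReLU, tanh), the Jacobian norm of the network is at most the product of the layers' spectral norms, so the Lyapunov exponent of the iterated map is at most the sum of their logs. Below is a minimal sketch of that computation in PyTorch; the architecture and names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

def lyapunov_upper_bound(model: nn.Sequential) -> float:
    """Upper-bound the Lyapunov exponent of the discrete map x' = model(x).

    For f = W_n o sigma o ... o sigma o W_1 with 1-Lipschitz activations,
    ||J_f(x)||_2 <= prod_i ||W_i||_2 for every x, so the Lyapunov exponent
    lambda = lim_t (1/t) log ||J_{f^t}(x)||_2 is at most sum_i log ||W_i||_2.
    """
    log_bound = 0.0
    for layer in model:
        if isinstance(layer, nn.Linear):
            # Spectral norm = largest singular value of the weight matrix.
            sigma_max = torch.linalg.matrix_norm(layer.weight, ord=2)
            log_bound += torch.log(sigma_max).item()
    return log_bound

# Illustrative transition model over an 8-dimensional hand state.
model = nn.Sequential(nn.Linear(8, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 8))
print("Lyapunov exponent upper bound:", lyapunov_upper_bound(model))
# A non-positive bound certifies that rollouts of the learned dynamics
# cannot diverge chaotically, whatever the start state.
```

One natural use of such a bound, consistent with the abstract, is to monitor or penalize it during fine-tuning so the adapted model cannot become chaotic in sparsely observed regions.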
Related papers
- Transferable Post-training via Inverse Value Learning [83.75002867411263] (2024-10-28)
We propose modeling changes at the logits level during post-training using a separate neural network (i.e., the value network).
After training this network on a small base model using demonstrations, this network can be seamlessly integrated with other pre-trained models during inference.
We demonstrate that the resulting value network has broad transferability across pre-trained models of different parameter sizes.
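As summarized, the value network lives purely in logit space, which is what makes it portable across backbones. A minimal sketch of that interface follows; conditioning on token ids alone, and all names, are simplifying assumptions rather than the paper's actual design.

```python
import torch
import torch.nn as nn

class ValueNetwork(nn.Module):
    """Hypothetical logit-delta network: maps token ids to an additive
    correction over vocabulary logits, trained on a small base model."""
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Embedding(vocab_size, hidden),
                                 nn.Linear(hidden, vocab_size))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.net(token_ids)

@torch.no_grad()
def post_trained_logits(base_logits, value_net, token_ids):
    # The post-training effect is additive in logit space, so the same
    # value network can ride on top of backbones it was never trained with.
    return base_logits + value_net(token_ids)
```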
- Learning-based adaption of robotic friction models [48.453527255659296] (2023-10-25)
We introduce a novel approach to adapt an existing friction model to new dynamics using as little data as possible.
Our proposed estimator outperforms the conventional model-based approach and the base neural network significantly.
Our method does not rely on data with external load during training, eliminating the need for external torque sensors.
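A common way to realize this kind of low-data adaptation is to keep the existing model fixed and learn only a small residual correction; the sketch below assumes a Coulomb-plus-viscous base model and illustrative parameters, not the paper's estimator.

```python
import torch
import torch.nn as nn

def base_friction(v, f_c=0.5, f_v=0.1):
    """Classical Coulomb + viscous friction model (illustrative parameters)."""
    return f_c * torch.sign(v) + f_v * v

class FrictionResidual(nn.Module):
    """Small correction network on top of the fixed base model, so only the
    residual must be learned from the few samples of the new dynamics."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, v):
        return base_friction(v) + self.net(v)

# Fit the residual on a handful of (velocity, torque) samples.
model = FrictionResidual()
opt = torch.optim.Adam(model.net.parameters(), lr=1e-3)
v = torch.linspace(-1.0, 1.0, 20).unsqueeze(1)   # sparse new-dynamics data
tau = base_friction(v, f_c=0.6, f_v=0.15)        # stand-in measurements
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((model(v) - tau) ** 2)
    loss.backward()
    opt.step()
```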
- Efficiently Robustify Pre-trained Models [18.392732966487582] (2023-09-14)
The robustness of large-scale models in real-world settings is still a less-explored topic.
We first benchmark the performance of these models under different perturbations and datasets.
We then discuss how existing robustification schemes based on complete model fine-tuning may not be a scalable option for very large networks.
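The benchmarking step reduces to sweeping a corruption severity and recording accuracy; the sketch below uses additive Gaussian noise as one simple stand-in for the perturbations the paper evaluates.

```python
import torch

@torch.no_grad()
def accuracy_under_noise(model, loader, severities=(0.0, 0.1, 0.2, 0.4)):
    """Evaluate a pre-trained classifier under additive Gaussian perturbations
    of increasing severity (a simple stand-in for a corruption benchmark)."""
    model.eval()
    results = {}
    for s in severities:
        correct, total = 0, 0
        for x, y in loader:
            pred = model(x + s * torch.randn_like(x)).argmax(dim=-1)
            correct += (pred == y).sum().item()
            total += y.numel()
        results[s] = correct / total
    return results
```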
- Value function estimation using conditional diffusion models for control [62.27184818047923] (2023-06-09)
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
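If a model can sample future states from a controller's discounted visitation measure, the value function reduces to a Monte-Carlo average of rewards over those samples. The sketch below assumes a hypothetical `occupancy_sampler` interface standing in for the paper's conditional diffusion model.

```python
import torch

@torch.no_grad()
def diffused_value(occupancy_sampler, reward_fn, state, controller_id,
                   n_samples=256, gamma=0.99):
    """Monte-Carlo value estimate from a learned state-visitation model.

    If `occupancy_sampler` draws future states s' ~ mu(. | s, controller)
    from the discounted occupancy measure, then
        V(s) = E[r(s')] / (1 - gamma).
    `occupancy_sampler` is a hypothetical stand-in, not the authors' API.
    """
    s = state.unsqueeze(0).expand(n_samples, -1)   # (n_samples, state_dim)
    future_states = occupancy_sampler(s, controller_id)
    return reward_fn(future_states).mean() / (1.0 - gamma)
```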
- Cooperative data-driven modeling [44.99833362998488] (2022-11-23)
Data-driven modeling in mechanics is evolving rapidly based on recent machine learning advances.
New data and models created by different groups become available, opening possibilities for cooperative modeling.
Artificial neural networks suffer from catastrophic forgetting, i.e., they forget how to perform an old task when trained on a new one.
This hinders cooperation because adapting an existing model for a new task affects the performance on a previous task trained by someone else.
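The failure mode is easy to reproduce on toy tasks: train one network on task A, then on task B, and the task-A error climbs back up. Everything in this sketch is illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def fit(xs, ys, steps=500):
    for _ in range(steps):
        opt.zero_grad()
        ((net(xs) - ys) ** 2).mean().backward()
        opt.step()

def mse(xs, ys):
    with torch.no_grad():
        return ((net(xs) - ys) ** 2).mean().item()

# Task A: fit sin(x) on [-3, 0].  Task B: fit cos(x) on [0, 3].
xa = torch.linspace(-3, 0, 100).unsqueeze(1); ya = torch.sin(xa)
xb = torch.linspace(0, 3, 100).unsqueeze(1);  yb = torch.cos(xb)

fit(xa, ya)
print("Task A error after training on A:", mse(xa, ya))  # low
fit(xb, yb)
print("Task A error after training on B:", mse(xa, ya))  # much higher
```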
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608] (2022-09-07)
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
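The core of the approach is a residual model e(x, u) = x'_real - f_sim(x, u); the sketch below shows only that correction term and omits the unscented Kalman filter the paper wraps around it.

```python
import torch
import torch.nn as nn

class ResidualDynamics(nn.Module):
    """Corrected transition model f_sim(x, u) + e(x, u), where the residual
    e is learned from sparse real-robot data to narrow the reality gap."""
    def __init__(self, sim_step, x_dim: int, u_dim: int, hidden: int = 64):
        super().__init__()
        self.sim_step = sim_step   # analytic or simulator transition function
        self.residual = nn.Sequential(nn.Linear(x_dim + u_dim, hidden),
                                      nn.ReLU(), nn.Linear(hidden, x_dim))

    def forward(self, x, u):
        return self.sim_step(x, u) + self.residual(torch.cat([x, u], dim=-1))
```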
- Continual Learning with Transformers for Image Classification [12.028617058465333] (2022-06-28)
In computer vision, neural network models struggle to continually learn new concepts without forgetting what has been learnt in the past.
We develop a solution called Adaptive Distillation of Adapters (ADA) to perform continual learning.
We empirically demonstrate on different classification tasks that this method maintains a good predictive performance without retraining the model.
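ADA builds on adapters: small bottleneck modules inserted into a frozen transformer so that only a few parameters are trained per new task. The sketch below shows a generic adapter of that kind; ADA's distillation step is not reproduced here.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter inserted into a frozen transformer block; only
    these few parameters are trained for a new task, so representations
    learned for earlier tasks are left untouched."""
    def __init__(self, d_model: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))  # residual connection
```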
- Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation [111.44445634272235] (2020-01-13)
In this paper, we develop a parameter-efficient transfer learning architecture, termed PeterRec.
PeterRec allows the pre-trained parameters to remain unaltered during fine-tuning by injecting a series of re-learned neural networks.
We perform extensive experimental ablation to show the effectiveness of the learned user representation in five downstream tasks.
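The mechanism, as summarized, is to freeze the pre-trained network and train only small injected networks. A generic sketch of that pattern follows; the patch architecture is an assumption, not PeterRec's exact design.

```python
import torch
import torch.nn as nn

class PatchedBlock(nn.Module):
    """Wrap a frozen pre-trained block with a small re-learned 'patch';
    only the patch is updated during fine-tuning, so the pre-trained
    parameters remain unaltered."""
    def __init__(self, block: nn.Module, d_model: int, bottleneck: int = 16):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False        # pre-trained weights stay fixed
        self.patch = nn.Sequential(nn.Linear(d_model, bottleneck), nn.ReLU(),
                                   nn.Linear(bottleneck, d_model))

    def forward(self, h):
        h = self.block(h)
        return h + self.patch(h)

backbone = nn.Sequential(nn.Linear(64, 64), nn.ReLU())  # stand-in backbone
model = PatchedBlock(backbone, d_model=64)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("trainable parameters (patch only):", trainable)
```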
- Learning Predictive Models From Observation and Interaction [137.77887825854768] (2019-12-30)
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.