Residual Model Learning for Microrobot Control
- URL: http://arxiv.org/abs/2104.00631v1
- Date: Thu, 1 Apr 2021 17:22:50 GMT
- Title: Residual Model Learning for Microrobot Control
- Authors: Joshua Gruenstein, Tao Chen, Neel Doshi, Pulkit Agrawal
- Abstract summary: We propose a novel framework, residual model learning (RML), that leverages approximate models to reduce the sample complexity associated with learning an accurate robot model.
We show that using RML, we can learn a model of the Harvard Ambulatory MicroRobot (HAMR) using just 12 seconds of passively collected interaction data.
The learned model is accurate enough to be leveraged as a "proxy-simulator" for learning walking and turning behaviors using model-free reinforcement learning algorithms.
- Score: 17.22836165560292
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A majority of microrobots are constructed using compliant materials that are
difficult to model analytically, limiting the utility of traditional
model-based controllers. Challenges in data collection on microrobots and large
errors between simulated models and real robots make current model-based
learning and sim-to-real transfer methods difficult to apply. We propose a
novel framework, residual model learning (RML), that leverages approximate models
to substantially reduce the sample complexity associated with learning an
accurate robot model. We show that using RML, we can learn a model of the
Harvard Ambulatory MicroRobot (HAMR) using just 12 seconds of passively
collected interaction data. The learned model is accurate enough to be
leveraged as "proxy-simulator" for learning walking and turning behaviors using
model-free reinforcement learning algorithms. RML provides a general framework
for learning from extremely small amounts of interaction data, and our
experiments with HAMR clearly demonstrate that RML substantially outperforms
existing techniques.
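
The abstract describes RML only at a high level, so the following is a minimal sketch of the core idea, assuming placeholder names and dimensions (approx_model, STATE_DIM, ACTION_DIM, and the network sizes are illustrative, not HAMR's actual dynamics or the authors' code): a small network is fit to the residual between an approximate model's one-step prediction and the observed next state, and the corrected model then serves as a "proxy-simulator".

```python
# Minimal sketch of residual model learning (RML); all dimensions, the
# approximate model, and the data here are illustrative stand-ins.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2  # hypothetical robot state/action sizes


def approx_model(state, action):
    """Cheap, deliberately biased analytical dynamics: the approximate
    model that RML starts from (a stand-in, not HAMR's simulator)."""
    return state + 0.05 * torch.tanh(action).repeat(1, STATE_DIM // ACTION_DIM)


class ResidualModel(nn.Module):
    """Learns the error between the approximate model and the real robot."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, STATE_DIM))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def fit_residual(model, states, actions, next_states, epochs=200, lr=1e-3):
    """Fit the residual on a small batch of passively collected transitions."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        pred = approx_model(states, actions) + model(states, actions)
        loss = ((pred - next_states) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


def proxy_step(model, state, action):
    """One proxy-simulator step: approximate model plus learned correction."""
    with torch.no_grad():
        return approx_model(state, action) + model(state, action)


# Usage with synthetic stand-in transitions.
states = torch.randn(256, STATE_DIM)
actions = torch.randn(256, ACTION_DIM)
next_states = approx_model(states, actions) + 0.1 * torch.randn(256, STATE_DIM)
residual = fit_residual(ResidualModel(), states, actions, next_states)
print(proxy_step(residual, states[:1], actions[:1]))
```

In the paper's setting, the 12 seconds of passively collected HAMR data would replace the synthetic transitions above, and a model-free RL algorithm would then be trained for walking and turning against proxy_step rather than the physical robot.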
Related papers
- Accelerating Large Language Model Pretraining via LFR Pedagogy: Learn, Focus, and Review [50.78587571704713]
Large Language Model (LLM) pretraining traditionally relies on autoregressive language modeling on randomly sampled data blocks from web-scale datasets.
We take inspiration from human learning techniques like spaced repetition to hypothesize that random data sampling for LLMs leads to high training cost and low-quality models that tend to forget data.
In order to effectively commit web-scale information to long-term memory, we propose the LFR (Learn, Focus, and Review) pedagogy.
arXiv Detail & Related papers (2024-09-10T00:59:18Z) - Comparative Evaluation of Learning Models for Bionic Robots: Non-Linear Transfer Function Identifications [0.0]
The control and modeling of robot dynamics have increasingly adopted model-free control strategies using machine learning.
This research introduces a comprehensive evaluation strategy and framework for the application of model-free control.
arXiv Detail & Related papers (2024-07-02T17:00:23Z) - Investigating the Robustness of Counterfactual Learning to Rank Models: A Reproducibility Study [61.64685376882383]
Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models.
This paper investigates the robustness of existing CLTR models in complex and diverse situations.
We find that the DLA models and IPS-DCM show better robustness under various simulation settings than IPS-PBM and PRS with offline propensity estimation.
arXiv Detail & Related papers (2024-04-04T10:54:38Z) - Active Exploration in Bayesian Model-based Reinforcement Learning for Robot Manipulation [8.940998315746684]
We propose a model-based reinforcement learning (RL) approach for robotic arm end-tasks.
We employ Bayesian neural network models to represent, in a probabilistic way, both the belief and information encoded in the dynamic model during exploration.
Our experiments show the advantages of our Bayesian model-based RL approach, with results of similar quality to relevant alternatives.
(A generic sketch of this uncertainty-driven exploration idea appears after this list.)
arXiv Detail & Related papers (2024-04-02T11:44:37Z) - SAM-RL: Sensing-Aware Model-Based Reinforcement Learning via
Differentiable Physics-Based Simulation and Rendering [49.78647219715034]
We propose a sensing-aware model-based reinforcement learning system called SAM-RL.
With the sensing-aware learning pipeline, SAM-RL allows a robot to select an informative viewpoint to monitor the task process.
We apply our framework to real world experiments for accomplishing three manipulation tasks: robotic assembly, tool manipulation, and deformable object manipulation.
arXiv Detail & Related papers (2022-10-27T05:30:43Z) - Fitting a Directional Microstructure Model to Diffusion-Relaxation MRI
Data with Self-Supervised Machine Learning [2.8167227950959206]
Self-supervised machine learning is emerging as an attractive alternative to supervised learning.
In this paper, we demonstrate self-supervised machine learning model fitting for a directional microstructural model.
Our approach shows clear improvements in parameter estimation and computational time, compared to standard non-linear least squares fitting.
arXiv Detail & Related papers (2022-10-05T15:51:39Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z) - Structured Mechanical Models for Robot Learning and Control [38.52004843488286]
Black-box neural networks suffer from data inefficiency and difficulty in incorporating prior knowledge.
We introduce Structured Mechanical Models that are data-efficient, easily amenable to prior knowledge, and easily usable with model-based control techniques.
We demonstrate that they generalize better from limited data and yield more reliable model-based controllers on a variety of simulated robotic domains.
arXiv Detail & Related papers (2020-04-21T21:12:03Z)