Cross-domain Transfer Learning and State Inference for Soft Robots via a
Semi-supervised Sequential Variational Bayes Framework
- URL: http://arxiv.org/abs/2303.01693v3
- Date: Fri, 25 Aug 2023 16:50:16 GMT
- Title: Cross-domain Transfer Learning and State Inference for Soft Robots via a
Semi-supervised Sequential Variational Bayes Framework
- Authors: Shageenderan Sapai, Junn Yong Loo, Ze Yang Ding, Chee Pin Tan, Raphael
CW Phan, Vishnu Monn Baskaran, Surya Girinatha Nurzaman
- Abstract summary: We propose a semi-supervised sequential variational Bayes (DSVB) framework for transfer learning and state inference in soft robots with missing state labels.
Unlike existing transfer learning approaches, our proposed DSVB employs a recurrent neural network to model the nonlinear dynamics and temporal coherence in soft robot data.
- Score: 7.9900681281556745
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, data-driven models such as deep neural networks have been shown to be
promising tools for modelling and state inference in soft robots. However,
voluminous amounts of data are necessary for deep models to perform
effectively, which requires exhaustive, high-quality data collection,
particularly of state labels. Consequently, obtaining labelled state data for
soft robotic systems is challenging for various reasons, including difficulty in
the sensorization of soft robots and the inconvenience of collecting data in
unstructured environments. To address this challenge, in this paper, we propose
a semi-supervised sequential variational Bayes (DSVB) framework for transfer
learning and state inference in soft robots with missing state labels on
certain robot configurations. Considering that soft robots may exhibit distinct
dynamics under different robot configurations, a feature space transfer
strategy is also incorporated to promote the adaptation of latent features
across multiple configurations. Unlike existing transfer learning approaches,
our proposed DSVB employs a recurrent neural network to model the nonlinear
dynamics and temporal coherence in soft robot data. The proposed framework is
validated on multiple setup configurations of a pneumatic-based soft robot
finger. Experimental results on four transfer scenarios demonstrate that DSVB
performs effective transfer learning and accurate state inference amidst
missing state labels. The data and code are available at
https://github.com/shageenderan/DSVB.
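As a rough illustration of the idea (not the authors' DSVB implementation, which is available in the repository above), the sketch below shows a minimal semi-supervised sequential VAE in PyTorch: a GRU captures the temporal coherence of the sensor sequence, a Gaussian encoder infers latent states, and the state-prediction loss is masked wherever ground-truth labels are missing. The feature-space transfer strategy across robot configurations is omitted, and all layer choices, sizes, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SeqVAE(nn.Module):
    """Illustrative semi-supervised sequential VAE (not the paper's DSVB)."""
    def __init__(self, obs_dim, state_dim, latent_dim=16, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)  # models temporal coherence
        self.enc = nn.Linear(hidden_dim, 2 * latent_dim)          # q(z_t | h_t): mean and log-variance
        self.dec_obs = nn.Linear(latent_dim, obs_dim)              # reconstruct sensor observations
        self.dec_state = nn.Linear(latent_dim, state_dim)          # predict state labels when available

    def forward(self, obs):                                        # obs: (batch, time, obs_dim)
        h, _ = self.rnn(obs)
        mu, logvar = self.enc(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()       # reparameterisation trick
        return self.dec_obs(z), self.dec_state(z), mu, logvar

def elbo_loss(model, obs, state, label_mask):
    """label_mask: (batch, time, 1), 1 where a ground-truth state label exists, 0 otherwise."""
    obs_hat, state_hat, mu, logvar = model(obs)
    recon = ((obs_hat - obs) ** 2).mean()                          # observation reconstruction term
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()     # KL divergence to a standard normal prior
    sup = (label_mask * (state_hat - state) ** 2).sum() / label_mask.sum().clamp(min=1)
    return recon + kl + sup                                        # supervised term only where labels exist
```

In this sketch, sequences from robot configurations whose state labels are unavailable would simply use an all-zero label_mask and contribute only the unsupervised terms.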
Related papers
- Automatic AI Model Selection for Wireless Systems: Online Learning via Digital Twinning [50.332027356848094]
AI-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control.
The mapping between context and AI model parameters is ideally done in a zero-shot fashion.
This paper introduces a general methodology for the online optimization of AMS (AI model selection) mappings.
arXiv Detail & Related papers (2024-06-22T11:17:50Z) - RoboScript: Code Generation for Free-Form Manipulation Tasks across Real
and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents [109.3804962220498]
AutoRT is a system to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision.
We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies.
We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs allows for instruction-following data collection by robots that can be aligned to human preferences.
arXiv Detail & Related papers (2024-01-23T18:45:54Z) - DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative
Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z) - Efficient Model Adaptation for Continual Learning at the Edge [15.334881190102895]
Most machine learning (ML) systems assume stationary and matching data distributions during training and deployment.
Data distributions often shift over time due to changes in environmental factors, sensor characteristics, and task-of-interest.
This paper presents the Encoder-Adaptor-Reconfigurator (EAR) framework for efficient continual learning under domain shifts.
arXiv Detail & Related papers (2023-08-03T23:55:17Z) - TrainSim: A Railway Simulation Framework for LiDAR and Camera Dataset
Generation [1.2165229201148093]
This paper presents a visual simulation framework able to generate realistic railway scenarios in a virtual environment.
It automatically produces inertial data and labeled datasets from emulated LiDARs and cameras.
arXiv Detail & Related papers (2023-02-28T11:00:13Z) - Towards Precise Model-free Robotic Grasping with Sim-to-Real Transfer
Learning [11.470950882435927]
We present an end-to-end robotic grasping network.
In physical robotic experiments, our grasping framework grasped single known objects and novel complex-shaped household objects with a success rate of 90.91%.
The proposed grasping framework outperformed two state-of-the-art methods in both known and unknown object robotic grasping.
arXiv Detail & Related papers (2023-01-28T16:57:19Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - SABER: Data-Driven Motion Planner for Autonomously Navigating
Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z) - Multi-Modal Anomaly Detection for Unstructured and Uncertain
Environments [5.677685109155077]
Modern robots require the ability to detect and recover from anomalies and failures with minimal human supervision.
We propose a deep learning neural network, the supervised variational autoencoder (SVAE), for failure identification in unstructured and uncertain environments.
Our experiments on real field robot data demonstrate superior failure identification performance compared to baseline methods, and show that our model learns interpretable representations.
arXiv Detail & Related papers (2020-12-15T21:59:58Z) - A data-set of piercing needle through deformable objects for Deep
Learning from Demonstrations [0.21096737598952847]
This paper presents a dataset of inserting/piercing a needle with two arms of da Vinci Research Kit in/through soft tissues.
We implement several deep RLfD architectures, including simple feed-forward CNNs and different Recurrent Convolutional Networks (RCNs).
Our study indicates that RCNs improve the prediction accuracy of the model, even though the baseline feed-forward CNNs successfully learn the relationship between the visual information and the next-step control actions of the robot.
arXiv Detail & Related papers (2020-12-04T08:27:06Z)