DiffSRL: Learning Dynamic-aware State Representation for Deformable
Object Control with Differentiable Simulator
- URL: http://arxiv.org/abs/2110.12352v1
- Date: Sun, 24 Oct 2021 04:53:58 GMT
- Title: DiffSRL: Learning Dynamic-aware State Representation for Deformable
Object Control with Differentiable Simulator
- Authors: Sirui Chen, Yunhao Liu, Jialong Li, Shang Wen Yao, Tingxiang Fan, Jia
Pan
- Abstract summary: A latent space that captures dynamics-related information has wide applications, for example in accelerating model-free reinforcement learning.
We propose DiffSRL, a dynamic state representation learning pipeline utilizing differentiable simulation.
Our model demonstrates superior performance in terms of capturing long-term dynamics as well as reward prediction.
- Score: 26.280021036447213
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Dynamic state representation learning is an important task in robot learning.
A latent space that captures dynamics-related information has wide applications,
such as accelerating model-free reinforcement learning, closing the
simulation-to-reality gap, and reducing motion-planning complexity.
However, current dynamic state representation learning methods scale poorly to
complex dynamic systems such as deformable objects, and cannot directly embed a
well-defined simulation function into the training pipeline. We propose
DiffSRL, a dynamic state representation learning pipeline that uses
differentiable simulation to embed complex dynamics models as part of
end-to-end training. We also integrate differentiable dynamic constraints into
the pipeline, giving the latent state an incentive to respect dynamical
constraints. We further establish a state representation learning benchmark on
a soft-body simulation system, PlasticineLab, on which our model demonstrates
superior performance in capturing long-term dynamics as well as in reward
prediction.
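To make the pipeline concrete, the following is a minimal PyTorch-style sketch of the general idea, not the authors' implementation: a toy differentiable integrator stands in for a real differentiable soft-body simulator such as PlasticineLab, and the network sizes are illustrative. The encoder compresses a particle state into a latent vector, the decoder maps it back to a full state, and the reconstruction is rolled forward through the differentiable simulator so the loss is measured on future states.

    # Minimal sketch: train a state encoder "through" a differentiable simulator.
    # Assumptions: PyTorch; the toy sim_step below stands in for a real
    # differentiable soft-body engine such as PlasticineLab.
    import torch
    import torch.nn as nn

    N, LATENT, HORIZON, DT = 256, 64, 10, 0.01   # particles, latent dim, rollout, step

    encoder = nn.Sequential(nn.Linear(N * 6, 256), nn.ReLU(), nn.Linear(256, LATENT))
    decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, N * 6))
    opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

    def sim_step(state, action):
        """Toy differentiable 'simulator': damped particles pushed by a global force."""
        pos, vel = state[..., :3], state[..., 3:]
        vel = 0.98 * vel + DT * action.unsqueeze(1)      # action: (B, 3)
        return torch.cat([pos + DT * vel, vel], dim=-1)

    def training_step(state0, actions, future_states):
        """state0: (B, N, 6) positions+velocities; actions: (B, T, 3);
        future_states: (B, T, N, 6) ground-truth rollout from recorded data."""
        z = encoder(state0.flatten(1))                   # compress to latent
        sim_state = decoder(z).view(-1, N, 6)            # reconstruct full state
        loss = 0.0
        for t in range(HORIZON):                         # roll forward through the
            sim_state = sim_step(sim_state, actions[:, t])   # differentiable dynamics
            loss = loss + ((sim_state - future_states[:, t]) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()     # gradients flow through the sim
        return loss.item()

Because the loss is measured on simulated future states rather than on the immediate reconstruction, latent errors that would lead to wrong future behavior are penalized, which is what makes the representation dynamics-aware.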
Related papers
- SOLD: Reinforcement Learning with Slot Object-Centric Latent Dynamics [16.020835290802548]
Slot-Attention for Object-centric Latent Dynamics is a novel algorithm that learns object-centric dynamics models from pixel inputs.
We demonstrate that the structured latent space not only improves model interpretability but also provides a valuable input space for behavior models to reason over.
Our results show that SOLD outperforms DreamerV3, a state-of-the-art model-based RL algorithm, across a range of benchmark robotic environments.
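As a rough, hedged sketch of the object-centric latent-dynamics idea (not the SOLD architecture; PyTorch is assumed and slot extraction from pixels, e.g. via Slot Attention, is omitted), a scene can be represented as K slot vectors and a small transformer can predict the next slots conditioned on the action:

    # Rough sketch of object-centric latent dynamics (not SOLD itself): the state
    # is K slot vectors and a small transformer predicts the next slots given an
    # action. Assumptions: PyTorch; slot extraction from pixels is omitted.
    import torch
    import torch.nn as nn

    class SlotDynamics(nn.Module):
        def __init__(self, num_slots=6, slot_dim=64, action_dim=4):
            super().__init__()
            self.action_proj = nn.Linear(action_dim, slot_dim)
            layer = nn.TransformerEncoderLayer(d_model=slot_dim, nhead=4, batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(slot_dim, slot_dim)

        def forward(self, slots, action):
            # slots: (B, K, D) object-centric latents, action: (B, A)
            tokens = torch.cat([self.action_proj(action).unsqueeze(1), slots], dim=1)
            out = self.transformer(tokens)[:, 1:]        # drop the action token
            return slots + self.head(out)                # residual next-slot prediction

    model = SlotDynamics()
    slots, action = torch.randn(8, 6, 64), torch.randn(8, 4)
    next_slots = model(slots, action)   # (8, 6, 64); train with MSE to the encoded t+1 slots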
arXiv Detail & Related papers (2024-10-11T14:03:31Z)
- Deep Learning for Koopman-based Dynamic Movement Primitives [0.0]
We propose a novel approach by joining the theories of Koopman Operators and Dynamic Movement Primitives to Learning from Demonstration.
Our approach projects nonlinear dynamical systems into linear latent spaces such that a solution reproduces the desired complex motion.
Our results are comparable to Extended Dynamic Mode Decomposition on the LASA Handwriting dataset, while training on only a small fraction of the letters.
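A minimal sketch of the Koopman-style construction (PyTorch assumed; the architecture and losses are illustrative, not the paper's): an encoder lifts states into a latent space where a single learned linear operator advances the dynamics.

    # Sketch of Koopman-style latent linearization: an encoder lifts states and a
    # single linear operator K advances them in latent space. Assumption: PyTorch;
    # dimensions and loss terms are illustrative.
    import torch
    import torch.nn as nn

    class KoopmanModel(nn.Module):
        def __init__(self, state_dim=2, latent_dim=32):
            super().__init__()
            self.encode = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                        nn.Linear(64, latent_dim))
            self.decode = nn.Linear(latent_dim, state_dim)
            self.K = nn.Linear(latent_dim, latent_dim, bias=False)   # linear dynamics

        def loss(self, x_t, x_next):
            z_t, z_next = self.encode(x_t), self.encode(x_next)
            linearity = ((self.K(z_t) - z_next) ** 2).mean()          # latent is linear
            recon = ((self.decode(z_t) - x_t) ** 2).mean()            # latent is decodable
            pred = ((self.decode(self.K(z_t)) - x_next) ** 2).mean()  # one-step prediction
            return linearity + recon + pred

    model = KoopmanModel()
    x_t, x_next = torch.randn(128, 2), torch.randn(128, 2)   # e.g. consecutive pen positions
    print(model.loss(x_t, x_next))   # minimize with any optimizer over demonstrations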
arXiv Detail & Related papers (2023-12-06T07:33:22Z)
- Model-Based Reinforcement Learning with Multi-Task Offline Pretraining [59.82457030180094]
We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task.
The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure task relevance.
We demonstrate the advantages of our approach compared with the state-of-the-art methods in Meta-World and DeepMind Control Suite.
arXiv Detail & Related papers (2023-06-06T02:24:41Z)
- Causal Dynamics Learning for Task-Independent State Abstraction [61.707048209272884]
We introduce Causal Dynamics Learning for Task-Independent State Abstraction (CDL).
CDL learns a causal dynamics model with theoretical guarantees that removes unnecessary dependencies between state variables and the action.
A state abstraction can then be derived from the learned dynamics.
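As a crude illustration of the underlying idea only (CDL itself identifies dependencies via conditional-independence reasoning; the learnable mask below is a stand-in, and PyTorch is assumed), a dynamics model can expose which state variables and action dimensions each predicted variable actually depends on, and variables nothing relevant depends on can be abstracted away:

    # Crude sketch of a dependency-masked dynamics model (PyTorch assumed). CDL's
    # actual procedure uses conditional-independence reasoning; here a learnable
    # sparse mask over (state variables, action) stands in for that idea.
    import torch
    import torch.nn as nn

    class MaskedDynamics(nn.Module):
        def __init__(self, state_dim=8, action_dim=2, hidden=64):
            super().__init__()
            in_dim = state_dim + action_dim
            # one mask row per predicted state variable
            self.mask_logits = nn.Parameter(torch.zeros(state_dim, in_dim))
            self.nets = nn.ModuleList(
                [nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1)) for _ in range(state_dim)])

        def forward(self, state, action):
            x = torch.cat([state, action], dim=-1)            # (B, S+A)
            mask = torch.sigmoid(self.mask_logits)            # (S, S+A) soft dependencies
            preds = [net(x * mask[i]) for i, net in enumerate(self.nets)]
            return torch.cat(preds, dim=-1), mask             # next state, dependency mask

    model = MaskedDynamics()
    state, action, next_state = torch.randn(32, 8), torch.randn(32, 2), torch.randn(32, 8)
    pred, mask = model(state, action)
    loss = ((pred - next_state) ** 2).mean() + 1e-3 * mask.abs().mean()  # sparsity prior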
arXiv Detail & Related papers (2022-06-27T17:02:53Z)
- Learning Individual Interactions from Population Dynamics with Discrete-Event Simulation Model [9.827590402695341]
We explore the possibility of learning a discrete-event simulation representation of complex system dynamics.
Our results show that the algorithm can data-efficiently capture complex network dynamics in several fields with meaningful events.
arXiv Detail & Related papers (2022-05-04T21:33:56Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot robot and a radio-controlled (RC) car.
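A minimal sketch of gradient-based trajectory optimization through a learned, differentiable dynamics model (PyTorch assumed; the network, cost, and dimensions are placeholders rather than the paper's setup):

    # Sketch of gradient-based trajectory optimization through a learned dynamics
    # model. Assumptions: PyTorch; the model, cost, and dimensions are toys, and
    # the model is assumed already trained on logged transitions.
    import torch
    import torch.nn as nn

    STATE_DIM, ACTION_DIM, HORIZON = 4, 2, 20

    dynamics = nn.Sequential(                 # learned model: (x_t, u_t) -> x_{t+1}
        nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
        nn.Linear(128, STATE_DIM))

    def optimize_trajectory(x0, goal, iters=200, lr=0.05):
        actions = torch.zeros(HORIZON, ACTION_DIM, requires_grad=True)
        opt = torch.optim.Adam([actions], lr=lr)
        for _ in range(iters):
            x, cost = x0, 0.0
            for t in range(HORIZON):          # roll out through the learned model
                x = dynamics(torch.cat([x, actions[t]]))
                cost = cost + ((x - goal) ** 2).sum() + 1e-2 * (actions[t] ** 2).sum()
            opt.zero_grad(); cost.backward(); opt.step()   # gradient step on the action sequence
        return actions.detach()

    plan = optimize_trajectory(torch.zeros(STATE_DIM), torch.ones(STATE_DIM))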
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation [135.10594078615952]
We introduce ACID, an action-conditional visual dynamics model for volumetric deformable objects.
The accompanying benchmark contains over 17,000 action trajectories with six types of plush toys and 78 variants.
Our model achieves the best performance in geometry, correspondence, and dynamics predictions.
arXiv Detail & Related papers (2022-03-14T04:56:55Z)
- Objective-aware Traffic Simulation via Inverse Reinforcement Learning [31.26257563160961]
We formulate traffic simulation as an inverse reinforcement learning problem.
We propose a parameter-sharing adversarial inverse reinforcement learning model for dynamics-robust simulation learning.
Our proposed model is able to imitate a vehicle's trajectories in the real world while simultaneously recovering the reward function.
arXiv Detail & Related papers (2021-05-20T07:26:34Z)
- Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning [124.9856253431878]
We decompose the task of learning a global dynamics model into two stages: (a) learning a context latent vector that captures the local dynamics, then (b) predicting the next state conditioned on it.
In order to encode dynamics-specific information into the context latent vector, we introduce a novel loss function that encourages the context latent vector to be useful for predicting both forward and backward dynamics.
The proposed method achieves superior generalization ability across various simulated robotics and control tasks, compared to existing RL schemes.
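A minimal sketch of the two-stage idea (PyTorch assumed; sizes, networks, and the five-transition context window are illustrative): a context vector is encoded from recent transitions and conditions both a forward and a backward dynamics predictor.

    # Sketch of a context-conditioned dynamics model with forward and backward
    # prediction losses. Assumptions: PyTorch; sizes, networks, and the
    # 5-transition context window are placeholders.
    import torch
    import torch.nn as nn

    S, A, C, K = 6, 2, 16, 5                 # state, action, context dims; window length

    context_enc = nn.Sequential(             # (a) K recent transitions -> context vector
        nn.Linear(K * (2 * S + A), 64), nn.ReLU(), nn.Linear(64, C))
    forward_model = nn.Sequential(           # (b) (s_t, a_t, context) -> s_{t+1}
        nn.Linear(S + A + C, 64), nn.ReLU(), nn.Linear(64, S))
    backward_model = nn.Sequential(          # (s_{t+1}, a_t, context) -> s_t
        nn.Linear(S + A + C, 64), nn.ReLU(), nn.Linear(64, S))

    def loss(past_transitions, s_t, a_t, s_next):
        # past_transitions: (B, K, 2S+A) recent (s, a, s') tuples from the same episode
        c = context_enc(past_transitions.flatten(1))
        fwd = forward_model(torch.cat([s_t, a_t, c], dim=-1))
        bwd = backward_model(torch.cat([s_next, a_t, c], dim=-1))
        return ((fwd - s_next) ** 2).mean() + ((bwd - s_t) ** 2).mean()

    B = 32
    print(loss(torch.randn(B, K, 2 * S + A), torch.randn(B, S),
               torch.randn(B, A), torch.randn(B, S)))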
arXiv Detail & Related papers (2020-05-14T08:10:54Z)
- Automatic Differentiation and Continuous Sensitivity Analysis of Rigid Body Dynamics [15.565726546970678]
We introduce a differentiable physics simulator for rigid body dynamics.
In the context of trajectory optimization, we introduce a closed-loop model-predictive control algorithm.
arXiv Detail & Related papers (2020-01-22T03:54:00Z)
- Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
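A minimal sketch of one way such a guarantee can be built in (PyTorch assumed; the paper constructs the Lyapunov function with an input-convex network, which is simplified here): a nominal dynamics network is projected so that a learned Lyapunov candidate V decreases along trajectories.

    # Sketch of a stability-constrained dynamics model: a nominal network f_hat is
    # projected so a learned Lyapunov candidate V decreases along trajectories.
    # Assumptions: PyTorch; V is a simple positive-definite surrogate rather than
    # the paper's input-convex construction.
    import torch
    import torch.nn as nn

    class StableDynamics(nn.Module):
        def __init__(self, dim=4, alpha=0.1):
            super().__init__()
            self.f_hat = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
            self.g = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 16))
            self.alpha = alpha

        def V(self, x):                       # V(0) = 0, V(x) > 0 elsewhere
            return ((self.g(x) - self.g(torch.zeros_like(x))) ** 2).sum(-1) \
                   + 1e-3 * (x ** 2).sum(-1)

        def forward(self, x):
            x = x.detach().requires_grad_(True)           # need dV/dx at the query points
            v = self.V(x)
            grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
            f = self.f_hat(x)
            viol = torch.relu((grad_v * f).sum(-1) + self.alpha * v)   # would V increase?
            # subtract just enough along grad V so that dV/dt <= -alpha * V everywhere
            return f - viol.unsqueeze(-1) * grad_v / ((grad_v ** 2).sum(-1, keepdim=True) + 1e-8)

    model = StableDynamics()
    dxdt = model(torch.randn(16, 4))          # fit to observed derivatives with MSE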
arXiv Detail & Related papers (2020-01-17T00:04:45Z)