Differentiable and Learnable Robot Models
- URL: http://arxiv.org/abs/2202.11217v1
- Date: Tue, 22 Feb 2022 22:26:36 GMT
- Title: Differentiable and Learnable Robot Models
- Authors: Franziska Meier and Austin Wang and Giovanni Sutanto and Yixin Lin and
Paarth Shah
- Abstract summary: The library \emph{Differentiable Robot Models} implements both \emph{differentiable} and \emph{learnable} models of the kinematics and dynamics of robots in PyTorch.
- Score: 6.988699529097697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building differentiable simulations of physical processes has recently
received an increasing amount of attention. Specifically, some efforts develop
differentiable robotic physics engines motivated by the computational benefits
of merging rigid body simulations with modern differentiable machine learning
libraries. Here, we present a library that focuses on the ability to combine
data-driven methods with analytical rigid body computations. More concretely,
our library \emph{Differentiable Robot Models} implements both
\emph{differentiable} and \emph{learnable} models of the kinematics and
dynamics of robots in PyTorch. The source code is available at
\url{https://github.com/facebookresearch/differentiable-robot-model}
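As a minimal sketch of what this enables, the snippet below loads a robot description and differentiates an end-effector position with respect to the joint angles. The class and method names (`DifferentiableRobotModel`, `compute_forward_kinematics`) follow the repository's README, but the URDF path, joint count, and link name are placeholders; check the current release for exact signatures.

```python
import torch
from differentiable_robot_model.robot_model import DifferentiableRobotModel

# Load a robot description (the URDF path here is a placeholder).
robot = DifferentiableRobotModel("path/to/robot.urdf")

# Joint angles as a differentiable tensor (batch of one configuration;
# 7 DoF assumed purely for illustration).
q = torch.zeros(1, 7, requires_grad=True)

# Forward kinematics of an end-effector link (link name is robot-specific).
ee_pos, ee_quat = robot.compute_forward_kinematics(q, link_name="ee_link")

# The computation is an ordinary PyTorch graph, so gradients of any
# scalar function of the pose flow back to the joint angles.
loss = ee_pos.norm()
loss.backward()
print(q.grad)  # d(loss)/dq, usable for gradient-based IK or model learning
```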
Related papers
- Knowledge-based Neural Ordinary Differential Equations for Cosserat Rod-based Soft Robots [10.511173252165287]
It is difficult to model the dynamics of soft robots due to their high spatial dimensionality.
Deep learning algorithms have shown promise in the data-driven modeling of soft robots.
We propose KNODE-Cosserat, a framework that combines first-principles physics models with neural ordinary differential equations.
arXiv Detail & Related papers (2024-08-14T19:07:28Z)
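The KNODE-Cosserat recipe, an analytical term plus a learned correction inside one ODE, can be sketched generically as follows. The Cosserat rod physics is abstracted into a placeholder `physics_term`; everything here is illustrative rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class KnowledgeBasedODE(nn.Module):
    """Hybrid dynamics: dx/dt = f_physics(x) + f_learned(x)."""
    def __init__(self, state_dim):
        super().__init__()
        self.correction = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, state_dim))

    def physics_term(self, x):
        # Placeholder for the first-principles (e.g., Cosserat rod) model.
        return -0.1 * x

    def forward(self, x):
        return self.physics_term(x) + self.correction(x)

def rollout(ode, x0, dt=0.01, steps=100):
    # Explicit Euler for brevity; in practice a neural-ODE solver is used
    # so that gradients flow through the whole trajectory.
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * ode(xs[-1]))
    return torch.stack(xs)
```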
- DiffGen: Robot Demonstration Generation via Differentiable Physics Simulation, Differentiable Rendering, and Vision-Language Model [72.66465487508556]
DiffGen is a novel framework that integrates differentiable physics simulation, differentiable rendering, and a vision-language model.
It can generate realistic robot demonstrations by minimizing the distance between the embedding of the language instruction and the embedding of the simulated observation.
Experiments demonstrate that with DiffGen, we can efficiently and effectively generate robot data with minimal human effort or training time.
arXiv Detail & Related papers (2024-05-12T15:38:17Z)
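A hedged sketch of DiffGen's stated objective: drive the embedding of a rendered simulated observation toward the embedding of the language instruction, with gradients flowing through the differentiable simulator and renderer to the actions. All function names below are placeholders, not DiffGen's API.

```python
import torch
import torch.nn.functional as F

def instruction_following_loss(actions, sim_rollout, render, encode_image,
                               text_embedding):
    # sim_rollout: differentiable physics rollout, actions -> final state
    # render: differentiable renderer, state -> image tensor
    # encode_image: vision-language image encoder (e.g., CLIP-style)
    state = sim_rollout(actions)
    image = render(state)
    obs_embedding = encode_image(image)
    # Minimize the distance between observation and instruction embeddings.
    return 1.0 - F.cosine_similarity(obs_embedding, text_embedding, dim=-1).mean()

# Usage sketch: optimize the action sequence directly.
# actions = torch.zeros(horizon, action_dim, requires_grad=True)
# loss = instruction_following_loss(actions, sim_rollout, render,
#                                   encode_image, text_embedding)
# loss.backward(); actions.data -= lr * actions.grad
```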
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
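One plausible reading of "physics-augmented" generation, sketched below, is to bias each denoising step of a diffusion model with the gradient of a differentiable-physics task objective. This is a generic guided-sampling sketch under that assumption, not DiffuseBot's actual procedure.

```python
import torch

def guided_denoising_step(denoiser, x_t, t, physics_score, guidance_scale=0.1):
    """One reverse-diffusion step with physics guidance.
    denoiser: learned model producing the next (less noisy) sample.
    physics_score: differentiable task performance of a morphology
    (a placeholder for an evaluation in a differentiable simulator)."""
    x_t = x_t.detach().requires_grad_(True)
    score = physics_score(x_t)                  # higher is better
    grad = torch.autograd.grad(score.sum(), x_t)[0]
    x_prev = denoiser(x_t, t)                   # standard denoising update
    return x_prev + guidance_scale * grad       # nudge toward performant designs
```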
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics model and/or simulator and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
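The residual-learning idea can be sketched as follows: fit a small network to the gap between an analytical model's prediction and the observed real transition, then add that correction at prediction time. The unscented Kalman filter machinery from the paper is omitted; all names are illustrative.

```python
import torch
import torch.nn as nn

class ResidualModel(nn.Module):
    """Learns the residual: real_next_state - analytical_prediction."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def corrected_prediction(analytic_model, residual, state, action):
    # Analytical (or simulator) prediction plus the learned correction.
    return analytic_model(state, action) + residual(state, action)

# Training targets: residual = real_next_state - analytic_model(state, action)
```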
- Full-Body Visual Self-Modeling of Robot Morphologies [29.76701883250049]
Internal computational models of physical bodies are fundamental to the ability of robots and animals alike to plan and control their actions.
Recent progress in fully data-driven self-modeling has enabled machines to learn their own forward kinematics directly from task-agnostic interaction data.
Here, we propose that instead of directly modeling forward kinematics, a more useful form of self-modeling is one that can answer space occupancy queries.
arXiv Detail & Related papers (2021-11-11T18:58:07Z)
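An occupancy-query self-model of the kind proposed there can be sketched as a network mapping a joint configuration and a 3D point to an occupancy probability. The architecture below is a generic coordinate network, not the paper's exact design.

```python
import torch
import torch.nn as nn

class OccupancySelfModel(nn.Module):
    """p(point lies inside the robot's body | joint configuration)."""
    def __init__(self, n_joints):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, q, xyz):
        # q: (B, n_joints) joint angles; xyz: (B, 3) query points.
        return torch.sigmoid(self.net(torch.cat([q, xyz], dim=-1)))

# Such a model answers collision/occupancy queries for planning without
# an explicit kinematic chain.
```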
- Robot Learning from Randomized Simulations: A Review [59.992761565399185]
Deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.
State-of-the-art approaches learn in simulation, where data generation is fast and inexpensive.
We focus on 'domain randomization', a technique for learning from randomized simulations.
arXiv Detail & Related papers (2021-11-01T13:55:41Z)
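Domain randomization, as surveyed there, boils down to resampling simulator parameters every episode so the policy must cover a distribution of dynamics rather than a single instance. The parameter names and ranges below are illustrative, and `sim.set_parameter` is a placeholder interface, not a real simulator API.

```python
import random

def randomize_physics(sim):
    # Resample physics parameters from hand-chosen ranges each episode.
    sim.set_parameter("friction", random.uniform(0.5, 1.5))
    sim.set_parameter("link_mass_scale", random.uniform(0.8, 1.2))
    sim.set_parameter("motor_gain", random.uniform(0.9, 1.1))
    sim.set_parameter("sensor_noise_std", random.uniform(0.0, 0.02))

# Training loop sketch:
# for episode in range(num_episodes):
#     randomize_physics(sim)
#     collect_rollout_and_update_policy(sim)
```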
- Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency [60.39133304370604]
We learn to align dynamic robot behavior across two domains using a cycle-consistency constraint.
Our framework is able to align uncalibrated monocular video of a real robot arm to dynamic state-action trajectories of a simulated arm without paired data.
arXiv Detail & Related papers (2020-12-17T18:22:25Z)
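A hedged sketch of a dynamics cycle-consistency constraint: stepping the dynamics in one domain and then mapping to the other should agree with mapping first and stepping there. The mapping and dynamics functions are placeholders for the paper's learned components, and the paper also learns an action correspondence, which is omitted here.

```python
import torch
import torch.nn.functional as F

def dynamics_cycle_loss(x_t, a_t, f_x, f_y, g_xy):
    # f_x, f_y: forward dynamics models in domains X and Y.
    # g_xy: learned cross-domain state mapping X -> Y.
    x_next = f_x(x_t, a_t)                # step in X, then map to Y
    y_next_via_x = g_xy(x_next)
    y_next_via_y = f_y(g_xy(x_t), a_t)    # map to Y, then step in Y
    return F.mse_loss(y_next_via_x, y_next_via_y)
```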
- Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z)
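The three-part decomposition in (i)-(iii) suggests control flow like the following: a high-level planner sequences primitives, each primitive (a recurrent graph network in the paper) produces a short trajectory, and a low-level controller tracks it. All classes here are schematic placeholders, not the paper's implementation.

```python
def execute_task(planner, primitives, controller, observation):
    # (iii) The high-level planner composes primitives sequentially...
    plan = planner.select_primitives(observation)   # e.g., ["reach", "grasp"]
    for name in plan:
        primitive = primitives[name]  # (ii) recurrent-GNN movement primitive
        trajectory = primitive.rollout(observation)
        for waypoint in trajectory:
            # ...while the low-level controller combines primitive dynamics
            # with inverse kinematics to track each waypoint.
            observation = controller.track(waypoint)
    return observation
```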
- Encoding Physical Constraints in Differentiable Newton-Euler Algorithm [22.882483860087948]
In this work, we incorporate physical constraints into the learning problem by adding structure to the learned parameters.
We evaluate our method on real-time inverse dynamics control tasks on a 7-degree-of-freedom robot arm.
arXiv Detail & Related papers (2020-01-24T02:08:09Z)
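One common way to add such structure, sketched below, is to reparameterize inertial quantities so physical constraints hold by construction: mass kept positive via softplus, rotational inertia kept symmetric positive definite via a Cholesky-style factor. This matches the general approach of encoding constraints through parameterization; the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class StructuredInertialParams(nn.Module):
    """Inertial parameters that satisfy physical constraints by construction."""
    def __init__(self):
        super().__init__()
        self.raw_mass = nn.Parameter(torch.zeros(1))   # unconstrained
        self.raw_tril = nn.Parameter(torch.eye(3))     # unconstrained

    @property
    def mass(self):
        # Softplus keeps the mass strictly positive.
        return nn.functional.softplus(self.raw_mass)

    @property
    def inertia(self):
        # L @ L.T is symmetric positive semi-definite; the small diagonal
        # term makes it strictly positive definite.
        L = torch.tril(self.raw_tril)
        return L @ L.T + 1e-6 * torch.eye(3)
```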
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.