DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative
Diffusion Models
- URL: http://arxiv.org/abs/2311.17053v1
- Date: Tue, 28 Nov 2023 18:58:48 GMT
- Title: DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative
Diffusion Models
- Authors: Tsun-Hsuan Wang, Juntian Zheng, Pingchuan Ma, Yilun Du, Byungchul Kim,
Andrew Spielberg, Joshua Tenenbaum, Chuang Gan, Daniela Rus
- Abstract summary: We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
- Score: 102.13968267347553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nature evolves creatures with a high complexity of morphological and
behavioral intelligence, while computational methods lag in approaching that
diversity and efficacy. Co-optimization of artificial creatures'
morphology and control in silico shows promise for applications in physical
soft robotics and virtual character creation; such approaches, however, require
developing new learning algorithms that can reason about function atop pure
structure. In this paper, we present DiffuseBot, a physics-augmented diffusion
model that generates soft robot morphologies capable of excelling in a wide
spectrum of tasks. DiffuseBot bridges the gap between virtually generated
content and physical utility by (i) augmenting the diffusion process with a
physical dynamical simulation which provides a certificate of performance, and
(ii) introducing a co-design procedure that jointly optimizes physical design
and control by leveraging information about physical sensitivities from
differentiable simulation. We showcase a range of simulated and fabricated
robots along with their capabilities. Check our website at
https://diffusebot.github.io/
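As a concrete illustration of mechanism (i), the sketch below shows classifier-guidance-style diffusion sampling in which each reverse step is nudged by the gradient of a physics score. Everything here is a hypothetical stand-in, not the authors' implementation: the denoiser is a placeholder for a trained noise-prediction network, and physics_score is an analytic toy standing in for DiffuseBot's differentiable soft-body simulation.

```python
# Minimal sketch (NOT the authors' code): classifier-guidance-style DDPM
# sampling in which each reverse step is nudged by the gradient of a
# differentiable physics score. "physics_score" is an analytic toy standing
# in for DiffuseBot's differentiable soft-body simulation.
import numpy as np

rng = np.random.default_rng(0)
T = 50                                   # number of diffusion steps
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x, t):
    # Placeholder for a trained noise-prediction network eps_theta(x, t);
    # here it simply pulls samples toward the origin.
    return x / np.sqrt(1.0 + t)

def physics_score(x):
    # Toy "simulation": reward designs near a fixed target morphology.
    target = np.array([1.0, -0.5, 0.25])
    return -np.sum((x - target) ** 2)

def physics_grad(x, eps=1e-4):
    # Finite differences keep this self-contained; a differentiable
    # simulator would supply this gradient by backprop through the rollout.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (physics_score(x + d) - physics_score(x - d)) / (2 * eps)
    return g

def sample(guidance_weight=0.3):
    x = rng.standard_normal(3)           # start from pure noise
    for t in reversed(range(T)):
        eps_hat = denoiser(x, t)
        # Standard DDPM mean update ...
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        # ... plus a physics-gradient nudge toward high-performing designs.
        x = x + guidance_weight * betas[t] * physics_grad(x)
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(3)
    return x

print("sampled design:", np.round(sample(), 3))
```

The guidance weight trades sample fidelity against physical utility; in DiffuseBot it is the simulated rollout itself that provides the "certificate of performance" the abstract refers to.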
Related papers
- ReinDiffuse: Crafting Physically Plausible Motions with Reinforced Diffusion Model [9.525806425270428]
We present ReinDiffuse, which combines reinforcement learning with a motion diffusion model to generate physically credible human motions.
Our method adapts the Motion Diffusion Model to output a parameterized distribution of actions, making it compatible with reinforcement learning paradigms.
Our approach outperforms existing state-of-the-art models on two major datasets, HumanML3D and KIT-ML.
arXiv Detail & Related papers (2024-10-09T16:24:11Z)
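Since ReinDiffuse's summary hinges on outputting a parameterized action distribution, here is a minimal, generic REINFORCE sketch over a Gaussian policy; the reward, dimensions, and learning rates are invented stand-ins, not the paper's objective.

```python
# Generic REINFORCE sketch (invented reward and sizes, not ReinDiffuse's
# objective): once a motion model outputs the mean and log-std of a Gaussian
# over actions, a policy-gradient update can push it toward motions that a
# physics-based reward deems plausible.
import numpy as np

rng = np.random.default_rng(0)
mean = np.zeros(4)          # stand-in for the model's predicted action mean
log_std = np.zeros(4)       # and its predicted log standard deviation
baseline, lr = 0.0, 0.02

def plausibility_reward(a):
    # Stand-in physics reward (a real one might penalize foot skating or
    # ground penetration); prefers actions near a fixed "plausible" motion.
    return -np.sum((a - np.array([0.5, -0.2, 0.1, 0.0])) ** 2)

for step in range(2000):
    std = np.exp(log_std)
    a = mean + std * rng.standard_normal(4)       # sample from the policy
    r = plausibility_reward(a)
    adv = r - baseline                            # advantage vs. running baseline
    baseline += 0.05 * (r - baseline)
    # REINFORCE: gradient of log N(a; mean, std) w.r.t. the parameters.
    mean += lr * adv * (a - mean) / std**2
    log_std += lr * adv * ((a - mean) ** 2 / std**2 - 1.0)
    log_std = np.clip(log_std, -3.0, 1.0)         # keep the policy stable

print("learned mean:", np.round(mean, 2))  # should drift toward [0.5, -0.2, 0.1, 0.0]
```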
- Evolution and learning in differentiable robots [0.0]
We use differentiable simulations to rapidly and simultaneously optimize individual neural control of behavior across a large population of candidate body plans.
Non-differentiable changes to the mechanical structure of each robot in the population are applied by a genetic algorithm in an outer loop of search.
One of the highly differentiable morphologies discovered in simulation was realized as a physical robot and shown to retain its optimized behavior.
arXiv Detail & Related papers (2024-05-23T15:45:43Z)
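The two-loop structure described above (gradient-based learning of control inside, non-differentiable evolution of body plans outside) can be sketched generically as follows; the fitness function and body/controller encodings are toy assumptions, not the paper's setup.

```python
# Generic two-loop sketch (toy fitness and encodings, not the paper's setup):
# an inner loop gradient-optimizes each body's controller, while an outer
# genetic algorithm applies non-differentiable mutations to the body plans.
import numpy as np

rng = np.random.default_rng(0)

def fitness(body, ctrl):
    # Stand-in for a differentiable rollout: performance depends on how well
    # the controller matches the body, plus a mild cost on body size.
    return -np.sum((ctrl - body) ** 2) - 0.1 * np.sum(body ** 2)

def optimize_controller(body, steps=50, lr=0.2):
    ctrl = np.zeros_like(body)
    for _ in range(steps):
        grad = -2.0 * (ctrl - body)   # analytic gradient of fitness w.r.t. ctrl
        ctrl += lr * grad             # inner-loop "learning"
    return ctrl

pop = [rng.standard_normal(5) for _ in range(8)]
for gen in range(20):
    scored = sorted(pop, key=lambda b: fitness(b, optimize_controller(b)),
                    reverse=True)
    elite = scored[:4]
    # Outer loop: non-differentiable mutation of the surviving body plans.
    pop = elite + [b + 0.3 * rng.standard_normal(5) for b in elite]

best = max(pop, key=lambda b: fitness(b, optimize_controller(b)))
print("best fitness:", round(fitness(best, optimize_controller(best)), 3))
```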
- DiffGen: Robot Demonstration Generation via Differentiable Physics Simulation, Differentiable Rendering, and Vision-Language Model [72.66465487508556]
DiffGen is a novel framework that integrates differentiable physics simulation, differentiable rendering, and a vision-language model.
It can generate realistic robot demonstrations by minimizing the distance between the embedding of the language instruction and the embedding of the simulated observation.
Experiments demonstrate that with DiffGen, we can efficiently and effectively generate robot data with minimal human effort or training time.
arXiv Detail & Related papers (2024-05-12T15:38:17Z)
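Below is a minimal sketch of DiffGen's stated objective of matching the embedding of a simulated observation to a language-instruction embedding; here the entire simulate-render-encode pipeline is a single stand-in differentiable map, not the paper's actual simulator, renderer, or vision-language model.

```python
# Stand-in sketch (none of these components are DiffGen's actual simulator,
# renderer, or VLM): optimize action parameters so the embedding of the
# simulated outcome moves toward a language-instruction embedding.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3)) * 0.5   # toy "simulate + render + encode" map
goal = rng.standard_normal(8)           # toy instruction embedding
goal /= np.linalg.norm(goal)

def embed_observation(actions):
    z = np.tanh(W @ actions)            # differentiable surrogate pipeline
    return z / (np.linalg.norm(z) + 1e-8)

def cosine_distance(actions):
    return 1.0 - float(embed_observation(actions) @ goal)

actions = 0.1 * rng.standard_normal(3)
lr, eps = 0.5, 1e-4
for step in range(200):
    # Numerical gradient keeps this self-contained; a real pipeline would
    # backpropagate end-to-end through rendering and the VLM encoder.
    grad = np.zeros(3)
    base = cosine_distance(actions)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        grad[i] = (cosine_distance(actions + d) - base) / eps
    actions -= lr * grad

print("final cosine similarity:", round(1.0 - cosine_distance(actions), 3))
```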
- SoftZoo: A Soft Robot Co-design Benchmark For Locomotion In Diverse Environments [111.91255476270526]
We introduce SoftZoo, a soft robot co-design platform for locomotion in diverse environments.
SoftZoo supports an extensive, naturally inspired material set and can simulate environments such as flat ground, desert, wetland, clay, ice, snow, shallow water, and ocean.
It provides a variety of tasks relevant for soft robotics, including fast locomotion, agile turning, and path following, as well as differentiable design representations for morphology and control.
arXiv Detail & Related papers (2023-03-16T17:59:50Z)
- RoboCraft: Learning to See, Simulate, and Shape Elasto-Plastic Objects with Graph Networks [32.00371492516123]
We present a model-based planning framework for modeling and manipulating elasto-plastic objects.
Our system, RoboCraft, learns a particle-based dynamics model using graph neural networks (GNNs) to capture the structure of the underlying system.
We show through experiments that with just 10 minutes of real-world robotic interaction data, our robot can learn a dynamics model that can be used to synthesize control signals to deform elasto-plastic objects into various target shapes.
arXiv Detail & Related papers (2022-05-05T20:28:15Z)
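To illustrate the particle-graph idea behind RoboCraft, the sketch below runs a hand-written message-passing dynamics step over a radius graph; in RoboCraft the edge and node functions are learned GNN modules, whereas here they are fixed toy functions.

```python
# Hand-written stand-in (not RoboCraft's learned model): one message-passing
# dynamics step over a particle radius graph. RoboCraft learns the edge and
# node functions as GNN modules from real interaction data; here they are
# fixed toy functions.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(20, 2))   # particle positions
vel = np.zeros_like(pos)
RADIUS = 0.3

def step(pos, vel, dt=0.05):
    acc = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):                # build a radius graph and aggregate
        for j in range(n):            # "messages" along its edges
            if i == j:
                continue
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            if 1e-9 < r < RADIUS:
                # Toy edge function: soft repulsion keeps particles apart.
                # A learned model replaces this with an MLP on (d, r, ...).
                acc[i] -= d / r * (RADIUS - r)
    vel = 0.9 * (vel + dt * acc)      # damped integration (node update)
    return pos + dt * vel, vel

for _ in range(100):
    pos, vel = step(pos, vel)
print("spread after rollout:", round(float(pos.std()), 3))
```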
- Learning Material Parameters and Hydrodynamics of Soft Robotic Fish via Differentiable Simulation [26.09104786491426]
Our framework allows high fidelity prediction of dynamic behavior for composite bi-morph bending structures in real hardware.
We demonstrate an experimentally-verified, fast optimization pipeline for learning the material parameters and hydrodynamics of our robots.
Although we focus on a specific application for underwater soft robots, our framework is applicable to any pneumatically actuated soft mechanism.
arXiv Detail & Related papers (2021-09-30T05:24:02Z)
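The sketch below illustrates the general recipe of identifying a material parameter by gradient descent through a simulation, using a damped spring as a stand-in for the paper's bi-morph bending model; the dynamics, parameter, and learning rate are all assumptions.

```python
# Stand-in sketch (damped spring instead of the paper's bi-morph model):
# identify a "material" parameter by gradient descent through a simulation,
# matching simulated to observed trajectories.
import numpy as np

def simulate(stiffness, steps=40, dt=0.05):
    x, v = 1.0, 0.0
    traj = []
    for _ in range(steps):
        v += dt * (-stiffness * x - 0.1 * v)   # damped spring dynamics
        x += dt * v
        traj.append(x)
    return np.array(traj)

observed = simulate(stiffness=4.0)   # pretend this came from real hardware

k, lr, eps = 1.0, 0.5, 1e-4          # initial guess and step sizes
for it in range(200):
    loss = np.mean((simulate(k) - observed) ** 2)
    # Finite-difference gradient; a differentiable simulator provides this
    # directly by backpropagating through the rollout.
    grad = (np.mean((simulate(k + eps) - observed) ** 2) - loss) / eps
    k -= lr * grad

print("identified stiffness:", round(k, 3))   # should approach 4.0
```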
- PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable Physics [89.81550748680245]
We introduce a new differentiable physics benchmark called PlasticineLab.
In each task, the agent uses manipulators to deform the plasticine into the desired configuration.
We evaluate several existing reinforcement learning (RL) methods and gradient-based methods on this benchmark.
arXiv Detail & Related papers (2021-04-07T17:59:23Z)
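As an illustration of the gradient-based methods such a differentiable-physics benchmark enables, the sketch below optimizes an open-loop force sequence through toy double-integrator dynamics; PlasticineLab's soft bodies are far richer, and this is not its API.

```python
# Toy double integrator (not PlasticineLab's soft bodies or API): the kind of
# gradient-based trajectory optimization a differentiable-physics benchmark
# enables, here with the gradient written out in closed form.
import numpy as np

T, dt = 30, 0.1
target = np.array([1.0, 0.5])
u = np.zeros((T, 2))                 # open-loop force sequence to optimize

def rollout(u):
    x = np.zeros(2)
    v = np.zeros(2)
    for t in range(T):
        v = v + dt * u[t]            # semi-implicit Euler integration
        x = x + dt * v
    return x

def loss_and_grad(u):
    # Final position is linear in the forces: x_T = dt^2 * sum_t (T - t) u_t,
    # so the exact gradient is available (a differentiable simulator would
    # compute it automatically for nonlinear dynamics too).
    err = rollout(u) - target
    grad = np.zeros_like(u)
    for t in range(T):
        grad[t] = 2.0 * err * dt * dt * (T - t)
    return float(err @ err), grad

lr = 0.05
for it in range(300):
    loss, g = loss_and_grad(u)
    u -= lr * g

print("final position:", np.round(rollout(u), 3), "target:", target)
```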
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
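A minimal structural sketch of the idea of grounding part of a latent space in physics: the decoder composes an incomplete physics model with a learned residual. The sinusoid "physics", the latent split, and the residual map are invented stand-ins, not the paper's architecture.

```python
# Structural sketch only (invented physics and latent split, not the paper's
# architecture): a decoder that composes an incomplete physics model with a
# learned residual, so part of the latent code keeps a physical meaning.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)

def physics_decoder(z_phys):
    # Incomplete physics: a pure sinusoid whose frequency is the
    # physically grounded latent variable.
    return np.sin(2.0 * np.pi * z_phys * t)

W = 0.1 * rng.standard_normal((50, 4))   # stand-in for a learned residual net

def decode(z_phys, z_aux):
    # Physics prediction plus a learned correction for what physics misses.
    return physics_decoder(z_phys) + W @ z_aux

x = decode(z_phys=2.0, z_aux=rng.standard_normal(4))
print("decoded signal:", np.round(x[:5], 3), "...")
```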
- RoboTHOR: An Open Simulation-to-Real Embodied AI Platform [56.50243383294621]
We introduce RoboTHOR to democratize research in interactive and embodied visual AI.
We show that a significant gap exists between the performance of models trained in simulation when they are tested in simulation versus in their carefully constructed physical analogs.
arXiv Detail & Related papers (2020-04-14T20:52:49Z)