Evolution and learning in differentiable robots
- URL: http://arxiv.org/abs/2405.14712v2
- Date: Sun, 26 May 2024 17:24:12 GMT
- Title: Evolution and learning in differentiable robots
- Authors: Luke Strgar, David Matthews, Tyler Hummer, Sam Kriegman
- Abstract summary: We use differentiable simulations to rapidly and simultaneously optimize individual neural control of behavior across a large population of candidate body plans.
Non-differentiable changes to the mechanical structure of each robot in the population were applied by a genetic algorithm in an outer loop of search.
One of the highly differentiable morphologies discovered in simulation was realized as a physical robot and shown to retain its optimized behavior.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The automatic design of robots has existed for 30 years but has been constricted by serial non-differentiable design evaluations, premature convergence to simple bodies or clumsy behaviors, and a lack of sim2real transfer to physical machines. Thus, here we employ massively-parallel differentiable simulations to rapidly and simultaneously optimize individual neural control of behavior across a large population of candidate body plans and return a fitness score for each design based on the performance of its fully optimized behavior. Non-differentiable changes to the mechanical structure of each robot in the population -- mutations that rearrange, combine, add, or remove body parts -- were applied by a genetic algorithm in an outer loop of search, generating a continuous flow of novel morphologies with highly-coordinated and graceful behaviors honed by gradient descent. This enabled the exploration of several orders-of-magnitude more designs than all previous methods, despite the fact that robots here have the potential to be much more complex, in terms of number of independent motors, than those in prior studies. We found that evolution reliably produces "increasingly differentiable" robots: body plans that smooth the loss landscape in which learning operates and thereby provide better training paths toward performant behaviors. Finally, one of the highly differentiable morphologies discovered in simulation was realized as a physical robot and shown to retain its optimized behavior. This provides a cyberphysical platform to investigate the relationship between evolution and learning in biological systems and broadens our understanding of how a robot's physical structure can influence the ability to train policies for it. Videos and code at https://sites.google.com/view/eldir.
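The abstract describes a bi-level search: an inner loop that trains each candidate robot's controller by gradient descent in a differentiable simulation, and an outer genetic algorithm that applies non-differentiable mutations to body plans and selects on the fitness of the fully trained behavior. The sketch below illustrates that structure only; the toy simulator, mutation operators, and names (simulate, train_controller, mutate_body) are illustrative placeholders, not the authors' code (the real implementation is linked at https://sites.google.com/view/eldir).

```python
# Minimal, self-contained sketch (assumed structure, not the authors' code) of the
# nested search described in the abstract: an inner gradient-descent loop trains each
# robot's controller in a differentiable simulation, while an outer genetic algorithm
# mutates body plans and selects on post-training fitness.
import random
import numpy as np

def simulate(body, controller):
    """Stand-in for a differentiable rollout: returns (loss, gradient).
    A toy quadratic replaces the physics engine so the example runs anywhere."""
    target = np.asarray(body, dtype=float)
    loss = float(np.sum((controller - target) ** 2))
    grad = 2.0 * (controller - target)
    return loss, grad

def train_controller(body, steps=100, lr=0.05):
    """Inner loop: gradient descent on the controller for a fixed body plan."""
    controller = np.zeros(len(body))
    for _ in range(steps):
        _, grad = simulate(body, controller)
        controller -= lr * grad
    loss, _ = simulate(body, controller)
    return controller, loss

def mutate_body(body):
    """Outer-loop mutation: add, remove, or perturb a body part.
    These discrete edits are non-differentiable, so evolution handles them."""
    body = list(body)
    op = random.choice(["add", "remove", "perturb"])
    if op == "add" or len(body) < 2:
        body.append(random.uniform(-1.0, 1.0))
    elif op == "remove":
        body.pop(random.randrange(len(body)))
    else:
        body[random.randrange(len(body))] += random.gauss(0.0, 0.1)
    return body

# Outer loop: evolve a population of body plans; each design's fitness is the loss
# of its fully trained controller, so selection can favor body plans that are easier
# for gradient descent to exploit.
population = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(8)]
for generation in range(5):
    results = sorted((train_controller(b)[1], b) for b in population)
    survivors = [b for _, b in results[: len(results) // 2]]
    print(f"gen {generation}: best post-training loss = {results[0][0]:.4f}")
    population = survivors + [mutate_body(random.choice(survivors)) for _ in survivors]
```

Scoring each design only after its controller is fully trained is what lets the outer loop reward body plans that smooth the learning problem, the effect the abstract describes as evolution producing "increasingly differentiable" robots.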
Related papers
- DiffGen: Robot Demonstration Generation via Differentiable Physics Simulation, Differentiable Rendering, and Vision-Language Model [72.66465487508556]
DiffGen is a novel framework that integrates differentiable physics simulation, differentiable rendering, and a vision-language model.
It can generate realistic robot demonstrations by minimizing the distance between the embedding of the language instruction and the embedding of the simulated observation.
Experiments demonstrate that with DiffGen, we could efficiently and effectively generate robot data with minimal human effort or training time.
arXiv Detail & Related papers (2024-05-12T15:38:17Z)
- Innate Motivation for Robot Swarms by Minimizing Surprise: From Simple Simulations to Real-World Experiments [6.21540494241516]
Large-scale mobile multi-robot systems can be beneficial over monolithic robots because of higher potential for robustness and scalability.
Developing controllers for multi-robot systems is challenging because the multitude of interactions is hard to anticipate and difficult to model.
Innate motivation tries to avoid the specific formulation of rewards and works instead with different drivers, such as curiosity.
A unique advantage of the swarm robot case is that swarm members populate the robot's environment and can trigger more active behaviors in a self-referential loop.
arXiv Detail & Related papers (2024-05-04T06:25:58Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Universal Morphology Control via Contextual Modulation [52.742056836818136]
Learning a universal policy across different robot morphologies can significantly improve learning efficiency and generalization in continuous control.
Existing methods utilize graph neural networks or transformers to handle heterogeneous state and action spaces across different morphologies.
We propose a hierarchical architecture that better models the dependence of control on morphology context via contextual modulation.
arXiv Detail & Related papers (2023-02-22T00:04:12Z) - Co-evolving morphology and control of soft robots using a single genome [0.0]
We present a new method to co-evolve morphology and control of robots.
Our method derives both the "brain" and the "body" of an agent from a single genome and develops them together.
We evaluate the presented method on four tasks and observe that, even though the search space is larger, having a single genome makes the evolution process converge faster.
arXiv Detail & Related papers (2022-12-22T07:34:31Z) - Severe Damage Recovery in Evolving Soft Robots through Differentiable
Programming [7.198483427085636]
We present a system based on neural cellular automata, in which locomoting robots are evolved and then given the ability to regenerate their morphology from damage through gradient-based training.
The resulting neural cellular automata are able to grow virtual robots capable of regaining more than 80% of their functionality, even after severe types of morphological damage.
arXiv Detail & Related papers (2022-06-14T08:05:42Z) - Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z) - REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy
Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching action or state-transition distributions, including imitation learning methods, fail because the optimal action and/or state distributions are mismatched across different robots.
We propose a novel method, REvolveR, that uses continuous evolutionary models for robot policy transfer, implemented in a physics simulator.
arXiv Detail & Related papers (2022-02-10T18:50:25Z) - Environmental Adaptation of Robot Morphology and Control through
Real-world Evolution [5.08706161686979]
We apply evolutionary search to yield combinations of morphology and control for our mechanically self-reconfiguring quadruped robot.
We evolve solutions on two distinct physical surfaces and analyze the results in terms of both control and morphology.
arXiv Detail & Related papers (2020-03-30T07:57:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.