Evolving Modular Soft Robots without Explicit Inter-Module Communication using Local Self-Attention
- URL: http://arxiv.org/abs/2204.06481v1
- Date: Wed, 13 Apr 2022 16:03:39 GMT
- Title: Evolving Modular Soft Robots without Explicit Inter-Module Communication using Local Self-Attention
- Authors: Federico Pigozzi and Yujin Tang and Eric Medvet and David Ha
- Abstract summary: We focus on Voxel-based Soft Robots (VSRs).
We use the same neural controller inside each voxel, but without any inter-voxel communication.
We show experimentally that the evolved robots are effective in the task of locomotion.
- Score: 9.503773054285556
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Modularity in robotics holds great potential. In principle, modular robots
can be disassembled and reassembled into different robots, possibly to perform
new tasks. Nevertheless, actually exploiting modularity remains an unsolved
problem: controllers usually rely on inter-module communication, a practical
requirement that makes modules not perfectly interchangeable and thus limits
their flexibility. Here, we focus on Voxel-based Soft Robots (VSRs),
aggregations of mechanically identical elastic blocks. We use the same neural
controller inside each voxel, but without any inter-voxel communication, hence
enabling ideal conditions for modularity: modules are all equal and
interchangeable. We optimize the parameters of the neural controller, shared
among the voxels, by evolutionary computation. Crucially, we use a local
self-attention mechanism inside the controller to overcome the absence of
inter-module communication channels, thus enabling our robots to truly be
driven by the collective intelligence of their modules. We show experimentally
that the evolved robots are effective in the task of locomotion: thanks to
self-attention, instances of the same controller embodied in the same robot can
focus on different inputs. We also find that the evolved controllers generalize
to unseen morphologies, after a short fine-tuning, suggesting that an inductive
bias related to the task arises from true modularity.
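To make the described controller concrete, the sketch below shows, in Python with NumPy, one plausible reading of a shared per-voxel controller that applies local self-attention to that voxel's own sensor readings, with no inter-voxel communication. The class name `LocalSelfAttentionController`, the single-head layout, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class LocalSelfAttentionController:
    """Per-voxel controller: identical copies (same weights) run in every voxel,
    and each copy sees only its own voxel's sensors. Layout and sizes are
    illustrative assumptions, not the paper's exact architecture."""

    def __init__(self, n_inputs, d_model=4, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.n_inputs, self.d_model = n_inputs, d_model
        # Each scalar sensor reading is embedded into d_model dimensions,
        # plus a learned positional code that identifies which sensor it is.
        self.embed = rng.normal(0, 0.1, (1, d_model))
        self.pos = rng.normal(0, 0.1, (n_inputs, d_model))
        # Single-head query/key/value projections for local self-attention.
        self.Wq = rng.normal(0, 0.1, (d_model, d_model))
        self.Wk = rng.normal(0, 0.1, (d_model, d_model))
        self.Wv = rng.normal(0, 0.1, (d_model, d_model))
        # Output layer mapping the pooled attention output to one actuation value.
        self.Wo = rng.normal(0, 0.1, (d_model, 1))

    def __call__(self, sensors):
        """sensors: shape (n_inputs,), local readings of this voxel only."""
        x = sensors.reshape(-1, 1) @ self.embed + self.pos       # (n, d)
        q, k, v = x @ self.Wq, x @ self.Wk, x @ self.Wv
        scores = q @ k.T / np.sqrt(self.d_model)                 # (n, n)
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)                 # softmax over inputs
        pooled = (attn @ v).mean(axis=0)                         # (d,)
        return np.tanh(pooled @ self.Wo)[0]                      # actuation in [-1, 1]

    # Flat parameter view, convenient for evolutionary optimization.
    def get_params(self):
        mats = (self.embed, self.pos, self.Wq, self.Wk, self.Wv, self.Wo)
        return np.concatenate([w.ravel() for w in mats])

    def set_params(self, flat):
        shapes = [w.shape for w in
                  (self.embed, self.pos, self.Wq, self.Wk, self.Wv, self.Wo)]
        mats, i = [], 0
        for s in shapes:
            n = int(np.prod(s))
            mats.append(flat[i:i + n].reshape(s))
            i += n
        self.embed, self.pos, self.Wq, self.Wk, self.Wv, self.Wo = mats

# Example: the same parameters are copied into every voxel and each copy is
# queried independently at every simulation step.
ctrl = LocalSelfAttentionController(n_inputs=8)
actuation = ctrl(np.random.default_rng(0).uniform(-1, 1, 8))
```

As a usage sketch, an evolution strategy (the abstract specifies evolutionary computation; the particular optimizer is an assumption here) would perturb the flat vector returned by get_params, copy each candidate controller into every voxel of the simulated VSR, and retain the candidates whose robots travel the farthest.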
Related papers
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- MeMo: Meaningful, Modular Controllers via Noise Injection [25.541496793132183]
We show that when a new robot is built from the same parts, its control can be quickly learned by reusing the modular controllers.
We achieve this with a framework called MeMo which learns (Me)aningful, (Mo)dular controllers.
We benchmark our framework in locomotion and grasping environments on simple to complex robot morphology transfer.
arXiv Detail & Related papers (2024-05-24T18:39:20Z)
- Modular Controllers Facilitate the Co-Optimization of Morphology and Control in Soft Robots [0.5076419064097734]
We show that modular controllers are more robust to changes in a robot's body plan.
Increased transferability of modular controllers to similar body plans enables more effective brain-body co-optimization of soft robots.
arXiv Detail & Related papers (2023-06-12T16:36:46Z)
- Universal Morphology Control via Contextual Modulation [52.742056836818136]
Learning a universal policy across different robot morphologies can significantly improve learning efficiency and generalization in continuous control.
Existing methods utilize graph neural networks or transformers to handle heterogeneous state and action spaces across different morphologies.
We propose a hierarchical architecture to better model this dependency via contextual modulation.
arXiv Detail & Related papers (2023-02-22T00:04:12Z)
- GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z)
- MetaMorph: Learning Universal Controllers with Transformers [45.478223199658785]
In robotics, we primarily train a single robot for a single task.
Modular robot systems now allow for the flexible combination of general-purpose building blocks into task-optimized morphologies.
We propose MetaMorph, a Transformer based approach to learn a universal controller over a modular robot design space.
arXiv Detail & Related papers (2022-03-22T17:58:31Z)
- Malleable Agents for Re-Configurable Robotic Manipulators [0.0]
We propose an RL agent with sequence neural networks embedded in the deep neural network to adapt to robotic arms with a varying number of links.
With the additional tool of domain randomization, this agent adapts to configurations with a varying number and length of links and to dynamics noise.
arXiv Detail & Related papers (2022-02-04T21:22:00Z)
- Versatile modular neural locomotion control with fast learning [6.85316573653194]
Legged robots have significant potential to operate in highly unstructured environments.
Currently, controllers must be either manually designed for specific robots or automatically designed via machine learning methods.
We propose a simple yet versatile modular neural control structure with fast learning.
arXiv Detail & Related papers (2021-07-16T12:12:28Z)
- Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z)
- Populations of Spiking Neurons for Reservoir Computing: Closed Loop Control of a Compliant Quadruped [64.64924554743982]
We present a framework for implementing central pattern generators with spiking neural networks to obtain closed loop robot control.
We demonstrate the learning of predefined gait patterns, speed control and gait transition on a simulated model of a compliant quadrupedal robot.
arXiv Detail & Related papers (2020-04-09T14:32:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed summaries (including all information) and is not responsible for any consequences of their use.