MeMo: Meaningful, Modular Controllers via Noise Injection
- URL: http://arxiv.org/abs/2407.01567v1
- Date: Fri, 24 May 2024 18:39:20 GMT
- Title: MeMo: Meaningful, Modular Controllers via Noise Injection
- Authors: Megan Tjandrasuwita, Jie Xu, Armando Solar-Lezama, Wojciech Matusik
- Abstract summary: We show that when a new robot is built from the same parts, its control can be quickly learned by reusing the modular controllers.
We achieve this with a framework called MeMo which learns (Me)aningful, (Mo)dular controllers.
We benchmark our framework in locomotion and grasping environments on simple to complex robot morphology transfer.
- Score: 25.541496793132183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robots are often built from standardized assemblies (e.g., arms, legs, or fingers), but each robot must be trained from scratch to control all the actuators of all the parts together. In this paper we demonstrate a new approach that takes a single robot and its controller as input and produces a set of modular controllers for each of these assemblies, such that when a new robot is built from the same parts, its control can be quickly learned by reusing the modular controllers. We achieve this with a framework called MeMo which learns (Me)aningful, (Mo)dular controllers. Specifically, we propose a novel modularity objective to learn an appropriate division of labor among the modules. We demonstrate that this objective can be optimized simultaneously with a standard behavior cloning loss via noise injection. We benchmark our framework in locomotion and grasping environments on simple-to-complex robot morphology transfer. We also show that the modules help in task transfer. On both structure and task transfer, MeMo achieves improved training efficiency compared to graph neural network and Transformer baselines.
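As a rough illustration of how "behavior cloning loss via noise injection" could be combined with a boss/module decomposition, here is a minimal PyTorch-style sketch. The class and parameter names (BossNet, ModuleNet, noise_std) are assumptions for illustration, not MeMo's actual code.

```python
# Minimal sketch: behavior cloning with noise injected into the boss-to-module
# interface, assuming a hierarchical boss/module controller. Illustrative only.
import torch
import torch.nn as nn

class BossNet(nn.Module):
    """Maps the full robot observation to one coordination signal per module."""
    def __init__(self, obs_dim, num_modules, signal_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, num_modules * signal_dim),
        )
        self.num_modules, self.signal_dim = num_modules, signal_dim

    def forward(self, obs):
        return self.net(obs).view(-1, self.num_modules, self.signal_dim)

class ModuleNet(nn.Module):
    """Controls the actuators of one assembly from its local obs and signal."""
    def __init__(self, local_obs_dim, signal_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(local_obs_dim + signal_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, local_obs, signal):
        return self.net(torch.cat([local_obs, signal], dim=-1))

def bc_noise_loss(boss, modules, obs, local_obs_list, expert_actions, noise_std=0.1):
    """Behavior cloning loss with noise injected into the boss-to-module signals.
    Perturbing the interface pushes each module to absorb as much of the control
    responsibility as possible, which is the intuition behind a modularity objective."""
    signals = boss(obs)
    signals = signals + noise_std * torch.randn_like(signals)  # noise injection
    actions = torch.cat(
        [m(local_obs_list[i], signals[:, i]) for i, m in enumerate(modules)],
        dim=-1,
    )
    return nn.functional.mse_loss(actions, expert_actions)
```

Under such a decomposition, transferring to a new robot built from the same assemblies would amount to freezing the trained modules and learning only a new boss network, which matches the reuse described in the abstract.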
Related papers
- One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion [18.556470359899855]
We introduce URMA, the Unified Robot Morphology Architecture.
Our framework brings the end-to-end Multi-Task Reinforcement Learning approach to the realm of legged robots.
We show that URMA can learn a locomotion policy on multiple embodiments that can be easily transferred to unseen robot platforms.
arXiv Detail & Related papers (2024-09-10T09:44:15Z) - Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z) - RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - Modular Controllers Facilitate the Co-Optimization of Morphology and Control in Soft Robots [0.5076419064097734]
We show that modular controllers are more robust to changes to a robot's body plan.
Increased transferability of modular controllers to similar body plans enables more effective brain-body co-optimization of soft robots.
arXiv Detail & Related papers (2023-06-12T16:36:46Z) - GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z) - Evolving Modular Soft Robots without Explicit Inter-Module Communication using Local Self-Attention [9.503773054285556]
We focus on Voxel-based Soft Robots (VSRs).
We use the same neural controller inside each voxel, but without any inter-voxel communication.
We show experimentally that the evolved robots are effective in the task of locomotion.
arXiv Detail & Related papers (2022-04-13T16:03:39Z) - MetaMorph: Learning Universal Controllers with Transformers [45.478223199658785]
In robotics, we primarily train a single robot for a single task.
Modular robot systems now allow for the flexible combination of general-purpose building blocks into task-optimized morphologies.
We propose MetaMorph, a Transformer-based approach to learn a universal controller over a modular robot design space.
arXiv Detail & Related papers (2022-03-22T17:58:31Z) - V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects [51.79035249464852]
We present a framework for learning multi-arm manipulation of articulated objects.
Our framework includes a variational generative model that learns contact point distribution over object rigid parts for each robot arm.
arXiv Detail & Related papers (2021-11-07T02:31:09Z) - Versatile modular neural locomotion control with fast learning [6.85316573653194]
Legged robots have significant potential to operate in highly unstructured environments.
Currently, controllers must be either manually designed for specific robots or automatically designed via machine learning methods.
We propose a simple yet versatile modular neural control structure with fast learning.
arXiv Detail & Related papers (2021-07-16T12:12:28Z) - Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z)
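The planner/primitive decomposition described in the last entry above can be pictured with a short sketch. The recurrent graph neural network is simplified to a plain GRU here, and all names (Primitive, HighLevelPlanner) are illustrative rather than that paper's actual interfaces.

```python
# Rough sketch of a high-level planner composing low-level movement primitives,
# assuming the decomposition described in the bimanual imitation-learning entry.
# The recurrent graph network is simplified to a GRU; names are illustrative.
import torch
import torch.nn as nn

class Primitive(nn.Module):
    """One elemental movement primitive; a recurrent net models its dynamics."""
    def __init__(self, obs_dim, hidden_dim, target_dim):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, target_dim)

    def forward(self, obs_seq, h=None):
        out, h = self.rnn(obs_seq, h)
        # A separate low-level controller (e.g., inverse kinematics) would turn
        # these predicted end-effector targets into joint commands.
        return self.head(out), h

class HighLevelPlanner(nn.Module):
    """Chooses which primitive to run next from the current observation."""
    def __init__(self, obs_dim, num_primitives):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_primitives))

    def forward(self, obs):
        return self.net(obs).argmax(dim=-1)  # index of the next primitive
```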