Automated Gait Generation For Walking, Soft Robotic Quadrupeds
- URL: http://arxiv.org/abs/2310.00498v1
- Date: Sat, 30 Sep 2023 21:31:30 GMT
- Title: Automated Gait Generation For Walking, Soft Robotic Quadrupeds
- Authors: Jake Ketchum, Sophia Schiffer, Muchen Sun, Pranav Kaarthik, Ryan L.
Truby, Todd D. Murphey
- Abstract summary: Gait generation for soft robots is challenging due to the nonlinear dynamics and high dimensional input spaces of soft actuators.
We present a sample-efficient, simulation-free method for self-generating soft robot gaits.
This is the first demonstration of completely autonomous gait generation in a soft robot.
- Score: 6.005998680766498
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gait generation for soft robots is challenging due to the nonlinear dynamics
and high dimensional input spaces of soft actuators. Limitations in soft
robotic control and perception force researchers to hand-craft open loop
controllers for gait sequences, which is a non-trivial process. Moreover, short
soft actuator lifespans and natural variations in actuator behavior limit
machine learning techniques to settings that can be learned on the same time
scales as robot deployment. Lastly, simulation is not always possible, due to
the heterogeneity and nonlinearity of soft robotic materials and the changes in
their dynamics caused by wear. We present a sample-efficient, simulation-free
method for self-generating soft robot gaits that requires minimal computation. This
technique is demonstrated on a motorized soft robotic quadruped that walks
using four legs constructed from 16 "handed shearing auxetic" (HSA) actuators.
To manage the dimension of the search space, gaits are composed of two
sequential sets of leg motions selected from 7 possible primitives. Pairs of
primitives are executed on one leg at a time; we then select the
best-performing pair to execute while moving on to subsequent legs. This method
-- which uses no simulation, sophisticated computation, or user input --
consistently generates good translation and rotation gaits in as little as 4
minutes of hardware experimentation, outperforming hand-crafted gaits. This is
the first demonstration of completely autonomous gait generation in a soft
robot.
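The greedy, per-leg search described in the abstract can be sketched in code. This is a minimal sketch, not the authors' implementation: the `evaluate_gait` reward is a toy surrogate standing in for an actual hardware trial, and the primitive library is represented only by indices.

```python
import itertools

PRIMITIVES = range(7)  # indices into a library of 7 leg-motion primitives

def evaluate_gait(gait):
    """Hypothetical stand-in for a hardware trial: execute the partial
    gait on the robot and return measured translation (or rotation).
    Here a toy surrogate reward keeps the sketch runnable."""
    return -sum((a - 2) ** 2 + (b - 5) ** 2 for a, b in gait)

def generate_gait(num_legs=4):
    """Greedy per-leg search: fix the best ordered primitive pair for
    each leg in turn, never revisiting earlier legs. Each leg costs at
    most 7 * 7 = 49 trials, so the search stays sample-efficient."""
    gait = []  # list of (first_primitive, second_primitive) per leg
    for leg in range(num_legs):
        best_pair, best_score = None, float("-inf")
        for pair in itertools.product(PRIMITIVES, repeat=2):
            # Evaluate the candidate pair on this leg together with the
            # pairs already fixed on earlier legs.
            score = evaluate_gait(gait + [pair])
            if score > best_score:
                best_pair, best_score = pair, score
        gait.append(best_pair)
    return gait

print(generate_gait())  # toy reward peaks at pair (2, 5) on every leg
```

The greedy structure is what keeps hardware time low: the search is linear in the number of legs rather than exponential over all joint leg combinations.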
Related papers
- Knowledge-based Neural Ordinary Differential Equations for Cosserat Rod-based Soft Robots [10.511173252165287]
It is difficult to model the dynamics of soft robots due to their high spatial dimensionality.
Deep learning algorithms have shown promise in data-driven modeling of soft robots.
We propose KNODE-Cosserat, a framework that combines first-principle physics models and neural ordinary differential equations.
arXiv Detail & Related papers (2024-08-14T19:07:28Z)
- DiffGen: Robot Demonstration Generation via Differentiable Physics Simulation, Differentiable Rendering, and Vision-Language Model [72.66465487508556]
DiffGen is a novel framework that integrates differentiable physics simulation, differentiable rendering, and a vision-language model.
It can generate realistic robot demonstrations by minimizing the distance between the embedding of the language instruction and the embedding of the simulated observation.
Experiments demonstrate that with DiffGen, we could efficiently and effectively generate robot data with minimal human effort or training time.
arXiv Detail & Related papers (2024-05-12T15:38:17Z)
- Learning Quadruped Locomotion Using Differentiable Simulation [31.80380408663424]
Differentiable simulation promises fast convergence and stable training.
This work proposes a new differentiable simulation framework to overcome these challenges.
Our framework enables learning quadruped walking in simulation in minutes without parallelization.
arXiv Detail & Related papers (2024-03-21T22:18:59Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Robustness for Free: Quality-Diversity Driven Discovery of Agile Soft Robotic Gaits [0.7829600874436199]
We show how Quality Diversity Algorithms can produce repertoires of gaits robust to changing terrains.
This robustness significantly outperforms that of gaits produced by a single-objective optimization algorithm.
arXiv Detail & Related papers (2023-11-02T14:00:11Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z)
- REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching the action or state transition distribution, including imitation learning methods, fail due to optimal action and/or state distribution being mismatched in different robots.
We propose a novel method named REvolveR that uses continuous evolutionary models for robotic policy transfer, implemented in a physics simulator.
arXiv Detail & Related papers (2022-02-10T18:50:25Z)
- In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning [8.365690203298966]
We report the successful execution of in-air knotting of rope using a dual-arm two-finger robot based on deep learning.
It is difficult to prepare manual descriptions of appropriate robot motions for all object states in advance.
We constructed a model that instructed the robot to perform bowknots and overhand knots based on two deep neural networks trained on data gathered from its sensorimotor experience.
arXiv Detail & Related papers (2021-03-17T02:11:58Z)
- Behavioral Repertoires for Soft Tensegrity Robots [0.0]
Mobile soft robots offer compelling applications in fields ranging from urban search and rescue to planetary exploration.
A critical challenge of soft robotic control is that the nonlinear dynamics imposed by soft materials often result in complex behaviors that are counterintuitive and hard to model or predict.
In this work we employ a Quality Diversity Algorithm, running model-free on a physical soft tensegrity robot, that autonomously generates a behavioral repertoire with no a priori knowledge of the robot dynamics and minimal human intervention.
arXiv Detail & Related papers (2020-09-23T00:09:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.