OSCAR: Data-Driven Operational Space Control for Adaptive and Robust
Robot Manipulation
- URL: http://arxiv.org/abs/2110.00704v1
- Date: Sat, 2 Oct 2021 01:21:38 GMT
- Title: OSCAR: Data-Driven Operational Space Control for Adaptive and Robust
Robot Manipulation
- Authors: Josiah Wong, Viktor Makoviychuk, Anima Anandkumar, Yuke Zhu
- Abstract summary: Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.
We propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors.
We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines.
- Score: 50.59541802645156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning performant robot manipulation policies can be challenging due to
high-dimensional continuous actions and complex physics-based dynamics. This
can be alleviated through intelligent choice of action space. Operational Space
Control (OSC) has been used as an effective task-space controller for
manipulation. Nonetheless, its strength depends on the underlying modeling
fidelity, and is prone to failure when there are modeling errors. In this work,
we propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of
OSC that compensates for modeling errors by inferring relevant dynamics
parameters from online trajectories. OSCAR decomposes dynamics learning into
task-agnostic and task-specific phases, decoupling the dynamics dependencies of
the robot and the extrinsics due to its environment. This structure enables
robust zero-shot performance under out-of-distribution conditions and rapid adaptation to
significant domain shifts through additional finetuning. We evaluate our method
on a variety of simulated manipulation problems, and find substantial
improvements over an array of controller baselines. For more results and
information, please visit https://cremebrule.github.io/oscar-web/.
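As context for the abstract above, here is a minimal sketch of the classical OSC torque law that OSCAR builds on, with an optional hook where a data-driven correction could enter. The `learned_residual` callable and all gain values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def osc_torques(q, dq, x_err, dx_err, J, M, kp=150.0, kd=None,
                learned_residual=None):
    """Classical Operational Space Control torque computation.

    q, dq: joint positions/velocities (n,)
    x_err, dx_err: task-space pose and velocity errors (6,)
    J: task Jacobian (6, n); M: joint-space mass matrix (n, n)
    learned_residual: optional callable (q, dq) -> (n, n) correction to M,
        a placeholder for data-driven compensation of modeling errors
        in the spirit of OSCAR (illustrative only).
    """
    if kd is None:
        kd = 2.0 * np.sqrt(kp)               # critical damping
    if learned_residual is not None:
        M = M + learned_residual(q, dq)      # compensate modeling errors
    M_inv = np.linalg.inv(M)
    # Task-space inertia (operational space mass matrix); pseudo-inverse
    # keeps the computation stable near kinematic singularities.
    task_inertia = np.linalg.pinv(J @ M_inv @ J.T)
    # PD law in task space, mapped back to joint torques through J^T.
    desired_wrench = task_inertia @ (kp * x_err + kd * dx_err)
    return J.T @ desired_wrench
```

The key point the abstract makes is that everything above hinges on the fidelity of `M` and `J`; OSCAR learns to correct those quantities from online trajectories rather than trusting the nominal model.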
Related papers
- Nonprehensile Planar Manipulation through Reinforcement Learning with
Multimodal Categorical Exploration [8.343657309038285]
Reinforcement Learning is a powerful framework for developing robot controllers.
We propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies.
We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers.
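A minimal sketch of the multimodal categorical exploration idea: each action dimension gets its own softmax over discretized values, so the policy can represent several distinct pushing modes. Network sizes, bin counts, and names are assumptions for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class CategoricalPushPolicy(nn.Module):
    """Per-dimension categorical head over discretized pusher actions."""

    def __init__(self, obs_dim=16, action_dims=2, bins=11):
        super().__init__()
        self.action_dims, self.bins = action_dims, bins
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dims * bins),
        )
        # Map bin indices back to continuous actions in [-1, 1].
        self.register_buffer("bin_values", torch.linspace(-1.0, 1.0, bins))

    def forward(self, obs):
        logits = self.net(obs).view(-1, self.action_dims, self.bins)
        dist = torch.distributions.Categorical(logits=logits)
        idx = dist.sample()                    # (batch, action_dims)
        actions = self.bin_values[idx]         # discretized continuous action
        return actions, dist.log_prob(idx).sum(-1)
```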
arXiv Detail & Related papers (2023-08-04T16:55:00Z)
- Context-Conditional Navigation with a Learning-Based Terrain- and Robot-Aware
Dynamics Model [11.800678688260081]
We develop a novel probabilistic, terrain- and robot-aware forward dynamics model, termed TRADYN.
We evaluate our method in a simulated 2D navigation setting with a unicycle-like robot and different terrain layouts with spatially varying friction coefficients.
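A sketch of a terrain- and robot-aware probabilistic forward model in the spirit of TRADYN: the next-state prediction is conditioned on local terrain features (e.g., a friction descriptor) and robot parameters. All dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

class TerrainRobotDynamics(nn.Module):
    """Forward dynamics conditioned on terrain features and robot params."""

    def __init__(self, state_dim=4, action_dim=2, terrain_dim=8, robot_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + terrain_dim + robot_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2 * state_dim),   # predict mean and log-variance
        )

    def forward(self, state, action, terrain, robot):
        h = self.net(torch.cat([state, action, terrain, robot], dim=-1))
        mean, log_var = h.chunk(2, dim=-1)
        # Probabilistic prediction of the state change.
        return state + mean, log_var
```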
arXiv Detail & Related papers (2023-07-18T12:42:59Z)
- Domain Randomization for Robust, Affordable and Effective Closed-loop Control
of Soft Robots [10.977130974626668]
Soft robots are gaining popularity thanks to their intrinsic safety to contacts and adaptability.
We show how Domain Randomization (DR) can solve this problem by enhancing RL policies for soft robots.
We introduce a novel algorithmic extension to previous adaptive domain randomization methods for the automatic inference of dynamics parameters for deformable objects.
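A minimal domain-randomization sketch: dynamics parameters are resampled once per training episode so the policy never overfits one simulator configuration. The parameter names and ranges below are placeholders; the paper's contribution is to *infer* these distributions from data rather than hand-pick them.

```python
import numpy as np

def randomize_dynamics(rng, ranges=None):
    """Sample simulator dynamics parameters once per training episode."""
    if ranges is None:
        ranges = {
            "elastic_modulus": (0.5e5, 2.0e5),  # soft-body stiffness proxy
            "damping":         (0.01, 0.20),
            "friction":        (0.3, 1.2),
        }
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

# Usage: resample at every episode reset so the policy sees varied dynamics.
rng = np.random.default_rng(0)
params = randomize_dynamics(rng)
```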
arXiv Detail & Related papers (2023-03-07T18:50:00Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data
using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics model and/or simulator and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
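A sketch of the core residual-learning idea: fit a model of `real_next - sim_next` from state-action pairs and add it back to the simulator's prediction. The paper embeds a learned residual inside an unscented Kalman filter; a least-squares regressor stands in here purely for illustration.

```python
import numpy as np

def fit_residual_model(states, actions, sim_next, real_next):
    """Fit a linear model of the sim-to-real residual by least squares."""
    X = np.concatenate([states, actions], axis=1)          # (N, s+a)
    X = np.concatenate([X, np.ones((len(X), 1))], axis=1)  # bias column
    Y = real_next - sim_next                               # (N, s) residuals
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def corrected_step(sim_step, W, state, action):
    """Close the reality gap: simulator prediction plus learned residual."""
    x = np.concatenate([state, action, [1.0]])
    return sim_step(state, action) + x @ W
```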
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- GEM: Group Enhanced Model for Learning Dynamical Control Systems
[78.56159072162103]
We build effective dynamical models that are amenable to sample-based learning.
We show that learning the dynamics on a Lie algebra vector space is more effective than learning a direct state transition model.
This work sheds light on a connection between learning of dynamics and Lie group properties, which opens doors for new research directions.
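A worked example of the Lie-algebra idea for the planar case: a learned model regresses a twist in se(2) (an ordinary vector space), and the exponential map turns it into a valid rigid-body motion in SE(2). This is a standard construction used to illustrate the summary, not GEM's exact formulation.

```python
import numpy as np

def se2_exp(twist, dt=1.0):
    """Exponential map from the Lie algebra se(2) to the group SE(2).

    twist = (vx, vy, omega): linear and angular velocity.
    """
    vx, vy, om = np.asarray(twist, dtype=float) * dt
    if abs(om) < 1e-8:                        # pure translation limit
        R = np.eye(2)
        t = np.array([vx, vy])
    else:
        s, c = np.sin(om), np.cos(om)
        R = np.array([[c, -s], [s, c]])
        V = np.array([[s, -(1.0 - c)],
                      [1.0 - c, s]]) / om
        t = V @ np.array([vx, vy])
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

# A learned model outputs `twist`; composing poses stays on the manifold:
# next_pose = pose @ se2_exp(predicted_twist, dt)
```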
arXiv Detail & Related papers (2021-04-07T01:08:18Z)
- Meta-Reinforcement Learning for Adaptive Motor Control in Changing Robot
Dynamics and Environments [3.5309638744466167]
This work developed a meta-learning approach that adapts the control policy on the fly to different changing conditions for robust locomotion.
The proposed method constantly updates the interaction model, samples feasible sequences of actions to estimate the state-action trajectories, and then applies the optimal actions to maximize the reward.
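A minimal sketch of the sample-evaluate-act loop described above, using random-shooting MPC over a learned model. The `model(state, action)` and `reward_fn(state)` signatures and all sizes are assumptions for illustration.

```python
import numpy as np

def mpc_random_shooting(model, reward_fn, state, rng,
                        horizon=10, n_samples=256, action_dim=8):
    """Random-shooting MPC with a (continually updated) learned model."""
    plans = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, action_dim))
    returns = np.zeros(n_samples)
    for i, plan in enumerate(plans):
        s = state
        for a in plan:
            s = model(s, a)                  # learned forward dynamics
            returns[i] += reward_fn(s)
    return plans[np.argmax(returns), 0]      # execute best first action
```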
arXiv Detail & Related papers (2021-01-19T12:57:12Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning
[51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
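To make "predictions in trajectory distribution space" concrete, here is a 1-D dynamic movement primitive (DMP) rollout, the kind of dynamical system NDPs embed in the policy: the network predicts `weights` and `goal`, and the DMP integrates them into a smooth trajectory. The basis-function layout is a common choice, not necessarily the paper's exact parameterization.

```python
import numpy as np

def dmp_rollout(weights, goal, y0, dt=0.01, steps=300,
                alpha=25.0, beta=6.25, tau=1.0):
    """Roll out a 1-D dynamic movement primitive."""
    n_basis = len(weights)
    centers = np.exp(-np.linspace(0, 1, n_basis) * 3.0)  # spread in phase
    widths = n_basis ** 1.5 / centers
    y, dy, x = y0, 0.0, 1.0          # state, velocity, phase variable
    traj = []
    for _ in range(steps):
        psi = np.exp(-widths * (x - centers) ** 2)
        forcing = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        ddy = alpha * (beta * (goal - y) - dy) + forcing
        dy += ddy * dt / tau
        y += dy * dt / tau
        x += -alpha / 3.0 * x * dt / tau     # canonical system decay
        traj.append(y)
    return np.array(traj)
```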
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- Smooth Exploration for Robotic Reinforcement Learning [11.215352918313577]
Reinforcement learning (RL) enables robots to learn skills from interactions with the real world.
In practice, the unstructured step-based exploration used in Deep RL leads to jerky motion patterns on real robots.
We address these issues by adapting state-dependent exploration (SDE) to current Deep RL algorithms.
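A sketch of the state-dependent exploration idea: instead of adding fresh Gaussian noise at every step (the source of jerky motion), a noise parameter matrix is sampled once per episode and the perturbation becomes a deterministic function of state features, so the same state yields the same perturbation within an episode. Feature choice and sizes are illustrative.

```python
import numpy as np

class StateDependentExploration:
    """Smooth exploration noise as a fixed function of state features."""

    def __init__(self, feat_dim, action_dim, sigma=0.1, rng=None):
        self.rng = rng or np.random.default_rng()
        self.shape = (feat_dim, action_dim)
        self.sigma = sigma
        self.reset()

    def reset(self):
        # Resample the noise matrix only at episode boundaries.
        self.theta_eps = self.rng.normal(0.0, self.sigma, self.shape)

    def noise(self, features):
        # Same state -> same perturbation for the whole episode.
        return features @ self.theta_eps

# action = policy_mean(obs) + sde.noise(feature_fn(obs))
```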
arXiv Detail & Related papers (2020-05-12T12:28:25Z)
- Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads
[69.21503033239985]
Transporting suspended payloads is challenging for autonomous aerial vehicles.
We propose a meta-learning approach that "learns how to learn" models of altered dynamics within seconds of post-connection flight data.
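A minimal stand-in for the fast-adaptation step: starting from meta-trained weights, take a few gradient steps on transitions gathered in the first seconds after payload connection. The `model(s, a) -> predicted next state` signature is an assumption; the paper's meta-training procedure is not shown here.

```python
import copy
import torch

def adapt_dynamics_model(model, s, a, s_next, inner_lr=1e-2, steps=5):
    """Few-shot adaptation of a learned dynamics model to altered dynamics."""
    adapted = copy.deepcopy(model)       # keep the meta-weights intact
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(adapted(s, a), s_next)
        loss.backward()
        opt.step()
    return adapted                       # use inside MPC for control
```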
arXiv Detail & Related papers (2020-04-23T17:43:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.