Monolithic vs. hybrid controller for multi-objective Sim-to-Real learning
- URL: http://arxiv.org/abs/2108.07514v1
- Date: Tue, 17 Aug 2021 09:02:33 GMT
- Title: Monolithic vs. hybrid controller for multi-objective Sim-to-Real learning
- Authors: Atakan Dag, Alexandre Angleraud, Wenyan Yang, Nataliya Strokina, Roel
S. Pieters, Minna Lanz, Joni-Kristian Kamarainen
- Abstract summary: Simulation to real (Sim-to-Real) is an attractive approach to construct controllers for robotic tasks.
In this work, we compare two approaches in the multi-objective setting of a robot manipulator to reach a target while avoiding an obstacle.
Our findings show that the training of a hybrid controller is easier and obtains a better success-failure trade-off than a monolithic controller.
- Score: 58.32117053812925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation to real (Sim-to-Real) is an attractive approach to construct
controllers for robotic tasks that are easier to simulate than to analytically
solve. Working Sim-to-Real solutions have been demonstrated for tasks with a
clear single objective such as "reach the target". Real world applications,
however, often consist of multiple simultaneous objectives such as "reach the
target" but "avoid obstacles". A straightforward solution in the context of
reinforcement learning (RL) is to combine multiple objectives into a multi-term
reward function and train a single monolithic controller. Recently, a hybrid
solution based on pre-trained single objective controllers and a switching rule
between them was proposed. In this work, we compare these two approaches in the
multi-objective setting of a robot manipulator to reach a target while avoiding
an obstacle. Our findings show that the training of a hybrid controller is
easier and obtains a better success-failure trade-off than a monolithic
controller. The controllers trained in simulation were verified on a real
set-up.
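The two approaches compared in the abstract can be sketched in a few lines. This is a minimal 2-D toy illustration, not the authors' implementation: the function names, weights, and the distance-based switching threshold are assumptions chosen for clarity.

```python
import math

def reach_reward(pos, target):
    # "Reach the target" objective: negative distance, higher is closer.
    return -math.dist(pos, target)

def avoid_reward(pos, obstacle, margin=0.5):
    # "Avoid the obstacle" objective: penalize intrusion into a safety margin.
    d = math.dist(pos, obstacle)
    return 0.0 if d >= margin else -(margin - d)

def monolithic_reward(pos, target, obstacle, w_reach=1.0, w_avoid=5.0):
    # Monolithic approach: combine both objectives into one multi-term
    # reward and train a single controller on it.
    return w_reach * reach_reward(pos, target) + w_avoid * avoid_reward(pos, obstacle)

def hybrid_action(pos, target, obstacle, reach_policy, avoid_policy, switch_dist=0.5):
    # Hybrid approach: two pre-trained single-objective controllers plus a
    # switching rule; here the rule is a simple obstacle-distance threshold.
    if math.dist(pos, obstacle) < switch_dist:
        return avoid_policy(pos, obstacle)
    return reach_policy(pos, target)
```

In the monolithic case the weights couple the objectives during training, which is what makes tuning the success-failure trade-off harder; in the hybrid case each policy is trained on a single objective and only the switching rule arbitrates between them at run time.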
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z) - Learning to Fly in Seconds [7.259696592534715]
We show how curriculum learning and a highly optimized simulator enhance sample complexity and lead to fast training times.
Our framework enables Simulation-to-Reality (Sim2Real) transfer for direct control after only 18 seconds of training on a consumer-grade laptop.
arXiv Detail & Related papers (2023-11-22T01:06:45Z) - Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration [8.343657309038285]
Reinforcement Learning is a powerful framework for developing such robot controllers.
We propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies.
We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers.
arXiv Detail & Related papers (2023-08-04T16:55:00Z) - Bi-Manual Block Assembly via Sim-to-Real Reinforcement Learning [24.223788665601678]
Two xArm6 robots solve the U-shape assembly task with a success rate of above 90% in simulation, and 50% on real hardware without any additional real-world fine-tuning.
Our results present a significant step forward for bi-arm capability on real hardware, and we hope our system can inspire future research on deep RL and Sim2Real transfer of bi-manual policies.
arXiv Detail & Related papers (2023-03-27T01:25:24Z) - Efficient Skill Acquisition for Complex Manipulation Tasks in Obstructed Environments [18.348489257164356]
We propose a system for efficient skill acquisition that leverages an object-centric generative model (OCGM) for versatile goal identification.
OCGM enables one-shot target object identification and re-identification in new scenes, allowing MP to guide the robot to the target object while avoiding obstacles.
arXiv Detail & Related papers (2023-03-06T18:49:59Z) - Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z) - Reactive Long Horizon Task Execution via Visual Skill and Precondition Models [59.76233967614774]
We describe an approach for sim-to-real training that can accomplish unseen robotic tasks using models learned in simulation to ground components of a simple task planner.
We show an increase in success rate from 91.6% to 98% in simulation and from 10% to 80% success rate in the real-world as compared with naive baselines.
arXiv Detail & Related papers (2020-11-17T15:24:01Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z) - Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.