Dual-Arm Adversarial Robot Learning
- URL: http://arxiv.org/abs/2110.08066v1
- Date: Fri, 15 Oct 2021 12:51:57 GMT
- Title: Dual-Arm Adversarial Robot Learning
- Authors: Elie Aljalbout
- Abstract summary: We propose dual-arm settings as platforms for robot learning.
We will discuss the potential benefits of this setup as well as the challenges and research directions that can be pursued.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robot learning is a very promising topic for the future of automation and
machine intelligence. Future robots should be able to autonomously acquire
skills, learn to represent their environment, and interact with it. While these
topics have been explored in simulation, real-world robot learning research
still appears limited. This is due to the additional challenges encountered in
the real world, such as noisy sensors and actuators, safe exploration,
non-stationary dynamics, autonomous environment resetting, and the cost of
running experiments over long periods of time. Unless we develop scalable
solutions to these problems, learning complex tasks involving hand-eye
coordination and rich contacts will remain out of reach, feasible only in
controlled lab environments. We propose dual-arm settings as platforms for
robot learning. Such settings enable safe data collection for acquiring
manipulation skills as well as for training perception modules in a
robot-supervised manner. They also ease the process of resetting the
environment. Furthermore, adversarial learning could potentially boost the
generalization capability of robot learning methods by maximizing exploration
through game-theoretic objectives while ensuring safety through collaborative
task spaces. In this paper, we discuss the potential benefits of this setup as
well as the challenges and research directions that can be pursued.
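The abstract describes the adversarial idea only at a high level: exploration driven by a game-theoretic (zero-sum) objective between the two arms, with safety enforced by a shared, collaborative task space. As a purely illustrative, hypothetical toy (all names, dynamics, and bounds below are our own assumptions, not the paper's method), one arm can act as a protagonist maximizing a reaching return while the other acts as an adversary perturbing the goal, with both confined to the shared workspace:

```python
import numpy as np

rng = np.random.default_rng(0)

WORKSPACE = 1.0                      # half-width of the shared task space (assumed)
GOAL = np.array([0.5, -0.2])         # nominal goal for the protagonist arm (assumed)

def episode_return(gain, delta, n_steps=20):
    """Toy 2-D reaching rollout; returns the protagonist's return."""
    x = np.zeros(2)                                  # protagonist end-effector
    shift = np.clip(delta, -0.3, 0.3)                # adversary's bounded goal shift
    target = np.clip(GOAL + shift, -WORKSPACE, WORKSPACE)
    ret = 0.0
    for _ in range(n_steps):
        u = np.clip(gain * (target - x), -0.2, 0.2)  # proportional controller
        x = np.clip(x + u, -WORKSPACE, WORKSPACE)    # never leaves the shared space
        ret -= float(np.sum((x - target) ** 2))
    return ret

# Alternating zeroth-order updates on the zero-sum objective:
#   max_gain  min_delta  episode_return(gain, delta)
gain, delta, eps, lr = 0.1, np.zeros(2), 1e-2, 0.05
for _ in range(200):
    # protagonist ascends a finite-difference gradient estimate of its return
    g = (episode_return(gain + eps, delta)
         - episode_return(gain - eps, delta)) / (2 * eps)
    gain += lr * g
    # adversary descends the same return (zero-sum), coordinate by coordinate
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        g_i = (episode_return(gain, delta + d)
               - episode_return(gain, delta - d)) / (2 * eps)
        delta[i] -= lr * g_i

print(f"protagonist gain: {gain:.3f}, adversary goal shift: {delta}")
```

The max-min structure pushes the protagonist toward behavior that is robust to the worst perturbation the adversary can produce, while the workspace clipping stands in for the collaborative task space that keeps exploration safe.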
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Growing from Exploration: A self-exploring framework for robots based on foundation models [13.250831101705694]
We propose a framework named GExp, which enables robots to explore and learn autonomously without human intervention.
Inspired by the way that infants interact with the world, GExp encourages robots to understand and explore the environment with a series of self-generated tasks.
arXiv Detail & Related papers (2024-01-24T14:04:08Z)
- Bridging Active Exploration and Uncertainty-Aware Deployment Using Probabilistic Ensemble Neural Network Dynamics [11.946807588018595]
This paper presents a unified model-based reinforcement learning framework that bridges active exploration and uncertainty-aware deployment.
The two opposing tasks of exploration and deployment are optimized through state-of-the-art sampling-based MPC.
We conduct experiments on both autonomous vehicles and wheeled robots, showing promising results for both exploration and deployment (a minimal sketch of the underlying ensemble-disagreement pattern appears after this list).
arXiv Detail & Related papers (2023-05-20T17:20:12Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Auditing Robot Learning for Safety and Compliance during Deployment [4.742825811314168]
We study how best to audit robot learning algorithms to check their compatibility with humans.
We believe that this is a challenging problem that will require efforts from the entire robot learning community.
arXiv Detail & Related papers (2021-10-12T02:40:11Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Low Dimensional State Representation Learning with Robotics Priors in Continuous Action Spaces [8.692025477306212]
Reinforcement learning algorithms have proven to be capable of solving complicated robotics tasks in an end-to-end fashion.
We propose a framework combining the learning of a low-dimensional state representation, from high-dimensional observations coming from the robot's raw sensory readings, with the learning of the optimal policy.
arXiv Detail & Related papers (2021-07-04T15:42:01Z)
- The Ingredients of Real-World Robotic Reinforcement Learning [71.92831985295163]
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
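Several entries above (notably the probabilistic-ensemble paper) share one recurring mechanism: epistemic uncertainty estimated as disagreement among an ensemble of learned dynamics models, used as an exploration bonus during data collection and as a caution signal at deployment. The following hypothetical numpy sketch (our own toy on bootstrapped linear models, not any listed paper's implementation) shows that disagreement signal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D dynamics for illustration: x' = 0.9*x + 0.5*u + noise (assumed)
X = rng.uniform(-1, 1, size=(200, 2))            # columns: state, action
y = 0.9 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(200)

# Bootstrap ensemble of linear dynamics models (least squares)
K = 5
models = []
for _ in range(K):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    models.append(w)

def disagreement(x, u):
    """Ensemble variance: high where data was scarce -> worth exploring;
    low where models agree -> safer to rely on at deployment."""
    z = np.array([x, u])
    return np.array([w @ z for w in models]).var()

# Exploration picks the action with maximal disagreement (information proxy);
# deployment picks the action with minimal disagreement.
candidates = np.linspace(-1, 1, 21)
x = 0.0
explore_u = max(candidates, key=lambda u: disagreement(x, u))
deploy_u = min(candidates, key=lambda u: disagreement(x, u))
print(f"explore action: {explore_u:+.2f}, deploy action: {deploy_u:+.2f}")
```

In the full model-based setting, this variance would typically enter a sampling-based MPC objective, rewarded during exploration and penalized during deployment.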
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.