Low Dimensional State Representation Learning with Robotics Priors in
Continuous Action Spaces
- URL: http://arxiv.org/abs/2107.01667v1
- Date: Sun, 4 Jul 2021 15:42:01 GMT
- Title: Low Dimensional State Representation Learning with Robotics Priors in
Continuous Action Spaces
- Authors: Nicolò Botteghi, Khaled Alaa, Mannes Poel, Beril Sirmacek, Christoph
Brune, Abeje Mersha, Stefano Stramigioli
- Abstract summary: Reinforcement learning algorithms have proven to be capable of solving complicated robotics tasks in an end-to-end fashion.
We propose a framework combining the learning of a low-dimensional state representation, from high-dimensional observations coming from the robot's raw sensory readings, with the learning of the optimal policy.
- Score: 8.692025477306212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous robots require high degrees of cognitive and motoric intelligence
to come into our everyday life. In non-structured environments and in the
presence of uncertainties, such degrees of intelligence are not easy to obtain.
Reinforcement learning algorithms have proven to be capable of solving
complicated robotics tasks in an end-to-end fashion without any need for
hand-crafted features or policies. Especially in the context of robotics, in
which the cost of real-world data is usually extremely high, reinforcement
learning solutions achieving high sample efficiency are needed. In this paper,
we propose a framework combining the learning of a low-dimensional state
representation, from high-dimensional observations coming from the robot's raw
sensory readings, with the learning of the optimal policy, given the learned
state representation. We evaluate our framework in the context of mobile robot
navigation in the case of continuous state and action spaces. Moreover, we
study the problem of transferring what is learned in the simulated virtual
environment to the real robot, without further retraining on real-world data,
in the presence of visual and depth distractors such as lighting changes and
moving obstacles.
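The abstract does not spell out the robotics priors named in the title, so the following is only a minimal, illustrative sketch: it shows how the classic robotic priors of Jonschkowski and Brock (temporal coherence, proportionality, causality, repeatability) can shape an encoder that maps high-dimensional sensory observations to a low-dimensional state. The encoder architecture, the transition-pairing scheme, and the loss weights below are assumptions for illustration; the paper's continuous-action formulation of the priors may differ in detail.

```python
# Illustrative sketch (not the authors' code) of state representation learning
# with robotic priors, assuming a PyTorch encoder and pre-paired transitions.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps a raw observation (e.g. flattened sensor readings) to a low-dim state."""

    def __init__(self, obs_dim, state_dim=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, obs):
        return self.net(obs)


def robotics_priors_loss(s_t, s_t1, rewards, pair_idx, w=(1.0, 1.0, 1.0, 1.0)):
    """Robotic-prior losses on a batch of encoded transitions.

    s_t, s_t1 : encoded states at time t and t+1, shape [B, state_dim]
    rewards   : reward observed after each transition, shape [B]
    pair_idx  : (i, j) index tensors selecting transition pairs with similar
                actions (in continuous action spaces, "similar" would be
                defined by a distance threshold -- an assumption here)
    """
    ds = s_t1 - s_t  # per-transition state change
    i, j = pair_idx

    # Temporal coherence: consecutive states should change slowly.
    temp = (ds.norm(dim=1) ** 2).mean()

    # Proportionality: similar actions -> similar magnitude of state change.
    prop = ((ds[i].norm(dim=1) - ds[j].norm(dim=1)) ** 2).mean()

    # Causality: similar actions but different rewards -> states should differ.
    diff_r = (rewards[i] != rewards[j]).float()
    caus = (diff_r * torch.exp(-(s_t[i] - s_t[j]).norm(dim=1) ** 2)).mean()

    # Repeatability: similar actions in nearby states -> similar state change.
    rep = (torch.exp(-(s_t[i] - s_t[j]).norm(dim=1) ** 2)
           * (ds[i] - ds[j]).norm(dim=1) ** 2).mean()

    return w[0] * temp + w[1] * prop + w[2] * caus + w[3] * rep
```

The low-dimensional state produced by such an encoder would then replace the raw observation as input to a standard continuous-control RL algorithm, which is the coupling of representation learning and policy learning that the abstract describes.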
Related papers
- A Retrospective on the Robot Air Hockey Challenge: Benchmarking Robust, Reliable, and Safe Learning Techniques for Real-world Robotics [53.33976793493801]
We organized the Robot Air Hockey Challenge at the NeurIPS 2023 conference.
We focus on practical challenges in robotics, such as the sim-to-real gap, low-level control issues, safety problems, real-time requirements, and the limited availability of real-world data.
Results show that solutions combining learning-based approaches with prior knowledge outperform those relying solely on data when real-world deployment is challenging.
arXiv Detail & Related papers (2024-11-08T17:20:47Z) - Grounding Robot Policies with Visuomotor Language Guidance [15.774237279917594]
We propose an agent-based framework for grounding robot policies to the current context.
The proposed framework is composed of a set of conversational agents designed for specific roles.
We demonstrate that our approach can effectively guide manipulation policies to achieve significantly higher success rates.
arXiv Detail & Related papers (2024-10-09T02:00:37Z) - Autonomous Robotic Reinforcement Learning with Asynchronous Human
Feedback [27.223725464754853]
GEAR enables robots to be placed in real-world environments and left to train autonomously without interruption.
The system streams robot experience to a web interface, requiring only occasional asynchronous feedback from remote, crowdsourced, non-expert humans.
arXiv Detail & Related papers (2023-10-31T16:43:56Z) - Bridging Active Exploration and Uncertainty-Aware Deployment Using
Probabilistic Ensemble Neural Network Dynamics [11.946807588018595]
This paper presents a unified model-based reinforcement learning framework that bridges active exploration and uncertainty-aware deployment.
The two opposing tasks of exploration and deployment are optimized through state-of-the-art sampling-based MPC.
We conduct experiments on both autonomous vehicles and wheeled robots, showing promising results for both exploration and deployment.
arXiv Detail & Related papers (2023-05-20T17:20:12Z) - Dual-Arm Adversarial Robot Learning [0.6091702876917281]
We propose dual-arm settings as platforms for robot learning.
We will discuss the potential benefits of this setup as well as the challenges and research directions that can be pursued.
arXiv Detail & Related papers (2021-10-15T12:51:57Z) - Cognitive architecture aided by working-memory for self-supervised
multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z) - Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z) - Low Dimensional State Representation Learning with Reward-shaped Priors [7.211095654886105]
We propose a method that aims at learning a mapping from the observations into a lower-dimensional state space.
This mapping is learned with unsupervised learning using loss functions shaped to incorporate prior knowledge of the environment and the task.
We test the method on several mobile robot navigation tasks in a simulation environment and also on a real robot.
arXiv Detail & Related papers (2020-07-29T13:00:39Z) - Sim2Real for Peg-Hole Insertion with Eye-in-Hand Camera [58.720142291102135]
We use a simulator to learn the peg-hole insertion problem and then transfer the learned model to the real robot.
We show that the transferred policy, which only takes RGB-D and joint information (proprioception) can perform well on the real robot.
arXiv Detail & Related papers (2020-05-29T05:58:54Z) - The Ingredients of Real-World Robotic Reinforcement Learning [71.92831985295163]
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z) - SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.