DRL: Deep Reinforcement Learning for Intelligent Robot Control --
Concept, Literature, and Future
- URL: http://arxiv.org/abs/2105.13806v1
- Date: Tue, 20 Apr 2021 15:26:10 GMT
- Title: DRL: Deep Reinforcement Learning for Intelligent Robot Control --
Concept, Literature, and Future
- Authors: Aras Dargazany
- Abstract summary: The combination of machine learning, computer vision, and robotic systems motivates this work toward a vision-based learning framework for intelligent robot control as the ultimate goal (a vision-based learning robot).
This work specifically introduces deep reinforcement learning as the learning framework, a general-purpose framework for AI (AGI) that is application-independent and platform-independent.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The combination of machine learning (for generating machine intelligence), computer vision (for better environment perception), and robotic systems (for controlled environment interaction) motivates this work toward proposing a vision-based learning framework for intelligent robot control as the ultimate goal (a vision-based learning robot). This work specifically introduces deep reinforcement learning as the learning framework, a general-purpose framework for AI (AGI) that is application-independent and platform-independent. In terms of robot control, this framework proposes a high-level control architecture that is independent of the low-level control, meaning the two required levels of control can be developed separately from each other. In this respect, the high-level control creates the intelligence required to control the platform using low-level control data recorded from that same platform by a trainer. The recorded low-level control data simply indicates the successful and failed experiences, or sequences of experiments, conducted by a trainer using the same robotic platform. Each recorded sequence is composed of observation data (sensor input), generated reward (feedback value), and action data (controller output). For the experimental platform and experiments, vision sensors are used to perceive the environment, different kinematic controllers create the required motion commands based on the platform application, deep learning approaches generate the required intelligence, and finally reinforcement learning techniques incrementally improve the generated intelligence until the mission is accomplished by the robot.
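A minimal sketch of the recorded data the abstract describes, assuming a Python representation; the class and field names below are illustrative and not taken from the paper:

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class Experience:
    """One recorded step on the robot platform (names are illustrative)."""
    observation: np.ndarray  # sensor input, e.g. a camera frame
    action: np.ndarray       # output sent to the low-level controller
    reward: float            # feedback value indicating progress or failure


@dataclass
class Trajectory:
    """A sequence of experiences from one trainer-conducted experiment."""
    steps: List[Experience] = field(default_factory=list)
    successful: bool = False  # whether this experiment accomplished the mission

    def record(self, observation: np.ndarray, action: np.ndarray, reward: float) -> None:
        self.steps.append(Experience(observation, action, reward))
```

Under this sketch, the high-level (learning) control would be trained on collections of such trajectories, while the low-level kinematic controller that executes the actions remains a separate, platform-specific component, matching the separation of the two control levels described above.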
Related papers
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real
and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - A Reinforcement Learning Approach for Robotic Unloading from Visual
Observations [1.420663986837751]
In this work, we focus on a robotic unloading problem from visual observations.
We propose a hierarchical controller structure that combines a high-level decision-making module with classical motion control.
Our experiments demonstrate that both these elements play a crucial role in achieving improved learning performance.
arXiv Detail & Related papers (2023-09-12T22:22:28Z) - Using Knowledge Representation and Task Planning for Robot-agnostic
Skills on the Example of Contact-Rich Wiping Tasks [44.99833362998488]
We show how a single robot skill that utilizes knowledge representation, task planning, and automatic selection of skill implementations can be executed in different contexts.
We demonstrate how the skill-based control platform enables this with contact-rich wiping tasks on different robot systems.
arXiv Detail & Related papers (2023-08-27T21:17:32Z) - Exploring Visual Pre-training for Robot Manipulation: Datasets, Models
and Methods [14.780597545674157]
We investigate the effects of visual pre-training strategies on robot manipulation tasks from three fundamental perspectives.
We propose a visual pre-training scheme for robot manipulation termed Vi-PRoM, which combines self-supervised learning and supervised learning.
arXiv Detail & Related papers (2023-08-07T14:24:52Z) - Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from
Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z) - Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Dexterous Manipulation from Images: Autonomous Real-World RL via Substep
Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z) - Active Predicting Coding: Brain-Inspired Reinforcement Learning for
Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z) - Learning to Fly -- a Gym Environment with PyBullet Physics for
Reinforcement Learning of Multi-agent Quadcopter Control [0.0]
We propose an open-source environment for multiple quadcopters based on the Bullet physics engine.
Its multi-agent and vision-based reinforcement learning interfaces, as well as its support of realistic collisions and aerodynamic effects, make it, to the best of our knowledge, a first of its kind.
arXiv Detail & Related papers (2021-03-03T02:47:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.