Autonomous Warehouse Robot using Deep Q-Learning
- URL: http://arxiv.org/abs/2202.10019v1
- Date: Mon, 21 Feb 2022 07:16:51 GMT
- Title: Autonomous Warehouse Robot using Deep Q-Learning
- Authors: Ismot Sadik Peyas, Zahid Hasan, Md. Rafat Rahman Tushar, Al Musabbir,
Raisa Mehjabin Azni, Shahnewaz Siddique
- Abstract summary: In warehouses, specialized agents need to navigate, avoid obstacles and maximize the use of space.
We propose using Deep Reinforcement Learning (DRL) to address the robot navigation and obstacle avoidance problem.
We use a strategic variation of Q-tables to perform multi-agent Q-learning.
- Score: 0.5138012450471438
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In warehouses, specialized agents need to navigate, avoid obstacles and
maximize the use of space in the warehouse environment. Due to the
unpredictability of these environments, reinforcement learning approaches can
be applied to complete these tasks. In this paper, we propose using Deep
Reinforcement Learning (DRL) to address the robot navigation and obstacle
avoidance problem and traditional Q-learning with minor variations to maximize
the use of space for product placement. We first investigate the problem for
the single robot case. Next, based on the single robot model, we extend our
system to the multi-robot case. We use a strategic variation of Q-tables to
perform multi-agent Q-learning. We successfully test the performance of our
model in a 2D simulation environment for both the single and multi-robot cases.
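The abstract describes two components: DRL for navigation and obstacle avoidance, and tabular Q-learning variations for space use. The paper's actual state encoding, reward design, and network architecture are not given in this summary, so the following is only a minimal illustrative sketch of tabular Q-learning on a hypothetical 2D grid warehouse; the grid layout, rewards, and hyperparameters are assumptions, not the authors' settings.

```python
import random

# Hypothetical 2D grid warehouse: 0 = free cell, 1 = obstacle (shelf).
GRID = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
START, GOAL = (0, 0), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; hitting a wall or obstacle keeps the robot in place."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < len(GRID) and 0 <= c < len(GRID[0])) or GRID[r][c] == 1:
        return state, -1.0          # collision penalty, no movement
    if (r, c) == GOAL:
        return (r, c), 10.0         # goal reward
    return (r, c), -0.1             # small step cost encourages short paths

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning; Q maps (state, action_index) -> value."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        state = START
        for _ in range(100):  # cap episode length
            if rng.random() < epsilon:
                a = rng.randrange(len(ACTIONS))          # explore
            else:                                        # exploit current estimate
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((state, i), 0.0))
            nxt, reward = step(state, ACTIONS[a])
            best_next = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            # Standard Q-learning update: Q <- Q + alpha * (r + gamma * max Q' - Q)
            Q[(state, a)] = Q.get((state, a), 0.0) + alpha * (
                reward + gamma * best_next - Q.get((state, a), 0.0))
            state = nxt
            if state == GOAL:
                break
    return Q

def greedy_path(Q, max_steps=20):
    """Follow the learned greedy policy from START."""
    state, path = START, [START]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((state, i), 0.0))
        state, _ = step(state, ACTIONS[a])
        path.append(state)
        if state == GOAL:
            break
    return path

Q = train()
print(greedy_path(Q))
```

In the paper's DRL setting the dictionary-based Q-table would presumably be replaced by a neural network approximator (a DQN), and the multi-robot case would maintain coordinated per-agent tables, which is roughly what the abstract's "strategic variation of Q-tables" suggests.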
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a wide variety of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Active Exploration in Bayesian Model-based Reinforcement Learning for Robot Manipulation [8.940998315746684]
We propose a model-based reinforcement learning (RL) approach for robotic arm end-tasks.
We employ Bayesian neural network models to represent, in a probabilistic way, both the belief and information encoded in the dynamic model during exploration.
Our experiments show the advantages of our Bayesian model-based RL approach, with results of similar quality to relevant alternatives.
arXiv Detail & Related papers (2024-04-02T11:44:37Z)
- Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis [73.89558418030418]
Most existing robotic systems have been designed for specific tasks, trained on specific datasets, and deployed within specific environments.
Motivated by the impressive open-set performance and content generation capabilities of web-scale, large-capacity pre-trained models, we devote this survey to exploring how foundation models can be applied to robotics.
arXiv Detail & Related papers (2023-12-14T10:02:55Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Decentralized Multi-Robot Formation Control Using Reinforcement Learning [2.7716102039510564]
This paper presents a decentralized leader-follower multi-robot formation control based on a reinforcement learning (RL) algorithm applied to a swarm of small educational Sphero robots.
To enhance the system behavior, we trained two different DDQN models, one for reaching the formation and the other for maintaining it.
The presented approach has been tested in simulation and in real experiments, which show that the multi-robot system can achieve and maintain a stable formation without complex mathematical models or nonlinear control laws.
arXiv Detail & Related papers (2023-06-26T08:02:55Z)
- Towards Practical Multi-Robot Hybrid Tasks Allocation for Autonomous Cleaning [40.715435411065336]
We formulate multi-robot hybrid-task allocation under the uncertain cleaning environment as a robust optimization problem.
We establish a dataset of 100 instances made from floor plans, each of which has 2D manually-labeled images and a 3D model.
We provide comprehensive results on the collected dataset using three traditional optimization approaches and a deep reinforcement learning-based solver.
arXiv Detail & Related papers (2023-03-12T01:15:08Z)
- Domain Randomization for Robust, Affordable and Effective Closed-loop Control of Soft Robots [10.977130974626668]
Soft robots are gaining popularity thanks to their intrinsic safety to contacts and adaptability.
We show how Domain Randomization (DR) can solve this problem by enhancing RL policies for soft robots.
We introduce a novel algorithmic extension to previous adaptive domain randomization methods for the automatic inference of dynamics parameters for deformable objects.
arXiv Detail & Related papers (2023-03-07T18:50:00Z)
- Tiny Robot Learning: Challenges and Directions for Machine Learning in Resource-Constrained Robots [57.27442333662654]
Machine learning (ML) has become a pervasive tool across computing systems.
Tiny robot learning is the deployment of ML on resource-constrained low-cost autonomous robots.
Tiny robot learning is subject to challenges from size, weight, area, and power (SWAP) constraints.
This paper gives a brief survey of the tiny robot learning space, elaborates on key challenges, and proposes promising opportunities for future work in ML system design.
arXiv Detail & Related papers (2022-05-11T19:36:15Z)
- Bi-Manual Manipulation and Attachment via Sim-to-Real Reinforcement Learning [23.164743388342803]
We study how to solve bi-manual tasks using reinforcement learning trained in simulation.
We also discuss modifications to our simulated environment which lead to effective training of RL policies.
In this work, we design a Connect Task, where the aim is for two robot arms to pick up and attach two blocks with magnetic connection points.
arXiv Detail & Related papers (2022-03-15T21:49:20Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Large Scale Distributed Collaborative Unlabeled Motion Planning with Graph Policy Gradients [122.85280150421175]
We present a learning method to solve the unlabelled motion problem with motion constraints and space constraints in 2D space for a large number of robots.
We employ a graph neural network (GNN) to parameterize policies for the robots.
arXiv Detail & Related papers (2021-02-11T21:57:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.