Mobile Robot Planner with Low-cost Cameras Using Deep Reinforcement
Learning
- URL: http://arxiv.org/abs/2012.11160v1
- Date: Mon, 21 Dec 2020 07:30:04 GMT
- Title: Mobile Robot Planner with Low-cost Cameras Using Deep Reinforcement
Learning
- Authors: Minh Q. Tran, Ngoc Q. Ly
- Abstract summary: This study develops a robot mobility policy based on deep reinforcement learning.
In order to bring robots to market, low-cost mass production is also an issue that needs to be addressed.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study develops a robot mobility policy based on deep reinforcement
learning. Because conventional navigation methods depend on accurate map
reconstruction and require high-end sensors, learning-based methods,
especially deep reinforcement learning, are a promising direction. The
problem is modeled as a Markov Decision Process (MDP) whose agent is a mobile
robot: the robot observes its state through input sensors such as laser range
finders or cameras, and its objective is to navigate to the goal without
collision. Many deep learning methods address this problem. However, to bring
robots to market, low-cost mass production must also be addressed. This work
therefore constructs a pseudo laser range finding system based on direct
depth map prediction from a single camera image while retaining stable
performance. Experimental results show that it is directly comparable with
methods that use high-priced sensors.
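The core idea, predicting a depth map from a single camera image and reading it out as pseudo laser ranges, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the beam count, and the assumption that the scan plane is the centre image row are all hypothetical.

```python
import numpy as np

def depth_to_pseudo_scan(depth, hfov_deg=90.0, n_beams=36, row=None):
    """Convert a predicted depth map (H x W, metres) into pseudo laser
    range readings by sampling one image row.

    Each image column corresponds to a bearing within the horizontal
    field of view; reading the depth at evenly spaced columns yields a
    1-D range array that mimics a planar laser scanner's output.
    """
    h, w = depth.shape
    if row is None:
        row = h // 2  # assume the scan plane passes through the image centre
    # Evenly spaced columns across the image map to evenly spaced bearings.
    cols = np.linspace(0, w - 1, n_beams).round().astype(int)
    bearings = np.linspace(-hfov_deg / 2, hfov_deg / 2, n_beams)
    ranges = depth[row, cols]
    return bearings, ranges

# Toy example: a flat wall 2 m in front of the camera.
depth = np.full((480, 640), 2.0)
bearings, ranges = depth_to_pseudo_scan(depth, n_beams=5)
```

In a full system the `depth` array would come from a monocular depth estimation network, and the resulting range vector would replace the real laser input of the navigation policy.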
Related papers
- Giving Robots a Hand: Learning Generalizable Manipulation with
Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Markerless Camera-to-Robot Pose Estimation via Self-supervised
Sim-to-Real Transfer [26.21320177775571]
We propose an end-to-end pose estimation framework that is capable of online camera-to-robot calibration and a self-supervised training method.
Our framework combines deep learning and geometric vision for solving the robot pose, and the pipeline is fully differentiable.
arXiv Detail & Related papers (2023-02-28T05:55:42Z)
- Image-based Pose Estimation and Shape Reconstruction for Robot
Manipulators and Soft, Continuum Robots via Differentiable Rendering [20.62295718847247]
State estimation from measured data is crucial for robotic applications as autonomous systems rely on sensors to capture the motion and localize in the 3D world.
In this work, we achieve image-based robot pose estimation and shape reconstruction from camera images.
We demonstrate that our method of using geometrical shape primitives can achieve high accuracy in shape reconstruction for a soft continuum robot and pose estimation for a robot manipulator.
arXiv Detail & Related papers (2023-02-27T18:51:29Z)
- Learning Active Camera for Multi-Object Navigation [94.89618442412247]
Getting robots to navigate to multiple objects autonomously is essential yet difficult in robot applications.
Existing navigation methods mainly focus on fixed cameras and few attempts have been made to navigate with active cameras.
In this paper, we consider navigating to multiple objects more efficiently with active cameras.
arXiv Detail & Related papers (2022-10-14T04:17:30Z)
- CNN-based Omnidirectional Object Detection for HermesBot Autonomous
Delivery Robot with Preliminary Frame Classification [53.56290185900837]
We propose an algorithm for optimizing a neural network for object detection using preliminary binary frame classification.
An autonomous mobile robot with 6 rolling-shutter cameras on the perimeter providing a 360-degree field of view was used as the experimental setup.
arXiv Detail & Related papers (2021-10-22T15:05:37Z)
- Robot Localization and Navigation through Predictive Processing using
LiDAR [0.0]
We show a proof-of-concept of the predictive processing-inspired approach to perception applied for localization and navigation using laser sensors.
We learn the generative model of the laser through self-supervised learning and perform both online state-estimation and navigation.
Results showed improved state-estimation performance compared to a state-of-the-art particle filter in the absence of odometry.
arXiv Detail & Related papers (2021-09-09T09:58:00Z)
- Autonomous Navigation in Dynamic Environments: Deep Learning-Based
Approach [0.0]
This thesis studies different deep learning-based approaches, highlighting the advantages and disadvantages of each scheme.
One of the deep learning methods based on convolutional neural network (CNN) is realized by software implementations.
We propose a low-cost approach for indoor applications such as restaurants and museums, based on using a monocular camera instead of a laser scanner.
arXiv Detail & Related papers (2021-02-03T23:20:20Z)
- Deep Reinforcement Learning for Real Autonomous Mobile Robot Navigation
in Indoor Environments [0.0]
We present our proof of concept for autonomous self-learning robot navigation in an unknown environment for a real robot without a map or planner.
The input for the robot is only the fused data from a 2D laser scanner and a RGB-D camera as well as the orientation to the goal.
The output actions of an Asynchronous Advantage Actor-Critic network (GA3C) are the linear and angular velocities for the robot.
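The interface this summary describes, a fused sensor observation plus goal orientation in, linear and angular velocity out, can be sketched with a toy feed-forward actor. This is only an illustrative stand-in, not the GA3C architecture from the paper: the layer sizes, weight initialization, and function name are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_forward(scan, goal, w1, w2):
    """Toy actor network: fused observation vector -> (linear, angular) velocity.

    `scan` stands in for the fused laser/RGB-D features and `goal` for the
    orientation-to-goal input; the tanh output squashes both velocity
    commands into [-1, 1] for later scaling to the robot's limits.
    """
    x = np.concatenate([scan, goal])  # fuse sensor and goal observations
    h = np.tanh(w1 @ x)               # hidden layer
    v, w = np.tanh(w2 @ h)            # bounded velocity commands
    return v, w

scan = rng.random(36)                 # e.g. pseudo laser ranges
goal = np.array([0.3, -0.1])          # e.g. distance and heading error to goal
w1 = rng.normal(size=(16, 38)) * 0.1  # random weights for the sketch
w2 = rng.normal(size=(2, 16)) * 0.1
v, w = policy_forward(scan, goal, w1, w2)
```

In an actual actor-critic setup these weights would be trained from reward signals, and a separate critic head would estimate the state value.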
arXiv Detail & Related papers (2020-05-28T09:15:14Z)
- Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.