Autonomous Navigation and Collision Avoidance for Mobile Robots: Classification and Review
- URL: http://arxiv.org/abs/2410.07297v1
- Date: Wed, 9 Oct 2024 16:56:42 GMT
- Title: Autonomous Navigation and Collision Avoidance for Mobile Robots: Classification and Review
- Authors: Marcus Vinicius Leal de Carvalho, Roberto Simoni, Leopoldo Yoshioka
- Abstract summary: This paper introduces a novel classification for Autonomous Mobile Robots (AMRs).
It focuses on autonomous collision-free navigation and presents the main methods and widely accepted technologies for each phase of the proposed classification.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel classification for Autonomous Mobile Robots (AMRs) into three phases and five steps, focusing on autonomous collision-free navigation. Additionally, it presents the main methods and widely accepted technologies for each phase of the proposed classification. The purpose of this classification is to facilitate understanding and establish connections between the independent input variables of the system (hardware, software) and autonomous navigation. By analyzing well-established technologies in terms of sensors and methods used for autonomous navigation, this paper aims to provide a foundation of knowledge that can be applied in future mobile robot projects.
Related papers
- Aligning Robot Navigation Behaviors with Human Intentions and Preferences [2.9914612342004503]
This dissertation aims to answer the question: "How can we use machine learning methods to align the navigational behaviors of autonomous mobile robots with human intentions and preferences?"
First, this dissertation introduces a new approach to learning navigation behaviors by imitating human-provided demonstrations of the intended navigation task.
Second, this dissertation introduces two algorithms to enhance terrain-aware off-road navigation for mobile robots by learning visual terrain awareness in a self-supervised manner.
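The dissertation summary gives no implementation detail; as a minimal, hedged sketch of learning navigation behavior from human demonstrations, a behavior-cloning loop over (observation, expert command) pairs might look like the following (network structure, dimensions, and tensor names are assumptions, not the dissertation's method).
```python
# Minimal behavior-cloning sketch for imitating demonstrated navigation commands.
# All dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class NavPolicy(nn.Module):
    def __init__(self, obs_dim=64, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim),   # outputs a (v, omega) velocity command
        )

    def forward(self, obs):
        return self.net(obs)

policy = NavPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def bc_update(obs, expert_cmd):
    """One supervised step: regress the human-demonstrated velocity command."""
    loss = nn.functional.mse_loss(policy(obs), expert_cmd)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```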
arXiv Detail & Related papers (2024-09-16T03:45:00Z)
- Robot Navigation with Entity-Based Collision Avoidance using Deep Reinforcement Learning [0.0]
We present a novel methodology that enhances the robot's interaction with different types of agents and obstacles.
This approach uses information about the entity types, improving collision avoidance and ensuring safer navigation.
We introduce a new reward function that penalizes the robot for collisions with different entities such as adults, bicyclists, children, and static obstacles.
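The exact reward is not spelled out in this summary; a minimal sketch of an entity-weighted collision penalty could look like the following (the penalty values are illustrative assumptions, not the paper's).
```python
# Sketch of an entity-aware collision penalty; weights are illustrative only.
ENTITY_PENALTY = {
    "child": -2.0,            # collisions with children penalized most severely
    "adult": -1.5,
    "bicyclist": -1.5,
    "static_obstacle": -1.0,
}

def collision_reward(collided_with):
    """Reward contribution for a collision with a given entity type (0 if none)."""
    if collided_with is None:
        return 0.0
    return ENTITY_PENALTY.get(collided_with, -1.0)

# Example: hitting a child is penalized more heavily than hitting a static obstacle.
assert collision_reward("child") < collision_reward("static_obstacle")
```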
arXiv Detail & Related papers (2024-08-26T11:16:03Z)
- NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
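NoMaD's actual policy is a goal-masked diffusion model; the sketch below only illustrates the goal-masking idea with a plain feed-forward head, so that one network can serve both goal-directed navigation and goal-agnostic exploration (dimensions and structure are assumptions, not the paper's architecture).
```python
import torch
import torch.nn as nn

class GoalMaskedPolicy(nn.Module):
    """Illustrative goal-masked policy head (not NoMaD's diffusion architecture).

    When goal_mask == 1 the goal embedding is zeroed, so the network acts
    goal-agnostically (exploration); when goal_mask == 0 it conditions on the goal.
    """
    def __init__(self, obs_dim=256, goal_dim=256, act_dim=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs_emb, goal_emb, goal_mask):
        # goal_mask: shape (batch, 1), values in {0.0, 1.0}
        masked_goal = goal_emb * (1.0 - goal_mask)
        return self.head(torch.cat([obs_emb, masked_goal], dim=-1))
```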
arXiv Detail & Related papers (2023-10-11T21:07:14Z)
- Recent Advancements in Deep Learning Applications and Methods for Autonomous Navigation: A Comprehensive Review [0.0]
This review article surveys recent AI-based techniques used to address the major functions of autonomous navigation.
The paper aims to bridge the gap between autonomous navigation and deep learning.
arXiv Detail & Related papers (2023-02-22T01:42:49Z)
- Semantic-Aware Environment Perception for Mobile Human-Robot Interaction [2.309914459672557]
We present a vision-based system for mobile robots that enables semantic-aware environment perception without additional a-priori knowledge.
We deploy our system on a mobile humanoid robot, which enables us to test our methods in real-world applications.
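The summary does not describe the perception pipeline; as a hedged illustration of how semantic labels can feed navigation, the sketch below converts per-pixel classes from a hypothetical segmentation model into a traversal cost map (the `segment` callable and the class-to-cost table are assumptions, not the authors' system).
```python
import numpy as np

# Hypothetical traversal costs per semantic class (illustrative values only).
CLASS_COST = {"floor": 0.0, "person": 1.0, "furniture": 0.8, "unknown": 0.5}

def semantic_cost_map(image, segment):
    """Build a per-pixel navigation cost map from semantic labels.

    `segment` is a hypothetical callable returning an HxW array of class-name
    strings; it stands in for any off-the-shelf semantic segmentation model.
    """
    labels = segment(image)   # HxW array of class-name strings
    to_cost = np.vectorize(lambda c: CLASS_COST.get(c, CLASS_COST["unknown"]))
    return to_cost(labels).astype(np.float32)   # higher cost = avoid while planning
```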
arXiv Detail & Related papers (2022-11-07T08:49:45Z)
- GNM: A General Navigation Model to Drive Any Robot [67.40225397212717]
A general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots.
We analyze the necessary design decisions for effective data sharing across robots.
We deploy the trained GNM on a range of new robots, including an underactuated quadrotor.
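The summary does not explain how data from different robots is pooled; one common way to share data across embodiments is to map each robot's commands into a shared, normalized action space, as in the hedged sketch below (the robot names, velocity limits, and normalization scheme are illustrative assumptions, not the GNM paper's exact interface).
```python
# Illustrative embodiment-agnostic action interface: each robot's raw commands are
# normalized by its own limits so one goal-conditioned policy can train on pooled data.
ROBOT_LIMITS = {            # hypothetical per-robot (max linear, max angular) speeds
    "jackal": (2.0, 1.5),
    "turtlebot": (0.5, 1.0),
    "quadrotor": (3.0, 2.0),
}

def to_shared_action(robot, v, omega):
    """Map a robot-specific velocity command into the shared [-1, 1] action space."""
    v_max, w_max = ROBOT_LIMITS[robot]
    return (v / v_max, omega / w_max)

def from_shared_action(robot, action):
    """Map a shared-space action back into the target robot's command limits."""
    v_max, w_max = ROBOT_LIMITS[robot]
    return (action[0] * v_max, action[1] * w_max)
```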
arXiv Detail & Related papers (2022-10-07T07:26:41Z)
- Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of Demonstrations for Social Navigation [92.66286342108934]
Social navigation is the capability of an autonomous agent, such as a robot, to navigate in a 'socially compliant' manner in the presence of other intelligent agents such as humans.
Our dataset contains 8.7 hours, 138 trajectories, and 25 miles of socially compliant, human-teleoperated driving demonstrations.
arXiv Detail & Related papers (2022-03-28T19:09:11Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of the future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
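The summary does not give the formulation; a generic chance constraint of the kind used in stochastic MPC for obstacle avoidance can be written as below, where x_k is the predicted robot state at step k of an N-step horizon, O is the obstacle region, and epsilon is a user-chosen risk bound (the notation is illustrative, not taken from the paper).
```latex
% Generic chance-constrained obstacle avoidance over an N-step SMPC horizon
% (illustrative notation, not SABER's exact formulation)
\Pr\!\left( x_k \notin \mathcal{O} \right) \ge 1 - \varepsilon, \qquad k = 1, \dots, N
```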
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
- BEyond observation: an approach for ObjectNav [0.0]
We present our exploratory research of how sensor data fusion and state-of-the-art machine learning algorithms can perform the Embodied Artificial Intelligence (E-AI) task called Visual Semantic Navigation.
This task consists of autonomous navigation using egocentric visual observations to reach an object belonging to the target semantic class without prior knowledge of the environment.
Our method reached fourth place in the Habitat Challenge 2021 ObjectNav track, in both the Minival phase and the Test-Standard phase.
arXiv Detail & Related papers (2021-06-21T19:27:16Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to accurately realize the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- APPLD: Adaptive Planner Parameter Learning from Demonstration [48.63930323392909]
We introduce APPLD, Adaptive Planner Parameter Learning from Demonstration, which allows existing navigation systems to be successfully applied to new complex environments.
APPLD is verified on two robots running different navigation systems in different environments.
Experimental results show that APPLD can outperform navigation systems with the default and expert-tuned parameters, and even the human demonstrator themselves.
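The summary omits how the parameters are learned; the hedged sketch below illustrates the general idea of searching for planner parameters whose rollouts best match a demonstrated trajectory, with random search standing in for APPLD's actual black-box optimizer (the parameter names, planner interface, and metric are hypothetical).
```python
import random

# Hedged sketch: find planner parameters whose rollout best matches a human demo.
# `rollout_planner` and `demo_trajectory` are hypothetical stand-ins, not APPLD's API.
PARAM_RANGES = {"max_vel_x": (0.2, 2.0), "inflation_radius": (0.1, 1.0)}

def sample_params():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def trajectory_distance(traj_a, traj_b):
    """Mean pointwise distance between two equal-length (x, y) trajectories."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(traj_a, traj_b)) / len(traj_a)

def tune(rollout_planner, demo_trajectory, n_samples=200):
    """Random-search stand-in for black-box parameter optimization."""
    best_params, best_cost = None, float("inf")
    for _ in range(n_samples):
        params = sample_params()
        cost = trajectory_distance(rollout_planner(params), demo_trajectory)
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params
```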
arXiv Detail & Related papers (2020-03-31T21:15:16Z)