Guided Navigation from Multiple Viewpoints using Qualitative Spatial
Reasoning
- URL: http://arxiv.org/abs/2011.01397v1
- Date: Tue, 3 Nov 2020 00:34:26 GMT
- Title: Guided Navigation from Multiple Viewpoints using Qualitative Spatial
Reasoning
- Authors: Danilo Perico and Paulo E. Santos and Reinaldo Bianchi
- Abstract summary: This work aims to develop algorithms capable of guiding a sensory-deprived robot to a goal location.
The main task considered is, given a group of autonomous agents perceiving a common environment, the development and evaluation of algorithms capable of producing a set of high-level guidance commands.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Navigation is an essential ability for mobile agents to be completely
autonomous and able to perform complex actions. However, the problem of
navigation for agents with limited (or no) perception of the world, or devoid
of a fully defined motion model, has received little attention from research in
AI and Robotics. One way to tackle this problem is to use guided navigation, in
which other autonomous agents, endowed with perception, can combine their
distinct viewpoints to infer the localisation and the appropriate commands to
guide a sensory deprived agent through a particular path. Due to the limited
knowledge about the physical and perceptual characteristics of the guided
agent, this task should be conducted on a level of abstraction allowing the use
of a generic motion model and high-level commands that can be applied by any
type of autonomous agent, including humans. The main task considered in this
work is, given a group of autonomous agents perceiving their common environment
with their independent, egocentric and local vision sensors, the development
and evaluation of algorithms capable of producing a set of high-level commands
(involving qualitative directions: e.g. move left, go straight ahead) capable
of guiding a sensory deprived robot to a goal location.
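To make the idea concrete, the sketch below shows one way multiple guide agents' egocentric observations could be combined and discretised into qualitative commands of the kind mentioned above ("move left", "go straight ahead"). This is a minimal, hypothetical illustration, not the paper's qualitative spatial reasoning algorithm: the class names, the naive averaging fusion, the metric intermediate step, and the 15-degree "ahead" sector are all assumptions made for brevity.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class GuideObservation:
    """One guide agent's egocentric readings (all field names are hypothetical)."""
    guide_xy: Tuple[float, float]   # guide's position in a shared reference frame
    guide_heading: float            # guide's heading in that frame (radians)
    bearing_to_robot: float         # egocentric bearing to the sensory-deprived robot
    dist_to_robot: float
    bearing_to_goal: float          # egocentric bearing to the goal location
    dist_to_goal: float


def _to_shared_frame(obs: GuideObservation, bearing: float, dist: float) -> Tuple[float, float]:
    """Project an egocentric (bearing, distance) reading into the shared frame."""
    angle = obs.guide_heading + bearing
    return (obs.guide_xy[0] + dist * math.cos(angle),
            obs.guide_xy[1] + dist * math.sin(angle))


def _fuse(points: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Naive multi-viewpoint fusion: average the independent position estimates."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))


def qualitative_command(observations: List[GuideObservation], robot_heading: float) -> str:
    """Return a high-level qualitative command for the sensory-deprived robot.

    robot_heading is the robot's heading as estimated by the guides; it is
    assumed to be available here, since the robot itself cannot sense it.
    """
    robot = _fuse([_to_shared_frame(o, o.bearing_to_robot, o.dist_to_robot)
                   for o in observations])
    goal = _fuse([_to_shared_frame(o, o.bearing_to_goal, o.dist_to_goal)
                  for o in observations])
    # Signed angular error between the robot's heading and the direction to the goal.
    desired = math.atan2(goal[1] - robot[1], goal[0] - robot[0])
    error = math.atan2(math.sin(desired - robot_heading),
                       math.cos(desired - robot_heading))
    if abs(error) < math.radians(15):   # arbitrary qualitative "ahead" sector
        return "go straight ahead"
    return "move left" if error > 0 else "move right"


if __name__ == "__main__":
    # Two guides observe the same robot and goal from different viewpoints.
    obs = [GuideObservation((0.0, 0.0), 0.0, math.radians(45), 5.0, math.radians(10), 12.0),
           GuideObservation((10.0, 0.0), math.pi, math.radians(-30), 6.0, math.radians(20), 8.0)]
    print(qualitative_command(obs, robot_heading=math.radians(90)))
```

Note that the paper works at a purely qualitative level of abstraction; the metric projection and averaging above are only a shortcut to keep the sketch short.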
Related papers
- Aligning Robot Navigation Behaviors with Human Intentions and Preferences [2.9914612342004503]
This dissertation aims to answer the question: "How can we use machine learning methods to align the navigational behaviors of autonomous mobile robots with human intentions and preferences?"
First, this dissertation introduces a new approach to learning navigation behaviors by imitating human-provided demonstrations of the intended navigation task.
Second, this dissertation introduces two algorithms to enhance terrain-aware off-road navigation for mobile robots by learning visual terrain awareness in a self-supervised manner.
arXiv Detail & Related papers (2024-09-16T03:45:00Z)
- CoNav: A Benchmark for Human-Centered Collaborative Navigation [66.6268966718022]
We propose a collaborative navigation (CoNav) benchmark.
Our CoNav tackles the critical challenge of constructing a 3D navigation environment with realistic and diverse human activities.
We propose an intention-aware agent for reasoning both long-term and short-term human intention.
arXiv Detail & Related papers (2024-06-04T15:44:25Z)
- FollowMe: a Robust Person Following Framework Based on Re-Identification and Gestures [12.850149165791551]
Human-robot interaction (HRI) has become a crucial enabler in houses and industries for facilitating operational flexibility.
We developed a unified perception and navigation framework, which enables the robot to identify and follow a target person.
The Re-ID module can autonomously learn the features of a target person and use the acquired knowledge to visually re-identify the target.
arXiv Detail & Related papers (2023-11-21T20:59:27Z)
- A Language Agent for Autonomous Driving [31.359413767191608]
We propose a paradigm shift to integrate human-like intelligence into autonomous driving systems.
Our approach, termed Agent-Driver, transforms the traditional autonomous driving pipeline by introducing a versatile tool library.
Powered by Large Language Models (LLMs), our Agent-Driver is endowed with intuitive common sense and robust reasoning capabilities.
arXiv Detail & Related papers (2023-11-17T18:59:56Z)
- Emergence of Maps in the Memories of Blind Navigation Agents [68.41901534985575]
Animal navigation research posits that organisms build and maintain internal spatial representations, or maps, of their environment.
We ask if machines -- specifically, artificial intelligence (AI) navigation agents -- also build implicit (or 'mental') maps.
Unlike animal navigation, we can judiciously design the agent's perceptual system and control the learning paradigm to nullify alternative navigation mechanisms.
arXiv Detail & Related papers (2023-01-30T20:09:39Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase the effectiveness of our approach for the four gesture-navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Towards self-attention based visual navigation in the real world [0.0]
Vision guided navigation requires processing complex visual information to inform task-orientated decisions.
Deep Reinforcement Learning agents trained in simulation often exhibit unsatisfactory results when deployed in the real world.
This is the first demonstration of a self-attention based agent successfully trained to navigate a 3D action space using fewer than 4000 parameters.
arXiv Detail & Related papers (2022-09-15T04:51:42Z)
- Diagnosing Vision-and-Language Navigation: What Really Matters [61.72935815656582]
Vision-and-language navigation (VLN) is a multimodal task where an agent follows natural language instructions and navigates in visual environments.
Recent studies witness a slow-down in the performance improvements in both indoor and outdoor VLN tasks.
In this work, we conduct a series of diagnostic experiments to unveil agents' focus during navigation.
arXiv Detail & Related papers (2021-03-30T17:59:07Z)
- Integrating Egocentric Localization for More Realistic Point-Goal Navigation Agents [90.65480527538723]
We develop point-goal navigation agents that rely on visual estimates of egomotion under noisy action dynamics.
Our agent was the runner-up in the PointNav track of CVPR 2020 Habitat Challenge.
arXiv Detail & Related papers (2020-09-07T16:52:47Z)
- Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning [23.54696982881734]
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
arXiv Detail & Related papers (2020-06-16T07:45:47Z)
- Improving Target-driven Visual Navigation with Attention on 3D Spatial Relationships [52.72020203771489]
We investigate target-driven visual navigation using deep reinforcement learning (DRL) in 3D indoor scenes.
Our proposed method combines visual features and 3D spatial representations to learn navigation policy.
Our experiments, performed in the AI2-THOR environment, show that our model outperforms the baselines in both SR (success rate) and SPL (success weighted by path length) metrics.
arXiv Detail & Related papers (2020-04-29T08:46:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of its content) and is not responsible for any consequences of its use.