Multi-Agent Embodied Visual Semantic Navigation with Scene Prior Knowledge
- URL: http://arxiv.org/abs/2109.09531v1
- Date: Mon, 20 Sep 2021 13:31:03 GMT
- Title: Multi-Agent Embodied Visual Semantic Navigation with Scene Prior Knowledge
- Authors: Xinzhu Liu, Di Guo, Huaping Liu, and Fuchun Sun
- Abstract summary: In visual semantic navigation, a robot navigates to a target object using egocentric visual observations, given the class label of the target.
Most existing models are effective only for single-agent navigation, and a single agent has low efficiency and poor fault tolerance when completing more complicated tasks.
We propose multi-agent visual semantic navigation, in which multiple agents collaborate to find multiple target objects.
- Score: 42.37872230561632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In visual semantic navigation, a robot navigates to a target object
using egocentric visual observations, given the class label of the target. It
is a meaningful task that has inspired a surge of related research. However,
most existing models are effective only for single-agent navigation, and a
single agent has low efficiency and poor fault tolerance when completing more
complicated tasks. Multi-agent collaboration can improve efficiency and has
strong application potential. In this paper, we propose the multi-agent visual
semantic navigation task, in which multiple agents collaborate to find
multiple target objects. It is a challenging task that requires agents to
learn reasonable collaboration strategies for efficient exploration under
communication-bandwidth restrictions. We develop a hierarchical decision
framework based on semantic mapping, scene prior knowledge, and a
communication mechanism to solve this task. Experiments in unseen scenes with
both known and unknown objects demonstrate the higher accuracy and efficiency
of the proposed model compared with the single-agent model.
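The abstract describes a hierarchical framework in which a high-level decision uses scene prior knowledge to guide exploration. As a rough illustration only, the sketch below assigns target objects to agents using a hypothetical co-occurrence prior; the `SCENE_PRIOR` table, scoring rule, and function names are illustrative assumptions, not the authors' method or code.

```python
# Minimal, hypothetical sketch of a high-level assignment step guided by a
# scene prior: each target is scored by how strongly it co-occurs with
# objects an agent has already observed, and the one-to-one assignment
# maximizing the total score is chosen. All values here are illustrative.

from itertools import permutations

# Hypothetical scene prior: likelihood of finding a target near an observed object.
SCENE_PRIOR = {
    ("mug", "coffee_machine"): 0.9,
    ("mug", "sofa"): 0.1,
    ("remote", "sofa"): 0.8,
    ("remote", "coffee_machine"): 0.2,
}

def prior_score(target, observed_objects):
    """Score a target by its best co-occurrence with anything the agent has seen."""
    return max((SCENE_PRIOR.get((target, o), 0.0) for o in observed_objects),
               default=0.0)

def assign_targets(agent_observations, targets):
    """Brute-force the one-to-one target assignment that maximizes the total
    scene-prior score (tractable for the small team sizes typical in
    multi-agent navigation)."""
    agents = list(agent_observations)
    best, best_score = None, -1.0
    for perm in permutations(targets, len(agents)):
        score = sum(prior_score(t, agent_observations[a])
                    for a, t in zip(agents, perm))
        if score > best_score:
            best, best_score = dict(zip(agents, perm)), score
    return best

assignment = assign_targets(
    {"agent0": ["coffee_machine"], "agent1": ["sofa"]},
    ["mug", "remote"],
)
```

In the full framework each agent would then run a low-level navigation policy toward its assigned target; this sketch covers only the assignment step.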
Related papers
- DISCO: Embodied Navigation and Interaction via Differentiable Scene Semantics and Dual-level Control [53.80518003412016]
Building a general-purpose intelligent home-assistant agent skilled in diverse tasks specified by human commands is a long-standing goal of embodied AI research.
We study primitive mobile manipulations for embodied agents, i.e. how to navigate and interact based on an instructed verb-noun pair.
We propose DISCO, which features non-trivial advancements in contextualized scene modeling and efficient controls.
arXiv Detail & Related papers (2024-07-20T05:39:28Z)
- Task-Driven Exploration: Decoupling and Inter-Task Feedback for Joint Moment Retrieval and Highlight Detection [7.864892339833315]
We propose a novel task-driven top-down framework for joint moment retrieval and highlight detection.
The framework introduces a task-decoupled unit to capture task-specific and common representations.
Comprehensive experiments and in-depth ablation studies on QVHighlights, TVSum, and Charades-STA datasets corroborate the effectiveness and flexibility of the proposed framework.
arXiv Detail & Related papers (2024-04-14T14:06:42Z)
- Attention Graph for Multi-Robot Social Navigation with Deep Reinforcement Learning [0.0]
We present MultiSoc, a new method for learning multi-agent socially aware navigation strategies using deep reinforcement learning (RL).
Inspired by recent work on multi-agent deep RL, our method leverages a graph-based representation of agent interactions, combining the positions and fields of view of entities (pedestrians and agents).
Our method learns faster than single-agent deep RL social navigation techniques and enables efficient implicit multi-agent coordination in challenging crowd navigation with multiple heterogeneous humans.
arXiv Detail & Related papers (2024-01-31T15:24:13Z)
- Heterogeneous Embodied Multi-Agent Collaboration [21.364827833498254]
Heterogeneous multi-agent tasks are common in real-world scenarios.
We propose the heterogeneous multi-agent tidying-up task, in which multiple heterogeneous agents collaborate to detect misplaced objects and place them in reasonable locations.
We propose a hierarchical decision model based on misplaced object detection and reasonable receptacle prediction, together with a handshake-based group communication mechanism.
arXiv Detail & Related papers (2023-07-26T04:33:05Z)
- Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving [100.3848723827869]
We present an effective multi-task framework, VE-Prompt, which introduces visual exemplars via task-specific prompting.
Specifically, we generate visual exemplars based on bounding boxes and color-based markers, which provide accurate visual appearances of target categories.
We bridge transformer-based encoders and convolutional layers for efficient and accurate unified perception in autonomous driving.
arXiv Detail & Related papers (2023-03-03T08:54:06Z)
- Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation [97.17517060585875]
We present a unified approach to visual navigation using a novel modular transfer learning model.
Our model can effectively leverage its experience from one source task and apply it to multiple target tasks.
Our approach learns faster, generalizes better, and outperforms SoTA models by a significant margin.
arXiv Detail & Related papers (2022-02-05T00:07:21Z)
- Multi-target tracking for video surveillance using deep affinity network: a brief review [0.0]
Multi-target tracking (MTT) for video surveillance is an important and challenging task.
Deep learning models, loosely inspired by the human brain, have been widely adopted for this task.
arXiv Detail & Related papers (2021-10-29T10:44:26Z)
- Collaborative Visual Navigation [69.20264563368762]
We propose a large-scale 3D dataset, CollaVN, for multi-agent visual navigation (MAVN).
Diverse MAVN variants are explored to make the problem more general.
A memory-augmented communication framework is proposed: each agent is equipped with a private, external memory to persistently store communication information.
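The memory-augmented communication idea above can be sketched minimally as follows. This is an illustrative toy, not the CollaVN implementation: the `Agent` class, the bounded-memory size, and the message format are all assumptions made here for clarity.

```python
# Hypothetical sketch: each agent keeps a private, bounded external memory of
# received messages that persists across steps, and can broadcast one message
# per step to its teammates. Oldest entries are evicted when memory is full.

from collections import deque

class Agent:
    def __init__(self, name, memory_size=8):
        self.name = name
        # Private external memory, bounded to model limited storage.
        self.memory = deque(maxlen=memory_size)

    def receive(self, sender, message):
        # Store the incoming message alongside its sender.
        self.memory.append((sender, message))

    def broadcast(self, agents, message):
        # Communication step: send the message to every teammate but not oneself.
        for other in agents:
            if other is not self:
                other.receive(self.name, message)

team = [Agent("a0"), Agent("a1"), Agent("a2")]
team[0].broadcast(team, {"seen": "sofa", "pos": (3, 1)})
```

A bandwidth restriction, as discussed in the main paper's task setting, could be modeled here by capping the message size or the broadcast frequency.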
arXiv Detail & Related papers (2021-07-02T15:48:16Z)
- FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking [92.48078680697311]
Multi-object tracking (MOT) is an important problem in computer vision.
We present a simple yet effective approach termed as FairMOT based on the anchor-free object detection architecture CenterNet.
The approach achieves high accuracy for both detection and tracking.
arXiv Detail & Related papers (2020-04-04T08:18:00Z)
- A Visual Communication Map for Multi-Agent Deep Reinforcement Learning [7.003240657279981]
Multi-agent learning poses significant challenges in allocating a shared communication medium among agents.
Recent studies typically combine a specialized neural network with reinforcement learning to enable communication between agents.
This paper proposes a more scalable approach that not only handles a large number of agents but also enables collaboration between functionally dissimilar agents.
arXiv Detail & Related papers (2020-02-27T02:38:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.