See What the Robot Can't See: Learning Cooperative Perception for Visual
Navigation
- URL: http://arxiv.org/abs/2208.00759v5
- Date: Mon, 31 Jul 2023 16:40:56 GMT
- Title: See What the Robot Can't See: Learning Cooperative Perception for Visual
Navigation
- Authors: Jan Blumenkamp and Qingbiao Li and Binyu Wang and Zhe Liu and Amanda
Prorok
- Abstract summary: We train the sensors to encode and communicate relevant viewpoint information to the mobile robot.
We overcome the challenge of enabling all the sensors to predict the direction along the shortest path to the target.
Our results show that by using communication between the sensors and the robot, we achieve up to 2.0x improvement in SPL.
- Score: 11.943412856714154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of navigating a mobile robot towards a target in an
unknown environment that is endowed with visual sensors, where neither the
robot nor the sensors have access to global positioning information and only
use first-person-view images. In order to overcome the need for positioning, we
train the sensors to encode and communicate relevant viewpoint information to
the mobile robot, whose objective it is to use this information to navigate to
the target along the shortest path. We overcome the challenge of enabling all
the sensors (even those that cannot directly see the target) to predict the
direction along the shortest path to the target by implementing a
neighborhood-based feature aggregation module using a Graph Neural Network
(GNN) architecture. In our experiments, we first demonstrate generalizability
to previously unseen environments with various sensor layouts. Our results show
that by using communication between the sensors and the robot, we achieve up to
2.0x improvement in SPL (Success weighted by Path Length) when compared to a
communication-free baseline. This is done without requiring a global map,
positioning data, or pre-calibration of the sensor network. Second, we perform
a zero-shot transfer of our model from simulation to the real world. Laboratory
experiments demonstrate the feasibility of our approach in various cluttered
environments. Finally, we showcase examples of successful navigation to the
target while both the sensor network layout and the obstacles are
dynamically reconfigured as the robot navigates. We provide a video demo, the
dataset, trained models, and source code.
https://www.youtube.com/watch?v=kcmr6RUgucw
https://github.com/proroklab/sensor-guided-visual-nav
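The repository linked above contains the authors' implementation; the snippet below is only a minimal, illustrative sketch of the idea described in the abstract: each sensor encodes its first-person view into a feature vector, a few rounds of neighborhood aggregation (the GNN) mix each sensor's feature with those of its communication neighbors, and every sensor then predicts a direction along the shortest path to the target. All function names, weight matrices, and dimensions here are assumptions, not taken from the paper or its code.
```python
import numpy as np

# Illustrative sketch only: not the authors' implementation (see the repository above).
# Each sensor holds an embedding of its first-person view; repeated neighborhood
# aggregation lets sensors that cannot see the target still estimate the direction
# along the shortest path toward it.

def aggregate(features, adjacency, w_self, w_neigh):
    """One message-passing round: mix each sensor's feature with its neighbors' mean."""
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1.0)
    neighbor_mean = adjacency @ features / deg
    return np.tanh(features @ w_self + neighbor_mean @ w_neigh)

def predict_direction(feature, w_out):
    """Map an aggregated sensor feature to a unit direction toward the target."""
    v = feature @ w_out
    return v / (np.linalg.norm(v) + 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sensors, dim, rounds = 5, 16, 3                        # toy sizes (assumed)
    feats = rng.normal(size=(n_sensors, dim))                # stand-in image embeddings
    adj = (rng.random((n_sensors, n_sensors)) < 0.4)
    adj = np.triu(adj, 1).astype(float); adj = adj + adj.T   # symmetric communication graph
    w_self, w_neigh = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
    w_out = rng.normal(size=(dim, 2))                        # untrained weights, shapes only
    for _ in range(rounds):                                  # multi-hop information spread
        feats = aggregate(feats, adj, w_self, w_neigh)
    print(predict_direction(feats[0], w_out))                # predicted heading from sensor 0
```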
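For reference, SPL (Success weighted by Path Length) is the navigation metric behind the reported 2.0x improvement. The formula below is the standard definition from the literature (Anderson et al., 2018), not text reproduced from this paper: over N evaluation episodes, S_i indicates success of episode i, l_i is the shortest-path length from start to target, and p_i is the length of the path the agent actually took.
```latex
\[
  \mathrm{SPL} \;=\; \frac{1}{N}\sum_{i=1}^{N} S_i \,\frac{l_i}{\max(p_i,\; l_i)}
\]
```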
Related papers
- Polaris: Open-ended Interactive Robotic Manipulation via Syn2Real Visual Grounding and Large Language Models [53.22792173053473]
We introduce an interactive robotic manipulation framework called Polaris.
Polaris integrates perception and interaction by utilizing GPT-4 alongside grounded vision models.
We propose a novel Synthetic-to-Real (Syn2Real) pose estimation pipeline.
arXiv Detail & Related papers (2024-08-15T06:40:38Z)
- Multimodal Anomaly Detection based on Deep Auto-Encoder for Object Slip Perception of Mobile Manipulation Robots [22.63980025871784]
The proposed framework integrates heterogeneous data streams collected from various robot sensors, including RGB and depth cameras, a microphone, and a force-torque sensor.
The integrated data is used to train a deep autoencoder to construct latent representations of the multisensory data that indicate the normal status.
Anomalies are then identified by error scores that measure the difference between the trained encoder's latent values for the input and the latent values of the reconstructed input data (a minimal sketch of this scoring scheme follows this entry).
arXiv Detail & Related papers (2024-03-06T09:15:53Z)
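As referenced in the entry above, the following is a minimal sketch (not code from the cited paper) of the latent-space reconstruction-error scoring that the summary describes: a trained autoencoder's encoder and decoder are applied to an input, and the anomaly score is the distance between the latent code of the input and the latent code of its reconstruction. The toy encoder/decoder, dimensions, and threshold are all assumptions.
```python
import numpy as np

# Toy sketch of latent reconstruction-error anomaly scoring (not the cited paper's code).

def anomaly_score(x, enc, dec):
    z = enc(x)                        # latent code of the multisensory input
    z_rec = enc(dec(z))               # latent code of its reconstruction
    return float(np.linalg.norm(z - z_rec))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_enc = rng.normal(size=(64, 8)) / 8.0    # stand-ins for trained weights (assumed)
    w_dec = rng.normal(size=(8, 64)) / 8.0
    enc = lambda x: np.tanh(x @ w_enc)
    dec = lambda z: z @ w_dec
    x = rng.normal(size=64)                   # a fused multisensory sample
    score = anomaly_score(x, enc, dec)
    print(score, score > 1.0)                 # threshold tuned on normal data (assumed)
```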
- LPAC: Learnable Perception-Action-Communication Loops with Applications to Coverage Control [80.86089324742024]
We propose a learnable Perception-Action-Communication (LPAC) architecture for the problem.
A convolutional neural network (CNN) processes localized perception, while a graph neural network (GNN) facilitates communication between robots.
Evaluations show that the LPAC models outperform standard decentralized and centralized coverage control algorithms.
arXiv Detail & Related papers (2024-01-10T00:08:00Z)
- HabitatDyn Dataset: Dynamic Object Detection to Kinematics Estimation [16.36110033895749]
We propose the dataset HabitatDyn, which contains synthetic RGB videos, semantic labels, and depth information, as well as kinetics information.
HabitatDyn was created from the perspective of a mobile robot with a moving camera, and contains 30 scenes featuring six different types of moving objects with varying velocities.
arXiv Detail & Related papers (2023-04-21T09:57:35Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Bayesian Imitation Learning for End-to-End Mobile Manipulation [80.47771322489422]
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities.
arXiv Detail & Related papers (2022-02-15T17:38:30Z)
- Rapid Exploration for Open-World Navigation with Latent Goal Models [78.45339342966196]
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.
At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images.
We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration.
arXiv Detail & Related papers (2021-04-12T23:14:41Z)
- Few-Shot Visual Grounding for Natural Human-Robot Interaction [0.0]
We propose a software architecture that segments a target object from a crowded scene, indicated verbally by a human user.
At the core of our system, we employ a multi-modal deep neural network for visual grounding.
We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets.
arXiv Detail & Related papers (2021-03-17T15:24:02Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
- A Few Shot Adaptation of Visual Navigation Skills to New Observations using Meta-Learning [12.771506155747893]
We introduce a learning algorithm that enables rapid adaptation to new sensor configurations or target objects with a few shots.
Our experiments show that our algorithm adapts the learned navigation policy with only three shots for unseen situations.
arXiv Detail & Related papers (2020-11-06T21:53:52Z)
- Deep Reinforcement learning for real autonomous mobile robot navigation in indoor environments [0.0]
We present our proof of concept for autonomous self-learning robot navigation in an unknown environment for a real robot without a map or planner.
The input for the robot is only the fused data from a 2D laser scanner and an RGB-D camera, as well as the orientation to the goal.
The output actions of an Asynchronous Advantage Actor-Critic network (GA3C) are the linear and angular velocities for the robot.
arXiv Detail & Related papers (2020-05-28T09:15:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.