Early Recall, Late Precision: Multi-Robot Semantic Object Mapping under
Operational Constraints in Perceptually-Degraded Environments
- URL: http://arxiv.org/abs/2206.10062v1
- Date: Tue, 21 Jun 2022 01:11:42 GMT
- Title: Early Recall, Late Precision: Multi-Robot Semantic Object Mapping under
Operational Constraints in Perceptually-Degraded Environments
- Authors: Xianmei Lei, Taeyeon Kim, Nicolas Marchal, Daniel Pastor, Barry Ridge,
Frederik Schöller, Edward Terry, Fernando Chavez, Thomas Touma, Kyohei Otsu
and Ali Agha
- Abstract summary: We propose the Early Recall, Late Precision (EaRLaP) semantic object mapping pipeline to solve this problem.
EaRLaP was used by Team CoSTAR in the DARPA Subterranean Challenge, where it successfully detected all the artifacts encountered by the team of robots.
We discuss these results and the performance of EaRLaP on various datasets.
- Score: 47.917640567924174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic object mapping in uncertain, perceptually degraded environments
during long-range multi-robot autonomous exploration tasks such as
search-and-rescue is important and challenging. During such missions, high
recall is desirable to avoid missing true target objects and high precision is
also critical to avoid wasting valuable operational time on false positives.
Given recent advancements in visual perception algorithms, the former is
largely solvable autonomously, but the latter is difficult to address without
the supervision of a human operator. However, operational constraints such as
mission time, computational requirements, and mesh network bandwidth can make
the operator's task infeasible unless properly managed. We propose the Early
Recall, Late Precision (EaRLaP) semantic object mapping pipeline to solve this
problem. EaRLaP was used by Team CoSTAR in the DARPA Subterranean Challenge,
where it successfully detected all the artifacts encountered by the team of
robots. We discuss these results and the performance of EaRLaP on various
datasets.
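
The abstract describes the pipeline only at a high level. As a rough illustration (not the authors' implementation), the Python sketch below shows one way the two stages could fit together: a robot-side stage that keeps detections at a very low confidence threshold so true artifacts are rarely missed, and a bandwidth-capped queue that forwards only the most promising candidates to the human operator for the final precision decision. The Candidate fields, thresholds, and byte budget are illustrative assumptions.

```python
# Minimal sketch of an "early recall, late precision" flow: a low-threshold
# detector keeps nearly every candidate (early recall), and a bandwidth-capped
# review queue defers the final accept/reject decision to a human operator
# (late precision). Names, thresholds, and the budget are illustrative
# assumptions, not the paper's implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    robot_id: str
    label: str          # e.g. "backpack", "survivor"
    score: float        # detector confidence in [0, 1]
    image_bytes: int    # cost of sending the image crop over the mesh network
    position: tuple     # estimated (x, y, z) in the global map frame


def early_recall(raw_detections: List[Candidate],
                 recall_threshold: float = 0.05) -> List[Candidate]:
    """Keep almost everything: missing a true artifact is the costly error."""
    return [d for d in raw_detections if d.score >= recall_threshold]


def late_precision_queue(candidates: List[Candidate],
                         bandwidth_budget_bytes: int) -> List[Candidate]:
    """Forward the most promising candidates to the operator, highest
    confidence first, until this cycle's mesh-network budget is spent."""
    queued, spent = [], 0
    for cand in sorted(candidates, key=lambda c: c.score, reverse=True):
        if spent + cand.image_bytes > bandwidth_budget_bytes:
            continue  # defer low-priority candidates to a later cycle
        queued.append(cand)
        spent += cand.image_bytes
    return queued


# Example: three detections from one robot under a 120 kB budget.
dets = [
    Candidate("husky1", "backpack", 0.92, 60_000, (12.0, 3.5, 0.4)),
    Candidate("husky1", "rope",     0.31, 55_000, (40.2, 1.1, 0.0)),
    Candidate("husky1", "helmet",   0.08, 50_000, (22.7, 9.8, 0.2)),
]
for c in late_precision_queue(early_recall(dets), bandwidth_budget_bytes=120_000):
    print(f"send to operator: {c.label} ({c.score:.2f}) from {c.robot_id}")
```

Deferring the accept/reject decision in this way keeps the onboard autonomy cheap and spends the operator's time and the network's bandwidth only on candidates that are likely to matter.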
Related papers
- DeTra: A Unified Model for Object Detection and Trajectory Forecasting [68.85128937305697]
Our approach formulates the union of the two tasks as a trajectory refinement problem.
To tackle this unified task, we design a refinement transformer that infers the presence, pose, and multi-modal future behaviors of objects.
In our experiments, we observe that our model outperforms the state of the art on the Argoverse 2 Sensor and Open datasets.
arXiv Detail & Related papers (2024-06-06T18:12:04Z) - OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z) - Implicit Occupancy Flow Fields for Perception and Prediction in
Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory forecasting of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z) - Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation
around Non-Cooperative Targets [0.0]
This paper discusses how the combination of cameras and machine learning algorithms can achieve the relative navigation task.
The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Network (Faster R-CNN) and You Only Look Once (YOLOv5), is tested.
The paper discusses the path to implementing the feature recognition algorithms and integrating them into the spacecraft Guidance, Navigation and Control system.
arXiv Detail & Related papers (2023-01-22T04:53:38Z) - LiDAR-guided object search and detection in Subterranean Environments [12.265807098187297]
This work utilizes the complementary nature of vision and depth sensors to leverage multi-modal information to aid object detection at longer distances.
The proposed work has been thoroughly verified using an ANYmal quadruped robot in underground settings and on datasets collected during the DARPA Subterranean Challenge finals.
arXiv Detail & Related papers (2022-10-26T19:38:19Z) - Batch Exploration with Examples for Scalable Robotic Reinforcement
Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state space guided by a modest number of human-provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z) - Domain Adaptation for Outdoor Robot Traversability Estimation from RGB
data with Safety-Preserving Loss [12.697106921197701]
We present an approach based on deep learning to estimate and anticipate the traversing score of different routes in the field of view of an on-board RGB camera.
We then enhance the model's capabilities by addressing domain shifts through gradient-reversal unsupervised adaptation.
Experimental results show that our approach is able to satisfactorily identify traversable areas and to generalize to unseen locations.
arXiv Detail & Related papers (2020-09-16T09:19:33Z) - Planning to Explore via Self-Supervised World Models [120.31359262226758]
Plan2Explore is a self-supervised reinforcement learning agent.
We present a new approach to self-supervised exploration and fast adaptation to new tasks.
Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods.
arXiv Detail & Related papers (2020-05-12T17:59:45Z)