Asynchronous Collaborative Localization by Integrating Spatiotemporal
Graph Learning with Model-Based Estimation
- URL: http://arxiv.org/abs/2111.03751v1
- Date: Fri, 5 Nov 2021 22:48:13 GMT
- Title: Asynchronous Collaborative Localization by Integrating Spatiotemporal
Graph Learning with Model-Based Estimation
- Authors: Peng Gao, Brian Reily, Rui Guo, Hongsheng Lu, Qingzhao Zhu and Hao
Zhang
- Abstract summary: Collaborative localization is an essential capability for a team of robots such as connected vehicles to collaboratively estimate object locations.
To enable collaborative localization, four key challenges must be addressed, including modeling complex relationships between observed objects.
We introduce a novel approach that integrates an uncertainty-aware graph learning model with model-based state estimation for a team of robots to collaboratively localize objects.
- Score: 22.63837164001751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative localization is an essential capability for a team of robots
such as connected vehicles to collaboratively estimate object locations from
multiple perspectives through reliable cooperation. To enable collaborative
localization, four key challenges must be addressed, including modeling complex
relationships between observed objects, fusing observations from an arbitrary
number of collaborating robots, quantifying localization uncertainty, and
addressing latency of robot communications. In this paper, we introduce a novel
approach that integrates uncertainty-aware spatiotemporal graph learning and
model-based state estimation for a team of robots to collaboratively localize
objects. Specifically, we introduce a new uncertainty-aware graph learning
model that learns spatiotemporal graphs to represent historical motions of the
objects observed by each robot over time and provides uncertainties in object
localization. Moreover, we propose a novel method for integrated learning and
model-based state estimation, which fuses asynchronous observations obtained
from an arbitrary number of robots for collaborative localization. We evaluate
our approach in two collaborative object localization scenarios in simulations
and on real robots. Experimental results show that our approach outperforms
previous methods and achieves state-of-the-art performance on asynchronous
collaborative localization.
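The abstract describes fusing asynchronous observations from an arbitrary number of robots, with per-observation uncertainties supplied by a learned model. The paper's exact formulation is not reproduced here; the following is a minimal generic sketch of that idea using a constant-velocity Kalman filter, where each robot's observation arrives with its own timestamp and covariance (standing in for the uncertainty the graph learning model would provide). All variable names and the two-robot observation set are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def predict(x, P, F, Q):
    """Model-based motion prediction (constant-velocity model)."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, R):
    """Fuse one position observation z with covariance R,
    e.g. an uncertainty estimate from a learned localization model."""
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])  # observe position only
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# State: [px, py, vx, vy]. Asynchrony is handled by predicting
# forward over each observation's own time gap before updating.
x = np.array([0.0, 0.0, 1.0, 0.0])
P = np.eye(4)
Q = 0.01 * np.eye(4)

# (timestamp, observed position, observation covariance), two robots
obs = [(0.5, np.array([0.52, 0.01]), 0.05 * np.eye(2)),
       (1.2, np.array([1.18, -0.02]), 0.10 * np.eye(2))]

t = 0.0
for t_obs, z, R in sorted(obs, key=lambda o: o[0]):  # time order
    dt = t_obs - t
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                 # constant-velocity transition
    x, P = predict(x, P, F, Q * dt)
    x, P = update(x, P, z, R)
    t = t_obs
```

Because the filter processes each observation at its own timestamp, any number of robots can contribute measurements without synchronized clocks on the observation side; less certain observations (larger R) are automatically down-weighted by the gain.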
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models in terms of objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- Robust Collaborative Perception without External Localization and Clock Devices [52.32342059286222]
A consistent spatial-temporal coordination across multiple agents is fundamental for collaborative perception.
Traditional methods depend on external devices to provide localization and clock signals.
We propose a novel approach: aligning by recognizing the inherent geometric patterns within the perceptual data of various agents.
arXiv Detail & Related papers (2024-05-05T15:20:36Z)
- Structured Cooperative Learning with Graphical Model Priors [98.53322192624594]
We study how to train personalized models for different tasks on decentralized devices with limited local data.
We propose "Structured Cooperative Learning (SCooL)", in which a cooperation graph across devices is generated by a graphical model.
We evaluate SCooL and compare it with existing decentralized learning methods on an extensive set of benchmarks.
arXiv Detail & Related papers (2023-06-16T02:41:31Z)
- SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding [34.19666841489646]
We show how a robot can autonomously discover novel semantic classes and improve accuracy on known classes when exploring an unknown environment.
We develop a general framework for mapping and clustering that we then use to generate a self-supervised learning signal to update a semantic segmentation model.
In particular, we show how clustering parameters can be optimized during deployment and that fusion of multiple observation modalities improves novel object discovery compared to prior work.
arXiv Detail & Related papers (2022-06-21T18:41:51Z)
- Skeleton-Based Mutually Assisted Interacted Object Localization and Human Action Recognition [111.87412719773889]
We propose a joint learning framework for "interacted object localization" and "human action recognition" based on skeleton data.
Our method achieves the best or competitive performance with the state-of-the-art methods for human action recognition.
arXiv Detail & Related papers (2021-10-28T10:09:34Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Generating Annotated Training Data for 6D Object Pose Estimation in Operational Environments with Minimal User Interaction [1.0044401320520304]
We present a proof of concept for a novel approach of autonomously generating annotated training data for 6D object pose estimation.
This approach is designed for learning new objects in operational environments while requiring little interaction and no expertise on the part of the user.
arXiv Detail & Related papers (2021-03-17T14:46:21Z)
- Structured Prediction for CRiSP Inverse Kinematics Learning with Misspecified Robot Models [39.513301957826435]
We introduce a structured prediction algorithm that combines a data-driven strategy with a forward kinematics function.
The proposed approach ensures that predicted joint configurations are well within the robot's constraints.
arXiv Detail & Related papers (2021-02-25T15:39:33Z)
- Multi-view Sensor Fusion by Integrating Model-based Estimation and Graph Learning for Collaborative Object Localization [22.57544305097723]
Collaborative object localization aims to collaboratively estimate locations of objects observed from multiple views or perspectives.
To enable collaborative localization, several model-based state estimation and learning-based localization methods have been developed.
We introduce a novel graph filter approach that integrates graph learning and model-based estimation to perform multi-view sensor fusion.
arXiv Detail & Related papers (2020-11-16T03:33:28Z)
- Robust Unsupervised Learning of Temporal Dynamic Interactions [21.928675010305543]
In this paper we introduce a model-free metric based on the Procrustes distance for robust representation learning of interactions.
We also introduce an optimal transport based distance metric for comparing between distributions of interaction primitives.
Their usefulness is demonstrated in unsupervised learning of vehicle-to-vehicle interactions extracted from the Safety Pilot database.
arXiv Detail & Related papers (2020-06-18T02:39:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences of its use.