A Collaborative Visual SLAM Framework for Service Robots
- URL: http://arxiv.org/abs/2102.03228v1
- Date: Fri, 5 Feb 2021 15:19:07 GMT
- Title: A Collaborative Visual SLAM Framework for Service Robots
- Authors: Ming Ouyang, Xuesong Shi, Yujie Wang, Yuxin Tian, Yingzhe Shen, Dawei
Wang, Peng Wang
- Abstract summary: We present a collaborative visual simultaneous localization and mapping (SLAM) framework for service robots.
Each robot can register to an existing map, update the map, or build new maps, all with a unified interface and low computation and memory cost.
A landmark retrieval method is proposed to allow each robot to get nearby landmarks observed by others.
- Score: 14.41737199910213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid deployment of service robots, a method is needed that
allows multiple robots working in the same place to collaborate and share
spatial information. To this end, we present a collaborative visual
simultaneous localization and mapping (SLAM) framework particularly designed
for service robot scenarios. With an edge server maintaining a map database and
performing global optimization, each robot can register to an existing map,
update the map, or build new maps, all with a unified interface and low
computation and memory cost. To enable real-time information sharing, an
efficient landmark retrieval method is proposed to allow each robot to get
nearby landmarks observed by others. The framework is general enough to support
both RGB-D and monocular cameras, as well as robots with multiple cameras,
taking the rigid constraints between cameras into consideration. The proposed
framework has been fully implemented and verified with public datasets and live
experiments.
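The abstract describes the architecture only at a high level: an edge server holding a map database, a unified interface through which robots register to, update, or create maps, and a landmark retrieval query. As a reading aid, below is a minimal Python sketch of what such an interface could look like. All names (EdgeMapServer, Landmark, retrieve_nearby) are hypothetical and not from the paper, and the naive radius query only stands in for the paper's efficient retrieval method.
```python
import math
from dataclasses import dataclass, field

# Hypothetical sketch of the abstract's client-server design; none of these
# names come from the paper. A robot attaches to (or creates) a map on the
# edge server, pushes new landmarks, and pulls nearby landmarks that other
# robots have observed.

@dataclass
class Landmark:
    landmark_id: int
    position: tuple        # (x, y, z) in the shared map frame
    descriptor: bytes      # appearance descriptor used for matching

@dataclass
class MapRecord:
    map_id: int
    landmarks: dict = field(default_factory=dict)  # landmark_id -> Landmark

class EdgeMapServer:
    """Toy stand-in for the edge server that maintains the map database."""

    def __init__(self):
        self._maps = {}
        self._next_map_id = 0

    def create_map(self) -> int:
        # "Build new maps": start an empty map and hand its id to the robot.
        map_id = self._next_map_id
        self._next_map_id += 1
        self._maps[map_id] = MapRecord(map_id)
        return map_id

    def register(self, map_id: int) -> MapRecord:
        # "Register to an existing map": the robot relocalizes against it.
        return self._maps[map_id]

    def update(self, map_id: int, new_landmarks: list):
        # "Update the map": robots push newly triangulated landmarks; the
        # server would also run global optimization asynchronously.
        record = self._maps[map_id]
        for lm in new_landmarks:
            record.landmarks[lm.landmark_id] = lm

    def retrieve_nearby(self, map_id: int, position: tuple, radius: float):
        # Landmark retrieval: return landmarks near the robot's current pose
        # estimate. This linear scan is a placeholder for the paper's
        # efficient retrieval method.
        return [
            lm for lm in self._maps[map_id].landmarks.values()
            if math.dist(position, lm.position) <= radius
        ]

if __name__ == "__main__":
    server = EdgeMapServer()
    map_id = server.create_map()
    server.update(map_id, [Landmark(1, (0.5, 0.0, 0.0), b"desc")])
    print(server.retrieve_nearby(map_id, (0.0, 0.0, 0.0), radius=1.0))
```
In a real deployment the same calls would go over the network, and retrieval would be backed by a spatial index such as a voxel grid or k-d tree rather than a linear scan.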
Related papers
- PRISM-TopoMap: Online Topological Mapping with Place Recognition and Scan Matching [42.74395278382559]
This paper introduces PRISM-TopoMap -- a topological mapping method that maintains a graph of locally aligned locations.
The proposed method involves learnable multimodal place recognition paired with the scan matching pipeline for localization and loop closure.
We conduct a broad experimental evaluation of the suggested approach in a range of photo-realistic environments and on a real robot.
arXiv Detail & Related papers (2024-04-02T06:25:16Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Active Visual Localization for Multi-Agent Collaboration: A Data-Driven Approach [47.373245682678515]
This work investigates how active visual localization can be used to overcome challenges of viewpoint changes.
Specifically, we focus on the problem of selecting the optimal viewpoint at a given location.
The results demonstrate the superior performance of the data-driven approach compared to existing methods.
arXiv Detail & Related papers (2023-10-04T08:18:30Z)
- ExAug: Robot-Conditioned Navigation Policies via Geometric Experience Augmentation [73.63212031963843]
We propose a novel framework, ExAug, to augment the experiences of different robot platforms from multiple datasets in diverse environments.
The trained policy is evaluated on two new robot platforms with three different cameras in indoor and outdoor environments with obstacles.
arXiv Detail & Related papers (2022-10-14T01:32:15Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- Edge Robotics: Edge-Computing-Accelerated Multi-Robot Simultaneous Localization and Mapping [22.77685685539304]
RecSLAM is a multi-robot laser SLAM system that focuses on accelerating the map construction process under the robot-edge-cloud architecture.
In contrast to conventional multi-robot SLAM that generates graphic maps on robots and completely merges them on the cloud, RecSLAM develops a hierarchical map fusion technique.
Extensive evaluations show RecSLAM can achieve up to 39% processing latency reduction over the state-of-the-art.
arXiv Detail & Related papers (2021-12-25T10:40:49Z)
- Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems [92.26462290867963]
Kimera-Multi is the first multi-robot system that is robust and capable of identifying and rejecting incorrect inter- and intra-robot loop closures.
We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking datasets, and challenging outdoor datasets collected using ground robots.
arXiv Detail & Related papers (2021-06-28T03:56:40Z)
- Single-view robot pose and joint angle estimation via render & compare [40.05546237998603]
We introduce RoboPose, a method to estimate the joint angles and the 6D camera-to-robot pose of a known articulated robot from a single RGB image.
This is an important problem to grant mobile and itinerant autonomous systems the ability to interact with other robots.
arXiv Detail & Related papers (2021-04-19T14:48:29Z)
- Kimera-Multi: a System for Distributed Multi-Robot Metric-Semantic Simultaneous Localization and Mapping [57.173793973480656]
We present the first fully distributed multi-robot system for dense metric-semantic SLAM.
Our system, dubbed Kimera-Multi, is implemented by a team of robots equipped with visual-inertial sensors.
Kimera-Multi builds a 3D mesh model of the environment in real-time, where each face of the mesh is annotated with a semantic label.
arXiv Detail & Related papers (2020-11-08T21:38:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.