Online Localisation and Colored Mesh Reconstruction Architecture for 3D
Visual Feedback in Robotic Exploration Missions
- URL: http://arxiv.org/abs/2207.10489v1
- Date: Thu, 21 Jul 2022 14:09:43 GMT
- Title: Online Localisation and Colored Mesh Reconstruction Architecture for 3D
Visual Feedback in Robotic Exploration Missions
- Authors: Quentin Serdel, Christophe Grand, Julien Marzat and Julien Moras
- Abstract summary: This paper introduces an Online Localisation and Colored Mesh Reconstruction (OLCMR) ROS perception architecture for ground exploration robots.
It is intended to be used by a remote human operator to easily visualise the mapped environment during or after the mission.
- Score: 2.8213955186000512
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper introduces an Online Localisation and Colored Mesh Reconstruction (OLCMR) ROS perception architecture for ground exploration robots, aiming to perform robust Simultaneous Localisation And Mapping (SLAM) in challenging unknown environments and to provide an associated colored 3D mesh representation in real time. It is intended to be used by a remote human operator to easily visualise the mapped environment during or after the mission, or as a development base for further research in the field of exploration robotics. The architecture is mainly composed of carefully selected open-source ROS implementations of a LiDAR-based SLAM algorithm alongside a colored surface reconstruction procedure that uses the point cloud and RGB camera images projected into 3D space. Overall performance is evaluated on the Newer College handheld LiDAR-Vision reference dataset and on two experimental trajectories gathered on board representative wheeled robots in urban and countryside outdoor environments, respectively.
- Index Terms: Field Robots, Mapping, SLAM, Colored Surface Reconstruction
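The heart of the colored reconstruction stage is the projection of RGB pixels onto LiDAR points before meshing. The sketch below illustrates that general projection step, assuming a pinhole camera with intrinsics `K` and a camera-from-LiDAR extrinsic `T_cam_lidar` obtained from calibration; it is an illustration of the technique, not the authors' ROS implementation.
```python
import numpy as np

def colorize_point_cloud(points_lidar, image, K, T_cam_lidar):
    """Assign an RGB color to each LiDAR point by projecting it into a
    calibrated camera image (pinhole model, distortion ignored)."""
    n = points_lidar.shape[0]
    # Transform points from the LiDAR frame into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    colors = np.zeros((n, 3), dtype=np.uint8)
    valid = pts_cam[:, 2] > 0.1          # keep points in front of the camera
    # Perspective projection of the remaining points onto the image plane.
    uv = (K @ pts_cam[valid].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(valid)[inside]
    colors[idx] = image[uv[inside, 1], uv[inside, 0]]
    return colors, idx                   # per-point colors + indices actually colored
```
Points colored this way can then feed any surface-reconstruction backend to obtain the textured mesh shown to the remote operator.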
Related papers
- Advances in Feed-Forward 3D Reconstruction and View Synthesis: A Survey [154.50661618628433]
3D reconstruction and view synthesis are foundational problems in computer vision, graphics, and immersive technologies such as augmented reality (AR), virtual reality (VR), and digital twins. Recent advances in feed-forward approaches, driven by deep learning, have revolutionized this field by enabling fast and generalizable 3D reconstruction and view synthesis.
arXiv Detail & Related papers (2025-07-19T06:13:25Z)
- Render and Diffuse: Aligning Image and Action Spaces for Diffusion-based Behaviour Cloning [15.266994159289645]
We introduce Render and Diffuse (R&D), a method that unifies low-level robot actions and RGB observations within the image space using virtual renders of the robot's 3D model.
This space unification simplifies the learning problem and introduces inductive biases that are crucial for sample efficiency and spatial generalisation.
Our results show that R&D exhibits strong spatial generalisation capabilities and is more sample efficient than more common image-to-action methods.
arXiv Detail & Related papers (2024-05-28T14:06:10Z)
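A minimal illustration of R&D's space-unification idea, under the simplifying assumption that an action is summarized by candidate 3D end-effector positions rather than a full virtual render of the robot model; the function and parameter names (`render_action_image`, `T_cam_world`) are hypothetical.
```python
import numpy as np

def render_action_image(ee_positions_world, K, T_cam_world, h, w):
    """Rasterize candidate 3D end-effector positions into a one-channel
    'action image' aligned with the RGB observation: a much-simplified
    stand-in for R&D's virtual renders of the full robot model."""
    n = ee_positions_world.shape[0]
    pts_h = np.hstack([ee_positions_world, np.ones((n, 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]        # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    canvas = np.zeros((h, w), dtype=np.float32)
    for u, v in uv:
        if 0 <= u < w and 0 <= v < h:
            canvas[v, u] = 1.0                  # mark the projected action
    return canvas                               # stack with RGB as an extra channel
```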
- NSLF-OL: Online Learning of Neural Surface Light Fields alongside Real-time Incremental 3D Reconstruction [0.76146285961466]
The paper proposes a novel Neural Surface Light Fields model that copes with the small range of view directions while producing good results in unseen directions.
Our model learns the Neural Surface Light Fields (NSLF) online, alongside real-time 3D reconstruction, with a sequential data stream as the shared input.
In addition to online training, our model also provides real-time rendering after completing the data stream for visualization.
arXiv Detail & Related papers (2023-04-29T15:41:15Z)
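A toy sketch of the online-learning idea behind NSLF-OL: a small MLP mapping a surface point and a view direction to RGB, optimized on a sequential stream. The architecture and the random stand-in data are illustrative assumptions, not the paper's model.
```python
import torch
import torch.nn as nn

class SurfaceLightField(nn.Module):
    """Tiny MLP from (surface point, view direction) to RGB: a minimal
    stand-in for a neural surface light field."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())  # RGB in [0, 1]

    def forward(self, xyz, view_dir):
        return self.net(torch.cat([xyz, view_dir], dim=-1))

# Online training on a sequential stream (random data stands in for
# surface samples coming from the live 3D reconstruction):
model = SurfaceLightField()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    xyz, view_dir, rgb = torch.rand(64, 3), torch.rand(64, 3), torch.rand(64, 3)
    loss = nn.functional.mse_loss(model(xyz, view_dir), rgb)
    opt.zero_grad(); loss.backward(); opt.step()
```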
- Object Goal Navigation Based on Semantics and RGB Ego View [9.702784248870522]
This paper presents an architecture and methodology to empower a service robot to navigate an indoor environment with semantic decision making, given an RGB ego view.
The robot navigates based on GeoSem map - a relational combination of geometric and semantic map.
The presented approach was found to outperform human users in gamified evaluations with respect to average completion time.
arXiv Detail & Related papers (2022-10-20T19:23:08Z)
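The summary above does not detail the GeoSem map, but a minimal plausible structure combining a geometric occupancy layer with per-cell semantic label counts might look as follows; all names are hypothetical.
```python
from dataclasses import dataclass, field

@dataclass
class GeoSemCell:
    """One cell of a hypothetical geometric-semantic grid map: occupancy
    from the geometric layer plus an accumulated semantic label count."""
    occupied: bool = False
    label_counts: dict = field(default_factory=dict)

    def observe(self, label: str):
        self.label_counts[label] = self.label_counts.get(label, 0) + 1

    def dominant_label(self):
        if not self.label_counts:
            return None
        return max(self.label_counts, key=self.label_counts.get)

class GeoSemMap:
    def __init__(self, width: int, height: int):
        self.grid = [[GeoSemCell() for _ in range(width)] for _ in range(height)]

    def cells_with(self, label: str):
        """Relational query: all cells whose dominant semantics match."""
        return [(x, y) for y, row in enumerate(self.grid)
                for x, c in enumerate(row) if c.dominant_label() == label]
```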
- Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from Depth Maps [66.24554680709417]
Knowing the exact 3D location of workers and robots in a collaborative environment enables several real-world applications.
We propose a non-invasive framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera.
arXiv Detail & Related papers (2022-07-06T08:52:12Z)
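A much-simplified stand-in for heatmap-based 3D pose decoding: per-joint 2D argmax on predicted heatmaps, a depth lookup, and pinhole back-projection. The paper's semi-perspective decoupled formulation is more elaborate; this only sketches the underlying recipe.
```python
import numpy as np

def decode_joints_3d(heatmaps, depth_map, K):
    """Recover rough 3D joint positions from per-joint heatmaps (J, H, W)
    and a metric depth map, via argmax + back-projection with intrinsics K."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    joints = []
    for hm in heatmaps:
        v, u = np.unravel_index(np.argmax(hm), hm.shape)  # heatmap peak
        z = depth_map[v, u]                               # depth at the peak
        joints.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.array(joints)                               # (J, 3) camera-frame coords
```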
- Beyond Visual Field of View: Perceiving 3D Environment with Echoes and Vision [51.385731364529306]
This paper focuses on perceiving and navigating 3D environments using echoes and RGB images.
In particular, we perform depth estimation by fusing the RGB image with echoes received from multiple orientations.
We show that the echoes provide holistic and inexpensive information about the 3D structures, complementing the RGB image.
arXiv Detail & Related papers (2022-07-03T22:31:47Z)
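A hedged sketch of one plausible fusion scheme for the above: encode the RGB image and stacked echo spectrograms separately, then decode a depth map from the concatenated features. Layer sizes and the assumption that spectrograms are resized to the image resolution are illustrative, not the paper's architecture.
```python
import torch
import torch.nn as nn

class EchoRGBDepth(nn.Module):
    """Minimal late-fusion sketch: separate RGB and echo encoders,
    feature concatenation, and a small decoder producing depth."""
    def __init__(self, echo_channels=4):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.echo_enc = nn.Sequential(nn.Conv2d(echo_channels, 16, 3, 2, 1), nn.ReLU(),
                                      nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1))  # one-channel depth map

    def forward(self, rgb, echoes):
        # echoes: spectrograms from multiple orientations, stacked as
        # channels and resized to the image resolution (an assumption).
        f = torch.cat([self.rgb_enc(rgb), self.echo_enc(echoes)], dim=1)
        return self.decoder(f)
```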
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the model estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
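As a point of reference for what such a learned model replaces, here is a classical 2.5D elevation-map update that accumulates depth-derived points along the trajectory; the function and parameter names are hypothetical.
```python
import numpy as np

def update_elevation_map(height_grid, points_world, origin, resolution):
    """Accumulate world-frame points from the depth stream into a 2.5D
    elevation grid (max height per cell), a classical baseline for
    learned terrain reconstruction."""
    idx = np.floor((points_world[:, :2] - origin) / resolution).astype(int)
    h, w = height_grid.shape
    ok = (idx[:, 0] >= 0) & (idx[:, 0] < w) & (idx[:, 1] >= 0) & (idx[:, 1] < h)
    # Keep the highest observed z per cell (unbuffered in-place maximum).
    np.maximum.at(height_grid, (idx[ok, 1], idx[ok, 0]), points_world[ok, 2])
    return height_grid
```
Such a baseline degrades exactly where the paper targets improvement: noisy measurements and the blind spots of the camera arrangement leave cells stale or empty.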
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of the three extensions proposed in the paper provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
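Urban Radiance Fields builds on the standard NeRF volume-rendering quadrature, sketched below for a single ray; this is the generic rendering step, not the paper's outdoor-specific extensions.
```python
import numpy as np

def render_ray(rgb, sigma, deltas):
    """Standard NeRF volume-rendering quadrature along one ray:
    N samples with color rgb (N, 3), density sigma (N,), and
    sample spacings deltas (N,)."""
    alpha = 1.0 - np.exp(-sigma * deltas)            # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1] + 1e-10)))
    weights = alpha * trans                          # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)      # composited pixel color
```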
- Reconstructing Interactive 3D Scenes by Panoptic Mapping and CAD Model Alignments [81.38641691636847]
We rethink the problem of scene reconstruction from an embodied agent's perspective.
We reconstruct an interactive scene using RGB-D data stream.
The reconstruction replaces the object meshes in the dense panoptic map with part-based articulated CAD models.
arXiv Detail & Related papers (2021-03-30T05:56:58Z)
- SPSG: Self-Supervised Photometric Scene Generation from RGB-D Scans [34.397726189729994]
SPSG is a novel approach to generate high-quality, colored 3D models of scenes from RGB-D scan observations.
Our self-supervised approach learns to jointly inpaint geometry and color by correlating an incomplete RGB-D scan with a more complete version of that scan.
arXiv Detail & Related papers (2020-06-25T18:58:23Z)
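A minimal sketch of SPSG's self-supervision recipe as summarized above: hide part of a more complete RGB-D fusion grid to form the network input, and supervise the inpainted prediction only on the hidden region. The grid layout and all names are assumptions.
```python
import numpy as np

def self_supervised_pair(complete_grid, drop_fraction=0.3, rng=None):
    """Build a training pair: randomly erase voxels of a (more) complete
    fusion grid (D, H, W, C) so the hidden region supervises inpainting."""
    rng = rng or np.random.default_rng()
    mask = rng.random(complete_grid.shape[:3]) < drop_fraction  # voxels to hide
    incomplete = complete_grid.copy()
    incomplete[mask] = 0.0                  # erased geometry/color = network input
    return incomplete, complete_grid, mask  # input, target, supervision mask

def masked_l1(pred, target, mask):
    """Reconstruction loss evaluated only on the hidden voxels."""
    return np.abs(pred[mask] - target[mask]).mean()
```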
- Transferable Active Grasping and Real Embodied Dataset [48.887567134129306]
We show how to search for feasible viewpoints for grasping using hand-mounted RGB-D cameras.
A practical three-stage transferable active grasping pipeline is developed that is adaptive to unseen cluttered scenes.
In our pipeline, we propose a novel mask-guided reward to overcome the sparse reward issue in grasping and ensure category-irrelevant behavior.
arXiv Detail & Related papers (2020-04-28T08:15:35Z)
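A hedged sketch of a mask-guided shaping reward in the spirit of the entry above: dense overlap between the target object's segmentation mask and the region the gripper would cover, plus a sparse success bonus. The summary does not give the exact reward definition, so all names and the weighting are assumptions.
```python
import numpy as np

def mask_guided_reward(target_mask, grasp_region, success, w=0.1):
    """Dense IoU shaping between the object mask and the gripper's
    footprint, densifying an otherwise sparse grasp-success reward."""
    inter = np.logical_and(target_mask, grasp_region).sum()
    union = np.logical_or(target_mask, grasp_region).sum()
    iou = inter / union if union > 0 else 0.0
    return w * iou + (1.0 if success else 0.0)
```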
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.