Real Time Incremental Foveal Texture Mapping for Autonomous Vehicles
- URL: http://arxiv.org/abs/2101.06393v1
- Date: Sat, 16 Jan 2021 07:41:24 GMT
- Title: Real Time Incremental Foveal Texture Mapping for Autonomous Vehicles
- Authors: Ashish Kumar, James R. McBride, Gaurav Pandey
- Abstract summary: The generated detailed map serves as a virtual test bed for various vision and planning algorithms.
It can also serve as a background map in computer games.
- Score: 11.702817783491616
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose an end-to-end real-time framework to generate a high-resolution,
graphics-grade textured 3D map of an urban environment. The generated detailed map
finds application in the precise localization and navigation of autonomous
vehicles. It can also serve as a virtual test bed for various vision and
planning algorithms, as well as a background map in computer games. In this
paper, we focus on two important issues: (i) incrementally generating a map
with a coherent 3D surface in real time, and (ii) preserving the quality of the
color texture. To address these issues, we first perform a pose-refinement
procedure that leverages camera image information, Delaunay triangulation, and
existing scan-matching techniques to produce a high-resolution 3D map from the
sparse input LIDAR scan. This 3D map is then texturized and accumulated using a
novel ray-filtering technique that handles occlusion and inconsistencies in the
pose refinement. Further, inspired by the human fovea, we introduce foveal
processing, which significantly reduces computation time and also helps
ray-filtering maintain consistency in the color texture and coherency in the 3D
surface of the output map. Moreover, we introduce the texture error (TE) and
the mean texture mapping error (MTME), which provide quantitative measures of
texturing and of the overall quality of the textured maps.
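The pose-refinement stage densifies the sparse LIDAR scan into a coherent surface using camera information, Delaunay triangulation, and existing scan matching. As a rough illustration of the triangulation step only, the sketch below projects LIDAR points into the camera image plane and triangulates them there, so the mesh connectivity follows the camera's view of the scene. This is a minimal sketch, not the authors' implementation: the names `triangulate_scan`, `points_cam`, and `K` are assumptions, and scipy's generic 2D Delaunay routine stands in for whatever the paper actually uses.

```python
# Minimal sketch: mesh a sparse LIDAR scan by triangulating its
# image-plane projection (names and parameters are illustrative).
import numpy as np
from scipy.spatial import Delaunay

def triangulate_scan(points_cam, K):
    """points_cam: (N, 3) LIDAR points in the camera frame.
    K: (3, 3) camera intrinsics.
    Returns the kept points and an (M, 3) triangle index array."""
    pts = points_cam[points_cam[:, 2] > 0.1]   # drop points behind the camera

    # Perspective projection to pixel coordinates.
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # 2D Delaunay triangulation in the image plane; the resulting
    # connectivity respects the camera's view of the surface.
    return pts, Delaunay(uv).simplices

# Synthetic points standing in for one LIDAR scan.
rng = np.random.default_rng(0)
points = rng.uniform([-5.0, -2.0, 4.0], [5.0, 2.0, 30.0], size=(500, 3))
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
verts, faces = triangulate_scan(points, K)
print(verts.shape, faces.shape)
```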
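Ray-filtering decides which camera pixels are allowed to color which parts of the accumulated map in the presence of occlusion and residual pose error. The paper's exact filter is not reproduced here; the sketch below substitutes a plain z-buffer visibility test, under which a point receives color only if no closer geometry projects to the same pixel. The function name and the depth tolerance `tol` are assumptions.

```python
# Hedged stand-in for ray-filtering: a z-buffer visibility test.
import numpy as np

def filter_occluded(points_cam, K, img_shape, tol=0.05):
    """Boolean mask of points visible from the camera.
    points_cam: (N, 3) points in the camera frame; K: (3, 3) intrinsics;
    img_shape: (H, W) of the camera image."""
    h, w = img_shape
    visible = np.zeros(len(points_cam), dtype=bool)

    in_front = points_cam[:, 2] > 0.1
    uv = (K @ points_cam[in_front].T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    z = points_cam[in_front, 2]

    in_img = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_front)[in_img]
    u, v, z = u[in_img], v[in_img], z[in_img]

    # Depth buffer: nearest depth seen at each pixel.
    zbuf = np.full((h, w), np.inf)
    np.minimum.at(zbuf, (v, u), z)

    # Visible iff the point's depth matches the buffer within the tolerance.
    visible[idx] = z <= zbuf[v, u] + tol
    return visible

# Two points along one ray: the nearer is kept, the farther is filtered.
pts = np.array([[0.0, 0.0, 5.0], [0.0, 0.0, 9.0]])
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
print(filter_occluded(pts, K, (480, 640)))   # [ True False]
```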
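Foveal processing mirrors the human eye: full resolution is spent only on a central window while the periphery is handled coarsely, which is where the savings in computation time come from. The two-level split below is a simplified illustration rather than the authors' scheme; `fovea_frac` and `periphery_stride` are invented parameters.

```python
# Simplified two-level foveation of a camera image (parameters invented).
import numpy as np

def foveal_levels(img, fovea_frac=0.4, periphery_stride=4):
    """Return a full-resolution central crop and a subsampled periphery.
    img: (H, W, 3) camera image; fovea_frac: side fraction of the central
    window; periphery_stride: subsampling stride outside it."""
    h, w = img.shape[:2]
    fh, fw = int(h * fovea_frac), int(w * fovea_frac)
    top, left = (h - fh) // 2, (w - fw) // 2

    fovea = img[top:top + fh, left:left + fw]                 # full resolution
    periphery = img[::periphery_stride, ::periphery_stride]   # coarse copy
    return fovea, periphery

img = np.zeros((720, 1280, 3), dtype=np.uint8)
fovea, periphery = foveal_levels(img)
print(fovea.shape, periphery.shape)   # (288, 512, 3) (180, 320, 3)
```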
Related papers
- Texture-GS: Disentangling the Geometry and Texture for 3D Gaussian Splatting Editing [79.10630153776759]
3D Gaussian splatting, an emerging and groundbreaking approach, has drawn increasing attention for its high-fidelity reconstruction and real-time rendering capabilities.
We propose a novel approach, Texture-GS, which disentangles appearance from geometry by representing the appearance as a 2D texture mapped onto the 3D surface.
Our method not only facilitates high-fidelity appearance editing but also achieves real-time rendering on consumer-level devices.
arXiv Detail & Related papers (2024-03-15T06:42:55Z)
- Neural Rendering based Urban Scene Reconstruction for Autonomous Driving [8.007494499012624]
We propose multimodal 3D scene reconstruction using a framework that combines neural implicit surfaces and radiance fields.
Dense 3D reconstruction has many applications in automated driving including automated annotation validation.
We demonstrate qualitative and quantitative results on challenging automotive scenes.
arXiv Detail & Related papers (2024-02-09T23:20:23Z) - GeoScaler: Geometry and Rendering-Aware Downsampling of 3D Mesh Textures [0.06990493129893112]
High-resolution texture maps are necessary for representing real-world objects accurately with 3D meshes.
GeoScaler is a method of downsampling texture maps of 3D meshes while incorporating geometric cues.
We show that textures generated by GeoScaler deliver significantly higher-quality rendered images than those produced by traditional downsampling methods.
arXiv Detail & Related papers (2023-11-28T07:55:25Z) - Directional Texture Editing for 3D Models [51.31499400557996]
ITEM3D is designed for automatic 3D object editing according to text instructions.
Leveraging diffusion models and differentiable rendering, ITEM3D uses rendered images as the bridge between text and the 3D representation.
arXiv Detail & Related papers (2023-09-26T12:01:13Z) - TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z) - High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z) - Large-Scale 3D Semantic Reconstruction for Automated Driving Vehicles
with Adaptive Truncated Signed Distance Function [9.414880946870916]
We propose a novel 3D reconstruction and semantic mapping system using LiDAR and camera sensors.
An adaptive truncated signed distance function (TSDF) is introduced to describe surfaces implicitly; it can handle varying LiDAR point sparsity.
An image patch selection strategy is proposed to estimate the optimal semantic class for each mesh triangle.
arXiv Detail & Related papers (2022-02-28T15:11:25Z) - Projective Urban Texturing [8.349665441428925]
We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
arXiv Detail & Related papers (2022-01-25T14:56:52Z) - Deep Hybrid Self-Prior for Full 3D Mesh Generation [57.78562932397173]
We propose to exploit a novel hybrid 2D-3D self-prior in deep neural networks to significantly improve the geometry quality.
In particular, we first generate an initial mesh using a 3D convolutional neural network with 3D self-prior, and then encode both 3D information and color information in the 2D UV atlas.
Our method recovers the 3D textured mesh model of high quality from sparse input, and outperforms the state-of-the-art methods in terms of both the geometry and texture quality.
arXiv Detail & Related papers (2021-08-18T07:44:21Z) - NeuTex: Neural Texture Mapping for Volumetric Neural Rendering [48.83181790635772]
We present an approach that explicitly disentangles geometry, represented as a continuous 3D volume, from appearance, represented as a continuous 2D texture map.
We demonstrate that this representation can be reconstructed using only multi-view image supervision and generates high-quality rendering results.
arXiv Detail & Related papers (2021-03-01T05:34:51Z)
- Vehicle Reconstruction and Texture Estimation Using Deep Implicit Semantic Template Mapping [32.580904361799966]
We introduce VERTEX, an effective solution for recovering the 3D shape and intrinsic texture of vehicles from uncalibrated monocular input.
By fusing global and local features, our approach can generate consistent and detailed texture in both visible and invisible areas.
arXiv Detail & Related papers (2020-11-30T09:27:10Z)