Large-scale and Efficient Texture Mapping Algorithm via Loopy Belief
Propagation
- URL: http://arxiv.org/abs/2305.04763v1
- Date: Mon, 8 May 2023 15:11:28 GMT
- Title: Large-scale and Efficient Texture Mapping Algorithm via Loopy Belief
Propagation
- Authors: Xiao Ling, Rongjun Qin
- Abstract summary: A texture mapping algorithm must be able to efficiently select views, fuse and map textures from these views to mesh models.
Existing approaches achieve efficiency either by limiting the number of images to one view per face, or simplifying global inferences to only achieve local color consistency.
This paper proposes a novel and efficient texture mapping framework that allows the use of multiple views of texture per face.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Texture mapping as a fundamental task in 3D modeling has been well
established for well-acquired aerial assets under consistent illumination, yet
it remains a challenge when it is scaled to large datasets with images under
varying views and illuminations. A well-performed texture mapping algorithm
must be able to efficiently select views, fuse and map textures from these
views to mesh models, at the same time, achieve consistent radiometry over the
entire model. Existing approaches achieve efficiency either by limiting the
number of images to one view per face, or simplifying global inferences to only
achieve local color consistency. In this paper, we break this trade-off by
proposing a novel and efficient texture mapping framework that allows multiple
texture views per face while achieving global color consistency. The proposed
method leverages a loopy belief propagation algorithm to perform efficient,
global-level probabilistic inference that ranks candidate views per face,
enabling face-level multi-view texture fusion and blending. Being
non-parametric, the texture fusion algorithm offers a further advantage over
typical parametric post-hoc color correction methods: improved robustness to
non-linear illumination differences. Experiments on three types of datasets
(satellite, unmanned aerial vehicle, and close-range) show that the proposed
method produces visually pleasant and texturally consistent results in all
scenarios while requiring less running time than state-of-the-art methods,
especially on large-scale data such as satellite-derived models.
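The abstract's core inference step, ranking candidate texture views per mesh face via loopy belief propagation, can be illustrated as min-sum message passing on the face-adjacency graph. This is a hedged sketch, not the paper's implementation: the unary view costs, the Potts smoothness penalty, and all function and variable names below are assumptions for illustration only.

```python
import numpy as np

def loopy_bp_view_selection(unary, edges, smooth=1.0, iters=20):
    """Min-sum loopy belief propagation on a face-adjacency graph.

    unary  : (F, V) array, cost of assigning candidate view v to face f
             (hypothetical costs, e.g. from viewing angle or resolution)
    edges  : list of (i, j) pairs of adjacent faces
    smooth : Potts penalty when neighboring faces pick different views
    Returns the lowest-cost view index per face.
    """
    F, V = unary.shape
    # One message per directed edge, one value per candidate view.
    msgs = {(i, j): np.zeros(V) for a, b in edges for i, j in [(a, b), (b, a)]}
    nbrs = {f: [] for f in range(F)}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)

    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            # Local cost at i: unary plus all incoming messages except from j.
            h = unary[i] + sum(msgs[(k, i)] for k in nbrs[i] if k != j)
            # Potts pairwise term: either agree (cost h) or pay the penalty.
            m = np.minimum(h, h.min() + smooth)
            new[(i, j)] = m - m.min()  # normalize for numerical stability
        msgs = new

    # Beliefs: unary cost plus all incoming messages; argmin ranks views.
    beliefs = unary.astype(float)
    for f in range(F):
        for k in nbrs[f]:
            beliefs[f] = beliefs[f] + msgs[(k, f)]
    return beliefs.argmin(axis=1)
```

On a toy chain of three faces with two candidate views, a strong smoothness penalty pulls the middle face toward the view its neighbors prefer, which is the qualitative behavior the paper relies on for consistent per-face view ranking.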
Related papers
- Consistent Mesh Diffusion [8.318075237885857]
Given a 3D mesh with a UV parameterization, we introduce a novel approach to generating textures from text prompts.
We demonstrate our approach on a dataset containing 30 meshes, taking approximately 5 minutes per mesh.
arXiv Detail & Related papers (2023-12-01T23:25:14Z)
- Diff-DOPE: Differentiable Deep Object Pose Estimation [29.703385848843414]
We introduce Diff-DOPE, a 6-DoF pose refiner that takes as input an image, a 3D textured model of an object, and an initial pose of the object.
The method uses differentiable rendering to update the object pose to minimize the visual error between the image and the projection of the model.
We show that this simple, yet effective, idea is able to achieve state-of-the-art results on pose estimation datasets.
arXiv Detail & Related papers (2023-09-30T18:52:57Z)
- Volumetric Semantically Consistent 3D Panoptic Mapping [77.13446499924977]
We introduce an online 2D-to-3D semantic instance mapping algorithm aimed at generating semantic 3D maps suitable for autonomous agents in unstructured environments.
It introduces novel ways of integrating semantic prediction confidence during mapping, producing semantic and instance-consistent 3D regions.
The proposed method achieves accuracy superior to the state of the art on public large-scale datasets, improving on a number of widely used metrics.
arXiv Detail & Related papers (2023-09-26T08:03:10Z)
- Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach can generate semantic surface-to-surface maps, eliminating the need for manual annotations or any 3D training data.
arXiv Detail & Related papers (2023-09-09T16:21:56Z)
- Multiscale Analysis for Improving Texture Classification [62.226224120400026]
This paper employs the Gaussian-Laplacian pyramid to treat different spatial frequency bands of a texture separately.
We aggregate features extracted from gray and color texture images using bio-inspired texture descriptors, information-theoretic measures, gray-level co-occurrence matrix features, and Haralick statistical features into a single feature vector.
arXiv Detail & Related papers (2022-04-21T01:32:22Z)
- Large-Scale 3D Semantic Reconstruction for Automated Driving Vehicles with Adaptive Truncated Signed Distance Function [9.414880946870916]
We propose a novel 3D reconstruction and semantic mapping system using LiDAR and camera sensors.
An adaptive truncated signed distance function is introduced to describe surfaces implicitly, which can deal with different LiDAR point sparsities.
An optimal image patch selection strategy is proposed to estimate the optimal semantic class for each triangle mesh.
arXiv Detail & Related papers (2022-02-28T15:11:25Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Efficient and Differentiable Shadow Computation for Inverse Problems [64.70468076488419]
Differentiable geometric computation has received increasing interest for image-based inverse problems.
We propose an efficient yet effective approach for differentiable visibility and soft shadow computation.
As our formulation is differentiable, it can be used to solve inverse problems such as texture, illumination, rigid pose, and deformation recovery from images.
arXiv Detail & Related papers (2021-04-01T09:29:05Z)
- Consistent Mesh Colors for Multi-View Reconstructed 3D Scenes [13.531166759820854]
We find that the method for aggregation of multiple views is crucial for creating consistent texture maps without color calibration.
We compute a color prior from the cross-correlation of per-view faces and per-face views to identify an optimal color for each face.
arXiv Detail & Related papers (2021-01-26T11:59:23Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.