Consistent Mesh Colors for Multi-View Reconstructed 3D Scenes
- URL: http://arxiv.org/abs/2101.10734v1
- Date: Tue, 26 Jan 2021 11:59:23 GMT
- Title: Consistent Mesh Colors for Multi-View Reconstructed 3D Scenes
- Authors: Mohamed Dahy Elkhouly, Alessio Del Bue, Stuart James
- Abstract summary: We find that the method for aggregation of multiple views is crucial for creating consistent texture maps without color calibration.
We compute a color prior from the cross-correlation of observable view faces and the faces per view to identify an optimal per-face color.
- Score: 13.531166759820854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the issue of creating consistent mesh texture maps captured from
scenes without color calibration. We find that the method for aggregation of
the multiple views is crucial for creating spatially consistent meshes without
the need to explicitly optimize for spatial consistency. We compute a color
prior from the cross-correlation of observable view faces and the faces per
view to identify an optimal per-face color. We then use this color in a
re-weighting ratio for the best-view texture, which is identified by prior mesh
texturing work, to create a spatially consistent texture map. Although our
method does not explicitly handle spatial consistency, our results are
qualitatively more consistent than those of other state-of-the-art techniques
while being computationally more efficient. We evaluate on prior datasets and
additionally on Matterport3D, showing qualitative improvements.
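A minimal sketch of the aggregation idea described in the abstract, in Python/NumPy. All names are hypothetical, the visibility-weighted mean stands in for the paper's cross-correlation-based prior, and the best-view labels are assumed given by prior mesh-texturing work; this illustrates the prior-plus-re-weighting structure, not the paper's exact algorithm:

```python
import numpy as np

def consistent_face_colors(face_colors, visibility, best_view):
    """Illustrative sketch of prior-based re-weighting of per-face colors.

    face_colors: (V, F, 3) mean RGB of each mesh face as seen from each view.
    visibility:  (V, F) boolean mask, True where view v observes face f.
    best_view:   (F,) best-view index per face, e.g. from a prior
                 mesh-texturing method (assumed given here).
    """
    V, F, _ = face_colors.shape

    # Color prior: a visibility-weighted mean over all observing views,
    # standing in for the paper's cross-correlation-based prior.
    w = visibility.astype(np.float64)[..., None]                # (V, F, 1)
    prior = (w * face_colors).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-8)

    # Color of each face in its best view.
    best = face_colors[best_view, np.arange(F)]                 # (F, 3)

    # Re-weighting ratio: keep the best-view color when it agrees with the
    # prior; pull strongly toward the prior when it does not.
    dist = np.linalg.norm(best - prior, axis=1, keepdims=True)  # (F, 1)
    alpha = dist / (1.0 + dist)                                 # prior weight
    return alpha * prior + (1.0 - alpha) * best
```

The specific interpolation weight `alpha` is one plausible choice; the point is only that a global color prior, rather than an independent per-face decision, drives the final texture color.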
Related papers
- GenesisTex: Adapting Image Denoising Diffusion to Texture Space [15.907134430301133]
GenesisTex is a novel method for synthesizing textures for 3D geometries from text descriptions.
We maintain a latent texture map for each viewpoint, which is updated with predicted noise on the rendering of the corresponding viewpoint.
Global consistency is achieved through the integration of style consistency mechanisms within the noise prediction network.
arXiv Detail & Related papers (2024-03-26T15:15:15Z)
- Consistent Mesh Diffusion [8.318075237885857]
Given a 3D mesh with a UV parameterization, we introduce a novel approach to generating textures from text prompts.
We demonstrate our approach on a dataset containing 30 meshes, taking approximately 5 minutes per mesh.
arXiv Detail & Related papers (2023-12-01T23:25:14Z)
- SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors [49.03627933561738]
SceneTex is a novel method for generating high-quality and style-consistent textures for indoor scenes using depth-to-image diffusion priors.
SceneTex enables diverse and accurate texture synthesis for 3D-FRONT scenes, demonstrating significant improvements in visual quality and prompt fidelity over prior texture generation methods.
arXiv Detail & Related papers (2023-11-28T22:49:57Z)
- Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach can generate semantic surface-to-surface maps, eliminating the need for manual annotations or any 3D training data.
arXiv Detail & Related papers (2023-09-09T16:21:56Z)
- Large-scale and Efficient Texture Mapping Algorithm via Loopy Belief Propagation [4.742825811314168]
A texture mapping algorithm must efficiently select views, then fuse and map textures from those views onto mesh models.
Existing approaches achieve efficiency either by limiting the number of images to one view per face, or by simplifying global inference to achieve only local color consistency.
This paper proposes a novel and efficient texture mapping framework that allows the use of multiple views of texture per face.
arXiv Detail & Related papers (2023-05-08T15:11:28Z)
- Explicit Correspondence Matching for Generalizable Neural Radiance Fields [49.49773108695526]
We present a new NeRF method that is able to generalize to new unseen scenarios and perform novel view synthesis with as few as two source views.
The explicit correspondence matching is quantified with the cosine similarity between image features sampled at the 2D projections of a 3D point on different views (a minimal sketch of this scoring appears after this list).
Our method achieves state-of-the-art results on different evaluation settings, with the experiments showing a strong correlation between our learned cosine feature similarity and volume density.
arXiv Detail & Related papers (2023-04-24T17:46:01Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- MeshLoc: Mesh-Based Visual Localization [54.731309449883284]
We explore a more flexible alternative based on dense 3D meshes that does not require feature matching between database images to build the scene representation.
Surprisingly competitive results can be obtained when extracting features on renderings of these meshes, without any neural rendering stage.
Our results show that dense 3D model-based representations are a promising alternative to existing representations and point to interesting and challenging directions for future research.
arXiv Detail & Related papers (2022-07-21T21:21:10Z)
- Efficient texture mapping via a non-iterative global texture alignment [0.0]
We present a non-iterative method for seamless texture reconstruction of a given 3D scene.
Our method finds the best texture alignment in a single shot using a global optimisation framework.
Experimental results demonstrate low computational complexity and superior performance compared to other alignment methods.
arXiv Detail & Related papers (2020-11-02T10:24:19Z)
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)
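As referenced in the Explicit Correspondence Matching entry above, here is a hypothetical Python/NumPy sketch of cosine-similarity scoring between features sampled at the 2D projections of a 3D point. All names and the nearest-neighbor sampling are illustrative assumptions, not that paper's implementation:

```python
import numpy as np

def cosine_correspondence_score(point_3d, proj_mats, feat_maps):
    """Hypothetical sketch: score a 3D point by the cosine similarity of
    image features sampled at its 2D projections in different views.

    point_3d:  (3,) point in world coordinates.
    proj_mats: list of (3, 4) camera projection matrices, one per view.
    feat_maps: list of (H, W, C) image feature maps, one per view.
    """
    feats = []
    for P, fmap in zip(proj_mats, feat_maps):
        x = P @ np.append(point_3d, 1.0)           # homogeneous projection
        u, v = x[:2] / x[2]                        # pixel coordinates
        h, w, _ = fmap.shape
        ui = int(np.clip(np.round(u), 0, w - 1))   # nearest-neighbor sample;
        vi = int(np.clip(np.round(v), 0, h - 1))   # bilinear in practice
        feats.append(fmap[vi, ui])
    f0, f1 = feats[0], feats[1]                    # two source views
    return float(f0 @ f1 /
                 (np.linalg.norm(f0) * np.linalg.norm(f1) + 1e-8))
```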
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.