EigenFairing: 3D Model Fairing using Image Coherence
- URL: http://arxiv.org/abs/2206.05309v1
- Date: Fri, 10 Jun 2022 18:13:19 GMT
- Title: EigenFairing: 3D Model Fairing using Image Coherence
- Authors: Pragyana Mishra and Omead Amidi and Takeo Kanade
- Abstract summary: A surface is often modeled as a triangulated mesh of 3D points and textures associated with faces of the mesh.
When the points do not lie at critical points of maximum curvature or discontinuities of the real surface, faces of the mesh do not lie close to the modeled surface.
This paper presents a technique for perfecting the 3D surface model by repositioning its vertices so that it is coherent with a set of observed images of the object.
- Score: 0.884755712094096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A surface is often modeled as a triangulated mesh of 3D points and textures
associated with faces of the mesh. The 3D points could be either sampled from
range data or derived from a set of images using a stereo or
Structure-from-Motion algorithm. When the points do not lie at critical points
of maximum curvature or discontinuities of the real surface, faces of the mesh
do not lie close to the modeled surface. This results in textural artifacts,
and the model is not perfectly coherent with a set of actual images -- the ones
that are used to texture-map its mesh. This paper presents a technique for
perfecting the 3D surface model by repositioning its vertices so that it is
coherent with a set of observed images of the object. The textural artifacts
and incoherence with images are due to the non-planarity of a surface patch
being approximated by a planar face, as observed from multiple viewpoints.
Image areas from the viewpoints are used to represent texture for the patch in
Eigenspace. The Eigenspace representation captures variations of texture, which
we seek to minimize. A coherence measure based on the difference between the
face textures reconstructed from Eigenspace and the actual images is used to
reposition the vertices so that the model is improved or faired. We refer to
this technique of model refinement as EigenFairing, by which the model is
faired, both geometrically and texturally, to better approximate the real
surface.
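To make the core idea concrete, here is a minimal Python sketch of an Eigenspace coherence measure of the kind the abstract describes. It is an illustration under assumptions, not the authors' implementation: texture patches of a single mesh face, sampled from several views, are reconstructed from a rank-k Eigenspace (PCA), and the summed reconstruction residual serves as the incoherence that vertex repositioning would reduce. How each patch is sampled from an image (warping the projected face to a fixed-size patch) is assumed rather than specified here.
```python
# Minimal sketch of an Eigenspace texture-coherence measure (an
# assumption, not the authors' code). Each row of `patches` is the
# texture of one mesh face as observed from one viewpoint, warped to a
# fixed-size patch and flattened. A face whose geometry matches the real
# surface should look nearly identical across views, so a low-rank
# Eigenspace should reconstruct its textures with a small residual.
import numpy as np

def eigenspace_coherence(patches: np.ndarray, k: int = 3) -> float:
    """patches: (n_views, h*w) flattened per-view textures of one face.
    Returns the summed squared residual after rank-k reconstruction;
    smaller means the face is more coherent with the observed images."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # SVD of the centered samples yields the Eigenspace basis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                  # top-k eigen-textures
    coeffs = centered @ basis.T     # project each view into Eigenspace
    recon = coeffs @ basis + mean   # reconstruct from the subspace
    return float(np.sum((patches - recon) ** 2))

# Fairing would then perturb a vertex, re-sample the textures of its
# incident faces from the images, and keep moves that lower this measure.
```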
Related papers
- 3D-GANTex: 3D Face Reconstruction with StyleGAN3-based Multi-View Images and 3DDFA based Mesh Generation [0.8479659578608233]
This paper introduces a novel method for texture estimation from a single image using StyleGAN and 3D Morphable Models.
The results show that the generated mesh is of high quality, with a near-accurate texture representation.
arXiv Detail & Related papers (2024-10-21T13:42:06Z)
- DreamMesh: Jointly Manipulating and Texturing Triangle Meshes for Text-to-3D Generation [149.77077125310805]
We present DreamMesh, a novel text-to-3D architecture that pivots on well-defined surfaces (triangle meshes) to generate high-fidelity explicit 3D models.
In the coarse stage, the mesh is first deformed by text-guided Jacobians and then DreamMesh textures the mesh with an interlaced use of 2D diffusion models.
In the fine stage, DreamMesh jointly manipulates the mesh and refines the texture map, leading to high-quality triangle meshes with high-fidelity textured materials.
arXiv Detail & Related papers (2024-09-11T17:59:02Z)
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges 3D Gaussians and meshes for modeling and rendering dynamic scenes.
We show that our approach adapts the appropriate type of primitive to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- Texturify: Generating Textures on 3D Shape Surfaces [34.726179801982646]
We propose Texturify, which learns to predict texture on the surface of an input 3D shape.
Our method does not require any 3D color supervision to texture 3D objects.
arXiv Detail & Related papers (2022-04-05T18:00:04Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Topologically Consistent Multi-View Face Inference Using Volumetric Sampling [25.001398662643986]
ToFu is a geometry inference framework that can produce topologically consistent meshes across identities and expressions.
A novel progressive mesh generation network embeds the topological structure of the face in a feature volume.
These high-quality assets are readily usable by production studios for avatar creation, animation and physically-based skin rendering.
arXiv Detail & Related papers (2021-10-06T17:55:08Z)
- Plan2Scene: Converting Floorplans to 3D Scenes [36.34298107648571]
We address the task of converting a floorplan and a set of associated photos of a residence into a textured 3D mesh model.
Our system 1) lifts a floorplan image to a 3D mesh model; 2) synthesizes surface textures based on the input photos; and 3) infers textures for unobserved surfaces using a graph neural network architecture.
arXiv Detail & Related papers (2021-06-09T20:32:20Z)
- An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering [0.0]
Differentiable rendering is a very successful technique for single-view 3D reconstruction.
Current methods optimize the parameters of a 3D shape using pixel-based losses between a rendered image of the reconstructed object and ground-truth images from matched viewpoints.
We propose a novel, effective loss function that evaluates how well the projections of the reconstructed 3D point cloud cover the ground-truth object's silhouette; a minimal sketch of such a coverage loss appears after this list.
arXiv Detail & Related papers (2021-03-05T00:02:18Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image with a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
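As referenced in the "An Effective Loss Function" entry above, here is a minimal Python sketch of a silhouette-coverage loss in the spirit of that abstract. It is an assumption, not that paper's exact formulation; the camera projection matrix `P` and the brute-force nearest-pixel search are illustrative choices.
```python
# Hedged sketch of a silhouette-coverage loss (not the paper's exact
# formulation): the reconstructed 3D points are projected into the image,
# and the loss combines (a) how far each ground-truth silhouette pixel is
# from its nearest projected point and (b) the fraction of points that
# project outside the silhouette.
import numpy as np

def silhouette_coverage_loss(points3d: np.ndarray, P: np.ndarray,
                             mask: np.ndarray) -> float:
    """points3d: (n, 3) reconstruction; P: (3, 4) camera projection;
    mask: (H, W) binary ground-truth silhouette."""
    homog = np.hstack([points3d, np.ones((len(points3d), 1))])
    uvw = homog @ P.T
    uv = uvw[:, :2] / uvw[:, 2:3]            # pixel coordinates (x, y)
    sil = np.argwhere(mask > 0)[:, ::-1].astype(float)  # (m, 2) as (x, y)
    # Coverage term: mean distance from silhouette pixels to points.
    d = np.linalg.norm(sil[:, None, :] - uv[None, :, :], axis=-1)
    cover = d.min(axis=1).mean()
    # Containment term: penalize points that land outside the mask.
    ui = np.clip(np.rint(uv).astype(int), [0, 0],
                 [mask.shape[1] - 1, mask.shape[0] - 1])
    outside = 1.0 - (mask[ui[:, 1], ui[:, 0]] > 0).mean()
    return float(cover + outside)
```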