TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video
- URL: http://arxiv.org/abs/2008.00158v3
- Date: Mon, 21 Sep 2020 03:14:17 GMT
- Title: TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video
- Authors: Tiancheng Zhi, Christoph Lassner, Tony Tung, Carsten Stoll, Srinivasa G. Narasimhan and Minh Vo
- Abstract summary: TexMesh is a novel approach to reconstruct detailed human meshes with high-resolution full-body texture from RGB-D video.
In practice, we train our models on a short example sequence for self-adaptation and the model runs at interactive framerate afterwards.
- Score: 37.33902000401107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present TexMesh, a novel approach to reconstruct detailed human meshes
with high-resolution full-body texture from RGB-D video. TexMesh enables high
quality free-viewpoint rendering of humans. Given the RGB frames, the captured
environment map, and the coarse per-frame human mesh from RGB-D tracking, our
method reconstructs spatiotemporally consistent and detailed per-frame meshes
along with a high-resolution albedo texture. By using the incident illumination
we are able to accurately estimate local surface geometry and albedo, which
allows us to further use photometric constraints to adapt a synthetically
trained model to real-world sequences in a self-supervised manner for detailed
surface geometry and high-resolution texture estimation. In practice, we train
our models on a short example sequence for self-adaptation and the model runs
at interactive framerate afterwards. We validate TexMesh on synthetic and
real-world data, and show it outperforms the state of the art quantitatively and
qualitatively.
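The abstract's central idea, using the captured incident illumination to form photometric constraints for self-supervised adaptation, can be illustrated with a small shading-and-compare loss. The sketch below is a minimal, hypothetical illustration rather than the authors' code: it shades a predicted albedo with second-order spherical-harmonics lighting (a common low-frequency stand-in for an environment map) and penalizes the difference to the observed frame; all names and the SH approximation are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def sh_irradiance(normals, sh_coeffs):
    """Low-frequency irradiance from 2nd-order spherical harmonics of the
    environment lighting. Constant SH factors are assumed folded into sh_coeffs.
    normals: (N, 3) unit normals; sh_coeffs: (9, 3) RGB lighting coefficients."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    basis = torch.stack([
        torch.ones_like(x),                 # l = 0
        y, z, x,                            # l = 1
        x * y, y * z, 3.0 * z * z - 1.0,    # l = 2
        x * z, x * x - y * y,
    ], dim=1)                               # (N, 9)
    return basis @ sh_coeffs                # (N, 3) per-pixel RGB irradiance

def photometric_loss(pred_albedo, pred_normals, sh_coeffs, observed_rgb, mask):
    """L1 difference between the re-shaded prediction and the captured RGB,
    restricted to foreground pixels (the self-supervision signal)."""
    shading = sh_irradiance(pred_normals, sh_coeffs).clamp(min=0.0)
    rendered = pred_albedo * shading
    return (mask * (rendered - observed_rgb).abs()).sum() / mask.sum().clamp(min=1.0)

if __name__ == "__main__":
    # Toy adaptation step on random data, just to show the loss being used to
    # update a stand-in albedo prediction by gradient descent.
    n_pix = 1024
    albedo = torch.rand(n_pix, 3, requires_grad=True)
    normals = F.normalize(torch.randn(n_pix, 3), dim=1)
    sh = torch.randn(9, 3) * 0.1
    observed = torch.rand(n_pix, 3)
    mask = torch.ones(n_pix, 1)
    loss = photometric_loss(albedo, normals, sh, observed, mask)
    loss.backward()
    print(f"photometric loss: {loss.item():.4f}")
```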
Related papers
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges 3D Gaussians and meshes for modeling and rendering dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- Texture-GS: Disentangling the Geometry and Texture for 3D Gaussian Splatting Editing [79.10630153776759]
3D Gaussian splatting, emerging as a groundbreaking approach, has drawn increasing attention for its capabilities of high-fidelity reconstruction and real-time rendering.
We propose a novel approach, namely Texture-GS, to disentangle appearance from geometry by representing the appearance as a 2D texture mapped onto the 3D surface.
Our method not only facilitates high-fidelity appearance editing but also achieves real-time rendering on consumer-level devices.
arXiv Detail & Related papers (2024-03-15T06:42:55Z)
- SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors [49.03627933561738]
SceneTex is a novel method for generating high-quality and style-consistent textures for indoor scenes using depth-to-image diffusion priors.
SceneTex enables diverse and accurate texture synthesis for 3D-FRONT scenes, demonstrating significant improvements in visual quality and prompt fidelity over prior texture generation methods.
arXiv Detail & Related papers (2023-11-28T22:49:57Z)
- Mesh2Tex: Generating Mesh Textures from Image Queries [45.32242590651395]
We present Mesh2Tex, which learns a realistic texture manifold from uncorrelated collections of 3D object geometry and real-world images.
In particular, textures generated from image queries of real objects match the observed images.
arXiv Detail & Related papers (2023-04-12T13:58:25Z)
- CrossHuman: Learning Cross-Guidance from Multi-Frame Images for Human Reconstruction [6.450579406495884]
CrossHuman is a novel method that learns cross-guidance from a parametric human model and multi-frame RGB images.
We design a reconstruction pipeline that combines tracking-based and tracking-free methods.
Compared with previous works, our CrossHuman enables high-fidelity geometry details and texture in both visible and invisible regions.
arXiv Detail & Related papers (2022-07-20T08:25:20Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames (see the unprojection sketch after this list) and then render them into free-viewpoint videos via neural rendering.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- GeoNeRF: Generalizing NeRF with Geometry Priors [2.578242050187029]
We present GeoNeRF, a generalizable photorealistic novel view synthesis method based on neural radiance fields.
Our approach consists of two main stages: a geometry reasoner and a renderer.
Experiments show that GeoNeRF outperforms state-of-the-art generalizable neural rendering models on various synthetic and real datasets.
arXiv Detail & Related papers (2021-11-26T15:15:37Z)
- Dynamic Object Removal and Spatio-Temporal RGB-D Inpainting via Geometry-Aware Adversarial Learning [9.150245363036165]
Dynamic objects have a significant impact on the robot's perception of the environment.
In this work, we address this problem by synthesizing plausible color, texture and geometry in regions occluded by dynamic objects.
We optimize our architecture using adversarial training to synthesize fine realistic textures, enabling it to hallucinate color and depth structure in occluded regions online.
arXiv Detail & Related papers (2020-08-12T01:23:21Z)
- SparseFusion: Dynamic Human Avatar Modeling from Sparse RGBD Images [49.52782544649703]
We propose a novel approach to reconstruct 3D human body shapes based on a sparse set of RGBD frames.
The main challenge is how to robustly fuse these sparse frames into a canonical 3D model.
Our framework is flexible, with potential applications going beyond shape reconstruction.
arXiv Detail & Related papers (2020-06-05T18:53:36Z)
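Several of the entries above, like TexMesh itself, start from RGB-D frames; the common first step is unprojecting a depth image into a colored 3D point cloud with pinhole intrinsics. The sketch below is a minimal illustration of that step under assumed intrinsics, not code from any of the listed papers.

```python
import numpy as np

def unproject_rgbd(depth, rgb, fx, fy, cx, cy):
    """Back-project an RGB-D frame into a camera-space point cloud.
    depth: (H, W) metric depth; rgb: (H, W, 3) colors in [0, 1].
    Returns (N, 3) points and (N, 3) colors for pixels with valid depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx          # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)  # camera-space coordinates
    colors = rgb[valid]
    return points, colors

if __name__ == "__main__":
    # Toy frame: a flat surface 2 m from the camera, with placeholder intrinsics.
    depth = np.full((480, 640), 2.0, dtype=np.float32)
    rgb = np.random.rand(480, 640, 3).astype(np.float32)
    pts, cols = unproject_rgbd(depth, rgb, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(pts.shape, cols.shape)
```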