NRST: Non-rigid Surface Tracking from Monocular Video
- URL: http://arxiv.org/abs/2107.02407v1
- Date: Tue, 6 Jul 2021 06:06:45 GMT
- Title: NRST: Non-rigid Surface Tracking from Monocular Video
- Authors: Marc Habermann, Weipeng Xu, Helge Rhodin, Michael Zollhoefer, Gerard
Pons-Moll, Christian Theobalt
- Abstract summary: We propose an efficient method for non-rigid surface tracking from monocular RGB videos.
Given a video and a template mesh, our algorithm sequentially registers the template non-rigidly to each frame.
Results demonstrate the effectiveness of our method on both general textured non-rigid objects and monochromatic fabrics.
- Score: 97.2743051142748
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an efficient method for non-rigid surface tracking from monocular
RGB videos. Given a video and a template mesh, our algorithm sequentially
registers the template non-rigidly to each frame. We formulate the per-frame
registration as an optimization problem that includes a novel texture term
specifically tailored towards tracking objects with uniform texture but
fine-scale structure, such as the regular micro-structural patterns of fabric.
Our texture term exploits the orientation information in the micro-structures
of the objects, e.g., the yarn patterns of fabrics. This enables us to
accurately track uniformly colored materials that have these high frequency
micro-structures, for which traditional photometric terms are usually less
effective. The results demonstrate the effectiveness of our method on both
general textured non-rigid objects and monochromatic fabrics.
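The orientation-based texture term described above can be illustrated with a small sketch. Note that the function names and the structure-tensor orientation estimator below are our own assumptions for illustration, not the paper's actual implementation: per-pixel structure orientation is estimated from image gradients, and disagreement between the rendered template and the input frame is penalized modulo pi (orientations, unlike directions, have no sign).

```python
import numpy as np

def local_orientation(img, eps=1e-8):
    """Estimate per-pixel structure orientation (radians) from the
    2x2 structure tensor of image gradients. A common choice for
    this kind of estimate; the paper's estimator may differ."""
    gy, gx = np.gradient(img.astype(np.float64))
    # Structure-tensor entries (no local smoothing, for brevity).
    jxx, jyy, jxy = gx * gx, gy * gy, gx * gy
    # Dominant orientation; defined modulo pi.
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy + eps)

def orientation_energy(rendered, frame):
    """Sum of squared angular differences, wrapped to [-pi/2, pi/2]
    so that orientations rather than directions are compared."""
    d = local_orientation(rendered) - local_orientation(frame)
    d = (d + np.pi / 2) % np.pi - np.pi / 2
    return np.sum(d ** 2)
```

In a full tracker, an energy of this kind would be one term in the per-frame objective, minimized over the template's vertex positions alongside photometric and regularization terms.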
Related papers
- Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models [54.35214051961381]
3D meshes are widely used in computer vision and graphics because they are efficient to animate and require minimal memory in movies, games, AR, and VR.
However, creating temporally consistent and realistic textures for mesh sequences remains labor-intensive for professional artists.
We present Tex4D, a method that integrates the inherent geometry of mesh sequences with video diffusion models to produce consistent textures.
arXiv Detail & Related papers (2024-10-14T17:59:59Z)
- LEMON: Localized Editing with Mesh Optimization and Neural Shaders [0.5499187928849248]
We propose LEMON, a mesh editing pipeline that combines neural deferred shading with localized mesh optimization.
We evaluate our pipeline using the DTU dataset, demonstrating that it generates finely-edited meshes more rapidly than the current state-of-the-art methods.
arXiv Detail & Related papers (2024-09-18T14:34:06Z)
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges the 3D Gaussian and Mesh for modeling and rendering the dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors [49.03627933561738]
SceneTex is a novel method for generating high-quality and style-consistent textures for indoor scenes using depth-to-image diffusion priors.
SceneTex enables various and accurate texture synthesis for 3D-FRONT scenes, demonstrating significant improvements in visual quality and prompt fidelity over the prior texture generation methods.
arXiv Detail & Related papers (2023-11-28T22:49:57Z)
- Generative Escher Meshes [14.29301974658956]
This paper proposes a fully-automatic, text-guided generative method for producing perfectly-repeating, periodic, tile-able 2D imagery.
In contrast to square texture images that are seamless when tiled, our method generates non-square tilings.
We show our method is able to produce plausible, appealing results, with non-trivial tiles, for a variety of different periodic patterns.
arXiv Detail & Related papers (2023-09-25T22:24:02Z)
- N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks [69.94313958962165]
We present a novel mesh-based learning approach (N-Cloth) for plausible 3D cloth deformation prediction.
We use graph convolution to transform the cloth and object meshes into a latent space to reduce the non-linearity in the mesh space.
Our approach can handle complex cloth meshes with up to 100K triangles and scenes with various objects corresponding to SMPL humans, non-SMPL humans, or rigid bodies.
arXiv Detail & Related papers (2021-12-13T03:13:11Z)
- NeuMIP: Multi-Resolution Neural Materials [98.83749495351627]
NeuMIP is a neural method for representing and rendering a variety of material appearances at different scales.
We generalize traditional mipmap pyramids to pyramids of neural textures, combined with a fully connected network.
We also introduce neural offsets, a novel method which allows rendering materials with intricate parallax effects without any tessellation.
arXiv Detail & Related papers (2021-04-06T21:22:22Z)
- Efficient texture mapping via a non-iterative global texture alignment [0.0]
We present a non-iterative method for seamless texture reconstruction of a given 3D scene.
Our method finds the best texture alignment in a single shot using a global optimisation framework.
Experimental results demonstrate low computational complexity and superior performance compared to other alignment methods.
arXiv Detail & Related papers (2020-11-02T10:24:19Z)
- MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video [10.679773937444445]
We present a method to capture temporally coherent dynamic clothing deformation from a monocular RGB video input.
We build statistical deformation models for three types of clothing: T-shirt, short pants and long pants.
Our method produces temporally coherent reconstruction of body and clothing from monocular video.
arXiv Detail & Related papers (2020-09-22T17:54:38Z)
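A statistical deformation model of the kind MonoClothCap builds per garment can be sketched, under our own simplifying assumptions (class and attribute names here are illustrative, not the paper's), as a PCA basis over per-vertex displacements of registered clothing meshes:

```python
import numpy as np

class PCADeformationModel:
    """Linear (PCA) model of per-vertex clothing geometry.
    A simplified stand-in for a per-garment statistical model;
    the paper's actual parameterization may differ."""

    def __init__(self, meshes, n_components=8):
        # meshes: (N, V, 3) array of registered clothing meshes.
        n, v, _ = meshes.shape
        flat = meshes.reshape(n, v * 3)
        self.mean = flat.mean(axis=0)
        # PCA via SVD of the centered data matrix.
        _, s, vt = np.linalg.svd(flat - self.mean, full_matrices=False)
        self.basis = vt[:n_components]                    # (K, 3V)
        self.stddev = s[:n_components] / np.sqrt(max(n - 1, 1))

    def decode(self, coeffs):
        """Reconstruct a (V, 3) mesh from K low-dim coefficients."""
        flat = self.mean + coeffs @ self.basis
        return flat.reshape(-1, 3)
```

Decoding the zero coefficient vector returns the mean garment shape; at capture time, a low-dimensional coefficient vector per frame would be optimized to fit the monocular observations, which is what keeps the reconstruction temporally coherent.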
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.