Neural Texture Puppeteer: A Framework for Neural Geometry and Texture
Rendering of Articulated Shapes, Enabling Re-Identification at Interactive
Speed
- URL: http://arxiv.org/abs/2311.17109v1
- Date: Tue, 28 Nov 2023 10:51:05 GMT
- Title: Neural Texture Puppeteer: A Framework for Neural Geometry and Texture
Rendering of Articulated Shapes, Enabling Re-Identification at Interactive
Speed
- Authors: Urs Waldmann, Ole Johannsen, Bastian Goldluecke
- Abstract summary: We present a neural rendering pipeline for textured articulated shapes that we call Neural Texture Puppeteer.
A texture auto-encoder uses the learned geometric information to encode textured images into a global latent code.
Our method can be applied to endangered species where data is limited.
- Score: 2.8544822698499255
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a neural rendering pipeline for textured
articulated shapes that we call Neural Texture Puppeteer. Our method separates
geometry and texture encoding. The geometry pipeline learns to capture spatial
relationships on the surface of the articulated shape from ground truth data
that provides this geometric information. A texture auto-encoder makes use of
this information to encode textured images into a global latent code. This
global texture embedding can be efficiently trained separately from the
geometry, and used in a downstream task to identify individuals. The neural
texture rendering and the identification of individuals run at interactive
speeds. To the best of our knowledge, we are the first to offer a promising
alternative to CNN- or transformer-based approaches for re-identification of
articulated individuals based on neural rendering. Realistic-looking novel view
and pose synthesis for different synthetic cow textures further demonstrates the
quality of our method. Because ground truth data for the articulated shape's
geometry is limited, synthesis quality for real-world data is reduced. We
further demonstrate the flexibility of our model for real-world data by
applying a synthetic-to-real-world texture domain shift, where we reconstruct
the texture from a real-world 2D RGB image. Thus, our method can be
applied to endangered species where data is limited. Our novel synthetic
texture dataset NePuMoo is publicly available to inspire further development in
the field of neural rendering-based re-identification.
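To make the downstream identification step concrete, here is a minimal, hypothetical sketch of re-identification from a global texture latent code: a stand-in encoder maps each image to a code, and a query is matched to the nearest code in a gallery of known individuals. The encoder, latent size, and similarity measure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

LATENT_DIM = 128                          # assumed size of the global latent code
rng = np.random.default_rng(0)
W = rng.standard_normal((LATENT_DIM, 3))  # stand-in projection weights

def encode_texture(image: np.ndarray) -> np.ndarray:
    """Placeholder for the trained texture auto-encoder (an assumption, not
    the authors' network): map an RGB image to a global texture latent code."""
    mean_color = image.reshape(-1, 3).mean(axis=0)  # crude global statistic
    return W @ mean_color

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(query_image: np.ndarray, gallery: dict) -> str:
    """Nearest-neighbour re-identification over per-individual latent codes."""
    code = encode_texture(query_image)
    return max(gallery, key=lambda name: cosine_similarity(code, gallery[name]))

# Gallery of known individuals built from reference images (dummy data here).
references = {f"cow_{i}": rng.random((256, 256, 3)) for i in range(5)}
gallery = {name: encode_texture(img) for name, img in references.items()}
print(identify(rng.random((256, 256, 3)), gallery))
```

In the paper, the encoder would be the trained texture auto-encoder; the rest is generic nearest-neighbour matching over low-dimensional codes, which helps explain why identification can run at interactive speed.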
Related papers
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware
Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that can transfer the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z)
- Learning Locally Editable Virtual Humans [37.95173373011365]
We propose a novel hybrid representation and end-to-end trainable network architecture to model fully editable neural avatars.
At the core of our work lies a representation that combines the modeling power of neural fields with the ease of use and inherent 3D consistency of skinned meshes.
Our method generates diverse detailed avatars and achieves better model fitting performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-04-28T23:06:17Z)
- NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing [39.71252429542249]
We present a novel mesh-based representation by encoding the neural implicit field with disentangled geometry and texture codes on mesh vertices.
We develop several techniques including learnable sign indicators to magnify spatial distinguishability of mesh-based representation.
Experiments and editing examples on both real and synthetic data demonstrate the superiority of our method on representation quality and editing ability.
arXiv Detail & Related papers (2022-07-25T05:30:50Z)
- GeoNeRF: Generalizing NeRF with Geometry Priors [2.578242050187029]
We present GeoNeRF, a generalizable photorealistic novel view synthesis method based on neural radiance fields.
Our approach consists of two main stages: a geometry reasoner and a renderer.
Experiments show that GeoNeRF outperforms state-of-the-art generalizable neural rendering models on various synthetic and real datasets.
arXiv Detail & Related papers (2021-11-26T15:15:37Z)
- Texture Generation with Neural Cellular Automata [64.70093734012121]
We learn a texture generator from a single template image.
We argue that the behaviour exhibited by the NCA model is a learned, distributed, local algorithm for generating a texture; a minimal sketch of such an update step appears after this list.
arXiv Detail & Related papers (2021-05-15T22:05:46Z)
- Neural Re-Rendering of Humans from a Single Image [80.53438609047896]
We propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint.
Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image.
arXiv Detail & Related papers (2021-01-11T18:53:47Z)
- Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z)
- Texture Interpolation for Probing Visual Perception [4.637185817866918]
We show that distributions of deep convolutional neural network (CNN) activations of a texture are well described by elliptical distributions.
We then propose the natural geodesics arising from the optimal transport metric to interpolate between arbitrary textures; the Gaussian closed form is sketched after this list.
Compared to other CNN-based approaches, our method appears to match the geometry of texture perception more closely.
arXiv Detail & Related papers (2020-06-05T21:28:36Z)
- Co-occurrence Based Texture Synthesis [25.4878061402506]
We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images.
We show that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis.
arXiv Detail & Related papers (2020-05-17T08:01:44Z)
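For the "Texture Generation with Neural Cellular Automata" entry above, a minimal sketch of an NCA update step may help illustrate the "learned, distributed, local algorithm" framing: every cell updates from its own state and its 3x3 neighbourhood using one shared small network. Grid size, channel count, and the random weights below are illustrative stand-ins, not the trained model from that paper.

```python
import numpy as np

H, W, C = 32, 32, 12        # grid size and per-cell state channels (assumed)
rng = np.random.default_rng(0)

def mix3x3(img, k):
    """Apply a 3x3 kernel to every channel with wrap-around boundaries."""
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += k[dy + 1, dx + 1] * np.roll(img, (-dy, -dx), axis=(0, 1))
    return out

identity = np.zeros((3, 3)); identity[1, 1] = 1.0
sobel_x = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
sobel_y = sobel_x.T

# Per-cell update network; random stand-in weights (the paper trains the rule
# so that iterating it reproduces the single template texture).
W1 = rng.standard_normal((3 * C, 64)) * 0.1
W2 = rng.standard_normal((64, C)) * 0.01

def nca_step(state, fire_rate=0.5):
    # Perception: each cell sees only its own state and its 3x3 neighbourhood.
    percep = np.concatenate(
        [mix3x3(state, k) for k in (identity, sobel_x, sobel_y)], axis=-1)
    delta = np.maximum(percep @ W1, 0.0) @ W2       # small MLP, ReLU hidden
    mask = rng.random((H, W, 1)) < fire_rate        # stochastic cell updates
    return state + delta * mask

state = rng.random((H, W, C))
for _ in range(50):          # iterating the same local rule evolves the texture
    state = nca_step(state)
rgb = state[..., :3]         # first channels treated as the visible texture
print(rgb.shape)
```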
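For the "Texture Interpolation for Probing Visual Perception" entry above, the optimal transport geodesic has a standard closed form in the Gaussian case (the simplest elliptical distribution). This is the textbook formula, not necessarily the exact construction used in that paper:

```latex
% 2-Wasserstein geodesic between Gaussian textures N(m_0, \Sigma_0) and
% N(m_1, \Sigma_1): the interpolate at time t \in [0, 1] is again Gaussian,
% N(m_t, \Sigma_t), with
\begin{align}
  m_t      &= (1 - t)\, m_0 + t\, m_1, \\
  \Sigma_t &= \bigl((1 - t) I + t T\bigr)\, \Sigma_0\, \bigl((1 - t) I + t T\bigr), \\
  T        &= \Sigma_0^{-1/2} \bigl(\Sigma_0^{1/2}\, \Sigma_1\, \Sigma_0^{1/2}\bigr)^{1/2} \Sigma_0^{-1/2},
\end{align}
% where T is the optimal transport map pushing N(m_0, \Sigma_0) onto N(m_1, \Sigma_1).
```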
This list is automatically generated from the titles and abstracts of the papers on this site.