Revisiting the Role of Texture in 3D Person Re-identification
- URL: http://arxiv.org/abs/2410.00348v1
- Date: Tue, 1 Oct 2024 02:47:34 GMT
- Title: Revisiting the Role of Texture in 3D Person Re-identification
- Authors: Huy Nguyen, Kien Nguyen, Akila Pemasiri, Sridha Sridharan, Clinton Fookes
- Abstract summary: This study introduces a new framework for 3D person re-identification (re-ID).
We propose a method to emphasize texture in 3D person re-ID models by incorporating UVTexture mapping.
In particular, the visualization and explanation are achieved through activation maps and attribute-based attention maps.
- Score: 38.1484941424058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study introduces a new framework for 3D person re-identification (re-ID) that leverages readily available high-resolution texture data in 3D reconstruction to improve the performance and explainability of the person re-ID task. We propose a method to emphasize texture in 3D person re-ID models by incorporating UVTexture mapping, which better differentiates human subjects. Our approach uniquely combines UVTexture and its heatmaps with 3D models to visualize and explain the person re-ID process. In particular, the visualization and explanation are achieved through activation maps and attribute-based attention maps, which highlight the important regions and features contributing to the person re-ID decision. Our contributions include: (1) a novel technique for emphasizing texture in 3D models using UVTexture processing, (2) an innovative method for explicating person re-ID matches through a combination of 3D models and UVTexture mapping, and (3) achieving state-of-the-art performance in 3D person re-ID. We ensure the reproducibility of our results by making all data, codes, and models publicly available.
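The abstract describes the explanation pipeline only at a high level. As a rough illustration of the kind of activation-map visualization it refers to, the sketch below computes a simple channel-averaged activation heatmap over a UV texture image using an off-the-shelf ResNet-50 backbone. The backbone choice, input size, CAM-style averaging, and the file name in the usage comment are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch (not the authors' implementation): a CAM-style activation map
# over a UV texture image, in the spirit of the heatmap-based explanations
# described in the abstract. Backbone, input size, and hook layer are assumed.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

def uv_texture_activation_map(uv_texture_path: str) -> torch.Tensor:
    """Return an [H, W] activation heatmap in [0, 1] for a UV texture image."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.eval()

    preprocess = transforms.Compose([
        transforms.Resize((256, 128)),  # person re-ID style aspect ratio (assumed)
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    x = preprocess(Image.open(uv_texture_path).convert("RGB")).unsqueeze(0)

    # Hook the last convolutional block to capture spatial feature maps.
    feats = {}
    def hook(_module, _inputs, output):
        feats["maps"] = output
    handle = backbone.layer4.register_forward_hook(hook)
    with torch.no_grad():
        backbone(x)
    handle.remove()

    # Channel-averaged activation map, upsampled to the input resolution.
    fmap = feats["maps"].squeeze(0)            # [C, h, w]
    cam = fmap.mean(dim=0, keepdim=True)       # [1, h, w]
    cam = F.interpolate(cam.unsqueeze(0), size=x.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze()
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam

# Example (hypothetical file name):
# heatmap = uv_texture_activation_map("subject_0001_uvtexture.png")
```

In the paper's framework, such heatmaps would then be mapped back onto the 3D model through the UV coordinates to explain a re-ID match; that re-projection step is omitted here.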
Related papers
- VCD-Texture: Variance Alignment based 3D-2D Co-Denoising for Text-Guided Texturing [22.39760469467524]
We propose a variance-alignment-based texture synthesis to address the modal gap between the 2D and 3D diffusion models.
We present an inpainting module to improve details with conflicting regions.
arXiv Detail & Related papers (2024-07-05T12:11:33Z)
- ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
Results achieve an unprecedented level of identity-consistent and high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z)
- UVMap-ID: A Controllable and Personalized UV Map Generative Model [67.71022515856653]
We introduce UVMap-ID, a controllable and personalized UV Map generative model.
Unlike traditional large-scale training methods in 2D, we propose to fine-tune a pre-trained text-to-image diffusion model.
Both quantitative and qualitative analyses demonstrate the effectiveness of our method in controllable and personalized UV Map generation.
arXiv Detail & Related papers (2024-04-22T20:30:45Z)
- 3D Face Reconstruction Using A Spectral-Based Graph Convolution Encoder [3.749406324648861]
We propose an innovative approach that integrates existing 2D features with 3D features to guide the model learning process.
Our model is trained using 2D-3D data pairs from a combination of datasets and achieves state-of-the-art performance on the NoW benchmark.
arXiv Detail & Related papers (2024-03-08T11:09:46Z)
- Nuvo: Neural UV Mapping for Unruly 3D Representations [61.87715912587394]
Existing UV mapping algorithms are designed to operate on well-behaved meshes rather than the geometry produced by state-of-the-art 3D reconstruction and generation techniques.
We present a UV mapping method designed to operate on geometry produced by 3D reconstruction and generation techniques.
arXiv Detail & Related papers (2023-12-11T18:58:38Z)
- Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z)
- TeCH: Text-guided Reconstruction of Lifelike Clothed Humans [35.68114652041377]
Existing methods often generate overly smooth back-side surfaces with a blurry texture.
Motivated by the power of foundation models, TeCH reconstructs the 3D human by leveraging descriptive text prompts.
We propose a hybrid 3D representation based on DMTet, which consists of an explicit body shape grid and an implicit distance field.
arXiv Detail & Related papers (2023-08-16T17:59:13Z)
- T2TD: Text-3D Generation Model based on Prior Knowledge Guidance [74.32278935880018]
We propose a novel text-3D generation model (T2TD), which introduces the related shapes or textual information as the prior knowledge to improve the performance of the 3D generation model.
Our approach significantly improves 3D model generation quality and outperforms the SOTA methods on the text2shape datasets.
arXiv Detail & Related papers (2023-05-25T06:05:52Z)