Textured Mesh Saliency: Bridging Geometry and Texture for Human Perception in 3D Graphics
- URL: http://arxiv.org/abs/2412.08188v1
- Date: Wed, 11 Dec 2024 08:27:33 GMT
- Title: Textured Mesh Saliency: Bridging Geometry and Texture for Human Perception in 3D Graphics
- Authors: Kaiwei Zhang, Dandan Zhu, Xiongkuo Min, Guangtao Zhai
- Abstract summary: We present a new dataset for textured mesh saliency, created through an innovative eye-tracking experiment in a six degrees of freedom (6-DOF) VR environment.
Our proposed model predicts saliency maps for textured mesh surfaces by treating each triangular face as an individual unit and assigning a saliency density value to reflect the importance of each local surface region.
- Score: 50.23625950905638
- Abstract: Textured meshes significantly enhance the realism and detail of objects by mapping intricate texture details onto the geometric structure of 3D models. This advancement is valuable across various applications, including entertainment, education, and industry. While traditional mesh saliency studies focus on non-textured meshes, our work explores the complexities introduced by detailed texture patterns. We present a new dataset for textured mesh saliency, created through an innovative eye-tracking experiment in a six degrees of freedom (6-DOF) VR environment. This dataset addresses the limitations of previous studies by providing comprehensive eye-tracking data from multiple viewpoints, thereby advancing our understanding of human visual behavior and supporting more accurate and effective 3D content creation. Our proposed model predicts saliency maps for textured mesh surfaces by treating each triangular face as an individual unit and assigning a saliency density value to reflect the importance of each local surface region. The model incorporates a texture alignment module and a geometric extraction module, combined with an aggregation module to integrate texture and geometry for precise saliency prediction. We believe this approach will enhance the visual fidelity of geometric processing algorithms while ensuring efficient use of computational resources, which is crucial for real-time rendering and high-detail applications such as VR and gaming.
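The abstract describes the model only at a high level. As a rough illustration of the per-face formulation it names, below is a minimal NumPy sketch in which hand-crafted per-face texture and geometry scores stand in for the paper's learned texture alignment and geometric extraction modules, and a simple weighted sum stands in for its aggregation module; every function name and weight here is a hypothetical placeholder, not the authors' implementation.
```python
import numpy as np

def face_areas_and_normals(vertices, faces):
    """Per-face area and unit normal of a triangle mesh.

    vertices: (V, 3) float array, faces: (F, 3) int array.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    norms = np.linalg.norm(cross, axis=1)
    areas = 0.5 * norms
    normals = cross / np.maximum(norms, 1e-12)[:, None]
    return areas, normals

def per_face_saliency_density(vertices, faces, tex_feat, geo_feat,
                              w_tex=0.5, w_geo=0.5):
    """Combine per-face texture and geometry cues into a saliency density.

    tex_feat / geo_feat: (F,) arrays standing in for the learned texture
    and geometry modules (hand-crafted placeholders here). Returns a (F,)
    density whose integral over the surface area is ~1.
    """
    areas, _ = face_areas_and_normals(vertices, faces)

    def norm01(x):
        # Normalize each cue to [0, 1] before the weighted aggregation.
        lo, hi = x.min(), x.max()
        return (x - lo) / (hi - lo + 1e-12)

    score = w_tex * norm01(tex_feat) + w_geo * norm01(geo_feat)
    # Convert per-face scores to a density by dividing by the total
    # salient "mass" integrated over face areas.
    mass = np.sum(score * areas)
    return score / np.maximum(mass, 1e-12)

# Toy usage on a tetrahedron with random stand-in features.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
rng = np.random.default_rng(0)
density = per_face_saliency_density(V, F, rng.random(4), rng.random(4))
```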
Related papers
- Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation [39.702921832009466]
We introduce a new method that incorporates touch as an additional modality to improve the geometric details of generated 3D assets.
We design a lightweight 3D texture field to synthesize visual and tactile textures, guided by 2D diffusion model priors.
We are the first to leverage high-resolution tactile sensing to enhance geometric details for 3D generation tasks.
arXiv Detail & Related papers (2024-12-09T18:59:45Z) - Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity. (A toy sketch of this diffusion-based point sampling appears after this list.)
arXiv Detail & Related papers (2024-11-25T04:06:48Z) - DreamPolish: Domain Score Distillation With Progressive Geometry Generation [66.94803919328815]
We introduce DreamPolish, a text-to-3D generation model that excels in producing refined geometry and high-quality textures.
In the geometry construction phase, our approach leverages multiple neural representations to enhance the stability of the synthesis process.
In the texture generation phase, we introduce a novel score distillation objective, namely domain score distillation (DSD), to guide neural representations toward a domain with photorealistic rendering quality.
arXiv Detail & Related papers (2024-11-03T15:15:01Z) - TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z) - ShaDDR: Interactive Example-Based Geometry and Texture Generation via 3D Shape Detailization and Differentiable Rendering [24.622120688131616]
ShaDDR is an example-based deep generative neural network which produces a high-resolution textured 3D shape.
Our method learns to detailize the geometry via multi-resolution voxel upsampling and generate textures on voxel surfaces.
The generated shape preserves the overall structure of the input coarse voxel model.
arXiv Detail & Related papers (2023-06-08T02:35:30Z) - Mesh2Tex: Generating Mesh Textures from Image Queries [45.32242590651395]
We present Mesh2Tex, which learns realistic object textures from uncorrelated collections of 3D object geometry and real-world images.
In particular, textures generated for real objects match the corresponding real image observations.
arXiv Detail & Related papers (2023-04-12T13:58:25Z) - Learning Neural Implicit Representations with Surface Signal Parameterizations [14.835882967340968]
We present a neural network architecture that implicitly encodes the underlying surface parameterization suitable for appearance data.
Our model remains compatible with existing mesh-based digital content that carries appearance data.
arXiv Detail & Related papers (2022-11-01T15:10:58Z) - Texturify: Generating Textures on 3D Shape Surfaces [34.726179801982646]
We propose Texturify, which learns a generative model that predicts textures directly on the surface of a 3D shape.
Our method requires no 3D color supervision to learn texturing of 3D objects.
arXiv Detail & Related papers (2022-04-05T18:00:04Z) - Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z) - Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
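As forward-referenced in the Geometry Distributions entry above: that paper represents geometry as a distribution of surface points sampled with a diffusion model. The following is a toy sketch of such a reverse-diffusion sampling loop, with the learned denoiser replaced by an analytic oracle that projects onto a unit sphere; this is a generic DDIM-style illustration under stated assumptions, not the paper's architecture.
```python
import numpy as np

def sample_surface_points(n=1024, steps=64, rng=None):
    """Toy reverse-diffusion sampler for a 'geometry distribution'.

    An analytic projection onto the unit sphere plays the role of the
    x0-predictor; real methods learn this denoiser from data. Uses a
    deterministic DDIM-style update over an increasing signal schedule.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Signal levels alpha_t from ~0 (pure noise) to 1 (clean sample).
    alphas = np.linspace(1e-3, 1.0, steps)
    x = rng.standard_normal((n, 3))  # start from Gaussian noise
    for t in range(steps - 1):
        a_t, a_next = alphas[t], alphas[t + 1]
        # Oracle x0-prediction: radial projection onto the unit sphere.
        x0 = x / np.maximum(np.linalg.norm(x, axis=1, keepdims=True), 1e-12)
        # Implied noise estimate, then a deterministic step toward a_next.
        eps = (x - np.sqrt(a_t) * x0) / np.sqrt(1.0 - a_t)
        x = np.sqrt(a_next) * x0 + np.sqrt(1.0 - a_next) * eps
    return x  # points lying on the unit sphere

points = sample_surface_points()
```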
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all listed papers) and is not responsible for any consequences of its use.