Subjective and Objective Visual Quality Assessment of Textured 3D Meshes
- URL: http://arxiv.org/abs/2102.03982v1
- Date: Mon, 8 Feb 2021 03:26:41 GMT
- Title: Subjective and Objective Visual Quality Assessment of Textured 3D Meshes
- Authors: Jinjiang Guo, Vincent Vidal, Irene Cheng, Anup Basu, Atilla Baskurt,
Guillaume Lavoue
- Abstract summary: We present a new subjective study to evaluate the perceptual quality of textured meshes, based on a paired comparison protocol.
We propose two new metrics for visual quality assessment of textured mesh, as optimized linear combinations of accurate geometry and texture quality measurements.
- Score: 3.738515725866836
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objective visual quality assessment of 3D models is a fundamental issue in
computer graphics. Quality assessment metrics may allow a wide range of
processes to be guided and evaluated, such as level of detail creation,
compression, filtering, and so on. Most computer graphics assets are composed
of geometric surfaces on which several texture images can be mapped to make
the rendering more realistic. While some quality assessment metrics exist for
geometric surfaces, almost no research has been conducted on the evaluation of
texture-mapped 3D models. In this context, we present a new subjective study to
evaluate the perceptual quality of textured meshes, based on a paired
comparison protocol. We introduce both texture and geometry distortions on a
set of 5 reference models to produce a database of 136 distorted models,
evaluated using two rendering protocols. Based on analysis of the results, we
propose two new metrics for visual quality assessment of textured mesh, as
optimized linear combinations of accurate geometry and texture quality
measurements. These proposed perceptual metrics outperform their counterparts
in terms of correlation with human opinion. The database, along with the
associated subjective scores, will be made publicly available online.
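The abstract describes the proposed metrics only as optimized linear combinations of geometry and texture quality measurements. As a rough, hedged illustration of that idea (not the authors' implementation, which the abstract does not detail), the Python sketch below fits a single mixing weight between a hypothetical geometry distortion term and a hypothetical texture distortion term by maximizing rank correlation with subjective scores; all arrays here are synthetic placeholders.
```python
# Minimal sketch (not the authors' code): fit a linear combination of a
# geometry-quality term and a texture-quality term to subjective scores.
# geom_dist, tex_dist and subj_scores are synthetic stand-ins; the paper's
# actual geometry/texture measurements are not specified in this abstract.
import numpy as np
from scipy.stats import spearmanr
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
geom_dist = rng.uniform(0, 1, 136)   # per-model geometry distortion measure (136 models, as in the study)
tex_dist = rng.uniform(0, 1, 136)    # per-model texture distortion measure
subj_scores = 0.6 * geom_dist + 0.4 * tex_dist + 0.05 * rng.normal(size=136)  # toy subjective scores

def neg_srocc(alpha):
    """Negative Spearman correlation of the combined metric with subjective scores."""
    combined = alpha * geom_dist + (1.0 - alpha) * tex_dist
    return -spearmanr(combined, subj_scores).correlation

# Optimize the mixing weight on [0, 1]; the paper likewise reports weights
# optimized against subjective opinion, though the details differ from this toy setup.
res = minimize_scalar(neg_srocc, bounds=(0.0, 1.0), method="bounded")
print(f"alpha = {res.x:.3f}, SROCC = {-res.fun:.3f}")
```
In practice the underlying geometry and texture measurements, the number of weights, and the optimization target would follow the paper rather than this toy setup.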
Related papers
- Real-time Free-view Human Rendering from Sparse-view RGB Videos using Double Unprojected Textures [87.80984588545589]
Real-time free-view human rendering from sparse-view RGB inputs is a challenging task due to the sensor scarcity and the tight time budget.
Recent methods leverage 2D CNNs operating in texture space to learn rendering primitives.
We present Double Unprojected Textures, which at the core disentangles coarse geometric deformation estimation from appearance synthesis.
arXiv Detail & Related papers (2024-12-17T18:57:38Z)
- ConvMesh: Reimagining Mesh Quality Through Convex Optimization [55.2480439325792]
This research applies disciplined convex programming, a structured form of convex optimization, to enhance existing meshes.
By focusing on a sparse set of point clouds from both the original and target meshes, this method demonstrates significant improvements in mesh quality with minimal data requirements.
arXiv Detail & Related papers (2024-12-11T15:48:25Z)
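ConvMesh is summarized above only at a high level. As a hedged sketch of what a disciplined convex program over a sparse set of corresponding points might look like (the actual ConvMesh formulation is not given here, and the `source`/`target` arrays are made-up data), the CVXPY example below pulls sampled mesh points toward target points while capping how far each point may move.
```python
# Illustrative only: a disciplined convex program (via CVXPY) that nudges a
# sparse set of sampled mesh points toward target points while limiting the
# per-point displacement. This is NOT the ConvMesh formulation, just a DCP-style example.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
source = rng.uniform(-1, 1, (50, 3))                 # sampled points from the original mesh
target = source + 0.05 * rng.normal(size=(50, 3))    # corresponding points on the target mesh

displacement = cp.Variable((50, 3))                  # per-point correction to optimize
data_term = cp.sum_squares(source + displacement - target)
reg_term = cp.sum_squares(displacement)              # discourage large corrections

problem = cp.Problem(cp.Minimize(data_term + 0.1 * reg_term),
                     [cp.norm(displacement, axis=1) <= 0.2])  # cap per-point movement
problem.solve()
print("optimal objective:", problem.value)
```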
- Textured Mesh Saliency: Bridging Geometry and Texture for Human Perception in 3D Graphics [50.23625950905638]
We present a new dataset for textured mesh saliency, created through an innovative eye-tracking experiment in a six degrees of freedom (6-DOF) VR environment.
Our proposed model predicts saliency maps for textured mesh surfaces by treating each triangular face as an individual unit and assigning a saliency density value to reflect the importance of each local surface region.
arXiv Detail & Related papers (2024-12-11T08:27:33Z)
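The saliency entry above states that each triangular face is treated as an individual unit carrying a saliency density value. The snippet below is a hypothetical illustration of one way to derive such a per-face density from per-vertex saliency, normalizing by triangle area; the dataset's actual definition may differ.
```python
# Hypothetical illustration (not the paper's code): convert per-vertex saliency
# values into a per-face saliency density by averaging over each triangle's
# corners and normalizing by triangle area.
import numpy as np

def face_saliency_density(vertices, faces, vertex_saliency):
    """vertices: (V,3) float, faces: (F,3) int, vertex_saliency: (V,) float."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)  # triangle areas
    face_saliency = vertex_saliency[faces].mean(axis=1)               # average of the 3 corners
    return face_saliency / np.maximum(areas, 1e-12)                   # saliency per unit area

# Toy usage with random data standing in for a mesh and a fixation-derived saliency map.
rng = np.random.default_rng(3)
verts = rng.uniform(size=(100, 3))
tris = rng.integers(0, 100, size=(200, 3))
sal = rng.uniform(size=100)
print(face_saliency_density(verts, tris, sal)[:5])
```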
- HybridMQA: Exploring Geometry-Texture Interactions for Colored Mesh Quality Assessment [7.526258700061012]
Mesh quality assessment (MQA) models play a critical role in the design, optimization, and evaluation of mesh operation systems.
We introduce HybridMQA, a hybrid full-reference colored MQA framework that integrates model-based and projection-based approaches.
Our method employs graph learning to extract detailed 3D representations, which are then projected to 2D using a novel feature rendering process.
arXiv Detail & Related papers (2024-12-02T21:35:33Z)
- MaRINeR: Enhancing Novel Views by Matching Rendered Images with Nearby References [49.71130133080821]
MaRINeR is a refinement method that leverages information from a nearby mapping image to improve the rendering of a target viewpoint.
We show improved renderings in quantitative metrics and qualitative examples from both explicit and implicit scene representations.
arXiv Detail & Related papers (2024-07-18T17:50:03Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- No-Reference Quality Assessment for Colored Point Cloud and Mesh Based on Natural Scene Statistics [36.017914479449864]
We propose an NSS-based no-reference quality assessment metric for colored 3D models.
Our method is mainly validated on the colored point cloud quality assessment database (SJTU-PCQA) and the colored mesh quality assessment database (CMDM).
arXiv Detail & Related papers (2021-07-05T14:03:15Z)
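The no-reference metric above is described only as being based on natural scene statistics (NSS). As a generic, hedged example of NSS-style features (not the paper's feature set), the sketch below fits a generalized Gaussian to normalized per-vertex attribute values; in BRISQUE-like NSS approaches, shifts in the fitted shape and scale parameters are used as evidence of distortion.
```python
# Generic NSS-style feature sketch (not the paper's method): fit a generalized
# Gaussian to normalized per-vertex attribute values (e.g., luminance) and use
# the fitted shape and scale as features for a quality regressor.
import numpy as np
from scipy.stats import gennorm

def nss_features(values):
    """Return (shape, scale) of a generalized Gaussian fit to normalized values."""
    v = np.asarray(values, dtype=float)
    v = (v - v.mean()) / (v.std() + 1e-8)      # zero-mean, unit-variance normalization
    beta, _, scale = gennorm.fit(v, floc=0.0)  # fix location at zero, fit shape and scale
    return beta, scale

# Toy comparison: synthetic "pristine" values vs. a distorted copy.
rng = np.random.default_rng(2)
pristine = rng.normal(size=5000)                        # stand-in per-vertex luminance values
distorted = pristine + 0.5 * rng.uniform(-1, 1, 5000)   # toy distortion
print("pristine :", nss_features(pristine))
print("distorted:", nss_features(distorted))
```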
- Hybrid Approach for 3D Head Reconstruction: Using Neural Networks and Visual Geometry [3.970492757288025]
We present a novel method for reconstructing 3D heads from a single or multiple image(s) using a hybrid approach based on deep learning and geometric techniques.
We propose an encoder-decoder network based on the U-net architecture and trained on synthetic data only.
arXiv Detail & Related papers (2021-04-28T11:31:35Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)