Subjective and Objective Visual Quality Assessment of Textured 3D Meshes
- URL: http://arxiv.org/abs/2102.03982v1
- Date: Mon, 8 Feb 2021 03:26:41 GMT
- Title: Subjective and Objective Visual Quality Assessment of Textured 3D Meshes
- Authors: Jinjiang Guo, Vincent Vidal, Irene Cheng, Anup Basu, Atilla Baskurt,
Guillaume Lavoue
- Abstract summary: We present a new subjective study to evaluate the perceptual quality of textured meshes, based on a paired comparison protocol.
We propose two new metrics for visual quality assessment of textured mesh, as optimized linear combinations of accurate geometry and texture quality measurements.
- Score: 3.738515725866836
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objective visual quality assessment of 3D models is a fundamental issue in
computer graphics. Quality assessment metrics may allow a wide range of
processes to be guided and evaluated, such as level of detail creation,
compression, filtering, and so on. Most computer graphics assets are composed
of geometric surfaces on which several texture images can be mapped to make
the rendering more realistic. While some quality assessment metrics exist for
geometric surfaces, almost no research has been conducted on the evaluation of
texture-mapped 3D models. In this context, we present a new subjective study to
evaluate the perceptual quality of textured meshes, based on a paired
comparison protocol. We introduce both texture and geometry distortions on a
set of 5 reference models to produce a database of 136 distorted models,
evaluated using two rendering protocols. Based on analysis of the results, we
propose two new metrics for visual quality assessment of textured mesh, as
optimized linear combinations of accurate geometry and texture quality
measurements. These proposed perceptual metrics outperform their counterparts
in terms of correlation with human opinion. The database, along with the
associated subjective scores, will be made publicly available online.
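The abstract describes the proposed metrics only as optimized linear combinations of geometry and texture quality measurements, evaluated by their correlation with human opinion. The sketch below illustrates that idea under stated assumptions: the arrays geometry_distance, texture_quality, and subjective_score are hypothetical placeholders (subjective scores would come from the paired-comparison study), and an ordinary least-squares fit stands in for whatever optimization the paper actually uses.
```python
# Minimal sketch (not the paper's exact formulation): combine a geometry
# quality measurement and a texture quality measurement into a single
# predicted quality score via a linear combination whose weights are
# optimized against subjective scores, then check correlation with
# human opinion. All data below are hypothetical placeholders.
import numpy as np
from scipy import stats

# Hypothetical per-model measurements, one entry per distorted model.
geometry_distance = np.array([0.12, 0.45, 0.30, 0.80, 0.05, 0.60])  # higher = more geometric distortion
texture_quality = np.array([0.90, 0.40, 0.70, 0.20, 0.95, 0.35])    # higher = better texture fidelity
subjective_score = np.array([4.5, 2.1, 3.4, 1.2, 4.8, 1.9])         # e.g. scaled from paired comparisons

# Design matrix: [geometry term, texture term, intercept].
X = np.column_stack([geometry_distance, texture_quality,
                     np.ones_like(geometry_distance)])

# "Optimized linear combination": here a plain least-squares fit of the
# weights to the subjective scores (an assumption, for illustration only).
weights, *_ = np.linalg.lstsq(X, subjective_score, rcond=None)
predicted_quality = X @ weights

# Correlation with human opinion, the usual yardstick for quality metrics.
plcc = stats.pearsonr(predicted_quality, subjective_score)[0]
srocc = stats.spearmanr(predicted_quality, subjective_score)[0]
print(f"weights: {weights}")
print(f"PLCC: {plcc:.3f}  SROCC: {srocc:.3f}")
```
In practice the weights would be fitted to the subjective scores collected in the study, and the correlations reported on data not used for fitting, to avoid over-optimistic results.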
Related papers
- DreamPolish: Domain Score Distillation With Progressive Geometry Generation [66.94803919328815]
We introduce DreamPolish, a text-to-3D generation model that excels in producing refined geometry and high-quality textures.
In the geometry construction phase, our approach leverages multiple neural representations to enhance the stability of the synthesis process.
In the texture generation phase, we introduce a novel score distillation objective, namely domain score distillation (DSD), to guide neural representations toward a domain of photorealistic, high-quality textures.
arXiv Detail & Related papers (2024-11-03T15:15:01Z)
- MaRINeR: Enhancing Novel Views by Matching Rendered Images with Nearby References [49.71130133080821]
MaRINeR is a refinement method that leverages information from a nearby mapping image to improve the rendering of a target viewpoint.
We show improved renderings in quantitative metrics and qualitative examples from both explicit and implicit scene representations.
arXiv Detail & Related papers (2024-07-18T17:50:03Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- SJTU-TMQA: A quality assessment database for static mesh with texture map [28.821971310570436]
We create a large-scale textured mesh quality assessment database, namely SJTU-TMQA, which includes 21 reference meshes and 945 distorted samples.
Thirteen state-of-the-art objective metrics are evaluated on SJTU-TMQA. The highest reported correlation is around 0.6, indicating the need for more effective objective metrics.
arXiv Detail & Related papers (2023-09-27T14:18:04Z)
- RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the 3D self-occlusions of foreground objects as a 2D self-occlusion map.
We show that our representation map not only allows us to enhance the image quality but also to model temporally coherent complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z)
- Texturify: Generating Textures on 3D Shape Surfaces [34.726179801982646]
We propose Texturify, which learns to predict textures on input 3D shapes.
Our method does not require any 3D color supervision.
arXiv Detail & Related papers (2022-04-05T18:00:04Z)
- From 2D to 3D: Re-thinking Benchmarking of Monocular Depth Prediction [80.67873933010783]
We argue that monocular depth prediction (MDP) currently suffers from benchmark over-fitting and relies on metrics that are only partially helpful for gauging the usefulness of predictions in 3D applications.
This limits the design and development of novel methods that are truly aware of, and improve at estimating, the 3D structure of the scene rather than optimizing 2D-based distances.
We propose a set of metrics well suited to evaluate the 3D geometry of MDP approaches and a novel indoor benchmark, RIO-D3D, crucial for the proposed evaluation methodology.
arXiv Detail & Related papers (2022-03-15T17:50:54Z)
- No-Reference Quality Assessment for Colored Point Cloud and Mesh Based on Natural Scene Statistics [36.017914479449864]
We propose a no-reference quality assessment metric for colored 3D models based on natural scene statistics (NSS).
Our method is mainly validated on the colored point cloud quality assessment database (SJTU-PCQA) and the colored mesh quality assessment database (CMDM).
arXiv Detail & Related papers (2021-07-05T14:03:15Z)
- Hybrid Approach for 3D Head Reconstruction: Using Neural Networks and Visual Geometry [3.970492757288025]
We present a novel method for reconstructing 3D heads from one or more images, using a hybrid approach that combines deep learning and geometric techniques.
We propose an encoder-decoder network based on the U-net architecture and trained on synthetic data only.
arXiv Detail & Related papers (2021-04-28T11:31:35Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.