Geometry-Aware Video Quality Assessment for Dynamic Digital Human
- URL: http://arxiv.org/abs/2310.15984v1
- Date: Tue, 24 Oct 2023 16:34:03 GMT
- Title: Geometry-Aware Video Quality Assessment for Dynamic Digital Human
- Authors: Zicheng Zhang, Yingjie Zhou, Wei Sun, Xiongkuo Min, and Guangtao Zhai
- Abstract summary: We propose a novel no-reference (NR) geometry-aware video quality assessment method for the DDH-QA challenge.
The proposed method achieves state-of-the-art performance on the DDH-QA database.
- Score: 56.17852258306602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic Digital Humans (DDHs) are 3D digital models animated with predefined motions. They are inevitably affected by noise/shift during the generation process and by compression distortion during the transmission process, both of which need to be perceptually evaluated. DDHs are usually displayed as 2D rendered animation videos, so it is natural to adapt video quality assessment (VQA) methods to DDH quality assessment (DDH-QA) tasks. However, VQA methods are highly dependent on viewpoints and less sensitive to geometry-based distortions. Therefore, in this paper, we propose a novel no-reference (NR) geometry-aware video quality assessment method for the DDH-QA challenge. Geometry characteristics are described by statistical parameters estimated from the DDHs' geometry attribute distributions, while spatial and temporal features are extracted from the rendered videos. Finally, all of these features are integrated and regressed into quality values. Experimental results show that the proposed method achieves state-of-the-art performance on the DDH-QA database.
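As a rough illustration of the pipeline described in the abstract, the sketch below summarizes a geometry attribute distribution with simple statistical parameters, fuses them with video-side spatial and temporal descriptors, and regresses the result to a quality score. The specific statistics (mean, standard deviation, skewness, kurtosis), the choice of geometry attribute, and the SVR regressor are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch of the fusion-and-regression idea, NOT the authors' implementation.
# Assumptions: per-model geometry attribute samples (e.g., dihedral angles or curvatures)
# and precomputed spatial/temporal video descriptors are already available; SVR is used
# here only as a stand-in regressor.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVR


def geometry_statistics(attribute_values: np.ndarray) -> np.ndarray:
    """Summarize one geometry attribute distribution with simple statistical parameters."""
    return np.array([
        attribute_values.mean(),
        attribute_values.std(),
        skew(attribute_values),
        kurtosis(attribute_values),
    ])


def fuse_features(geometry_feats, spatial_feats, temporal_feats) -> np.ndarray:
    """Concatenate geometry, spatial, and temporal descriptors into one feature vector."""
    return np.concatenate([geometry_feats, spatial_feats, temporal_feats])


def train_quality_regressor(fused_features: np.ndarray, mos: np.ndarray) -> SVR:
    """Regress fused features (n_samples, n_features) onto mean opinion scores (n_samples,)."""
    model = SVR(kernel="rbf")
    model.fit(fused_features, mos)
    return model
```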
Related papers
- A No-Reference Quality Assessment Method for Digital Human Head [56.17852258306602]
We develop a novel no-reference (NR) method based on the Transformer to deal with digital human quality assessment (DHQA).
Specifically, the front 2D projections of the digital humans are rendered as inputs and a vision transformer (ViT) is employed for feature extraction.
Then we design a multi-task module to jointly classify the distortion types and predict the perceptual quality levels of digital humans.
arXiv Detail & Related papers (2023-10-25T16:01:05Z)
- Advancing Zero-Shot Digital Human Quality Assessment through Text-Prompted Evaluation [60.873105678086404]
SJTU-H3D is a subjective quality assessment database specifically designed for full-body digital humans.
It comprises 40 high-quality reference digital humans and 1,120 labeled distorted counterparts generated with seven types of distortions.
arXiv Detail & Related papers (2023-07-06T06:55:30Z)
- DDH-QA: A Dynamic Digital Humans Quality Assessment Database [55.69700918818879]
We construct a large-scale dynamic digital human quality assessment database with diverse motion content as well as multiple distortions.
Ten types of common motion are employed to drive the DDHs, and a total of 800 DDHs are generated.
arXiv Detail & Related papers (2022-12-24T13:35:31Z)
- Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric [58.309735075960745]
This paper explores how to handle point cloud quality assessment (PCQA) tasks with video quality assessment (VQA) methods.
We generate the captured videos by rotating the camera around the point clouds through several circular pathways.
We extract both spatial and temporal quality-aware features from the selected key frames and the video clips using trainable 2D-CNN and pre-trained 3D-CNN models (a minimal sketch of this two-stream extraction follows this list).
arXiv Detail & Related papers (2022-08-30T08:59:41Z)
- Exploring the Effectiveness of Video Perceptual Representation in Blind Video Quality Assessment [55.65173181828863]
We propose a temporal perceptual quality index (TPQI) that measures temporal distortion by describing the graphic morphology of the video perceptual representation.
Experiments show that TPQI is an effective way of predicting subjective temporal quality.
arXiv Detail & Related papers (2022-07-08T07:30:51Z)
- No-Reference Quality Assessment for Colored Point Cloud and Mesh Based on Natural Scene Statistics [36.017914479449864]
We propose an NSS-based no-reference quality assessment metric for colored 3D models.
Our method is mainly validated on the colored point cloud quality assessment database (SJTU-PCQA) and the colored mesh quality assessment database (CMDM).
arXiv Detail & Related papers (2021-07-05T14:03:15Z)
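For the moving-camera point cloud metric above, the following sketch shows the kind of two-stream feature extraction it describes: a trainable 2D-CNN pools spatial features from key frames, and a pre-trained 3D-CNN yields temporal clip features. The backbones (torchvision ResNet-18 and R3D-18), the input sizes, and the simple averaging and concatenation are assumptions for illustration, not the paper's exact models.

```python
# Sketch of spatial (2D-CNN) + temporal (3D-CNN) feature extraction, assuming
# torchvision backbones as stand-ins for the models used in the paper.
import torch
from torchvision.models import resnet18
from torchvision.models.video import r3d_18, R3D_18_Weights

# Trainable 2D-CNN for key-frame spatial features (512-d after removing the classifier head).
spatial_cnn = resnet18(weights=None)
spatial_cnn.fc = torch.nn.Identity()

# Pre-trained 3D-CNN for clip-level temporal features (512-d after removing the classifier head).
temporal_cnn = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
temporal_cnn.fc = torch.nn.Identity()
temporal_cnn.eval()

key_frames = torch.rand(8, 3, 224, 224)   # 8 key frames (hypothetical resolution)
clip = torch.rand(1, 3, 16, 112, 112)     # 1 clip of 16 frames (hypothetical size)

with torch.no_grad():  # no_grad only for this demo; the 2D-CNN would be fine-tuned in training
    spatial_feats = spatial_cnn(key_frames).mean(dim=0)  # average over key frames -> (512,)
    temporal_feats = temporal_cnn(clip).squeeze(0)       # clip-level features -> (512,)

video_features = torch.cat([spatial_feats, temporal_feats])  # fused quality-aware descriptor
```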