Perceptual Video Quality Assessment: A Survey
- URL: http://arxiv.org/abs/2402.03413v1
- Date: Mon, 5 Feb 2024 16:13:52 GMT
- Title: Perceptual Video Quality Assessment: A Survey
- Authors: Xiongkuo Min, Huiyu Duan, Wei Sun, Yucheng Zhu, Guangtao Zhai
- Abstract summary: Perceptual video quality assessment plays a vital role in the field of video processing.
Various subjective and objective video quality assessment studies have been conducted over the past two decades.
This survey provides an up-to-date and comprehensive review of these video quality assessment studies.
- Score: 63.61214597655413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perceptual video quality assessment plays a vital role in the field of video
processing due to the existence of quality degradations introduced in various
stages of video signal acquisition, compression, transmission and display. With
the advancement of internet communication and cloud service technology, video
content and traffic are growing exponentially, which further emphasizes the
requirement for accurate and rapid assessment of video quality. Therefore,
numerous subjective and objective video quality assessment studies have been
conducted over the past two decades for both generic videos and specific videos
such as streaming, user-generated content (UGC), 3D, virtual and augmented
reality (VR and AR), high frame rate (HFR), audio-visual, etc. This survey
provides an up-to-date and comprehensive review of these video quality
assessment studies. Specifically, we first review the subjective video quality
assessment methodologies and databases, which are necessary for validating the
performance of video quality metrics. Second, the objective video quality
assessment algorithms for general purposes are surveyed and categorized according
to the methodologies utilized in the quality measures. Third, we overview the
objective video quality assessment measures for specific applications and
emerging topics. Finally, the performances of the state-of-the-art video
quality assessment measures are compared and analyzed. This survey provides a
systematic overview of both classical works and recent progress in the realm
of video quality assessment, which can help other researchers quickly get up to
speed in the field and conduct relevant research.
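As a concrete, hedged illustration of what a full-reference objective quality measure looks like in practice, the sketch below computes frame-wise PSNR between a reference video and its degraded version and averages the result over time. PSNR is a classical baseline covered by surveys of this kind; the array shapes, the 8-bit peak value, and the cap on perfectly reconstructed frames are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def video_psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Mean frame-wise PSNR (dB) between two videos of shape (T, H, W, C); 8-bit range assumed."""
    assert reference.shape == distorted.shape, "videos must have identical shapes"
    ref = reference.astype(np.float64)
    dis = distorted.astype(np.float64)
    psnrs = []
    for r, d in zip(ref, dis):
        mse = np.mean((r - d) ** 2)
        # A perfectly reconstructed frame has infinite PSNR; cap it so the temporal average stays finite.
        psnrs.append(100.0 if mse == 0 else 10.0 * np.log10(peak ** 2 / mse))
    return float(np.mean(psnrs))
```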
Related papers
- Multi-Branch Collaborative Learning Network for Video Quality Assessment in Industrial Video Search [27.0139421302102]
In industrial systems, low-quality video characteristics fall into four categories.
These low-quality videos have been largely overlooked in academic research.
We introduce the Multi-Branch Collaborative Network (MBCN) tailored for industrial video retrieval systems.
arXiv Detail & Related papers (2025-02-09T14:57:25Z)
- FineVQ: Fine-Grained User Generated Content Video Quality Assessment [57.51274708410407]
We establish the first large-scale Fine-grained Video quality assessment Database, termed FineVD, which comprises 6104 videos with fine-grained quality scores and descriptions across multiple dimensions.
We propose a Fine-grained Video Quality assessment (FineVQ) model to learn the fine-grained quality of videos, with the capabilities of quality rating, quality scoring, and quality attribution.
arXiv Detail & Related papers (2024-12-26T14:44:47Z)
- VQA$^2$: Visual Question Answering for Video Quality Assessment [76.81110038738699]
Video Quality Assessment (VQA) is a classic field in low-level visual perception.
Recent studies in the image domain have demonstrated that Visual Question Answering (VQA) can markedly enhance low-level visual quality evaluation.
We introduce the VQA2 Instruction dataset - the first visual question answering instruction dataset that focuses on video quality assessment.
The VQA2 series models interleave visual and motion tokens to enhance the perception of spatial-temporal quality details in videos.
arXiv Detail & Related papers (2024-11-06T09:39:52Z)
- Advancing Video Quality Assessment for AIGC [17.23281750562252]
We propose a novel loss function that combines mean absolute error with cross-entropy loss to mitigate inter-frame quality inconsistencies.
We also introduce the innovative S2CNet technique to retain critical content, while leveraging adversarial training to enhance the model's generalization capabilities.
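The combined objective described in this entry can be pictured as a weighted blend of a regression term (mean absolute error on predicted scores) and a classification term (cross-entropy over binned quality levels). The sketch below is a minimal, hypothetical reading: the quality-bin head, the blend weight `alpha`, and all tensor shapes are assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def combined_quality_loss(pred_scores: torch.Tensor,    # (N,) predicted quality scores
                          pred_logits: torch.Tensor,    # (N, K) logits over K hypothetical quality bins
                          target_scores: torch.Tensor,  # (N,) ground-truth scores
                          target_bins: torch.Tensor,    # (N,) ground-truth bin indices (long)
                          alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical blend of an MAE regression loss and a cross-entropy classification loss."""
    mae = F.l1_loss(pred_scores, target_scores)
    ce = F.cross_entropy(pred_logits, target_bins)
    return alpha * mae + (1.0 - alpha) * ce
```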
arXiv Detail & Related papers (2024-09-23T10:36:22Z)
- Benchmarking Multi-dimensional AIGC Video Quality Assessment: A Dataset and Unified Model [56.03592388332793]
We investigate the AIGC-VQA problem, considering both subjective and objective quality assessment perspectives.
For the subjective perspective, we construct the Large-scale Generated Video Quality assessment (LGVQ) dataset, consisting of 2,808 AIGC videos.
We evaluate the perceptual quality of AIGC videos from three critical dimensions: spatial quality, temporal quality, and text-video alignment.
We propose the Unify Generated Video Quality assessment (UGVQ) model, designed to accurately evaluate the multi-dimensional quality of AIGC videos.
arXiv Detail & Related papers (2024-07-31T07:54:26Z)
- RMT-BVQA: Recurrent Memory Transformer-based Blind Video Quality Assessment for Enhanced Video Content [7.283653823423298]
We propose a novel blind deep video quality assessment (VQA) method specifically for enhanced video content.
It employs a new Recurrent Memory Transformer (RMT) based network architecture to obtain video quality representations.
The extracted quality representations are then combined through linear regression to generate video-level quality indices.
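The final regression step described above can be sketched as follows, assuming pooled per-video feature vectors and subjective quality labels; scikit-learn's `LinearRegression` stands in for whatever regressor the authors actually fit, and the feature dimension and data here are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical shapes: features (n_videos, feature_dim) pooled from per-clip quality
# representations, mos (n_videos,) subjective quality labels used to fit the regressor.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 128))
mos = rng.uniform(1.0, 5.0, size=200)

regressor = LinearRegression().fit(features, mos)
predicted_quality = regressor.predict(features[:5])  # video-level quality indices
print(predicted_quality)
```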
arXiv Detail & Related papers (2024-05-14T14:01:15Z)
- Towards A Better Metric for Text-to-Video Generation [102.16250512265995]
Generative models have demonstrated remarkable capability in synthesizing high-quality text, images, and videos.
We introduce a novel evaluation pipeline, the Text-to-Video Score (T2VScore).
This metric integrates two pivotal criteria: (1) Text-Video Alignment, which scrutinizes the fidelity of the video in representing the given text description, and (2) Video Quality, which evaluates the video's overall production caliber with a mixture of experts.
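A minimal sketch of how two such criteria might be folded into a single score is given below; a fixed weighted average stands in for the paper's actual aggregation, and the weight and the assumption that both sub-scores lie in [0, 1] are hypothetical.

```python
def t2v_score_sketch(alignment: float, quality: float, weight: float = 0.5) -> float:
    """Toy combination of a text-video alignment score and a video quality score, both assumed in [0, 1]."""
    return weight * alignment + (1.0 - weight) * quality

# Example: a video that matches its prompt well but has mediocre visual quality.
print(t2v_score_sketch(alignment=0.9, quality=0.6))
```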
arXiv Detail & Related papers (2024-01-15T15:42:39Z)
- Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach [52.07084862209754]
We collect over two million opinions on 4,543 in-the-wild videos, covering 13 dimensions of quality-related factors.
Specifically, we ask the subjects to choose among positive, negative, and neutral labels for each dimension.
These explanation-level opinions allow us to measure the relationships between specific quality factors and abstract subjective quality ratings.
arXiv Detail & Related papers (2023-05-22T05:20:23Z)
- Deep Quality Assessment of Compressed Videos: A Subjective and Objective Study [23.3509109592315]
In the video coding process, the perceived quality of a compressed video is typically evaluated with full-reference quality metrics.
However, the pristine reference video is often unavailable in practical viewing scenarios, so it is critical to design no-reference compressed video quality assessment algorithms.
In this work, a semi-automatic labeling method is adopted to build a large-scale compressed video quality database.
arXiv Detail & Related papers (2022-05-07T10:50:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.