Deep Quality Assessment of Compressed Videos: A Subjective and Objective
Study
- URL: http://arxiv.org/abs/2205.03630v1
- Date: Sat, 7 May 2022 10:50:06 GMT
- Authors: Liqun Lin, Zheng Wang, Jiachen He, Weiling Chen, Yiwen Xu and Tiesong
Zhao
- Abstract summary: In the video coding process, the perceived quality of a compressed video is evaluated by full-reference quality evaluation metrics.
To solve this problem, it is critical to design no-reference compressed video quality assessment algorithms.
In this work, a semi-automatic labeling method is adopted to build a large-scale compressed video quality database.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the video coding process, the perceived quality of a compressed video is
evaluated by full-reference quality evaluation metrics. However, it is
difficult to obtain reference videos with perfect quality. To solve this
problem, it is critical to design no-reference compressed video quality
assessment algorithms, which assist in measuring the quality of experience on
the server side and in allocating resources on the network side. Convolutional
Neural Networks (CNNs) have shown their advantages in Video Quality Assessment
(VQA), with promising successes in recent years. A large-scale quality database is
very important for learning accurate and powerful compressed video quality
metrics. In this work, a semi-automatic labeling method is adopted to build a
large-scale compressed video quality database, which allows us to label a large
number of compressed videos with a manageable human workload. The resulting
Compressed Video quality database with Semi-Automatic Ratings (CVSAR) is, so
far, the largest compressed video quality database. We train a no-reference
compressed video quality assessment model with a 3D CNN for SpatioTemporal
Feature Extraction and Evaluation (STFEE). Experimental results demonstrate
that the proposed method outperforms state-of-the-art metrics and achieves
promising generalization performance in cross-database tests. The CVSAR
database and STFEE model will be made publicly available to facilitate
reproducible research.
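The pipeline the abstract describes, extracting spatiotemporal features from a video clip with 3D convolutions and mapping them to a quality score, can be sketched in miniature. This is a hypothetical NumPy toy (function names, kernel counts, and the linear quality head are all illustrative assumptions), not the paper's actual STFEE architecture or trained weights:

```python
import numpy as np

def conv3d(clip, kernel):
    # Valid 3D convolution over a (T, H, W) grayscale clip.
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+t, j:j+h, k:k+w] * kernel)
    return out

def stfee_features(clip, kernels):
    # Hypothetical feature extractor: one 3D conv + ReLU + global average
    # pool per kernel yields one scalar feature.
    return np.array([np.maximum(conv3d(clip, k), 0).mean() for k in kernels])

rng = np.random.default_rng(0)
clip = rng.random((8, 16, 16))                    # toy clip: 8 frames of 16x16
kernels = [rng.standard_normal((3, 3, 3)) for _ in range(4)]
feats = stfee_features(clip, kernels)             # 4-dimensional feature vector
score = feats @ rng.random(4)                     # hypothetical linear quality head
print(feats.shape, float(score))
```

In the real model the kernels would be learned from CVSAR labels rather than random, but the flow (3D conv, nonlinearity, pooling, regression to a score) is the same shape.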
Related papers
- ESVQA: Perceptual Quality Assessment of Egocentric Spatial Videos [71.62145804686062]
We introduce the first Egocentric Spatial Video Quality Assessment Database (ESVQAD), which comprises 600 egocentric spatial videos and their mean opinion scores (MOSs).
We propose a novel multi-dimensional binocular feature fusion model, termed ESVQAnet, which integrates binocular spatial, motion, and semantic features to predict the perceptual quality.
Experimental results demonstrate that ESVQAnet outperforms 16 state-of-the-art VQA models on the embodied perceptual quality assessment task.
arXiv Detail & Related papers (2024-12-29T10:13:30Z) - FineVQ: Fine-Grained User Generated Content Video Quality Assessment [57.51274708410407]
We establish the first large-scale Fine-grained Video quality assessment Database, termed FineVD, which comprises 6104 videos with fine-grained quality scores and descriptions across multiple dimensions.
We propose a Fine-grained Video Quality assessment (FineVQ) model to learn the fine-grained quality of videos, with the capabilities of quality rating, quality scoring, and quality attribution.
arXiv Detail & Related papers (2024-12-26T14:44:47Z) - Benchmarking Multi-dimensional AIGC Video Quality Assessment: A Dataset and Unified Model [56.03592388332793]
We investigate the AIGC-VQA problem, considering both subjective and objective quality assessment perspectives.
For the subjective perspective, we construct the Large-scale Generated Video Quality assessment (LGVQ) dataset, consisting of 2,808 AIGC videos.
We evaluate the perceptual quality of AIGC videos from three critical dimensions: spatial quality, temporal quality, and text-video alignment.
We propose the Unify Generated Video Quality assessment (UGVQ) model, designed to accurately evaluate the multi-dimensional quality of AIGC videos.
arXiv Detail & Related papers (2024-07-31T07:54:26Z) - CLIPVQA:Video Quality Assessment via CLIP [56.94085651315878]
We propose an efficient CLIP-based Transformer method for the VQA problem (CLIPVQA).
The proposed CLIPVQA achieves new state-of-the-art VQA performance and up to 37% better generalizability than existing benchmark VQA methods.
arXiv Detail & Related papers (2024-07-06T02:32:28Z) - RMT-BVQA: Recurrent Memory Transformer-based Blind Video Quality Assessment for Enhanced Video Content [7.283653823423298]
We propose a novel blind deep video quality assessment (VQA) method specifically for enhanced video content.
It employs a new Recurrent Memory Transformer (RMT) based network architecture to obtain video quality representations.
The extracted quality representations are then combined through linear regression to generate video-level quality indices.
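The last step above, combining extracted quality representations through linear regression into video-level quality indices, can be illustrated with a toy NumPy sketch. The per-segment representations and subjective scores here are random stand-ins, not outputs of the paper's Recurrent Memory Transformer:

```python
import numpy as np

rng = np.random.default_rng(1)
n_videos, n_segments, dim = 20, 5, 8
# Stand-in for per-segment quality representations from a learned backbone.
reps = rng.random((n_videos, n_segments, dim))

# Pool segment representations into one vector per video.
video_feats = reps.mean(axis=1)                       # (n_videos, dim)

# Synthetic subjective scores for the sketch.
true_w = rng.standard_normal(dim)
mos = video_feats @ true_w + 0.01 * rng.standard_normal(n_videos)

# Fit the linear regression that maps representations to quality indices.
X = np.hstack([video_feats, np.ones((n_videos, 1))])  # add bias column
w, *_ = np.linalg.lstsq(X, mos, rcond=None)
pred = X @ w                                          # video-level quality indices
print(np.corrcoef(pred, mos)[0, 1])
```

With real data the regression weights would be fit against human opinion scores on a training split and then applied to unseen videos.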
arXiv Detail & Related papers (2024-05-14T14:01:15Z) - KVQ: Kwai Video Quality Assessment for Short-form Videos [24.5291786508361]
We establish the first large-scale Kaleidoscope short Video database for Quality assessment, KVQ, which comprises 600 user-uploaded short videos and 3600 processed videos.
We propose the first short-form video quality evaluator, i.e., KSVQE, which enables the quality evaluator to identify the quality-determined semantics with the content understanding of large vision language models.
arXiv Detail & Related papers (2024-02-11T14:37:54Z) - Perceptual Video Quality Assessment: A Survey [63.61214597655413]
Perceptual video quality assessment plays a vital role in the field of video processing.
Various subjective and objective video quality assessment studies have been conducted over the past two decades.
This survey provides an up-to-date and comprehensive review of these video quality assessment studies.
arXiv Detail & Related papers (2024-02-05T16:13:52Z) - Blindly Assess Quality of In-the-Wild Videos via Quality-aware
Pre-training and Motion Perception [32.87570883484805]
We propose to transfer knowledge from image quality assessment (IQA) databases with authentic distortions and large-scale action recognition with rich motion patterns.
We train the proposed model on the target VQA databases using a mixed list-wise ranking loss function.
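A list-wise ranking loss of the general kind mentioned above can be sketched as a ListNet-style top-1 cross-entropy between the score distributions induced by predictions and by ground-truth MOS. This is a generic illustration, not necessarily the paper's exact mixed list-wise loss:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def listwise_ranking_loss(pred, mos):
    # ListNet-style top-1 cross-entropy: turn both score lists into
    # probability distributions and penalize their mismatch.
    p, q = softmax(mos), softmax(pred)
    return -np.sum(p * np.log(q + 1e-12))

mos = np.array([4.5, 3.0, 1.5, 2.0])
good = listwise_ranking_loss(np.array([2.0, 1.0, -1.0, 0.0]), mos)  # same order as mos
bad = listwise_ranking_loss(np.array([-1.0, 0.0, 2.0, 1.0]), mos)   # reversed order
print(good < bad)   # ranking-consistent predictions get lower loss
```

The appeal of a list-wise loss for VQA is that it optimizes the relative ordering of videos, which is what rank correlation metrics such as SROCC measure.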
arXiv Detail & Related papers (2021-08-19T05:29:19Z) - RAPIQUE: Rapid and Accurate Video Quality Prediction of User Generated
Content [44.03188436272383]
We introduce an effective and efficient video quality model for user-generated content, which we dub the Rapid and Accurate Video Quality Evaluator (RAPIQUE).
RAPIQUE combines and leverages the advantages of both quality-aware scene statistics features and semantics-aware deep convolutional features.
Our experimental results on recent large-scale video quality databases show that RAPIQUE delivers top performances on all the datasets at a considerably lower computational expense.
arXiv Detail & Related papers (2021-01-26T17:23:46Z) - UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated
Content [59.13821614689478]
Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of content are unpredictable, complicated, and often commingled.
Here we contribute to advancing the problem by conducting a comprehensive evaluation of leading VQA models.
By employing a feature selection strategy on top of leading VQA model features, we select 60 of the 763 statistical features used by the leading models to build a new fusion-based model, VIDEVAL.
Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models.
arXiv Detail & Related papers (2020-05-29T00:39:20Z)
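The feature selection idea in the entry above can be illustrated with a greedy forward-selection sketch: repeatedly add whichever candidate feature most improves a least-squares fit to subjective scores. The data here is synthetic and the procedure is a generic illustration; VIDEVAL's actual selection strategy may differ:

```python
import numpy as np

def greedy_feature_selection(X, y, k):
    # Forward selection: at each step, add the feature that most reduces
    # the mean squared error of a linear fit to the scores y.
    chosen = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = X[:, chosen + [j]]
            A = np.hstack([cols, np.ones((len(y), 1))])
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = np.mean((A @ w - y) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(2)
X = rng.random((40, 10))              # 40 videos x 10 candidate features
# Only features 2 and 7 carry signal in this synthetic setup.
y = 3 * X[:, 2] - 2 * X[:, 7] + 0.01 * rng.standard_normal(40)
picked = greedy_feature_selection(X, y, 2)
print(sorted(picked))
```

Pruning a large feature pool this way is what lets a fusion model keep most of the accuracy of the full feature set at a fraction of the extraction cost.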
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.