SB-VQA: A Stack-Based Video Quality Assessment Framework for Video
Enhancement
- URL: http://arxiv.org/abs/2305.08408v1
- Date: Mon, 15 May 2023 07:44:10 GMT
- Title: SB-VQA: A Stack-Based Video Quality Assessment Framework for Video
Enhancement
- Authors: Ding-Jiun Huang, Yu-Ting Kao, Tieh-Hung Chuang, Ya-Chun Tsai, Jing-Kai
Lou, Shuen-Huei Guan
- Abstract summary: We propose a stack-based framework for video quality assessment (VQA) that outperforms existing state-of-the-art methods on enhanced videos.
In addition to proposing the VQA framework for enhanced videos, we also investigate its application on professionally generated content (PGC).
Our experiments demonstrate that existing VQA algorithms can be applied to PGC videos, and we find that VQA performance for PGC videos can be improved by considering the plot of a play.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, several video quality assessment (VQA) methods have been
developed, achieving high performance. However, these methods were not
specifically trained for enhanced videos, which limits their ability to predict
video quality accurately based on human subjective perception. To address this
issue, we propose a stack-based framework for VQA that outperforms existing
state-of-the-art methods on VDPVE, a dataset consisting of enhanced videos. In
addition to proposing the VQA framework for enhanced videos, we also
investigate its application on professionally generated content (PGC). To
address copyright issues with premium content, we create the PGCVQ dataset,
which consists of videos from YouTube. We evaluate our proposed approach and
state-of-the-art methods on PGCVQ, and provide new insights on the results. Our
experiments demonstrate that existing VQA algorithms can be applied to PGC
videos, and we find that VQA performance for PGC videos can be improved by
considering the plot of a play, which highlights the importance of video
semantic understanding.
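The abstract characterizes the method only as a stack-based framework. As a minimal sketch of one plausible reading, in which several pretrained VQA backbones are ensembled and their scores fused by a small learned regressor, and not the authors' published architecture, the idea could look like the following (all class and parameter names are hypothetical):

```python
import torch
import torch.nn as nn

class StackedVQA(nn.Module):
    """Hypothetical stack-based ensemble for VQA: several pretrained backbone
    predictors each score a video, and a small learned head fuses the
    per-backbone scores into a single quality estimate."""

    def __init__(self, backbones):
        super().__init__()
        # Each backbone is assumed to map a video tensor to a (batch, 1) score.
        self.backbones = nn.ModuleList(backbones)
        # Meta-regressor that learns how to weight the backbone scores.
        self.fusion = nn.Sequential(
            nn.Linear(len(backbones), 16),
            nn.ReLU(),
            nn.Linear(16, 1),
        )

    def forward(self, video):
        # Concatenate per-backbone scores into one feature vector, then fuse.
        scores = torch.cat([b(video) for b in self.backbones], dim=-1)
        return self.fusion(scores)
```

Under this reading, the fusion head (and optionally the backbones) would be fine-tuned against subjective quality labels such as those in VDPVE.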
Related papers
- VQA$^2$: Visual Question Answering for Video Quality Assessment [76.81110038738699]
Video Quality Assessment originally focused on quantitative video quality scoring.
It is now evolving towards more comprehensive visual quality understanding tasks.
We introduce the first visual question answering instruction dataset that focuses entirely on video quality assessment.
We conduct extensive experiments on both video quality scoring and video quality understanding tasks.
arXiv Detail & Related papers (2024-11-06T09:39:52Z)
- AIM 2024 Challenge on Compressed Video Quality Assessment: Methods and Results [120.95863275142727]
This paper presents the results of the Compressed Video Quality Assessment challenge, held in conjunction with the Advances in Image Manipulation (AIM) workshop at ECCV 2024.
The challenge aimed to evaluate the performance of VQA methods on a diverse dataset of 459 videos encoded with 14 codecs of various compression standards.
arXiv Detail & Related papers (2024-08-21T20:32:45Z)
- Benchmarking AIGC Video Quality Assessment: A Dataset and Unified Model [54.69882562863726]
We try to systematically investigate the AIGC-VQA problem from both subjective and objective quality assessment perspectives.
We evaluate the perceptual quality of AIGC videos from three dimensions: spatial quality, temporal quality, and text-to-video alignment.
We propose a Unify Generated Video Quality assessment (UGVQ) model to comprehensively and accurately evaluate the quality of AIGC videos.
arXiv Detail & Related papers (2024-07-31T07:54:26Z)
- CLIPVQA: Video Quality Assessment via CLIP [56.94085651315878]
We propose an efficient CLIP-based Transformer method for the VQA problem (CLIPVQA).
The proposed CLIPVQA achieves new state-of-the-art VQA performance and up to 37% better generalizability than existing benchmark VQA methods.
arXiv Detail & Related papers (2024-07-06T02:32:28Z)
- MD-VQA: Multi-Dimensional Quality Assessment for UGC Live Videos [39.06800945430703]
We build a first-of-a-kind subjective Live VQA database and develop an effective evaluation tool.
MD-VQA achieves state-of-the-art performance on both our Live VQA database and existing compressed VQA databases.
arXiv Detail & Related papers (2023-03-27T06:17:10Z)
- VRAG: Region Attention Graphs for Content-Based Video Retrieval [85.54923500208041]
Region Attention Graph Networks (VRAG) improve on state-of-the-art video-level methods.
VRAG represents videos at a finer granularity via region-level features and encodes video-temporal dynamics through region-level relations.
We show that the performance gap between video-level and frame-level methods can be reduced by segmenting videos into shots and using shot embeddings for video retrieval.
arXiv Detail & Related papers (2022-05-18T16:50:45Z)
- UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content [59.13821614689478]
Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of content are unpredictable, complicated, and often commingled.
Here we contribute to advancing the problem by conducting a comprehensive evaluation of leading VQA models.
By employing a feature selection strategy on top of leading VQA model features, we are able to extract 60 of the 763 statistical features used by the leading models and combine them into a fusion-based model, VIDEVAL.
Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models; a rough sketch of this kind of feature selection follows this list.
arXiv Detail & Related papers (2020-05-29T00:39:20Z)
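As a rough illustration of the feature-selection step described in the UGC-VQA entry above (paring 763 candidate statistical features down to a 60-feature subset), and not the paper's actual procedure, a greedy wrapper selection against mean opinion scores might look as follows; the input file names are placeholders:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVR

# Hypothetical inputs: a per-video feature matrix (n_videos x 763) and the
# corresponding mean opinion scores (MOS); both file names are placeholders.
features = np.load("vqa_features.npy")   # shape: (n_videos, 763)
mos = np.load("mos_scores.npy")          # shape: (n_videos,)

# Greedily keep the 60 features that best help an SVR regressor predict MOS.
selector = SequentialFeatureSelector(SVR(), n_features_to_select=60)
selector.fit(features, mos)

selected = np.flatnonzero(selector.get_support())
print(f"kept {selected.size} of {features.shape[1]} statistical features")
```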