AIM 2024 Challenge on Compressed Video Quality Assessment: Methods and Results
- URL: http://arxiv.org/abs/2408.11982v3
- Date: Tue, 22 Oct 2024 16:58:09 GMT
- Title: AIM 2024 Challenge on Compressed Video Quality Assessment: Methods and Results
- Authors: Maksim Smirnov, Aleksandr Gushchin, Anastasia Antsiferova, Dmitry Vatolin, Radu Timofte, Ziheng Jia, Zicheng Zhang, Wei Sun, Jiaying Qian, Yuqin Cao, Yinan Sun, Yuxin Zhu, Xiongkuo Min, Guangtao Zhai, Kanjar De, Qing Luo, Ao-Xiang Zhang, Peng Zhang, Haibo Lei, Linyan Jiang, Yaqing Li, Wenhui Meng, Zhenzhong Chen, Zhengxue Cheng, Jiahao Xiao, Jun Xu, Chenlong He, Qi Zheng, Ruoxi Zhu, Min Li, Yibo Fan, Zhengzhong Tu
- Abstract summary: This paper presents the results of the Compressed Video Quality Assessment challenge, held in conjunction with the Advances in Image Manipulation (AIM) workshop at ECCV 2024.
The challenge aimed to evaluate the performance of VQA methods on a diverse dataset of 459 videos encoded with 14 codecs of various compression standards.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video quality assessment (VQA) is a crucial task in the development of video compression standards, as it directly impacts the viewer experience. This paper presents the results of the Compressed Video Quality Assessment challenge, held in conjunction with the Advances in Image Manipulation (AIM) workshop at ECCV 2024. The challenge aimed to evaluate the performance of VQA methods on a diverse dataset of 459 videos, encoded with 14 codecs of various compression standards (AVC/H.264, HEVC/H.265, AV1, and VVC/H.266) and containing a comprehensive collection of compression artifacts. To measure the methods' performance, we employed traditional correlation coefficients between their predictions and subjective scores, which were collected via large-scale crowdsourced pairwise human comparisons. For training purposes, participants were provided with the Compressed Video Quality Assessment Dataset (CVQAD), a previously developed dataset of 1022 videos. Up to 30 participating teams registered for the challenge, while we report the results of 6 teams, which submitted valid final solutions and code for reproducing the results. Moreover, we calculated and present the performance of state-of-the-art VQA methods on the developed dataset, providing a comprehensive benchmark for future research. The dataset, results, and online leaderboard are publicly available at https://challenges.videoprocessing.ai/challenges/compressedvideo-quality-assessment.html.
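The evaluation protocol described in the abstract, correlating a method's predicted quality scores with crowdsourced subjective scores, can be sketched in pure Python. This is an illustrative sketch only, not the challenge's official scoring code; the example scores are hypothetical, and the simple ranking below assumes no tied values.

```python
# Sketch of VQA evaluation: correlate predicted quality scores with
# subjective scores using traditional correlation coefficients.
from statistics import mean

def pearson(x, y):
    """Pearson linear correlation coefficient (PLCC)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank-order correlation (SROCC): Pearson on ranks.
    Uses simple ranking without tie correction (assumes distinct scores)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical predicted and subjective scores for five compressed videos
preds = [0.81, 0.42, 0.65, 0.90, 0.30]
subj = [0.78, 0.50, 0.60, 0.95, 0.25]
print(f"PLCC={pearson(preds, subj):.3f}  SROCC={spearman(preds, subj):.3f}")
```

SROCC rewards a method that orders videos by quality correctly even when its score scale differs from the subjective one, which is why rank-based coefficients are standard in VQA benchmarks.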
Related papers
- EVQAScore: Efficient Video Question Answering Data Evaluation
We introduce EVQAScore, a reference-free method that leverages keyword extraction to assess both video caption and video QA data quality.
Our approach achieves state-of-the-art (SOTA) performance for video caption evaluation (Kendall correlation of 32.8 and Spearman correlation of 42.3, which are 4.7 and 5.9 points higher than the previous method, PAC-S++).
By using EVQAScore for data selection, we achieved SOTA results with only 12.5% of the original data volume, outperforming the previous SOTA method PAC-S trained on 100% of the data.
arXiv Detail & Related papers (2024-11-11T12:11:36Z)
- AIM 2024 Challenge on Video Super-Resolution Quality Assessment: Methods and Results
This paper presents the Video Super-Resolution (SR) Quality Assessment (QA) Challenge that was part of the Advances in Image Manipulation (AIM) workshop.
The task of this challenge was to develop an objective QA method for videos upscaled 2x and 4x by modern image- and video-SR algorithms.
The goal was to advance the state-of-the-art in SR QA, which had proven to be a challenging problem with limited applicability of traditional QA methods.
arXiv Detail & Related papers (2024-10-05T16:42:23Z)
- AIM 2024 Challenge on Video Saliency Prediction: Methods and Results
This paper reviews the Challenge on Video Saliency Prediction at AIM 2024.
The goal of the participants was to develop a method for predicting accurate saliency maps for the provided set of video sequences.
arXiv Detail & Related papers (2024-09-23T08:59:22Z)
- AIS 2024 Challenge on Video Quality Assessment of User-Generated Content: Methods and Results
This paper reviews the AIS 2024 Video Quality Assessment (VQA) Challenge, focused on User-Generated Content (UGC).
The aim of this challenge is to gather deep learning-based methods capable of estimating perceptual quality of videos.
The user-generated videos from the YouTube dataset cover diverse content (sports, games, lyrics, anime, etc.), qualities, and resolutions.
arXiv Detail & Related papers (2024-04-24T21:02:14Z)
- NTIRE 2024 Challenge on Short-form UGC Video Quality Assessment: Methods and Results
This paper reviews the NTIRE 2024 Challenge on Short-form UGC Video Quality Assessment (S-UGC VQA).
The KVQ database is divided into three parts, including 2926 videos for training, 420 videos for validation, and 854 videos for testing.
The purpose is to build new benchmarks and advance the development of S-UGC VQA.
arXiv Detail & Related papers (2024-04-17T12:26:13Z)
- KVQ: Kwai Video Quality Assessment for Short-form Videos
We establish the first large-scale Kaleidoscope short Video database for Quality assessment, KVQ, which comprises 600 user-uploaded short videos and 3600 processed videos.
We propose KSVQE, the first short-form video quality evaluator, which identifies quality-determining semantics by leveraging the content understanding of large vision-language models.
arXiv Detail & Related papers (2024-02-11T14:37:54Z)
- NTIRE 2023 Quality Assessment of Video Enhancement Challenge
This paper reports on the NTIRE 2023 Quality Assessment of Video Enhancement Challenge.
The challenge addresses a major problem in the field of video processing: video quality assessment (VQA) for enhanced videos.
The challenge has a total of 167 registered participants.
arXiv Detail & Related papers (2023-07-19T02:33:42Z)
- Unified Quality Assessment of In-the-Wild Videos with Mixed Datasets Training
We focus on automatically assessing the quality of in-the-wild videos in computer vision applications.
To improve the performance of quality assessment models, we borrow intuitions from human perception.
We propose a mixed datasets training strategy for training a single VQA model with multiple datasets.
arXiv Detail & Related papers (2020-11-09T09:22:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.