NTIRE 2023 Quality Assessment of Video Enhancement Challenge
- URL: http://arxiv.org/abs/2307.09729v1
- Date: Wed, 19 Jul 2023 02:33:42 GMT
- Title: NTIRE 2023 Quality Assessment of Video Enhancement Challenge
- Authors: Xiaohong Liu, Xiongkuo Min, Wei Sun, Yulun Zhang, Kai Zhang, Radu
Timofte, Guangtao Zhai, Yixuan Gao, Yuqin Cao, Tengchuan Kou, Yunlong Dong,
Ziheng Jia, Yilin Li, Wei Wu, Shuming Hu, Sibin Deng, Pengxiang Xiao, Ying
Chen, Kai Li, Kai Zhao, Kun Yuan, Ming Sun, Heng Cong, Hao Wang, Lingzhi Fu,
Yusheng Zhang, Rongyu Zhang, Hang Shi, Qihang Xu, Longan Xiao, Zhiliang Ma,
Mirko Agarla, Luigi Celona, Claudio Rota, Raimondo Schettini, Zhiwei Huang,
Yanan Li, Xiaotao Wang, Lei Lei, Hongye Liu, Wei Hong, Ironhead Chuang, Allen
Lin, Drake Guan, Iris Chen, Kae Lou, Willy Huang, Yachun Tasi, Yvonne Kao,
Haotian Fan, Fangyuan Kong, Shiqi Zhou, Hao Liu, Yu Lai, Shanshan Chen, Wenqi
Wang, Haoning Wu, Chaofeng Chen, Chunzheng Zhu, Zekun Guo, Shiling Zhao,
Haibing Yin, Hongkui Wang, Hanene Brachemi Meftah, Sid Ahmed Fezza, Wassim
Hamidouche, Olivier Déforges, Tengfei Shi, Azadeh Mansouri, Hossein
Motamednia, Amir Hossein Bakhtiari, Ahmad Mahmoudi Aznaveh
- Abstract summary: This paper reports on the NTIRE 2023 Quality Assessment of Video Enhancement Challenge.
The challenge addresses a major problem in the field of video processing: video quality assessment (VQA) for enhanced videos.
The challenge has a total of 167 registered participants.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper reports on the NTIRE 2023 Quality Assessment of Video Enhancement
Challenge, held in conjunction with the New Trends in Image Restoration and
Enhancement Workshop (NTIRE) at CVPR 2023. This challenge addresses a major
problem in the field of video processing, namely, video quality assessment
(VQA) for enhanced videos. The challenge uses the VQA
Dataset for Perceptual Video Enhancement (VDPVE), which has a total of 1211
enhanced videos, including 600 videos with color, brightness, and contrast
enhancements, 310 videos with deblurring, and 301 deshaked videos. The
challenge has a total of 167 registered participants. 61 participating teams
submitted their prediction results during the development phase, with a total
of 3168 submissions. During the final testing phase, 37 participating teams
made a total of 176 submissions. Finally, 19 participating teams submitted
their models and fact sheets detailing the methods they used. Several methods
outperformed the baseline methods, and the winning methods demonstrated
superior prediction performance.
Related papers
- AIM 2024 Challenge on Video Saliency Prediction: Methods and Results
This paper reviews the Challenge on Video Saliency Prediction at AIM 2024.
The goal of the participants was to develop a method for predicting accurate saliency maps for the provided set of video sequences.
arXiv Detail & Related papers (2024-09-23T08:59:22Z)
- AIM 2024 Challenge on Compressed Video Quality Assessment: Methods and Results
This paper presents the results of the Compressed Video Quality Assessment challenge, held in conjunction with the Advances in Image Manipulation (AIM) workshop at ECCV 2024.
The challenge aimed to evaluate the performance of VQA methods on a diverse dataset of 459 videos encoded with 14 codecs of various compression standards.
arXiv Detail & Related papers (2024-08-21T20:32:45Z)
- NTIRE 2024 Quality Assessment of AI-Generated Content Challenge
The challenge is divided into the image track and the video track.
The winning methods in both tracks have demonstrated superior prediction performance on AIGC.
arXiv Detail & Related papers (2024-04-25T15:36:18Z)
- AIS 2024 Challenge on Video Quality Assessment of User-Generated Content: Methods and Results
This paper reviews the AIS 2024 Video Quality Assessment (VQA) Challenge, focused on User-Generated Content (UGC).
The aim of this challenge is to gather deep learning-based methods capable of estimating perceptual quality of videos.
The user-generated videos from the YouTube dataset span diverse content (sports, games, lyrics, anime, etc.), qualities, and resolutions.
arXiv Detail & Related papers (2024-04-24T21:02:14Z)
- NTIRE 2024 Challenge on Short-form UGC Video Quality Assessment: Methods and Results
This paper reviews the NTIRE 2024 Challenge on Short-form UGC Video Quality Assessment (S-UGC VQA).
The KVQ database is divided into three parts, including 2926 videos for training, 420 videos for validation, and 854 videos for testing.
The purpose is to build new benchmarks and advance the development of S-UGC VQA.
arXiv Detail & Related papers (2024-04-17T12:26:13Z)