CHUG: Crowdsourced User-Generated HDR Video Quality Dataset
- URL: http://arxiv.org/abs/2510.09879v1
- Date: Fri, 10 Oct 2025 21:35:39 GMT
- Title: CHUG: Crowdsourced User-Generated HDR Video Quality Dataset
- Authors: Shreshth Saini, Alan C. Bovik, Neil Birkbeck, Yilin Wang, Balu Adsumilli,
- Abstract summary: High Dynamic Range (HDR) videos enhance visual experiences with superior brightness, contrast, and color depth. The surge of User-Generated Content (UGC) on platforms like YouTube and TikTok introduces unique challenges for HDR video quality assessment (VQA) due to diverse capture conditions, editing artifacts, and compression distortions. Existing HDR-VQA datasets primarily focus on professionally generated content (PGC), leaving a gap in understanding real-world UGC-HDR degradations.
- Score: 35.65322085280114
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High Dynamic Range (HDR) videos enhance visual experiences with superior brightness, contrast, and color depth. The surge of User-Generated Content (UGC) on platforms like YouTube and TikTok introduces unique challenges for HDR video quality assessment (VQA) due to diverse capture conditions, editing artifacts, and compression distortions. Existing HDR-VQA datasets primarily focus on professionally generated content (PGC), leaving a gap in understanding real-world UGC-HDR degradations. To address this, we introduce CHUG: Crowdsourced User-Generated HDR Video Quality Dataset, the first large-scale subjective study on UGC-HDR quality. CHUG comprises 856 UGC-HDR source videos, transcoded across multiple resolutions and bitrates to simulate real-world scenarios, totaling 5,992 videos. A large-scale study via Amazon Mechanical Turk collected 211,848 perceptual ratings. CHUG provides a benchmark for analyzing UGC-specific distortions in HDR videos. We anticipate CHUG will advance No-Reference (NR) HDR-VQA research by offering a large-scale, diverse, and real-world UGC dataset. The dataset is publicly available at: https://shreshthsaini.github.io/CHUG/.
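The dataset's construction arithmetic can be checked directly from the figures in the abstract; the script below uses only the stated totals (856 sources, 5,992 videos, 211,848 ratings) and derives the per-source and per-video counts, which the paper does not state explicitly:

```python
# CHUG dataset arithmetic, using only figures stated in the abstract.
NUM_SOURCES = 856       # UGC-HDR source videos
TOTAL_VIDEOS = 5_992    # after transcoding across resolutions/bitrates
TOTAL_RATINGS = 211_848 # perceptual ratings from Amazon Mechanical Turk

# Each source maps to an integer number of transcoded variants.
variants_per_source = TOTAL_VIDEOS // NUM_SOURCES
assert NUM_SOURCES * variants_per_source == TOTAL_VIDEOS  # divides evenly: 7

# Average number of subjective ratings gathered per video.
ratings_per_video = TOTAL_RATINGS / TOTAL_VIDEOS

print(variants_per_source)            # 7 variants per source
print(round(ratings_per_video, 1))    # ~35.4 ratings per video
```

So each source video yields exactly seven resolution/bitrate variants, and each variant received roughly 35 independent opinion scores.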
Related papers
- Seeing Beyond 8bits: Subjective and Objective Quality Assessment of HDR-UGC Videos [40.03485113183691]
High Dynamic Range (HDR) user-generated content (UGC) videos are rapidly proliferating across social platforms. Most perceptual video quality assessment (VQA) systems remain tailored to Standard Dynamic Range (SDR) content. We introduce HDR-Q, the first Multimodal Large Language Model (MLLM) for HDR-UGC VQA.
arXiv Detail & Related papers (2026-03-01T06:02:40Z) - CompressedVQA-HDR: Generalized Full-reference and No-reference Quality Assessment Models for Compressed High Dynamic Range Videos [46.255654141741815]
We introduce CompressedVQA-HDR, an effective VQA framework designed to address the challenges of HDR video quality assessment. We adopt the Swin Transformer and SigLIP 2 as the backbone networks for the proposed full-reference (FR) and no-reference (NR) VQA models, respectively. Our models achieve state-of-the-art performance compared to existing FR and NR VQA models.
arXiv Detail & Related papers (2025-07-16T04:33:06Z) - ICME 2025 Generalizable HDR and SDR Video Quality Measurement Grand Challenge [66.86693390673298]
The challenge was established to benchmark and promote VQA approaches capable of jointly handling HDR and SDR content. The top-performing model achieved state-of-the-art performance, setting a new benchmark for generalizable video quality assessment.
arXiv Detail & Related papers (2025-06-28T07:14:23Z) - HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z) - Subjective Quality Assessment of Compressed Tone-Mapped High Dynamic Range Videos [35.19716951217485]
We analyze the impact of tonemapping operators on the visual quality of streaming HDR videos.
We build the first large-scale open-source database of subjectively rated, compressed, tone-mapped HDR videos.
arXiv Detail & Related papers (2024-03-22T09:38:16Z) - A FUNQUE Approach to the Quality Assessment of Compressed HDR Videos [36.26141980831573]
The state-of-the-art (SOTA) approach HDRMAX augments off-the-shelf video quality models, such as VMAF, with features computed on non-linearly transformed video frames.
Here, we show that an efficient class of video quality prediction models named FUNQUE+ achieves higher HDR video quality prediction accuracy at lower computational cost.
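To illustrate the general idea behind HDRMAX-style features (computing quality features on non-linearly transformed frames), the sketch below applies a pointwise expansive nonlinearity that stretches the darkest and brightest luma regions, where HDR distortions tend to concentrate. The specific `sinh`-based transform and the `delta` parameter are illustrative assumptions, not the published HDRMAX definition:

```python
import numpy as np

def expansive_nonlinearity(luma: np.ndarray, delta: float = 4.0) -> np.ndarray:
    """Map luma in [0, 1] through an odd, expansive pointwise nonlinearity.

    Mid-tones are compressed and the extremes (deep shadows, bright
    highlights) are expanded. This sinh form is an illustrative stand-in
    for HDRMAX's transform, not the published definition.
    """
    centered = 2.0 * luma - 1.0                       # rescale to [-1, 1]
    return np.sinh(delta * centered) / np.sinh(delta) # expansive at the extremes

# Toy frame: a horizontal luminance ramp in [0, 1].
frame = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))
transformed = expansive_nonlinearity(frame)

# Quality features (e.g. block-wise statistics) computed on `transformed`
# would then augment a base model such as VMAF.
print(transformed.min(), transformed.max())  # endpoints map to -1.0 and 1.0
```

Because the transform is odd and normalized, luma 0 and 1 map to -1 and 1, while a mid-gray of 0.5 maps to 0 with a small local slope, which is what makes the extremes comparatively "expanded".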
arXiv Detail & Related papers (2023-12-13T21:24:00Z) - HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment [36.1179702443845]
We introduce HIDRO-VQA, a no-reference (NR) video quality assessment model designed to provide precise quality evaluations of High Dynamic Range (HDR) videos.
Our findings demonstrate that self-supervised pre-trained neural networks can be further fine-tuned in a self-supervised setting to achieve state-of-the-art performance.
Our algorithm can be extended to the Full Reference VQA setting, also achieving state-of-the-art performance.
arXiv Detail & Related papers (2023-11-18T12:33:19Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that requires only dynamic multi-exposure images during training. Self achieves superior results against state-of-the-art self-supervised methods, and performance comparable to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - Towards Efficient SDRTV-to-HDRTV by Learning from Image Formation [51.26219245226384]
Modern displays are capable of rendering video content with high dynamic range (HDR) and wide color gamut (WCG). However, the majority of available resources are still in standard dynamic range (SDR). We define and analyze the SDRTV-to-HDRTV task by modeling the formation of SDRTV/HDRTV content. Our method is primarily designed for ultra-high-definition TV content and is therefore effective and lightweight for processing 4K-resolution images.
arXiv Detail & Related papers (2023-09-08T02:50:54Z) - Subjective Assessment of High Dynamic Range Videos Under Different Ambient Conditions [38.504568225201915]
We present the first publicly released large-scale subjective study of HDR videos.
We study the effect of distortions such as compression and aliasing on the quality of HDR videos.
A total of 66 subjects participated in the study, and more than 20,000 opinion scores were collected.
arXiv Detail & Related papers (2022-09-20T21:25:50Z)