HDR-ChipQA: No-Reference Quality Assessment on High Dynamic Range Videos
- URL: http://arxiv.org/abs/2304.13156v1
- Date: Tue, 25 Apr 2023 21:25:02 GMT
- Title: HDR-ChipQA: No-Reference Quality Assessment on High Dynamic Range Videos
- Authors: Joshua P. Ebenezer, Zaixi Shang, Yongjun Wu, Hai Wei, Sriram
Sethuraman and Alan C. Bovik
- Abstract summary: We present a no-reference video quality model and algorithm that delivers standout performance for High Dynamic Range (HDR) videos.
HDR videos represent wider ranges of luminances, details, and colors than Standard Dynamic Range (SDR) videos.
- Score: 38.504568225201915
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a no-reference video quality model and algorithm that delivers
standout performance for High Dynamic Range (HDR) videos, which we call
HDR-ChipQA. HDR videos represent wider ranges of luminances, details, and
colors than Standard Dynamic Range (SDR) videos. The growing adoption of HDR in
massively scaled video networks has driven the need for video quality
assessment (VQA) algorithms that better account for distortions on HDR content.
In particular, standard VQA models may fail to capture conspicuous distortions
at the extreme ends of the dynamic range, because the features that drive them
may be dominated by distortions that pervade the mid-ranges of the signal. We
introduce a new approach whereby a local expansive nonlinearity emphasizes
distortions occurring at the higher and lower ends of the local luma range,
allowing for the definition of additional quality-aware features that are
computed along a separate path. These features are not HDR-specific, and also
improve VQA on SDR video contents, albeit to a reduced degree. We show that
this preprocessing step significantly boosts the power of distortion-sensitive
natural video statistics (NVS) features when used to predict the quality of HDR
content. In similar manner, we separately compute novel wide-gamut color
features using the same nonlinear processing steps. We have found that our
model significantly outperforms SDR VQA algorithms on the only publicly
available, comprehensive HDR database, while also attaining state-of-the-art
performance on SDR content.
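The core preprocessing idea in the abstract can be sketched in code. The snippet below is a minimal, illustrative NumPy implementation, not the authors' exact algorithm: it assumes a per-window min-max rescaling of luma to [-1, 1] followed by an expansive exponential nonlinearity, so that mid-range values stay small while values near the local bright/dark extremes are amplified before any quality-aware features are computed. The window size and the expansion constant `delta` are hypothetical choices for illustration.

```python
import numpy as np

def local_expansive_nonlinearity(luma, patch=17, delta=4.0):
    """Illustrative sketch of a local expansive nonlinearity.

    Each pixel's luma is rescaled to [-1, 1] relative to the min/max of
    its local window, then passed through an expansive exponential map
    that amplifies values near the local extremes of the luma range.
    """
    h, w = luma.shape
    half = patch // 2
    padded = np.pad(luma.astype(np.float64), half, mode="reflect")
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + patch, j:j + patch]
            lo, hi = win.min(), win.max()
            center = padded[i + half, j + half]
            # Map the center pixel to [-1, 1] within its local window;
            # a flat window maps to 0 (no local contrast to emphasize).
            x = 0.0 if hi == lo else 2.0 * (center - lo) / (hi - lo) - 1.0
            # Expansive nonlinearity: mid-range values (|x| near 0) stay
            # small; local extremes (|x| near 1) are pushed toward +/-1.
            out[i, j] = (np.sign(x) * (np.exp(delta * abs(x)) - 1.0)
                         / (np.exp(delta) - 1.0))
    return out
```

Distortion-sensitive statistics (e.g., the natural video statistics features mentioned above) would then be computed on this transformed signal along a separate path, alongside the features computed on the untransformed luma.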
Related papers
- HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z) - Adapting Pretrained Networks for Image Quality Assessment on High Dynamic Range Displays [0.0]
Conventional image quality metrics (IQMs) are designed for perceptually uniform gamma-encoded pixel values.
Most of the available datasets consist of standard-dynamic-range (SDR) images collected in standard and possibly uncontrolled viewing conditions.
Popular pre-trained neural networks are likewise intended for SDR inputs, restricting their direct application to HDR content.
In this work, we explore more effective approaches for training deep learning-based models for image quality assessment (IQA) on HDR data.
arXiv Detail & Related papers (2024-05-01T17:57:12Z) - A FUNQUE Approach to the Quality Assessment of Compressed HDR Videos [36.26141980831573]
The state-of-the-art (SOTA) approach, HDRMAX, involves augmenting off-the-shelf video quality models, such as VMAF, with features computed on non-linearly transformed video frames.
Here, we show that an efficient class of video quality prediction models named FUNQUE+ achieves higher HDR video quality prediction accuracy at lower computational cost.
arXiv Detail & Related papers (2023-12-13T21:24:00Z) - HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment [36.1179702443845]
We introduce HIDRO-VQA, a no-reference (NR) video quality assessment model designed to provide precise quality evaluations of High Dynamic Range (HDR) videos.
Our findings demonstrate that self-supervised pre-trained neural networks can be further fine-tuned in a self-supervised setting to achieve state-of-the-art performance.
Our algorithm can be extended to the Full Reference VQA setting, also achieving state-of-the-art performance.
arXiv Detail & Related papers (2023-11-18T12:33:19Z) - Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z) - Towards Efficient SDRTV-to-HDRTV by Learning from Image Formation [51.26219245226384]
Modern displays are capable of rendering video content with high dynamic range (HDR) and wide color gamut (WCG).
The majority of available resources are still in standard dynamic range (SDR).
We define and analyze the SDRTV-to-HDRTV task by modeling the formation of SDRTV/HDRTV content.
Our method is primarily designed for ultra-high-definition TV content and is therefore effective and lightweight for processing 4K resolution images.
arXiv Detail & Related papers (2023-09-08T02:50:54Z) - HDR or SDR? A Subjective and Objective Study of Scaled and Compressed
Videos [36.33823452846196]
We conducted a large-scale study of human perceptual quality judgments of High Dynamic Range (HDR) and Standard Dynamic Range (SDR) videos.
We found that subject preference of HDR versus SDR depends heavily on the display device, as well as on resolution scaling.
arXiv Detail & Related papers (2023-04-25T21:43:37Z) - Making Video Quality Assessment Models Robust to Bit Depth [38.504568225201915]
We introduce a novel feature set, which we call HDRMAX features, that when included into Video Quality Assessment (VQA) algorithms, sensitizes them to distortions of High Dynamic Range (HDR) videos.
While these features are not specific to HDR, and also augment the quality prediction performances of VQA models on SDR content, they are especially effective on HDR.
As a demonstration of the efficacy of our approach, we show that, while current state-of-the-art VQA models perform poorly on 10-bit HDR databases, their performances are greatly improved by the inclusion of HDRMAX features when tested on
arXiv Detail & Related papers (2023-04-25T18:54:28Z) - SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked
Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and thus struggle to reach an optimal result, SSHDR separates these objectives.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.