Making Video Quality Assessment Models Robust to Bit Depth
- URL: http://arxiv.org/abs/2304.13092v1
- Date: Tue, 25 Apr 2023 18:54:28 GMT
- Title: Making Video Quality Assessment Models Robust to Bit Depth
- Authors: Joshua P. Ebenezer, Zaixi Shang, Yongjun Wu, Hai Wei, Sriram
Sethuraman and Alan C. Bovik
- Abstract summary: We introduce a novel feature set, which we call HDRMAX features, that, when included in Video Quality Assessment (VQA) algorithms, sensitizes them to distortions of High Dynamic Range (HDR) videos.
While these features are not specific to HDR, and also augment the quality prediction performance of VQA models on SDR content, they are especially effective on HDR.
As a demonstration of the efficacy of our approach, we show that, while current state-of-the-art VQA models perform poorly on 10-bit HDR databases, their performances are greatly improved by the inclusion of HDRMAX features when tested on HDR and 10-bit distorted videos.
- Score: 38.504568225201915
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel feature set, which we call HDRMAX features, that when
included in Video Quality Assessment (VQA) algorithms designed for Standard
Dynamic Range (SDR) videos, sensitizes them to distortions of High Dynamic
Range (HDR) videos that are inadequately accounted for by these algorithms.
While these features are not specific to HDR, and also augment the quality
prediction performance of VQA models on SDR content, they are especially
effective on HDR. HDRMAX features modify powerful priors drawn from Natural
Video Statistics (NVS) models by enhancing their measurability where they
visually impact the brightest and darkest local portions of videos, thereby
capturing distortions that are often poorly accounted for by existing VQA
models. As a demonstration of the efficacy of our approach, we show that, while
current state-of-the-art VQA models perform poorly on 10-bit HDR databases,
their performances are greatly improved by the inclusion of HDRMAX features
when tested on HDR and 10-bit distorted videos.
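To make the mechanism concrete, the following is a minimal Python sketch of an HDRMAX-style pipeline: each frame patch is rescaled to [-1, 1], an expansive pointwise nonlinearity stretches values near the brightest and darkest extremes, and classic NVS-style MSCN statistics are computed on the transformed frame. The patch size, the exact form of the nonlinearity, and the summary statistics are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of an HDRMAX-style feature pipeline (not the authors' code).
import numpy as np
from scipy.ndimage import gaussian_filter

def local_expansive_nonlinearity(frame, patch=17, k=4.0):
    """Rescale each patch to [-1, 1], then expand values near the extremes."""
    out = np.zeros_like(frame, dtype=np.float64)
    h, w = frame.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            p = frame[i:i + patch, j:j + patch].astype(np.float64)
            x = 2.0 * (p - p.min()) / (p.max() - p.min() + 1e-6) - 1.0  # -> [-1, 1]
            # Two-sided expansion (assumed form): stretches the brightest and
            # darkest values and compresses mid-tones, so distortions at the
            # luminance extremes dominate the statistics.
            out[i:i + patch, j:j + patch] = np.sign(x) * np.expm1(k * np.abs(x)) / np.expm1(k)
    return out

def mscn(img, sigma=7.0 / 6.0, c=1e-3):
    """Mean-subtracted, contrast-normalized coefficients (a classic NVS front end)."""
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    return (img - mu) / (np.sqrt(np.maximum(var, 0.0)) + c)

def hdrmax_style_features(frame):
    """Summary statistics of MSCN coefficients of the expanded frame."""
    coeffs = mscn(local_expansive_nonlinearity(frame))
    m, v = coeffs.mean(), coeffs.var()
    kurtosis = ((coeffs - m) ** 4).mean() / (v ** 2 + 1e-12)
    return np.array([m, coeffs.std(), kurtosis])
```

In this sketch, `frame` is a single 2D luma array; in a full VQA model such features would be computed per frame, pooled over time, and fed to a regressor alongside the base model's features.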
Related papers
- HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z)
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z)
- A FUNQUE Approach to the Quality Assessment of Compressed HDR Videos [36.26141980831573]
The state-of-the-art (SOTA) approach, HDRMAX, involves augmenting off-the-shelf video quality models, such as VMAF, with features computed on non-linearly transformed video frames.
Here, we show that an efficient class of video quality prediction models named FUNQUE+ achieves higher HDR video quality prediction accuracy at lower computational cost. (A sketch of this shared feature-fusion recipe follows this entry.)
arXiv Detail & Related papers (2023-12-13T21:24:00Z)
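Both HDRMAX and the FUNQUE+ models follow the same high-level recipe: concatenate HDR-sensitive features with a base quality model's features and regress the result against subjective scores. Below is a minimal sketch of that fusion step; the feature arrays and scores are random placeholders standing in for real VMAF/HDRMAX features and mean opinion scores (MOS), and the SVR settings are illustrative, not the published training protocol.

```python
# Hypothetical sketch of feature-level fusion plus regression for VQA.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_videos = 200
base_features = rng.normal(size=(n_videos, 6))  # stand-in for base-model features
hdr_features = rng.normal(size=(n_videos, 4))   # stand-in for HDR-sensitive features
mos = rng.uniform(0.0, 100.0, size=n_videos)    # placeholder subjective scores

# Feature-level fusion: simply concatenate the two feature sets per video.
X = np.hstack([base_features, hdr_features])

# Regress the fused features against MOS, a common choice in VQA pipelines.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, mos)
predicted_quality = model.predict(X)
```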
- HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment [36.1179702443845]
We introduce HIDRO-VQA, a no-reference (NR) video quality assessment model designed to provide precise quality evaluations of High Dynamic Range (HDR) videos.
Our findings demonstrate that self-supervised pre-trained neural networks can be further fine-tuned in a self-supervised setting to achieve state-of-the-art performance.
Our algorithm can be extended to the Full Reference VQA setting, also achieving state-of-the-art performance.
arXiv Detail & Related papers (2023-11-18T12:33:19Z)
- Towards Efficient SDRTV-to-HDRTV by Learning from Image Formation [51.26219245226384]
Modern displays are capable of rendering video content with high dynamic range (HDR) and wide color gamut (WCG).
The majority of available resources are still in standard dynamic range (SDR).
We define and analyze the SDRTV-to-HDRTV task by modeling the formation of SDRTV/HDRTV content; a sketch of such a formation chain follows this entry.
Our method is primarily designed for ultra-high-definition TV content and is therefore effective and lightweight for processing 4K resolution images.
arXiv Detail & Related papers (2023-09-08T02:50:54Z)
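Methods in this line of work typically model how an SDR frame is formed from linear scene light (tone mapping, gamut mapping, and an opto-electrical transfer function) and then learn to invert that chain. The sketch below uses a toy global tone curve and a plain gamma OETF purely for illustration; the paper's learned formation model is more elaborate.

```python
# Toy SDR formation chain and its naive inverse (illustrative assumptions only).
import numpy as np

def sdr_formation(linear_hdr, peak=1000.0):
    """Encode linear HDR scene light (cd/m^2) into gamma-encoded SDR in [0, 1]."""
    tone_mapped = linear_hdr / (linear_hdr + peak / 10.0)  # simple global tone curve
    return np.clip(tone_mapped, 0.0, 1.0) ** (1.0 / 2.4)   # gamma OETF

def naive_inverse(sdr, peak=1000.0):
    """Invert the chain above to recover an HDR estimate from an SDR frame."""
    tone_mapped = np.clip(sdr, 0.0, 1.0) ** 2.4             # undo the gamma OETF
    return tone_mapped * (peak / 10.0) / np.maximum(1.0 - tone_mapped, 1e-4)
```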
- HDR or SDR? A Subjective and Objective Study of Scaled and Compressed Videos [36.33823452846196]
We conducted a large-scale study of human perceptual quality judgments of High Dynamic Range (HDR) and Standard Dynamic Range (SDR) videos.
We found that subject preference for HDR versus SDR depends heavily on the display device, as well as on resolution scaling and compression level.
arXiv Detail & Related papers (2023-04-25T21:43:37Z)
- HDR-ChipQA: No-Reference Quality Assessment on High Dynamic Range Videos [38.504568225201915]
We present a no-reference video quality model and algorithm that delivers standout performance for High Dynamic Range (HDR) videos.
HDR videos represent wider ranges of luminances, details, and colors than Standard Dynamic Range (SDR) videos.
arXiv Detail & Related papers (2023-04-25T21:25:02Z)
- Subjective Assessment of High Dynamic Range Videos Under Different Ambient Conditions [38.504568225201915]
We present the first publicly released large-scale subjective study of HDR videos.
We study the effect of distortions such as compression and aliasing on the quality of HDR videos.
A total of 66 subjects participated in the study and more than 20,000 opinion scores were collected.
arXiv Detail & Related papers (2022-09-20T21:25:50Z)
- High Dynamic Range Image Quality Assessment Based on Frequency Disparity [78.36555631446448]
An image quality assessment (IQA) algorithm based on frequency disparity for high dynamic range (HDR) images is proposed.
The proposed LGFM metric provides higher consistency with subjective perception than state-of-the-art HDR IQA methods.
arXiv Detail & Related papers (2022-09-06T08:22:13Z)