Adapting Pretrained Networks for Image Quality Assessment on High Dynamic Range Displays
- URL: http://arxiv.org/abs/2405.00670v1
- Date: Wed, 1 May 2024 17:57:12 GMT
- Title: Adapting Pretrained Networks for Image Quality Assessment on High Dynamic Range Displays
- Authors: Andrei Chubarau, Hyunjin Yoo, Tara Akhavan, James Clark
- Abstract summary: Conventional image quality metrics (IQMs) are designed for perceptually uniform gamma-encoded pixel values.
Most of the available datasets consist of standard-dynamic-range (SDR) images collected in standard and possibly uncontrolled viewing conditions.
Popular pre-trained neural networks are likewise intended for SDR inputs, restricting their direct application to HDR content.
In this work, we explore more effective approaches for training deep learning-based models for image quality assessment (IQA) on HDR data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional image quality metrics (IQMs), such as PSNR and SSIM, are designed for perceptually uniform gamma-encoded pixel values and cannot be directly applied to perceptually non-uniform linear high-dynamic-range (HDR) colors. Similarly, most of the available datasets consist of standard-dynamic-range (SDR) images collected in standard and possibly uncontrolled viewing conditions. Popular pre-trained neural networks are likewise intended for SDR inputs, restricting their direct application to HDR content. On the other hand, training HDR models from scratch is challenging due to limited available HDR data. In this work, we explore more effective approaches for training deep learning-based models for image quality assessment (IQA) on HDR data. We leverage networks pre-trained on SDR data (source domain) and re-target these models to HDR (target domain) with additional fine-tuning and domain adaptation. We validate our methods on the available HDR IQA datasets, demonstrating that models trained with our combined recipe outperform previous baselines, converge much quicker, and reliably generalize to HDR inputs.
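The abstract names the overall recipe (SDR-pretrained networks re-targeted to HDR via fine-tuning and domain adaptation) but not the exact encoding or backbone. The Python sketch below is therefore only illustrative: it assumes the PQ (SMPTE ST 2084) transfer function as a perceptually uniform encoding of linear HDR values and an ImageNet-pretrained ResNet-18 as the SDR source model, fine-tuned with a simple MSE regression onto subjective quality scores. None of these specific choices are stated in the abstract, and the domain-adaptation component is omitted for brevity.

```python
# Hedged sketch: re-targeting an SDR-pretrained backbone to HDR IQA.
# Assumptions (not stated in the abstract): PQ (SMPTE ST 2084) stands in for
# the perceptually uniform encoding of linear HDR values, and a torchvision
# ResNet-18 stands in for "networks pre-trained on SDR data".
import torch
import torch.nn as nn
import torchvision

def pq_encode(linear_rgb: torch.Tensor, peak_nits: float = 10000.0) -> torch.Tensor:
    """Map linear HDR values (cd/m^2) to [0, 1] with the SMPTE ST 2084 inverse EOTF."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = torch.clamp(linear_rgb / peak_nits, 0.0, 1.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

class HDRQualityModel(nn.Module):
    """SDR-pretrained CNN backbone with a small regression head for quality scores."""
    def __init__(self):
        super().__init__()
        weights = torchvision.models.ResNet18_Weights.IMAGENET1K_V1
        self.backbone = torchvision.models.resnet18(weights=weights)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, hdr_linear: torch.Tensor) -> torch.Tensor:
        # Perceptually encode linear HDR input before the SDR-trained backbone.
        return self.backbone(pq_encode(hdr_linear)).squeeze(-1)

# One fine-tuning step on an HDR IQA batch (images plus mean opinion scores).
model = HDRQualityModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

hdr_batch = torch.rand(4, 3, 224, 224) * 4000.0   # dummy linear HDR values in nits
mos_batch = torch.rand(4)                          # dummy subjective quality scores

optimizer.zero_grad()
loss = loss_fn(model(hdr_batch), mos_batch)
loss.backward()
optimizer.step()
```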
Related papers
- HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z) - Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z) - HistoHDR-Net: Histogram Equalization for Single LDR to HDR Image Translation [12.45632443397018]
High Dynamic Range (HDR) imaging aims to replicate the high visual quality and clarity of real-world scenes.
The literature offers various data-driven methods for HDR image reconstruction from Low Dynamic Range (LDR) counterparts.
A common limitation of these approaches is missing details in regions of the reconstructed HDR images.
We propose a simple and effective method, HistoHDR-Net, to recover the fine details.
arXiv Detail & Related papers (2024-02-08T20:14:46Z) - HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment [36.1179702443845]
We introduce HIDRO-VQA, a no-reference (NR) video quality assessment model designed to provide precise quality evaluations of High Dynamic Range (HDR) videos.
Our findings demonstrate that self-supervised pre-trained neural networks can be further fine-tuned in a self-supervised setting to achieve state-of-the-art performance.
Our algorithm can be extended to the Full Reference VQA setting, also achieving state-of-the-art performance.
arXiv Detail & Related papers (2023-11-18T12:33:19Z) - Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z) - Perceptual Assessment and Optimization of HDR Image Rendering [25.72195917050074]
High dynamic range rendering can faithfully reproduce the wide luminance ranges found in natural scenes.
Existing quality models are mostly designed for low dynamic range (LDR) images, and do not align well with human perception of HDR image quality.
We propose a family of HDR quality metrics, in which the key step is employing a simple inverse display model to decompose an HDR image into a stack of LDR images with varying exposures (a minimal sketch of this decomposition follows this list of related papers).
arXiv Detail & Related papers (2023-10-19T16:32:18Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
SelfHDR is a self-supervised reconstruction method that requires only dynamic multi-exposure images during training.
SelfHDR achieves superior results against state-of-the-art self-supervised methods, and performance comparable to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and thus struggle to reach an optimum, SSHDR handles these sub-tasks in separate training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z) - GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
arXiv Detail & Related papers (2022-11-22T15:42:08Z) - Self-supervised HDR Imaging from Motion and Exposure Cues [14.57046548797279]
We propose a novel self-supervised approach for learnable HDR estimation that alleviates the need for HDR ground-truth labels.
Experimental results show that the HDR models trained using our proposed self-supervision approach achieve performance competitive with those trained under full supervision.
arXiv Detail & Related papers (2022-03-23T10:22:03Z)
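As referenced above, the following is a minimal sketch of the exposure-stack decomposition summarized for "Perceptual Assessment and Optimization of HDR Image Rendering": a simple inverse display model turns one linear HDR image into several clipped, gamma-encoded LDR exposures, to which conventional LDR quality metrics can then be applied. The exposure values, gamma, and pooling step are illustrative assumptions, not the cited paper's exact parameters.

```python
# Hedged sketch of decomposing a linear HDR image into an LDR exposure stack.
# Exposure values and gamma below are illustrative, not the cited paper's choices.
import numpy as np

def hdr_to_ldr_stack(hdr_linear: np.ndarray,
                     exposures=(0.25, 1.0, 4.0),
                     gamma: float = 2.2) -> list:
    """Decompose a linear HDR image into clipped, gamma-encoded LDR exposures."""
    stack = []
    for ev in exposures:
        # Scale by the exposure, clip to the display range, then gamma-encode.
        ldr = np.clip(hdr_linear * ev, 0.0, 1.0) ** (1.0 / gamma)
        stack.append(ldr)
    return stack

# Usage: score each LDR exposure with an existing LDR metric and pool the results.
hdr = np.random.rand(256, 256, 3) * 8.0   # dummy linear HDR image
ldr_stack = hdr_to_ldr_stack(hdr)
```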