Learning a Practical SDR-to-HDRTV Up-conversion using New Dataset and
Degradation Models
- URL: http://arxiv.org/abs/2303.13031v1
- Date: Thu, 23 Mar 2023 04:40:33 GMT
- Title: Learning a Practical SDR-to-HDRTV Up-conversion using New Dataset and
Degradation Models
- Authors: Cheng Guo and Leidong Fan and Ziyu Xue and Xiuhua Jiang
- Abstract summary: In the media industry, the demand for SDR-to-HDRTV up-conversion arises when users possess HDR-WCG (high dynamic range-wide color gamut) TVs.
Current methods tend to produce dim and desaturated results, making nearly no improvement to the viewing experience.
We propose a new HDRTV dataset (dubbed HDRTV4K) and new HDR-to-SDR degradation models.
- Score: 4.0336006284433665
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the media industry, the demand for SDR-to-HDRTV up-conversion
arises when users possess HDR-WCG (high dynamic range-wide color gamut) TVs
while most off-the-shelf footage is still in SDR (standard dynamic range). The
research community has started tackling this low-level vision task with
learning-based approaches. Yet, when applied to real SDR, current methods tend
to produce dim and desaturated results, making nearly no improvement to the
viewing experience. Unlike other network-oriented methods, we attribute this
deficiency to the training set (HDR-SDR pairs). Consequently, we propose a new
HDRTV dataset (dubbed HDRTV4K) and new HDR-to-SDR degradation models, which
are then used to train a luminance-segmented network (LSN) consisting of a
global mapping trunk and two Transformer branches for the bright and dark
luminance ranges. We also update the assessment criteria with tailored metrics
and a subjective experiment. Finally, ablation studies are conducted to prove
their effectiveness. Our work is available at:
https://github.com/AndreGuo/HDRTVDM.
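The abstract describes the pipeline concretely enough to sketch: synthesize HDR-SDR training pairs with an HDR-to-SDR degradation model, then learn the inverse mapping with a luminance-segmented network (global trunk plus bright/dark branches). Below is a minimal PyTorch sketch of that structure, not the authors' released code: the degradation is a toy Reinhard-style tone map with desaturation standing in for the paper's degradation models, the Transformer branches are replaced by small convolutional stubs, and all names, mask thresholds, and sizes are assumptions.

```python
import torch
import torch.nn as nn

def degrade_hdr_to_sdr(hdr, saturation=0.9):
    """Toy HDR-to-SDR degradation for pair synthesis (a stand-in for the
    paper's degradation models): Reinhard-style tone mapping, mild
    desaturation, then gamma encoding to a display-referred SDR image."""
    tone = hdr / (1.0 + hdr)                    # compress highlights into [0, 1)
    luma = tone.mean(dim=1, keepdim=True)       # cheap luma proxy
    tone = luma + saturation * (tone - luma)    # pull chroma toward gray
    return tone.clamp(0.0, 1.0) ** (1.0 / 2.4)  # BT.1886-like encode

class Branch(nn.Module):
    """Stub for one mapping branch (the paper uses Transformer branches)."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, x):
        return self.body(x)

class LSN(nn.Module):
    """Luminance-segmented network: a global mapping trunk plus bright- and
    dark-range branches, blended by soft luminance masks (thresholds assumed)."""
    def __init__(self):
        super().__init__()
        self.trunk = Branch()    # global SDR-to-HDR mapping
        self.bright = Branch()   # refinement for highlights
        self.dark = Branch()     # refinement for shadows

    def forward(self, sdr):
        luma = sdr.mean(dim=1, keepdim=True)
        m_bright = torch.sigmoid(10.0 * (luma - 0.7))  # ~1 in highlights
        m_dark = torch.sigmoid(10.0 * (0.3 - luma))    # ~1 in shadows
        out = self.trunk(sdr)
        return out + m_bright * self.bright(sdr) + m_dark * self.dark(sdr)

# Pair synthesis and one forward pass (loss/optimizer omitted).
hdr = torch.rand(1, 3, 64, 64) * 4.0   # toy linear-light HDR frame
sdr = degrade_hdr_to_sdr(hdr)
pred = LSN()(sdr)
print(pred.shape)                      # torch.Size([1, 3, 64, 64])
```

The soft sigmoid masks let the three outputs blend smoothly instead of hard-segmenting the luminance range.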
Related papers
- Beyond Feature Mapping GAP: Integrating Real HDRTV Priors for Superior SDRTV-to-HDRTV Conversion [22.78096367667505]
The rise of HDR-WCG display devices has highlighted the need to convert SDRTV to HDRTV.
Existing methods primarily focus on designing neural networks to learn a single-style mapping from SDRTV to HDRTV.
We propose a novel method for SDRTV to HDRTV conversion guided by real HDRTV priors.
arXiv Detail & Related papers (2024-11-16T11:20:29Z)
- HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z)
- FastHDRNet: A new efficient method for SDR-to-HDR Translation [5.224011800476952]
We propose a neural network for SDR to HDR conversion, termed "FastHDRNet".
The architecture is designed as a lightweight network that utilizes global statistics and local information with super high efficiency.
arXiv Detail & Related papers (2024-04-06T03:25:24Z)
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in the HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z)
- Towards Efficient SDRTV-to-HDRTV by Learning from Image Formation [51.26219245226384]
Modern displays are capable of rendering video content with high dynamic range (HDR) and wide color gamut (WCG).
The majority of available resources are still in standard dynamic range (SDR).
We define and analyze the SDRTV-to-HDRTV task by modeling the formation of SDRTV/HDRTV content.
Our method is primarily designed for ultra-high-definition TV content and is therefore effective and lightweight for processing 4K resolution images.
arXiv Detail & Related papers (2023-09-08T02:50:54Z)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
arXiv Detail & Related papers (2022-11-22T15:42:08Z)
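GlowGAN's key idea above is self-contained enough for a short sketch: the generator proposes HDR images, a differentiable camera model projects them to LDR at randomly sampled exposures, and the discriminator only ever compares LDR against real LDR. A minimal PyTorch sketch under stated assumptions (toy fully-connected generator and discriminator, a simple clip-and-gamma camera model; none of this is the authors' implementation):

```python
import torch
import torch.nn as nn

class ToyG(nn.Module):
    """Toy generator: latent -> non-negative 'HDR' image (softplus keeps it >= 0)."""
    def __init__(self, zdim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(zdim, 3 * 16 * 16), nn.Softplus())

    def forward(self, z):
        return self.net(z).view(-1, 3, 16, 16)

class ToyD(nn.Module):
    """Toy discriminator over LDR images only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 1))

    def forward(self, x):
        return self.net(x)

def camera(hdr, exposure):
    """Differentiable HDR -> LDR projection at a given exposure:
    scale, saturate, gamma-encode (a simple stand-in camera model)."""
    return (exposure * hdr).clamp(0.0, 1.0) ** (1.0 / 2.2)

G, D = ToyG(), ToyD()
z = torch.randn(8, 64)
exposure = torch.exp(torch.empty(8, 1, 1, 1).uniform_(-2, 2))  # random stops
fake_ldr = camera(G(z), exposure)   # the discriminator never sees HDR
score = D(fake_ldr)                 # adversarial loss vs. real LDR goes here
print(score.shape)                  # torch.Size([8, 1])
```

Because the camera projection is differentiable, gradients from the LDR discriminator reach the HDR generator, which is what makes training unsupervised with respect to HDR ground truth.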
- HDR-NeRF: High Dynamic Range Neural Radiance Fields [70.80920996881113]
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures.
We are able to generate both novel HDR views and novel LDR views under different exposures.
arXiv Detail & Related papers (2021-11-29T11:06:39Z)
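The HDR-NeRF entry above also hinges on one mechanism: render HDR radiance, apply the view's exposure, and map to LDR through a learnable camera response so that only LDR views supervise training. A toy sketch under heavy assumptions (volume rendering omitted, tiny MLPs, one response curve shared across channels; names are illustrative):

```python
import torch
import torch.nn as nn

class RadianceField(nn.Module):
    """Stand-in for NeRF: maps a 3D point to log HDR radiance (density omitted)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(inplace=True),
                                 nn.Linear(64, 3))

    def forward(self, xyz):
        return self.net(xyz)   # unbounded log-radiance

class ToneMapper(nn.Module):
    """Learnable camera response: log(radiance * exposure) -> LDR in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(inplace=True),
                                 nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, log_e):
        shape = log_e.shape
        return self.net(log_e.reshape(-1, 1)).reshape(shape)

field, crf = RadianceField(), ToneMapper()
pts = torch.rand(1024, 3)                  # toy sample points along rays
log_exposure = torch.tensor(1.5)           # known exposure of this LDR view
ldr_pred = crf(field(pts) + log_exposure)  # supervise against LDR pixels
print(ldr_pred.shape)                      # torch.Size([1024, 3])
```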
- A New Journey from SDRTV to HDRTV [36.58487005995048]
We conduct an analysis of the SDRTV-to-HDRTV task by modeling the formation of SDRTV/HDRTV content.
We present a lightweight network that utilizes global statistics as guidance to conduct image-adaptive color mapping.
arXiv Detail & Related papers (2021-08-18T05:17:08Z)
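The "global statistics as guidance" recipe in the entry above is a common conditional color-mapping pattern and easy to sketch: pooled image statistics predict the coefficients of a per-image color transform that is applied pointwise. The 3x3-matrix-plus-bias form, layer sizes, and choice of statistics below are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class GlobalColorMapper(nn.Module):
    """Image-adaptive color mapping: global statistics -> a per-image
    3x3 color matrix plus bias, applied pointwise to every pixel."""
    def __init__(self):
        super().__init__()
        # 6 global stats (per-channel mean and std) -> 12 coefficients
        self.mlp = nn.Sequential(
            nn.Linear(6, 32), nn.ReLU(inplace=True), nn.Linear(32, 12))

    def forward(self, x):                    # x: (B, 3, H, W) in [0, 1]
        stats = torch.cat([x.mean(dim=(2, 3)), x.std(dim=(2, 3))], dim=1)
        coeff = self.mlp(stats)              # (B, 12)
        mat = coeff[:, :9].view(-1, 3, 3)    # per-image color matrix
        bias = coeff[:, 9:].view(-1, 3, 1)   # per-image bias
        flat = x.flatten(2)                  # (B, 3, H*W)
        return torch.bmm(mat, flat).add(bias).view_as(x)

sdr = torch.rand(2, 3, 32, 32)
print(GlobalColorMapper()(sdr).shape)        # torch.Size([2, 3, 32, 32])
```

Since the heavy lifting is a single batched 3x3 matrix multiply per pixel, such a mapper stays lightweight even at 4K resolution, which matches the entry's emphasis on efficiency.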
- A Two-stage Deep Network for High Dynamic Range Image Reconstruction [0.883717274344425]
This study tackles the challenges of single-shot LDR to HDR mapping by proposing a novel two-stage deep network.
Notably, our proposed method aims to reconstruct an HDR image without knowing hardware information, including camera response function (CRF) and exposure settings.
arXiv Detail & Related papers (2021-04-19T15:19:17Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)