S2R-HDR: A Large-Scale Rendered Dataset for HDR Fusion
- URL: http://arxiv.org/abs/2504.07667v1
- Date: Thu, 10 Apr 2025 11:39:56 GMT
- Title: S2R-HDR: A Large-Scale Rendered Dataset for HDR Fusion
- Authors: Yujin Wang, Jiarui Wu, Yichen Bian, Fan Zhang, Tianfan Xue
- Abstract summary: S2R-HDR is the first large-scale high-quality synthetic dataset for HDR fusion, with 24,000 HDR samples. We design a diverse set of realistic HDR scenes that encompass various dynamic elements, motion types, high dynamic range scenes, and lighting. We introduce S2R-Adapter, a domain adaptation strategy designed to bridge the gap between synthetic and real-world data.
- Score: 4.684215759472536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The generalization of learning-based high dynamic range (HDR) fusion is often limited by the availability of training data, as collecting large-scale HDR images from dynamic scenes is both costly and technically challenging. To address these challenges, we propose S2R-HDR, the first large-scale high-quality synthetic dataset for HDR fusion, with 24,000 HDR samples. Using Unreal Engine 5, we design a diverse set of realistic HDR scenes that encompass various dynamic elements, motion types, high dynamic range scenes, and lighting. Additionally, we develop an efficient rendering pipeline to generate realistic HDR images. To further mitigate the domain gap between synthetic and real-world data, we introduce S2R-Adapter, a domain adaptation strategy designed to bridge this gap and enhance the generalization ability of models. Experimental results on real-world datasets demonstrate that our approach achieves state-of-the-art HDR reconstruction performance. Dataset and code will be available at https://openimaginglab.github.io/S2R-HDR.
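As background on the HDR fusion task this dataset targets, the classic non-learned baseline merges several linearized low-dynamic-range exposures into one radiance map via a per-pixel weighted average. The sketch below is a minimal Debevec-style merge, not the paper's learning-based method; all function names and the triangle weighting are illustrative assumptions.

```python
import numpy as np

def merge_exposures(ldr_stack, exposure_times, eps=1e-6):
    """Merge linearized LDR frames (values in [0, 1], inverse CRF already
    applied) into an HDR radiance map. A triangle weight trusts mid-tone
    pixels most and down-weights under/over-exposed ones."""
    num = np.zeros_like(ldr_stack[0], dtype=np.float64)
    den = np.full_like(num, eps)  # eps avoids division by zero where all weights vanish
    for img, t in zip(ldr_stack, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peaks at pixel value 0.5
        num += w * img / t                 # per-frame radiance estimate img / t
        den += w
    return num / den

# Toy example: two exposures of the same ground-truth radiance ramp.
radiance = np.linspace(0.0, 2.0, 5)
short = np.clip(radiance * 0.25, 0.0, 1.0)  # t = 0.25 s, nothing saturates
long_ = np.clip(radiance * 1.0, 0.0, 1.0)   # t = 1.0 s, highlights clip
hdr = merge_exposures([short, long_], [0.25, 1.0])  # ≈ radiance where any frame is well-exposed
```

Note this baseline assumes a static scene; the ghosting caused by dynamic content is exactly what learned fusion methods such as those trained on S2R-HDR address.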
Related papers
- LEDiff: Latent Exposure Diffusion for HDR Generation [11.669442066168244]
LEDiff is a method that enables a generative model to produce HDR content through latent-space exposure fusion techniques. It also functions as an LDR-to-HDR converter, expanding the dynamic range of existing low dynamic range images.
arXiv Detail & Related papers (2024-12-19T02:15:55Z) - HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z) - Towards Real-World HDR Video Reconstruction: A Large-Scale Benchmark Dataset and A Two-Stage Alignment Network [16.39592423564326]
Existing methods are mostly trained on synthetic datasets, which perform poorly in real scenes.
We present Real-HDRV, a large-scale real-world benchmark dataset for HDR video reconstruction.
arXiv Detail & Related papers (2024-04-30T23:29:26Z) - Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced into the HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
SelfHDR is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
SelfHDR achieves superior results against state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - RawHDR: High Dynamic Range Image Reconstruction from a Single Raw Image [36.17182977927645]
High dynamic range (HDR) images capture many more intensity levels than standard ones.
Current methods predominantly generate HDR images from 8-bit low dynamic range (LDR) sRGB images that have been degraded by the camera processing pipeline.
Unlike existing methods, the core idea of this work is to incorporate more informative Raw sensor data to generate HDR images.
arXiv Detail & Related papers (2023-09-05T07:58:21Z) - Efficient HDR Reconstruction from Real-World Raw Images [16.54071503000866]
High-definition screens on edge devices stimulate a strong demand for efficient high dynamic range (HDR) algorithms.
Many existing HDR methods either deliver unsatisfactory results or consume too much computational and memory resources.
In this work, we discover an excellent opportunity for reconstructing HDR images directly from raw images and investigate novel neural network structures.
arXiv Detail & Related papers (2023-06-17T10:10:15Z) - HDR Video Reconstruction with a Large Dynamic Dataset in Raw and sRGB Domains [23.309488653045026]
High dynamic range (HDR) video reconstruction is attracting more and more attention due to its superior visual quality compared with low dynamic range (LDR) videos.
There are still no real LDR-HDR pairs for dynamic scenes due to the difficulty in capturing LDR-HDR frames simultaneously.
In this work, we propose to utilize a staggered sensor to capture two alternate exposure images simultaneously, which are then fused into an HDR frame in both raw and sRGB domains.
arXiv Detail & Related papers (2023-04-10T11:59:03Z) - GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
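The projection step GlowGAN's training objective relies on can be sketched with a toy camera model: scale the generated HDR radiance by an exposure value, clip to the sensor range, and gamma-encode, yielding an LDR image comparable to real photos. This is an illustrative simplification under assumed parameters, not the paper's exact camera model.

```python
import numpy as np

def project_to_ldr(hdr, exposure, gamma=2.2):
    """Toy camera model: scale radiance by exposure, clip to the sensor's
    [0, 1] range, then gamma-encode. In a GAN setup, LDR projections of
    generated HDR images are compared against real LDR photos."""
    scaled = np.clip(hdr * exposure, 0.0, 1.0)  # saturation at the sensor
    return scaled ** (1.0 / gamma)              # display gamma encoding

hdr = np.array([0.05, 0.5, 4.0])        # scene radiance, exceeding [0, 1]
under = project_to_ldr(hdr, exposure=0.2)  # short exposure: highlights preserved
over = project_to_ldr(hdr, exposure=2.0)   # long exposure: bright pixel saturates
```

Because different exposures reveal different parts of the dynamic range, the generator is pushed to produce plausible content across the whole range, even though it never sees a real HDR image.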
arXiv Detail & Related papers (2022-11-22T15:42:08Z) - NTIRE 2021 Challenge on High Dynamic Range Imaging: Dataset, Methods and Results [56.932867490888015]
This paper reviews the first challenge on high-dynamic range imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2021.
The challenge aims at estimating an HDR image from one or multiple respective low-dynamic range (LDR) observations, which might suffer from under- or over-exposed regions and different sources of noise.
arXiv Detail & Related papers (2021-06-02T19:45:16Z) - A Two-stage Deep Network for High Dynamic Range Image Reconstruction [0.883717274344425]
This study tackles the challenges of single-shot LDR to HDR mapping by proposing a novel two-stage deep network.
Notably, our proposed method aims to reconstruct an HDR image without knowing hardware information, including camera response function (CRF) and exposure settings.
arXiv Detail & Related papers (2021-04-19T15:19:17Z) - HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with
Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.