GTA-HDR: A Large-Scale Synthetic Dataset for HDR Image Reconstruction
- URL: http://arxiv.org/abs/2403.17837v1
- Date: Tue, 26 Mar 2024 16:24:42 GMT
- Title: GTA-HDR: A Large-Scale Synthetic Dataset for HDR Image Reconstruction
- Authors: Hrishav Bakul Barua, Kalin Stefanov, KokSheik Wong, Abhinav Dhall, Ganesh Krishnasamy,
- Abstract summary: High Dynamic Range (HDR) content (i.e., images and videos) has a broad range of applications.
The challenging task of reconstructing visually accurate HDR images from their Low Dynamic Range (LDR) counterparts is gaining attention in the vision research community.
- Score: 11.610543327501995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High Dynamic Range (HDR) content (i.e., images and videos) has a broad range of applications. However, capturing HDR content from real-world scenes is expensive and time-consuming. Therefore, the challenging task of reconstructing visually accurate HDR images from their Low Dynamic Range (LDR) counterparts is gaining attention in the vision research community. A major challenge in this research problem is the lack of datasets that capture diverse scene conditions (e.g., lighting, shadows, weather, locations, landscapes, objects, humans, buildings) and various image features (e.g., color, contrast, saturation, hue, luminance, brightness, radiance). To address this gap, in this paper, we introduce GTA-HDR, a large-scale synthetic dataset of photo-realistic HDR images sampled from the GTA-V video game. We perform a thorough evaluation of the proposed dataset, which demonstrates significant qualitative and quantitative improvements of the state-of-the-art HDR image reconstruction methods. Furthermore, we demonstrate the effectiveness of the proposed dataset and its impact on additional computer vision tasks including 3D human pose estimation, human body part segmentation, and holistic scene segmentation. The dataset, data collection pipeline, and evaluation code are available at: https://github.com/HrishavBakulBarua/GTA-HDR.
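For readers who want to train or evaluate on such paired data, the sketch below shows one common way to load an LDR/HDR pair and apply µ-law tonemapping before computing losses or metrics. It is a minimal illustration only: the folder layout, file names, and the assumption that HDR radiance is pre-normalized to [0, 1] are hypothetical; the actual data format is documented in the GTA-HDR repository.

```python
# Minimal sketch (not the authors' code): load a hypothetical LDR/HDR pair and
# apply the mu-law tonemapping commonly used when training/evaluating HDR
# reconstruction networks. File names and the ldr/.png, hdr/.hdr layout are
# assumptions; see the GTA-HDR repository for the actual data format.
import cv2
import numpy as np

def load_pair(ldr_path: str, hdr_path: str):
    """Load an 8-bit LDR image and its linear HDR counterpart as float32 arrays."""
    ldr = cv2.imread(ldr_path, cv2.IMREAD_COLOR).astype(np.float32) / 255.0
    hdr = cv2.imread(hdr_path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR).astype(np.float32)
    return ldr, hdr

def mu_law_tonemap(hdr: np.ndarray, mu: float = 5000.0) -> np.ndarray:
    """Compress linear HDR values to [0, 1]; losses/metrics are often computed in this domain."""
    hdr = np.clip(hdr, 0.0, 1.0)  # assumes HDR radiance pre-normalized to [0, 1]
    return np.log1p(mu * hdr) / np.log1p(mu)

ldr, hdr = load_pair("GTA-HDR/ldr/000001.png", "GTA-HDR/hdr/000001.hdr")  # placeholder paths
print(ldr.shape, hdr.shape, mu_law_tonemap(hdr).max())
```

The µ-law compression keeps very bright regions from dominating pixel-wise losses, which is why tonemapped-domain comparison is a common choice in HDR reconstruction work.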
Related papers
- HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z)
- Towards Real-World HDR Video Reconstruction: A Large-Scale Benchmark Dataset and A Two-Stage Alignment Network [16.39592423564326]
Existing methods are mostly trained on synthetic datasets and perform poorly in real scenes.
We present Real-HDRV, a large-scale real-world benchmark dataset for HDR video reconstruction.
arXiv Detail & Related papers (2024-04-30T23:29:26Z)
- Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose irradiance fields from sparse LDR panoramic images to increase the observation counts for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
SelfHDR is a self-supervised HDR reconstruction method that only requires dynamic multi-exposure images during training.
SelfHDR achieves superior results against state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- RawHDR: High Dynamic Range Image Reconstruction from a Single Raw Image [36.17182977927645]
High dynamic range (HDR) images capture many more intensity levels than standard ones.
Current methods predominantly generate HDR images from 8-bit low dynamic range (LDR) sRGB images that have been degraded by the camera processing pipeline.
Unlike existing methods, the core idea of this work is to incorporate more informative Raw sensor data to generate HDR images.
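As a rough illustration of why Raw data is more informative than 8-bit sRGB, the sketch below compares the two using the `rawpy` and OpenCV libraries. It is not the RawHDR pipeline, and the file names are hypothetical.

```python
# Minimal sketch (not RawHDR itself): contrast an 8-bit sRGB input with the
# higher-bit-depth, roughly linear data recoverable from a Raw file.
# Requires `rawpy` and `opencv-python`; file names are placeholders.
import cv2
import numpy as np
import rawpy

srgb = cv2.imread("scene.jpg", cv2.IMREAD_COLOR)   # 8-bit, tone-mapped, gamma-encoded
print(srgb.dtype, srgb.max())                      # uint8: at most 256 levels per channel

with rawpy.imread("scene.dng") as raw:
    # Demosaic without auto-brightening or gamma so the output stays (roughly) linear.
    linear = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)
print(linear.dtype, linear.max())                  # uint16: far more intensity levels to exploit
```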
arXiv Detail & Related papers (2023-09-05T07:58:21Z)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
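The projection step can be made concrete with a simple differentiable camera model: scale by a sampled exposure, clip to simulate sensor saturation, and gamma-encode. The sketch below is an assumption based on the abstract (GlowGAN's actual camera model may differ), written in PyTorch.

```python
# Minimal sketch of a GlowGAN-style exposure projection (an assumption based on
# the abstract, not the authors' exact model): an HDR image produced by the
# generator is scaled by a random exposure, clipped, and gamma-encoded so a
# discriminator can compare it against real LDR photos.
import torch

def project_to_ldr(hdr: torch.Tensor, log_exposure: torch.Tensor, gamma: float = 2.2) -> torch.Tensor:
    """hdr: (B, 3, H, W) non-negative linear radiance; log_exposure: (B, 1, 1, 1)."""
    exposed = hdr * torch.exp(log_exposure)    # simulate a shutter/gain setting
    clipped = torch.clamp(exposed, 0.0, 1.0)   # sensor saturation
    return clipped.pow(1.0 / gamma)            # simple gamma "camera response"

hdr = torch.rand(2, 3, 64, 64) * 10.0          # stand-in for generator output
ldr = project_to_ldr(hdr, torch.randn(2, 1, 1, 1))
print(ldr.shape, float(ldr.min()), float(ldr.max()))
```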
arXiv Detail & Related papers (2022-11-22T15:42:08Z)
- HDR-cGAN: Single LDR to HDR Image Translation using Conditional GAN [24.299931323012757]
Low Dynamic Range (LDR) cameras are incapable of representing the wide dynamic range of the real-world scene.
We propose a deep learning based approach to recover details in the saturated areas while reconstructing the HDR image.
We present a novel conditional GAN (cGAN) based framework trained in an end-to-end fashion over the HDR-REAL and HDR-SYNTH datasets.
arXiv Detail & Related papers (2021-10-04T18:50:35Z)
- A Two-stage Deep Network for High Dynamic Range Image Reconstruction [0.883717274344425]
This study tackles the challenges of single-shot LDR to HDR mapping by proposing a novel two-stage deep network.
Notably, our proposed method aims to reconstruct an HDR image without knowing hardware information, including camera response function (CRF) and exposure settings.
arXiv Detail & Related papers (2021-04-19T15:19:17Z)
- HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world Benchmark Dataset [30.249052175655606]
We introduce a coarse-to-fine deep learning framework for HDR video reconstruction.
Firstly, we perform coarse alignment and pixel blending in the image space to estimate the coarse HDR video.
Secondly, we conduct more sophisticated alignment and temporal fusion in the feature space of the coarse HDR video to produce better reconstruction.
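As an illustration of what coarse alignment in the image space can look like, the sketch below backward-warps a neighbouring frame to the reference using a precomputed optical flow field. It is a generic building block, not the paper's network, and the zero flow used here stands in for the output of a real flow estimator.

```python
# Minimal sketch of image-space alignment for HDR video (an illustration, not
# the paper's network): backward-warp a neighbouring exposure to the reference
# frame with a precomputed optical flow field, then blend the aligned frames.
import torch
import torch.nn.functional as F

def warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """frame: (B, C, H, W); flow: (B, 2, H, W) in pixels, mapping reference -> neighbour."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float().unsqueeze(0).expand(b, -1, -1, -1)
    grid = grid + flow.permute(0, 2, 3, 1)                 # where to sample in the neighbour
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0      # normalise to [-1, 1] for grid_sample
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(frame, grid, align_corners=True)

ref, neighbour = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)                           # placeholder for a flow estimator's output
aligned = warp(neighbour, flow)                            # a coarse HDR frame can then be blended from ref + aligned
print(aligned.shape)
```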
arXiv Detail & Related papers (2021-03-27T16:40:05Z)
- Beyond Visual Attractiveness: Physically Plausible Single Image HDR Reconstruction for Spherical Panoramas [60.24132321381606]
We introduce the physical illuminance constraints to our single-shot HDR reconstruction framework.
Our method can generate HDRs which are not only visually appealing but also physically plausible.
arXiv Detail & Related papers (2021-03-24T01:51:19Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
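For context, the classical alternative to such learning-based fusion is a weighted merge of the linearized exposures, which only works well when the scene is static and each region is well exposed somewhere in the stack. A minimal sketch of that baseline (not the HDR-GAN model) is given below; it assumes the LDR inputs are already gamma-decoded to linear values in [0, 1].

```python
# Minimal sketch of the classical weighted exposure merge that learning-based
# methods such as HDR-GAN replace when scenes contain large motions or heavy
# saturation. Assumes static, linear LDR exposures; not the HDR-GAN model.
import numpy as np

def merge_exposures(ldrs, exposure_times, eps: float = 1e-6) -> np.ndarray:
    """ldrs: list of linear LDR images in [0, 1]; exposure_times: matching list of seconds."""
    num = np.zeros_like(ldrs[0], dtype=np.float64)
    den = np.zeros_like(ldrs[0], dtype=np.float64)
    for img, t in zip(ldrs, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # trust mid-tones, distrust near-black/near-white pixels
        num += w * (img / t)               # per-exposure estimate of scene radiance
        den += w
    return (num / np.maximum(den, eps)).astype(np.float32)

ldrs = [np.random.rand(32, 32, 3) for _ in range(3)]       # stand-ins for a bracketed exposure stack
hdr = merge_exposures(ldrs, exposure_times=[1 / 200, 1 / 50, 1 / 12])
print(hdr.shape, hdr.dtype)
```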
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.