Single-shot HDR using conventional image sensor shutter functions and optical randomization
- URL: http://arxiv.org/abs/2506.22426v1
- Date: Fri, 27 Jun 2025 17:48:21 GMT
- Title: Single-shot HDR using conventional image sensor shutter functions and optical randomization
- Authors: Xiang Dai, Kyrollos Yanny, Kristina Monakhova, Nicholas Antipa
- Abstract summary: Single-shot HDR imaging alleviates the issue by encoding HDR data into a single exposure, then computationally recovering it. We utilize the global reset release (GRR) shutter mode of an off-the-shelf sensor. Our prototype achieves a dynamic range of up to 73 dB using an 8-bit sensor with 48 dB dynamic range.
- Score: 5.5476517032395645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-dynamic-range (HDR) imaging is an essential technique for overcoming the dynamic range limits of image sensors. The classic method relies on multiple exposures, which slows capture time, resulting in motion artifacts when imaging dynamic scenes. Single-shot HDR imaging alleviates this issue by encoding HDR data into a single exposure, then computationally recovering it. Many established methods use strong image priors to recover improperly exposed image detail. These approaches struggle with extended highlight regions. We utilize the global reset release (GRR) shutter mode of an off-the-shelf sensor. GRR shutter mode applies a longer exposure time to rows closer to the bottom of the sensor. We use optics that relay a randomly permuted (shuffled) image onto the sensor, effectively creating spatially randomized exposures across the scene. The exposure diversity allows us to recover HDR data by solving an optimization problem with a simple total variation image prior. In simulation, we demonstrate that our method outperforms other single-shot methods when many sensor pixels are saturated (10% or more), and is competitive at a modest saturation (1%). Finally, we demonstrate a physical lab prototype that uses an off-the-shelf random fiber bundle for the optical shuffling. The fiber bundle is coupled to a low-cost commercial sensor operating in GRR shutter mode. Our prototype achieves a dynamic range of up to 73dB using an 8-bit sensor with 48dB dynamic range.
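The capture pipeline described in the abstract can be sketched as a toy forward model: a row-wise GRR exposure ramp, an optical pixel shuffle, and 8-bit clipping. This is a minimal illustrative sketch, not the paper's implementation; the linear exposure ramp, its range, and the naive per-pixel inversion are assumptions, and the paper's actual reconstruction solves a total-variation-regularized optimization to fill in saturated pixels rather than leaving them undefined.

```python
import numpy as np

def grr_exposure_times(n_rows, t_min=1.0, t_max=10.0):
    """GRR shutter model: exposure time grows toward the bottom rows.
    (A linear ramp is an illustrative assumption; the true row profile
    depends on the sensor's readout timing.)"""
    return np.linspace(t_min, t_max, n_rows)

def capture(scene, perm, times, full_well=255.0):
    """Forward model: optically shuffle the scene's pixels, apply the
    row-wise exposure, then clip to simulate 8-bit saturation."""
    shuffled = scene.ravel()[perm].reshape(scene.shape)
    return np.clip(shuffled * times[:, None], 0.0, full_well)

def naive_recover(meas, perm, times, full_well=255.0):
    """Invert the exposure and un-shuffle; saturated pixels become NaN.
    (The paper instead recovers them via a TV-regularized solve.)"""
    radiance = np.where(meas < full_well, meas / times[:, None], np.nan)
    out = np.empty(meas.size)
    out[perm] = radiance.ravel()  # inverse of the shuffle above
    return out.reshape(meas.shape)

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 5.0, size=(8, 8))  # dim scene: nothing saturates
perm = rng.permutation(scene.size)
times = grr_exposure_times(scene.shape[0])
est = naive_recover(capture(scene, perm, times), perm, times)
```

Because the shuffle scatters each scene region across all sensor rows, every region is sampled at many different exposure times, which is the "exposure diversity" the recovery relies on.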
Related papers
- HDRT: A Large-Scale Dataset for Infrared-Guided HDR Imaging [8.208995723545502]
We introduce the first comprehensive dataset that consists of HDR and thermal IR images.
The HDRT dataset comprises 50,000 images captured across three seasons over six months in eight cities.
We propose HDRTNet, a novel deep neural method that fuses IR and SDR content to generate HDR images.
arXiv Detail & Related papers (2024-06-08T13:43:44Z) - Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z) - Event-based Asynchronous HDR Imaging by Temporal Incident Light Modulation [54.64335350932855]
We propose a Pixel-Asynchronous HDR imaging system, based on key insights into the challenges in HDR imaging.
Our proposed Asyn system integrates the Dynamic Vision Sensors (DVS) with a set of LCD panels.
The LCD panels modulate the irradiance incident upon the DVS by altering their transparency, thereby triggering the pixel-independent event streams.
arXiv Detail & Related papers (2024-03-14T13:45:09Z) - Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate HDR deghosting as an image generation problem that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with the existing works, our approach restores much sharper 3D scenes with the order of 10 times less training time and GPU memory consumption.
arXiv Detail & Related papers (2023-09-16T11:17:25Z) - Snapshot High Dynamic Range Imaging with a Polarization Camera [0.6445605125467574]
This paper presents a straightforward but highly effective approach for turning an off-the-shelf polarization camera into a high-performance HDR camera.
We are able to simultaneously capture four images with varied exposures, which are determined by the orientation of the polarizer.
We develop an outlier-robust and self-calibrating algorithm to reconstruct an HDR image (at a single polarity) from these measurements.
arXiv Detail & Related papers (2023-08-16T02:04:34Z) - Robust estimation of exposure ratios in multi-exposure image stacks [12.449313419096821]
We propose to estimate exposure ratios directly from the input images.
We derive the exposure time estimation as an optimization problem, in which pixels are selected from pairs of exposures to minimize estimation error caused by camera noise.
We demonstrate that the estimation can be easily made robust to pixel misalignment caused by camera or object motion by collecting pixels from multiple spatial tiles.
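The pixel-selection idea above can be illustrated with a much simpler estimator. The sketch below is not the paper's method: it replaces the noise-aware optimization with a plain robust statistic, keeping only the core idea of restricting the estimate to pixels that are well exposed in both images. The threshold values are illustrative assumptions.

```python
import numpy as np

def estimate_exposure_ratio(img_a, img_b, lo=0.05, hi=0.95):
    """Illustrative sketch (not the paper's estimator): keep pixels that
    are well exposed in both normalized images, then take the median of
    the per-pixel ratios, which resists outliers from noise or slight
    misalignment."""
    a, b = img_a.ravel(), img_b.ravel()
    mask = (a > lo) & (a < hi) & (b > lo) & (b < hi)
    return np.median(b[mask] / a[mask])

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 1.0, size=(64, 64))
short = np.clip(scene * 0.3, 0.0, 1.0)  # short exposure
long_ = np.clip(scene * 0.9, 0.0, 1.0)  # 3x longer exposure
ratio = estimate_exposure_ratio(short, long_)
```

With noise-free synthetic inputs the estimator recovers the exact exposure ratio of 3; the paper's contribution is making such an estimate robust under real camera noise and motion.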
arXiv Detail & Related papers (2023-08-05T23:42:59Z) - SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which recover content and remove ghosts simultaneously and thus struggle to reach an optimum, SSHDR decouples these tasks across its two training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z) - Multi-Exposure HDR Composition by Gated Swin Transformer [8.619880437958525]
This paper provides a novel multi-exposure fusion model based on Swin Transformer.
We exploit the long distance contextual dependency in the exposure-space pyramid by the self-attention mechanism.
Experiments show that our model achieves accuracy on par with current top-performing multi-exposure HDR imaging models.
arXiv Detail & Related papers (2023-03-15T15:38:43Z) - GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
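The LDR projection that this training scheme relies on can be sketched in a few lines. This is a hedged approximation of the kind of operator described in the summary, not GlowGAN's actual camera model; the gamma value and exposure handling are illustrative assumptions.

```python
import numpy as np

def project_to_ldr(hdr, exposure, gamma=2.2):
    """Sketch of an HDR-to-LDR projection: scale linear radiance by an
    exposure, clip to the displayable range, then gamma-encode.
    (Gamma 2.2 and the clipping model are illustrative assumptions.)"""
    return np.clip(hdr * exposure, 0.0, 1.0) ** (1.0 / gamma)

# Radiance 10.0 saturates at this exposure; 0.25 stays in range.
ldr = project_to_ldr(np.array([0.25, 10.0]), exposure=1.0)
```

Training the GAN against real LDR photos through such a projection is what lets the generator learn plausible radiance values for regions that any single exposure would clip.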
arXiv Detail & Related papers (2022-11-22T15:42:08Z) - Deep Joint Demosaicing and High Dynamic Range Imaging within a Single Shot [30.483754080108444]
It is challenging to restore a full-resolution HDR image from a real-world image with SVE.
A spatially varying convolution (SVC) is designed to process the Bayer images carried with varying exposures.
An exposure-guidance method is proposed against the interference from over- and under-exposed pixels.
arXiv Detail & Related papers (2021-11-14T08:54:26Z) - An Asynchronous Kalman Filter for Hybrid Event Cameras [13.600773150848543]
Event cameras are ideally suited to capture HDR visual information without blur.
Conventional image sensors measure the absolute intensity of slowly changing scenes effectively but do poorly on high-dynamic-range or quickly changing scenes.
We present an event-based video reconstruction pipeline for High Dynamic Range scenarios.
arXiv Detail & Related papers (2020-12-10T11:24:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.