HDR Video Reconstruction with Tri-Exposure Quad-Bayer Sensors
- URL: http://arxiv.org/abs/2103.10982v1
- Date: Fri, 19 Mar 2021 18:40:09 GMT
- Title: HDR Video Reconstruction with Tri-Exposure Quad-Bayer Sensors
- Authors: Yitong Jiang, Inchang Choi, Jun Jiang, Jinwei Gu
- Abstract summary: We propose a novel high dynamic range (HDR) video reconstruction method with new tri-exposure quad-bayer sensors.
Thanks to the larger number of exposure sets and their spatially uniform deployment over a frame, they are more robust to noise and spatial artifacts than previous spatially varying exposure (SVE) HDR video methods.
We show that the tri-exposure quad-bayer sensor, paired with our solution, is better suited to HDR video capture than previous reconstruction methods.
- Score: 14.844162451328321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel high dynamic range (HDR) video reconstruction method with
new tri-exposure quad-bayer sensors. Thanks to the larger number of exposure
sets and their spatially uniform deployment over a frame, they are more robust
to noise and spatial artifacts than previous spatially varying exposure (SVE)
HDR video methods. Nonetheless, the motion blur from longer exposures, the
noise from short exposures, and inherent spatial artifacts of the SVE methods
remain huge obstacles. Additionally, temporal coherence must be taken into
account for the stability of video reconstruction. To tackle these challenges,
we introduce a novel network architecture that divides and conquers these
problems. In order to better adapt the network to the large dynamic range, we
also propose an LDR-reconstruction loss that takes equal contributions from both
the highlighted and the shaded pixels of HDR frames. Through a series of
comparisons and ablation studies, we show that the tri-exposure quad-bayer with
our solution is better suited to HDR video capture than previous reconstruction methods,
particularly for the scenes with larger dynamic range and objects with motion.
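The LDR-reconstruction loss described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the choice of re-exposure values, clipping to [0, 1], and L1 averaging are all assumptions. The idea is that re-exposing the HDR frame at several exposure values lets short exposures emphasize highlights and long exposures emphasize shadows, so both pixel populations contribute to the loss on roughly equal footing.

```python
import numpy as np

def ldr_reconstruction_loss(pred_hdr, gt_hdr, exposures=(0.25, 1.0, 4.0)):
    """Hypothetical sketch of an LDR-reconstruction loss: re-expose both the
    predicted and ground-truth HDR frames at several exposure values, clip to
    the displayable [0, 1] range, and average the per-exposure L1 errors."""
    total = 0.0
    for t in exposures:
        pred_ldr = np.clip(pred_hdr * t, 0.0, 1.0)  # simulated LDR capture of prediction
        gt_ldr = np.clip(gt_hdr * t, 0.0, 1.0)      # simulated LDR capture of ground truth
        total += np.abs(pred_ldr - gt_ldr).mean()   # L1 error at this exposure
    return total / len(exposures)
```

Averaging over both short and long re-exposures is what balances the contribution of highlighted and shaded regions; a loss computed directly in the linear HDR domain would instead be dominated by the brightest pixels.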
Related papers
- Exposure Completing for Temporally Consistent Neural High Dynamic Range Video Rendering [17.430726543786943]
We propose a novel paradigm to render HDR frames via completing the absent exposure information.
Our approach involves interpolating neighbor LDR frames in the time dimension to reconstruct LDR frames for the absent exposures.
This benefits the fusing process for HDR results, reducing noise and ghosting artifacts and therefore improving temporal consistency.
arXiv Detail & Related papers (2024-07-18T09:13:08Z)
- Event-based Asynchronous HDR Imaging by Temporal Incident Light Modulation [54.64335350932855]
We propose a Pixel-Asynchronous HDR imaging system, based on key insights into the challenges in HDR imaging.
Our proposed Asyn system integrates the Dynamic Vision Sensors (DVS) with a set of LCD panels.
The LCD panels modulate the irradiance incident upon the DVS by altering their transparency, thereby triggering the pixel-independent event streams.
arXiv Detail & Related papers (2024-03-14T13:45:09Z)
- Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose the irradiance fields from sparse LDR panoramic images to increase the observation counts for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z)
- Online Overexposed Pixels Hallucination in Videos with Adaptive Reference Frame Selection [90.35085487641773]
Low dynamic range (LDR) cameras cannot deal with wide dynamic range inputs, frequently leading to local overexposure issues.
We present a learning-based system to reduce these artifacts without resorting to complex processing mechanisms.
arXiv Detail & Related papers (2023-08-29T17:40:57Z)
- R3D3: Dense 3D Reconstruction of Dynamic Scenes from Multiple Cameras [106.52409577316389]
R3D3 is a multi-camera system for dense 3D reconstruction and ego-motion estimation.
Our approach exploits spatial-temporal information from multiple cameras, and monocular depth refinement.
We show that this design enables a dense, consistent 3D reconstruction of challenging, dynamic outdoor environments.
arXiv Detail & Related papers (2023-08-28T17:13:49Z)
- HDR Reconstruction from Bracketed Exposures and Events [12.565039752529797]
Reconstruction of high-quality HDR images is at the core of modern computational photography.
We present a multi-modal end-to-end learning-based HDR imaging system that fuses bracketed images and events in the feature domain.
Our framework exploits the higher temporal resolution of events by sub-sampling the input event streams using a sliding window.
arXiv Detail & Related papers (2022-03-28T15:04:41Z)
- HDRVideo-GAN: Deep Generative HDR Video Reconstruction [19.837271879354184]
We propose an end-to-end GAN-based framework for HDR video reconstruction from LDR sequences with alternating exposures.
We first extract clean LDR frames from noisy LDR video with alternating exposures with a denoising network trained in a self-supervised setting.
We then align the neighboring alternating-exposure frames to a reference frame and then reconstruct high-quality HDR frames in a complete adversarial setting.
arXiv Detail & Related papers (2021-10-22T14:02:03Z)
- TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis [32.878225196378374]
We introduce a neural representation based on an image formation model for continuous-wave ToF cameras.
We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions.
arXiv Detail & Related papers (2021-09-30T17:12:59Z)
- Real-time Non-line-of-Sight imaging of dynamic scenes [11.199289771176238]
Non-Line-of-Sight (NLOS) imaging aims at recovering the 3D geometry of objects that are hidden from the direct line of sight.
In the past, this method has suffered from the weak available multibounce signal limiting scene size, capture speed, and reconstruction quality.
We show that SPAD (Single-Photon Avalanche Diode) array detectors with a total of just 28 pixels combined with a specifically extended Phasor Field reconstruction algorithm can reconstruct live real-time videos of non-retro-reflective NLOS scenes.
arXiv Detail & Related papers (2020-10-24T01:40:06Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
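The single-image HDR reconstruction entry above models image formation as clipping, a camera response function, and quantization. A minimal numpy sketch of that three-stage forward model is shown below; the gamma curve standing in for a real camera response function and the specific parameter values are assumptions for illustration only.

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
    """Hypothetical sketch of the three-stage HDR-to-LDR formation model:
    (1) dynamic range clipping, (2) a non-linear camera response function
    (a simple gamma curve stands in for a calibrated CRF), (3) quantization."""
    # (1) expose the scene radiance and clip to the sensor's range
    clipped = np.clip(hdr * exposure, 0.0, 1.0)
    # (2) non-linear mapping: gamma curve as a stand-in CRF
    mapped = clipped ** (1.0 / gamma)
    # (3) quantize to the target bit depth (e.g. 8-bit -> 255 levels)
    levels = 2 ** bits - 1
    return np.round(mapped * levels) / levels
```

Reversing this pipeline, as the paper proposes, amounts to learning to undo each stage in turn: dequantization, CRF inversion, and hallucination of the clipped content.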
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.