UltraFusion: Ultra High Dynamic Imaging using Exposure Fusion
- URL: http://arxiv.org/abs/2501.11515v4
- Date: Wed, 23 Apr 2025 11:55:41 GMT
- Title: UltraFusion: Ultra High Dynamic Imaging using Exposure Fusion
- Authors: Zixuan Chen, Yujin Wang, Xin Cai, Zhiyuan You, Zheming Lu, Fan Zhang, Shi Guo, Tianfan Xue
- Abstract summary: Capturing high dynamic range scenes is one of the most important issues in camera design. We propose UltraFusion, the first exposure fusion technique that can merge inputs with a 9-stop exposure difference. Our approach outperforms HDR-Transformer on the latest HDR benchmarks.
- Score: 16.915597001287964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Capturing high dynamic range (HDR) scenes is one of the most important issues in camera design. The majority of cameras use exposure fusion, which fuses images captured at different exposure levels, to increase dynamic range. However, this approach can only handle images with a limited exposure difference, normally 3-4 stops. When applied to very high dynamic range scenes where a large exposure difference is required, this approach often fails due to incorrect alignment or inconsistent lighting between inputs, or tone mapping artifacts. In this work, we propose UltraFusion, the first exposure fusion technique that can merge inputs with a 9-stop exposure difference. The key idea is to model exposure fusion as a guided inpainting problem, where the under-exposed image serves as guidance to fill in the information missing from clipped highlights in the over-exposed image. Because the under-exposed image is used as a soft guidance rather than a hard constraint, our model is robust to potential alignment issues and lighting variations. Moreover, by utilizing the image prior of a generative model, our model also produces natural tone mapping, even for very high dynamic range scenes. Our approach outperforms HDR-Transformer on the latest HDR benchmarks. Moreover, to test its performance in ultra high dynamic range scenes, we capture a new real-world exposure fusion benchmark, the UltraFusion dataset, with exposure differences of up to 9 stops, and experiments show that UltraFusion generates beautiful, high-quality fusion results under various scenarios. Code and data will be available at https://openimaginglab.github.io/UltraFusion.
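For intuition, here is a minimal numpy sketch of the soft-guidance idea described in the abstract. Everything in it (the function name, the saturation thresholds, the linear brightness matching) is an illustrative assumption; the actual UltraFusion model performs guided inpainting with a generative prior, not this hand-rolled blend.

```python
import numpy as np

def soft_guided_fusion(over, under, exposure_ratio, sat_thresh=0.95, softness=0.1):
    """Toy guided fusion: fill clipped highlights of the over-exposed frame
    with brightness-matched content from the under-exposed frame, using a
    soft (feathered) mask rather than a hard constraint."""
    # Soft saturation mask: 0 where well exposed, ramping to 1 near clipping.
    luma = over.mean(axis=-1, keepdims=True)
    mask = np.clip((luma - (sat_thresh - softness)) / softness, 0.0, 1.0)
    # Brightness-match the under-exposed guidance to the over-exposed frame.
    guidance = np.clip(under * exposure_ratio, 0.0, 1.0)
    # Soft blend: guidance dominates only where the over-exposed frame clipped.
    return (1.0 - mask) * over + mask * guidance
```

The soft mask is what makes the guidance tolerant of small misalignments: a hard cut would copy misregistered pixels verbatim, while a feathered blend degrades gracefully.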
Related papers
- CasualHDRSplat: Robust High Dynamic Range 3D Gaussian Splatting from Casually Captured Videos [15.52886867095313]
Photo-realistic novel view rendering from multi-view images, such as neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS), has garnered widespread attention due to its superior performance.
CasualHDRSplat contains a unified differentiable physical imaging model which applies a continuous-time trajectory constraint to the imaging process.
Experiments demonstrate that our approach outperforms existing methods in terms of robustness and quality.
arXiv Detail & Related papers (2025-04-24T16:42:37Z) - Event-assisted 12-stop HDR Imaging of Dynamic Scene [20.064191181938533]
We propose a novel 12-stop HDR imaging approach for dynamic scenes, leveraging a dual-camera system with an event camera and an RGB camera. The event camera provides temporally dense, high dynamic range signals that improve alignment between LDR frames with large exposure differences, reducing ghosting artifacts caused by motion. Our method achieves state-of-the-art performance, successfully extending HDR imaging to 12 stops in dynamic scenes.
arXiv Detail & Related papers (2024-12-19T10:17:50Z) - Exposure Bracketing Is All You Need For A High-Quality Image [50.822601495422916]
Multi-exposure images are complementary in denoising, deblurring, high dynamic range imaging, and super-resolution.
In this work, we propose to utilize exposure bracketing photography to obtain a high-quality image by unifying these tasks.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
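As a rough illustration of recurrent multi-exposure aggregation (not TMRNet itself, whose modulation and weights are learned), a hypothetical numpy sketch:

```python
import numpy as np

def recurrent_bracket_fusion(frames, exposures, gamma=0.8):
    """Fold a bracketed burst into a running state one frame at a time,
    mimicking the recurrent processing pattern; the weighting rule below
    is a toy assumption, not the paper's learned modulation."""
    state = np.zeros_like(frames[0])
    weight_sum = 0.0
    for frame, t in zip(frames, exposures):
        radiance = frame / t                  # normalize frame to scene radiance
        w = gamma ** abs(np.log2(t))          # toy rule: trust mid exposures more
        state += w * radiance                 # fold the frame into the state
        weight_sum += w
    return state / weight_sum
```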
arXiv Detail & Related papers (2024-01-01T14:14:35Z) - ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with existing works, our approach restores much sharper 3D scenes with an order of magnitude less training time and GPU memory consumption.
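For reference, the standard trajectory-based motion blur model that formulations like this build on (ExBluRF's exact notation may differ):

```latex
B(\mathbf{x}) = \frac{1}{\tau} \int_{0}^{\tau} C\big(\mathbf{x};\, \mathbf{P}(t)\big)\, dt
```

Here B is the observed blurred image, \tau the exposure time, \mathbf{P}(t) the 6-DOF camera pose along the trajectory, and C the sharp image rendered from the radiance field at pose \mathbf{P}(t).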
arXiv Detail & Related papers (2023-09-16T11:17:25Z) - Online Overexposed Pixels Hallucination in Videos with Adaptive Reference Frame Selection [90.35085487641773]
Low dynamic range (LDR) cameras cannot deal with wide dynamic range inputs, frequently leading to local overexposure issues.
We present a learning-based system to reduce these artifacts without resorting to complex processing mechanisms.
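One plausible (purely hypothetical) reading of adaptive reference frame selection, sketched in numpy; the paper's learned criterion is certainly more sophisticated than this overlap heuristic:

```python
import numpy as np

def pick_reference_frame(window, overexposed_mask, thresh=0.98):
    """From a temporal window of frames, pick the one whose own saturated
    region overlaps least with the current frame's over-exposed mask, so it
    can supply the missing content."""
    def overlap(frame):
        sat = frame.mean(axis=-1) > thresh    # saturated pixels of the candidate
        return np.logical_and(sat, overexposed_mask).mean()
    return min(window, key=overlap)
```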
arXiv Detail & Related papers (2023-08-29T17:40:57Z) - HDR Imaging with Spatially Varying Signal-to-Noise Ratios [15.525314212209564]
For low-light HDR imaging, the noise within one exposure is spatially varying.
Existing image denoising algorithms and HDR fusion algorithms both fail to handle this situation.
We propose a new method called the spatially varying high dynamic range (SV-HDR) fusion network to simultaneously denoise and fuse images.
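As background for why spatially varying SNR breaks standard fusion, a toy SNR-weighted merge under a simple noise model (an assumption for illustration; SV-HDR learns to denoise and fuse jointly instead):

```python
import numpy as np

def snr_weighted_fusion(frames, exposures, read_noise=1e-3):
    """Merge exposures with per-pixel weights from a crude SNR proxy:
    longer exposures get more weight where they are not saturated."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for frame, t in zip(frames, exposures):
        radiance = frame / t
        valid = (frame < 0.99).astype(frame.dtype)   # zero weight once clipped
        snr = valid * t / (frame + read_noise)        # toy per-pixel SNR proxy
        num += snr * radiance
        den += snr
    return num / np.maximum(den, 1e-8)
```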
arXiv Detail & Related papers (2023-03-30T09:32:29Z) - Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
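The supervised contrastive objective referenced above is, in its standard form (Khosla et al.; the paper uses a variant, and here the positives P(i) would presumably be samples sharing an exposure setting):

```latex
\mathcal{L} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)}
  \log \frac{\exp(\mathbf{z}_i \cdot \mathbf{z}_p / \tau)}
            {\sum_{a \in A(i)} \exp(\mathbf{z}_i \cdot \mathbf{z}_a / \tau)}
```

where \mathbf{z} are normalized embeddings, P(i) the positives for anchor i, A(i) all other samples, and \tau a temperature.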
arXiv Detail & Related papers (2023-03-27T09:43:42Z) - Progressively Optimized Local Radiance Fields for Robust View Synthesis [76.55036080270347]
We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses with radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
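A hypothetical sketch of the allocation logic (window size, overlap, and the returned structure are all invented for illustration):

```python
def allocate_local_fields(num_frames, window=50, overlap=10):
    """Walk the video and start a new local radiance field per temporal
    window, overlapping neighbors so adjacent fields can be blended."""
    fields, start = [], 0
    while start < num_frames:
        end = min(start + window, num_frames)
        fields.append(range(start, end))      # frames used to train this field
        start = end - overlap if end < num_frames else end
    return fields
```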
arXiv Detail & Related papers (2023-03-24T04:03:55Z) - Multi-Exposure HDR Composition by Gated Swin Transformer [8.619880437958525]
This paper presents a novel multi-exposure fusion model based on the Swin Transformer.
We exploit long-distance contextual dependencies in the exposure-space pyramid through the self-attention mechanism.
Experiments show that our model achieves accuracy on par with current top-performing multi-exposure HDR imaging models.
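The underlying self-attention is the standard scaled dot-product form; the paper's contribution is applying it, with gating, across the exposure-space pyramid:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
```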
arXiv Detail & Related papers (2023-03-15T15:38:43Z) - Perceptual Multi-Exposure Fusion [0.5076419064097732]
This paper presents a perceptual multi-exposure fusion method that ensures fine shadow/highlight details but with lower complexity than detail-enhanced methods.
We build a large-scale multi-exposure benchmark dataset suitable for static scenes, which contains 167 image sequences.
Experiments on the constructed dataset demonstrate that the proposed method exceeds eight existing state-of-the-art approaches in terms of both visual quality and MEF-SSIM value.
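For context, the classic well-exposedness weighting that many multi-exposure fusion methods start from (Mertens et al.; shown only as background, since the method above uses its own perceptual weighting):

```python
import numpy as np

def well_exposedness_weights(frames, sigma=0.2):
    """Classic per-pixel weight: favor pixels near mid-gray in each exposure,
    then normalize the weights across the stack."""
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)) for f in frames]
    total = np.sum(weights, axis=0) + 1e-8
    return [w / total for w in weights]

def naive_fusion(frames):
    """Weighted average of the stack; real methods blend in a pyramid to
    avoid seams, which this single-scale version will show."""
    return sum(w * f for w, f in zip(well_exposedness_weights(frames), frames))
```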
arXiv Detail & Related papers (2022-10-18T05:34:58Z) - Variational Approach for Intensity Domain Multi-exposure Image Fusion [11.678822620192435]
We present a method to produce a well-exposed fused image that can be displayed directly on conventional display devices.
The aim is to preserve details in both poorly illuminated and brightly illuminated regions.
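A generic intensity-domain variational fusion energy, written here only to fix ideas (the paper's actual functional may differ in both terms):

```latex
E(F) = \int_{\Omega} \Big( \big(F(\mathbf{x}) - \bar{F}(\mathbf{x})\big)^2
       + \lambda \,\big\|\nabla F(\mathbf{x}) - \mathbf{v}(\mathbf{x})\big\|^2 \Big)\, d\mathbf{x}
```

where \bar{F} is a weighted intensity target assembled from the exposures, \mathbf{v} a target gradient field taken from the best-exposed input at each pixel, and \lambda balances fidelity against detail preservation.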
arXiv Detail & Related papers (2022-07-09T06:31:34Z) - Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based end-to-end depth prediction network which takes noisy raw I-ToF signals as well as an RGB image as input.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z) - Learning Multi-Scale Photo Exposure Correction [51.57836446833474]
Capturing photographs with incorrect exposure remains a major source of errors in camera-based imaging.
We propose a coarse-to-fine deep neural network (DNN) model, trainable in an end-to-end manner, that addresses each sub-problem separately.
Our method achieves results on par with existing state-of-the-art methods on underexposed images.
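A minimal sketch of the coarse-to-fine pattern (the paper corrects each stage with a learned sub-network; the box-filter pyramid and the correct_fns callables here are illustrative stand-ins):

```python
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Simple Laplacian pyramid with 2x box down/upsampling; assumes image
    sides are divisible by 2**(levels - 1)."""
    def down(x): return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])
    def up(x): return x.repeat(2, axis=0).repeat(2, axis=1)
    pyr, cur = [], img
    for _ in range(levels - 1):
        small = down(cur)
        pyr.append(cur - up(small))           # detail (band-pass) layer
        cur = small
    pyr.append(cur)                           # coarsest (global brightness) layer
    return pyr

def coarse_to_fine_correct(img, correct_fns):
    """Fix global exposure at the coarsest level first, then reintroduce and
    correct detail layers on the way back up."""
    pyr = laplacian_pyramid(img, levels=len(correct_fns))
    out = correct_fns[-1](pyr[-1])
    for detail, f in zip(reversed(pyr[:-1]), reversed(correct_fns[:-1])):
        out = out.repeat(2, axis=0).repeat(2, axis=1) + f(detail)
    return out
```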
arXiv Detail & Related papers (2020-03-25T19:33:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.