UltraFusion: Ultra High Dynamic Imaging using Exposure Fusion
- URL: http://arxiv.org/abs/2501.11515v1
- Date: Mon, 20 Jan 2025 14:45:07 GMT
- Title: UltraFusion: Ultra High Dynamic Imaging using Exposure Fusion
- Authors: Zixuan Chen, Yujin Wang, Xin Cai, Zhiyuan You, Zheming Lu, Fan Zhang, Shi Guo, Tianfan Xue
- Abstract summary: We propose UltraFusion, the first exposure fusion technique that can merge inputs with up to a 9-stop exposure difference.
Our model is robust to potential alignment issues and lighting variations.
Our approach outperforms HDR-Transformer on the latest HDR benchmarks.
- Abstract: Capturing high dynamic range (HDR) scenes is one of the most important issues in camera design. The majority of cameras use the exposure fusion technique, which fuses images captured at different exposure levels, to increase dynamic range. However, this approach can only handle images with a limited exposure difference, normally 3-4 stops. When applied to very high dynamic scenes where a large exposure difference is required, this approach often fails due to incorrect alignment, inconsistent lighting between inputs, or tone mapping artifacts. In this work, we propose UltraFusion, the first exposure fusion technique that can merge inputs with up to a 9-stop exposure difference. The key idea is that we model exposure fusion as a guided inpainting problem, where the under-exposed image is used as guidance to fill in the missing highlight information in the over-exposed region. By using the under-exposed image as a soft guidance, instead of a hard constraint, our model is robust to potential alignment issues and lighting variations. Moreover, by utilizing the image prior of the generative model, our model also produces natural tone mapping, even for very high dynamic range scenes. Our approach outperforms HDR-Transformer on the latest HDR benchmarks. Moreover, to test its performance on ultra high dynamic range scenes, we capture a new real-world exposure fusion benchmark, the UltraFusion Dataset, with exposure differences of up to 9 stops, and experiments show that UltraFusion can generate beautiful and high-quality fusion results under various scenarios. An online demo is provided at https://openimaginglab.github.io/UltraFusion/.
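To make the "stops" terminology concrete: each stop doubles the captured light, so the paper's 9-stop setting corresponds to a 512x linear brightness ratio between inputs. The sketch below is a hypothetical illustration, not the UltraFusion model — it shows a minimal per-pixel well-exposedness weighting in the spirit of classical exposure fusion (Mertens-style), which is the kind of scheme that breaks down at such extreme ratios.

```python
# Hypothetical sketch (NOT the paper's guided-inpainting method): the
# linear ratio implied by an N-stop exposure gap, plus a naive
# well-exposedness-weighted blend in the spirit of classical exposure
# fusion (Mertens et al.), using only NumPy.
import numpy as np


def stop_ratio(stops: float) -> float:
    """Each stop doubles the captured light, so N stops = 2**N ratio."""
    return 2.0 ** stops


def naive_exposure_fusion(images, sigma: float = 0.2) -> np.ndarray:
    """Blend grayscale images in [0, 1] by a Gaussian well-exposedness
    weight centered at mid-gray (0.5); per pixel, the best-exposed
    input dominates the blend."""
    stack = np.stack(images)                       # shape (K, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)


# A 3-4 stop gap is an 8-16x brightness ratio; 9 stops is 512x, far
# beyond what this naive per-pixel scheme can blend cleanly.
under = np.full((2, 2), 0.05)   # under-exposed: dark but not clipped
over = np.full((2, 2), 0.95)    # over-exposed: nearly clipped
fused = naive_exposure_fusion([under, over])
```

Note how symmetric inputs get equal weights here; the paper instead treats the under-exposed image as soft guidance for inpainting the over-exposed highlights, rather than as one term in a hard per-pixel blend.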
Related papers
- Event-assisted 12-stop HDR Imaging of Dynamic Scene [20.064191181938533]
We propose a novel 12-stop HDR imaging approach for dynamic scenes, leveraging a dual-camera system with an event camera and an RGB camera.
The event camera provides temporally dense, high dynamic range signals that improve alignment between LDR frames with large exposure differences, reducing ghosting artifacts caused by motion.
Our method achieves state-of-the-art performance, successfully extending HDR imaging to 12 stops in dynamic scenes.
arXiv Detail & Related papers (2024-12-19T10:17:50Z) - Exposure Bracketing Is All You Need For A High-Quality Image [50.822601495422916]
Multi-exposure images are complementary in denoising, deblurring, high dynamic range imaging, and super-resolution.
We propose to utilize exposure bracketing photography to get a high-quality image by combining these tasks in this work.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z) - ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with existing works, our approach restores much sharper 3D scenes with roughly 10 times less training time and GPU memory consumption.
arXiv Detail & Related papers (2023-09-16T11:17:25Z) - Online Overexposed Pixels Hallucination in Videos with Adaptive Reference Frame Selection [90.35085487641773]
Low dynamic range (LDR) cameras cannot deal with wide dynamic range inputs, frequently leading to local overexposure issues.
We present a learning-based system to reduce these artifacts without resorting to complex processing mechanisms.
arXiv Detail & Related papers (2023-08-29T17:40:57Z) - HDR Imaging with Spatially Varying Signal-to-Noise Ratios [15.525314212209564]
For low-light HDR imaging, the noise within one exposure is spatially varying.
Existing image denoising algorithms and HDR fusion algorithms both fail to handle this situation.
We propose a new method called the spatially varying high dynamic range (SV-HDR) fusion network to simultaneously denoise and fuse images.
arXiv Detail & Related papers (2023-03-30T09:32:29Z) - Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task - joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z) - Progressively Optimized Local Radiance Fields for Robust View Synthesis [76.55036080270347]
We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses with radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
arXiv Detail & Related papers (2023-03-24T04:03:55Z) - Multi-Exposure HDR Composition by Gated Swin Transformer [8.619880437958525]
This paper presents a novel multi-exposure fusion model based on the Swin Transformer.
We exploit long-distance contextual dependencies in the exposure-space pyramid via the self-attention mechanism.
Experiments show that our model achieves accuracy on par with the current top-performing multi-exposure HDR imaging models.
arXiv Detail & Related papers (2023-03-15T15:38:43Z) - Perceptual Multi-Exposure Fusion [0.5076419064097732]
This paper presents a perceptual multi-exposure fusion method that ensures fine shadow/highlight details but with lower complexity than detail-enhanced methods.
We build a large-scale multi-exposure benchmark dataset suitable for static scenes, which contains 167 image sequences.
Experiments on the constructed dataset demonstrate that the proposed method exceeds eight existing state-of-the-art approaches in terms of both visual quality and MEF-SSIM value.
arXiv Detail & Related papers (2022-10-18T05:34:58Z) - Deep Exposure Fusion with Deghosting via Homography Estimation and Attention Learning [29.036754445277314]
We propose a deep network for exposure fusion to deal with ghosting artifacts and detail loss caused by camera motion or moving objects.
Experiments on real-world photos taken using handheld mobile phones show that the proposed method can generate high-quality images with faithful detail and vivid color rendition in both dark and bright areas.
arXiv Detail & Related papers (2020-04-20T07:00:14Z) - Learning Multi-Scale Photo Exposure Correction [51.57836446833474]
Capturing photographs with incorrect exposures remains a major source of errors in camera-based imaging.
We propose a coarse-to-fine deep neural network (DNN) model, trainable in an end-to-end manner, that addresses each sub-problem separately.
Our method achieves results on par with existing state-of-the-art methods on underexposed images.
arXiv Detail & Related papers (2020-03-25T19:33:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.