Burst Image Restoration and Enhancement
- URL: http://arxiv.org/abs/2110.03680v1
- Date: Thu, 7 Oct 2021 17:58:56 GMT
- Title: Burst Image Restoration and Enhancement
- Authors: Akshay Dudhane, Syed Waqas Zamir, Salman Khan, Fahad Khan, Ming-Hsuan
Yang
- Abstract summary: The goal of Burst Image Restoration is to effectively combine complementary cues across multiple burst frames to generate high-quality outputs.
We create a set of pseudo-burst features that combine complementary information from all the input burst frames to seamlessly exchange information.
Our approach delivers state-of-the-art performance on burst super-resolution and low-light image enhancement tasks.
- Score: 86.08546447144377
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Modern handheld devices can acquire a burst image sequence in quick
succession. However, the individual acquired frames suffer from multiple
degradations and are misaligned due to camera shake and object motions. The
goal of Burst Image Restoration is to effectively combine complementary cues
across multiple burst frames to generate high-quality outputs. Towards this
goal, we develop a novel approach by solely focusing on the effective
information exchange between burst frames, such that the degradations get
filtered out while the actual scene details are preserved and enhanced. Our
central idea is to create a set of \emph{pseudo-burst} features that combine
complementary information from all the input burst frames to seamlessly
exchange information. The pseudo-burst representations encode channel-wise
features from the original burst images, thus making it easier for the model to
learn distinctive information offered by multiple burst frames. However, the
pseudo-burst cannot be successfully created unless the individual burst frames
are properly aligned to discount inter-frame movements. Therefore, our approach
initially extracts preprocessed features from each burst frame and matches them
using an edge-boosting burst alignment module. The pseudo-burst features are
then created and enriched using multi-scale contextual information. Our final
step is to adaptively aggregate information from the pseudo-burst features while
progressively increasing resolution in multiple stages. In comparison to
existing works that usually follow a
late fusion scheme with single-stage upsampling, our approach performs
favorably, delivering state-of-the-art performance on burst super-resolution
and low-light image enhancement tasks. Our code and models will be released
publicly.
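To make the channel-wise regrouping described above concrete, the following is a minimal, hedged PyTorch sketch of how pseudo-burst features could be assembled from already-aligned frame features. The `PseudoBurstFusion` module name, tensor shapes, and single fusion convolution are illustrative assumptions, not the authors' released implementation; the edge-boosting alignment and multi-scale enrichment stages are omitted.

```python
# Sketch only: pseudo-burst feature creation from already-aligned burst features.
# The alignment module and multi-scale enrichment from the paper are not reproduced.
import torch
import torch.nn as nn


class PseudoBurstFusion(nn.Module):
    """Regroups the same channel across all burst frames and fuses each group."""

    def __init__(self, burst_size: int, feat_dim: int = 64):
        super().__init__()
        # A shared conv fuses the c-th channel taken from every burst frame
        # (illustrative choice; the exact fusion operator is an assumption).
        self.fuse = nn.Conv2d(burst_size, feat_dim, kernel_size=3, padding=1)

    def forward(self, aligned_feats: torch.Tensor) -> torch.Tensor:
        # aligned_feats: (burst_size, num_channels, H, W) aligned frame features.
        # Swap burst and channel axes so each pseudo-burst stacks the c-th
        # channel from every frame: (num_channels, burst_size, H, W).
        pseudo_burst = aligned_feats.permute(1, 0, 2, 3).contiguous()
        # Each output group now mixes cross-frame information for one channel.
        return self.fuse(pseudo_burst)  # (num_channels, feat_dim, H, W)


if __name__ == "__main__":
    burst_size, channels, H, W = 8, 64, 48, 48
    aligned = torch.randn(burst_size, channels, H, W)
    fused = PseudoBurstFusion(burst_size)(aligned)
    print(fused.shape)  # torch.Size([64, 64, 48, 48])
```

Each fused group combines information from every burst frame for a single original channel, which is the property the pseudo-burst representation is intended to provide before the multi-stage upsampling step.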
Related papers
- Neural Spline Fields for Burst Image Fusion and Layer Separation [40.9442467471977]
We propose a versatile intermediate representation: a two-layer alpha-composited image plus flow model constructed with neural spline fields.
Our method is able to jointly fuse a burst image capture into one high-resolution reconstruction and decompose it into transmission and obstruction layers.
We find that, with no post-processing steps or learned priors, our generalizable model is able to outperform existing dedicated single-image and multi-view obstruction removal approaches.
arXiv Detail & Related papers (2023-12-21T18:54:19Z)
- Aggregating Long-term Sharp Features via Hybrid Transformers for Video Deblurring [76.54162653678871]
We propose a video deblurring method that leverages both neighboring frames and present sharp frames using hybrid Transformers for feature aggregation.
Our proposed method outperforms state-of-the-art video deblurring methods as well as event-driven video deblurring methods in terms of quantitative metrics and visual quality.
arXiv Detail & Related papers (2023-09-13T16:12:11Z)
- Pixel-Aware Stable Diffusion for Realistic Image Super-resolution and Personalized Stylization [23.723573179119228]
We propose a pixel-aware stable diffusion (PASD) network to achieve robust Real-ISR and personalized image stylization.
A pixel-aware cross-attention module is introduced to enable diffusion models to perceive local image structures at the pixel level.
An adjustable noise schedule is introduced to further improve the image restoration results.
arXiv Detail & Related papers (2023-08-28T10:15:57Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
- Gated Multi-Resolution Transfer Network for Burst Restoration and Enhancement [75.25451566988565]
We propose a novel Gated Multi-Resolution Transfer Network (GMTNet) to reconstruct a spatially precise high-quality image from a burst of low-quality raw images.
Detailed experimental analysis on five datasets validates our approach and sets a state-of-the-art for burst super-resolution, burst denoising, and low-light burst enhancement.
arXiv Detail & Related papers (2023-04-13T17:54:00Z)
- Burstormer: Burst Image Restoration and Enhancement Transformer [117.56199661345993]
On a shutter press, modern handheld cameras capture multiple images in rapid succession and merge them to generate a single image.
The challenge is to properly align the successive image shots and merge their complementary information to achieve high-quality outputs.
We propose Burstormer: a novel transformer-based architecture for burst image restoration and enhancement.
arXiv Detail & Related papers (2023-04-03T17:58:44Z)
- Efficient Flow-Guided Multi-frame De-fencing [7.504789972841539]
De-fencing is the algorithmic process of automatically removing such obstructions from images.
We develop a framework for multi-frame de-fencing that computes high-quality flow maps directly from obstructed frames.
arXiv Detail & Related papers (2023-01-25T18:42:59Z)
- Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z)