ExpRDiff: Short-exposure Guided Diffusion Model for Realistic Local Motion Deblurring
- URL: http://arxiv.org/abs/2412.09193v1
- Date: Thu, 12 Dec 2024 11:42:39 GMT
- Title: ExpRDiff: Short-exposure Guided Diffusion Model for Realistic Local Motion Deblurring
- Authors: Zhongbao Yang, Jiangxin Dong, Jinhui Tang, Jinshan Pan
- Abstract summary: We develop a context-based local blur detection module that incorporates additional contextual information to improve the identification of blurry regions.
Considering that modern smartphones are equipped with cameras capable of providing short-exposure images, we develop a blur-aware guided image restoration method.
We formulate the above components into a simple yet effective network, named ExpRDiff.
- Score: 61.82010103478833
- Abstract: Removing blur caused by moving objects is challenging, as the moving objects are usually significantly blurry while the static background remains clear. Existing methods that rely on local blur detection often suffer from inaccuracies and cannot generate satisfactory results when focusing solely on blurred regions. To overcome these problems, we first design a context-based local blur detection module that incorporates additional contextual information to improve the identification of blurry regions. Considering that modern smartphones are equipped with cameras capable of providing short-exposure images, we develop a blur-aware guided image restoration method that utilizes sharp structural details from short-exposure images, facilitating accurate reconstruction of heavily blurred regions. Furthermore, to restore images that are realistic and visually pleasing, we develop a short-exposure guided diffusion model that explores useful features from short-exposure images and blurred regions to better constrain the diffusion process. Finally, we formulate the above components into a simple yet effective network, named ExpRDiff. Experimental results show that ExpRDiff performs favorably against state-of-the-art methods.
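To make the three-stage design described in the abstract more concrete, the following is a minimal, hypothetical PyTorch-style sketch of how a local blur detector, a blur-aware guided restoration step, and a short-exposure-guided refinement stage could be chained. All module names, layer choices, and the simple mask-based fusion are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of an ExpRDiff-style pipeline (names, layers, and fusion
# scheme are illustrative assumptions, not the paper's code).
import torch
import torch.nn as nn

class ContextLocalBlurDetector(nn.Module):
    """Predicts a soft mask of locally blurred regions from the blurry input."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            # dilated conv as a simple stand-in for "additional context"
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, blurry):
        return self.net(blurry)  # (B, 1, H, W) blur probability map

class BlurAwareGuidedRestoration(nn.Module):
    """Fuses sharp structure from the short-exposure image into blurred regions."""
    def __init__(self, ch=32):
        super().__init__()
        # inputs: blurry (3) + short-exposure (3) + blur mask (1)
        self.fuse = nn.Sequential(
            nn.Conv2d(7, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, blurry, short_exp, mask):
        x = torch.cat([blurry, short_exp, mask], dim=1)
        return blurry + self.fuse(x)  # residual restoration

class ShortExposureGuidedRefiner(nn.Module):
    """Toy iterative refiner standing in for a diffusion model conditioned on
    short-exposure features and the blur mask."""
    def __init__(self, ch=32):
        super().__init__()
        self.denoise = nn.Sequential(
            nn.Conv2d(7, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, coarse, short_exp, mask, steps=4):
        x = coarse
        for _ in range(steps):  # simple fixed-point loop in place of a real sampler
            x = x + 0.1 * self.denoise(torch.cat([x, short_exp, mask], dim=1))
        return x

def exprdiff_like(blurry, short_exp):
    # Untrained modules, instantiated inline purely to show the data flow.
    mask = ContextLocalBlurDetector()(blurry)
    coarse = BlurAwareGuidedRestoration()(blurry, short_exp, mask)
    return ShortExposureGuidedRefiner()(coarse, short_exp, mask)

if __name__ == "__main__":
    b, s = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    print(exprdiff_like(b, s).shape)  # torch.Size([1, 3, 64, 64])
```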
Related papers
- Sparse-DeRF: Deblurred Neural Radiance Fields from Sparse View [17.214047499850487]
This paper focuses on constructing deblurred neural radiance fields (DeRF) from sparse views for more pragmatic real-world scenarios.
Sparse-DeRF successfully regularizes the complicated joint optimization, alleviating overfitting artifacts and enhancing the quality of the radiance fields.
We demonstrate the effectiveness of the Sparse-DeRF with extensive quantitative and qualitative experimental results by training DeRF from 2-view, 4-view, and 6-view blurry images.
arXiv Detail & Related papers (2024-07-09T07:36:54Z)
- CLIPAway: Harmonizing Focused Embeddings for Removing Objects via Diffusion Models [16.58831310165623]
CLIPAway is a novel approach leveraging CLIP embeddings to focus on background regions while excluding foreground elements.
It enhances inpainting accuracy and quality by identifying embeddings that prioritize the background.
Unlike other methods that rely on specialized training datasets or costly manual annotations, CLIPAway provides a flexible, plug-and-play solution.
arXiv Detail & Related papers (2024-06-13T17:50:28Z)
- DiffUHaul: A Training-Free Method for Object Dragging in Images [78.93531472479202]
We propose a training-free method, dubbed DiffUHaul, for the object dragging task.
We first apply attention masking in each denoising step to make the generation more disentangled across different objects.
In the early denoising steps, we interpolate the attention features between source and target images to smoothly fuse new layouts with the original appearance (a simplified sketch of this interpolation appears after this list).
arXiv Detail & Related papers (2024-06-03T17:59:53Z)
- Global Structure-Aware Diffusion Process for Low-Light Image Enhancement [64.69154776202694]
This paper studies a diffusion-based framework to address the low-light image enhancement problem.
We advocate for the regularization of its inherent ODE-trajectory.
Experimental evaluations reveal that the proposed framework attains distinguished performance in low-light enhancement.
arXiv Detail & Related papers (2023-10-26T17:01:52Z)
- Fearless Luminance Adaptation: A Macro-Micro-Hierarchical Transformer for Exposure Correction [65.5397271106534]
It is difficult for a single neural network to handle all exposure problems.
In particular, convolutions hinder the ability to restore faithful color or details in extremely over-/under-exposed regions.
We propose a Macro-Micro-Hierarchical transformer, which consists of a macro attention to capture long-range dependencies, a micro attention to extract local features, and a hierarchical structure for coarse-to-fine correction (a rough sketch of this design appears after this list).
arXiv Detail & Related papers (2023-09-02T09:07:36Z)
- Adaptive Window Pruning for Efficient Local Motion Deblurring [81.35217764881048]
Local motion blur commonly occurs in real-world photography due to the mixing between moving objects and stationary backgrounds during exposure.
Existing image deblurring methods predominantly focus on global deblurring.
This paper aims to adaptively and efficiently restore high-resolution locally blurred images.
arXiv Detail & Related papers (2023-06-25T15:24:00Z)
- Take a Prior from Other Tasks for Severe Blur Removal [52.380201909782684]
We propose a cross-level feature learning strategy based on knowledge distillation to learn the priors.
We also design a semantic prior embedding layer with multi-level aggregation and semantic attention transformation to integrate the priors effectively.
Experiments on natural image deblurring benchmarks and real-world images, such as the GoPro and RealBlur datasets, demonstrate our method's effectiveness and generalization ability.
arXiv Detail & Related papers (2023-02-14T08:30:51Z)
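Regarding the DiffUHaul entry above: the following is a hedged, simplified sketch of what interpolating attention features between a source and a target denoising pass during the early steps could look like. The blending schedule, tensor shapes, and function name are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of interpolating attention features between a source and
# a target denoising pass during the early steps (not DiffUHaul's actual code).
import torch

def blend_attention_features(src_feats, tgt_feats, step, num_steps, early_frac=0.3):
    """Linearly interpolate attention features only within the early denoising steps."""
    early_steps = max(1, int(early_frac * num_steps))
    if step >= early_steps:
        return tgt_feats  # later steps: keep the target features untouched
    alpha = 1.0 - step / early_steps  # stronger source influence at the very start
    return alpha * src_feats + (1.0 - alpha) * tgt_feats

if __name__ == "__main__":
    src = torch.rand(2, 8, 64, 64)  # e.g. per-head attention features (illustrative shape)
    tgt = torch.rand(2, 8, 64, 64)
    for t in range(10):
        out = blend_attention_features(src, tgt, t, num_steps=10)
    print(out.shape)  # torch.Size([2, 8, 64, 64])
```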
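Regarding the Fearless Luminance Adaptation entry above: a rough, assumption-laden sketch of a macro/micro attention block combined with coarse-to-fine correction. The pooling-based macro attention, depthwise-convolution micro branch, and scale schedule are illustrative stand-ins rather than the paper's architecture.

```python
# Hypothetical sketch of a macro/micro attention block with coarse-to-fine
# correction (illustrative only; not the paper's architecture or code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MacroMicroBlock(nn.Module):
    def __init__(self, ch=32, heads=4, pool=8):
        super().__init__()
        self.pool = pool
        # macro branch: self-attention over pooled tokens for long-range dependencies
        self.macro = nn.MultiheadAttention(ch, heads, batch_first=True)
        # micro branch: depthwise + pointwise convs for local features
        self.micro = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch), nn.GELU(),
            nn.Conv2d(ch, ch, 1),
        )
    def forward(self, x):
        b, c, h, w = x.shape
        t = F.adaptive_avg_pool2d(x, self.pool).flatten(2).transpose(1, 2)  # (B, pool*pool, C)
        g, _ = self.macro(t, t, t)
        g = g.transpose(1, 2).reshape(b, c, self.pool, self.pool)
        g = F.interpolate(g, size=(h, w), mode="bilinear", align_corners=False)
        return x + g + self.micro(x)

class CoarseToFineCorrector(nn.Module):
    """Hierarchical correction: refine from a coarse scale up to full resolution."""
    def __init__(self, ch=32, scales=(4, 2, 1)):
        super().__init__()
        self.inp = nn.Conv2d(3, ch, 3, padding=1)
        self.blocks = nn.ModuleList([MacroMicroBlock(ch) for _ in scales])
        self.out = nn.Conv2d(ch, 3, 3, padding=1)
        self.scales = scales
    def forward(self, img):
        feat = self.inp(img)
        for s, blk in zip(self.scales, self.blocks):
            low = F.avg_pool2d(feat, s) if s > 1 else feat  # correct coarse scales first
            low = blk(low)
            if s > 1:
                low = F.interpolate(low, size=feat.shape[-2:], mode="bilinear", align_corners=False)
            feat = feat + low  # progressively accumulate corrections
        return img + self.out(feat)

if __name__ == "__main__":
    print(CoarseToFineCorrector()(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```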