Deep-Masking Generative Network: A Unified Framework for Background
Restoration from Superimposed Images
- URL: http://arxiv.org/abs/2010.04324v2
- Date: Mon, 12 Apr 2021 09:47:26 GMT
- Title: Deep-Masking Generative Network: A Unified Framework for Background
Restoration from Superimposed Images
- Authors: Xin Feng, Wenjie Pei, Zihui Jia, Fanglin Chen, David Zhang, and
Guangming Lu
- Abstract summary: We present the Deep-Masking Generative Network (DMGN), which is a unified framework for background restoration from superimposed images.
A coarse background image and a noise image are first generated in parallel, then the noise image is further leveraged to refine the background image.
Our experiments show that our DMGN consistently outperforms state-of-the-art methods specifically designed for each single task.
- Score: 36.7646332887842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Restoring the clean background from the superimposed images containing a
noisy layer is the common crux of a classical category of tasks on image
restoration such as image reflection removal, image deraining and image
dehazing. These tasks are typically formulated and tackled individually due to
the diverse and complicated appearance patterns of noise layers within the
image. In this work we present the Deep-Masking Generative Network (DMGN),
which is a unified framework for background restoration from the superimposed
images and is able to cope with different types of noise. Our proposed DMGN
follows a coarse-to-fine generative process: a coarse background image and a
noise image are first generated in parallel, then the noise image is further
leveraged to refine the background image to achieve a higher-quality background
image. In particular, we design the novel Residual Deep-Masking Cell as the
core operating unit for our DMGN to enhance the effective information and
suppress the negative information during image generation via learning a gating
mask to control the information flow. By iteratively employing this Residual
Deep-Masking Cell, our proposed DMGN is able to progressively generate both a
high-quality background image and a noise image. Furthermore, we propose a
two-pronged strategy to effectively leverage the generated noise image as
contrasting cues to facilitate the refinement of the background image.
Extensive experiments across three typical tasks for image background
restoration, including image reflection removal, image rain streak removal and
image dehazing, show that our DMGN consistently outperforms state-of-the-art
methods specifically designed for each single task.
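The gating idea behind the Residual Deep-Masking Cell can be sketched in a few lines. The sketch below is a toy, dense-layer illustration of the general mechanism the abstract describes (a learned mask in (0, 1) scales candidate features before a residual addition); the weight shapes, activations, and function names are illustrative assumptions, not the paper's actual convolutional architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def residual_deep_masking_cell(x, w_feat, w_gate):
    """Hypothetical sketch of one gated residual update: a learned
    mask in (0, 1) scales the candidate features, passing useful
    information and suppressing noise before the residual addition."""
    candidate = np.tanh(x @ w_feat)   # candidate features
    mask = sigmoid(x @ w_gate)        # gating mask, elementwise in (0, 1)
    return x + mask * candidate       # masked residual update

# Toy demo: apply the cell iteratively, as DMGN applies it progressively.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # 4 "pixels", 8 channels
w_feat = rng.standard_normal((8, 8)) * 0.1
w_gate = rng.standard_normal((8, 8)) * 0.1
for _ in range(3):
    x = residual_deep_masking_cell(x, w_feat, w_gate)
print(x.shape)  # (4, 8)
```

Because the mask is multiplicative and bounded, each iteration can only attenuate (never amplify) the candidate update, which is what lets the same cell serve both the background branch and the noise branch.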
Related papers
- MRIR: Integrating Multimodal Insights for Diffusion-based Realistic Image Restoration [17.47612023350466]
We propose MRIR, a diffusion-based restoration method with multimodal insights.
For the textual level, we harness the power of the pre-trained multimodal large language model to infer meaningful semantic information from low-quality images.
For the visual level, we focus on pixel-level control, utilizing a Pixel-level Processor and ControlNet to control spatial structures.
arXiv Detail & Related papers (2024-07-04T04:55:14Z)
- BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion [61.90969199199739]
BrushNet is a novel plug-and-play dual-branch model engineered to embed pixel-level masked image features into any pre-trained DM.
BrushNet demonstrates superior performance over existing models across seven key metrics, including image quality, mask region preservation, and textual coherence.
arXiv Detail & Related papers (2024-03-11T17:59:31Z) - Reti-Diff: Illumination Degradation Image Restoration with Retinex-based
Latent Diffusion Model [59.08821399652483]
Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination.
Among these algorithms, diffusion model (DM)-based methods have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution.
We propose to leverage DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task.
Reti-Diff comprises two key components: the Retinex-based latent DM (RLDM) and the Retinex-guided transformer (RG
arXiv Detail & Related papers (2023-11-20T09:55:06Z)
- All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet)
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z)
- Deep Iteration Assisted by Multi-level Obey-pixel Network Discriminator (DIAMOND) for Medical Image Recovery [0.6719751155411076]
Both traditional iterative methods and modern deep networks have attracted much attention and achieved significant improvements in reconstructing satisfactory images.
This study combines their advantages into one unified mathematical model and proposes a general image restoration strategy to deal with such problems.
arXiv Detail & Related papers (2021-02-08T16:57:33Z)
- Dual Adversarial Network: Toward Real-world Noise Removal and Noise Generation [52.75909685172843]
Real-world image noise removal is a long-standing yet very challenging task in computer vision.
We propose a novel unified framework to deal with the noise removal and noise generation tasks.
Our method learns the joint distribution of the clean-noisy image pairs.
arXiv Detail & Related papers (2020-07-12T09:16:06Z)
- Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results to restore missing semantics, e.g., color, patch, resolution, of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
- Blur, Noise, and Compression Robust Generative Adversarial Networks [85.68632778835253]
We propose blur, noise, and compression robust GAN (BNCR-GAN) to learn a clean image generator directly from degraded images.
Inspired by NR-GAN, BNCR-GAN uses a multiple-generator model composed of image, blur-kernel, noise, and quality-factor generators.
We demonstrate the effectiveness of BNCR-GAN through large-scale comparative studies on CIFAR-10 and a generality analysis on FFHQ.
arXiv Detail & Related papers (2020-03-17T17:56:22Z)