Blind Image Decomposition
- URL: http://arxiv.org/abs/2108.11364v1
- Date: Wed, 25 Aug 2021 17:37:19 GMT
- Title: Blind Image Decomposition
- Authors: Junlin Han, Weihao Li, Pengfei Fang, Chunyi Sun, Jie Hong, Mohammad
Ali Armin, Lars Petersson, Hongdong Li
- Abstract summary: We present Blind Image Decomposition (BID), which requires separating a superimposed image into constituent underlying images in a blind setting.
How to decompose superimposed images, like rainy images, into distinct source components is a crucial step towards real-world vision systems.
We propose a simple yet general Blind Image Decomposition Network (BIDeN) to serve as a strong baseline for future work.
- Score: 53.760745569495825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present and study a novel task named Blind Image Decomposition (BID),
which requires separating a superimposed image into constituent underlying
images in a blind setting, that is, both the source components involved in
mixing as well as the mixing mechanism are unknown. For example, rain may
consist of multiple components, such as rain streaks, raindrops, snow, and
haze. Rainy images can be treated as an arbitrary combination of these
components, some of them or all of them. How to decompose superimposed images,
like rainy images, into distinct source components is a crucial step towards
real-world vision systems. To facilitate research on this new task, we
construct three benchmark datasets, including mixed image decomposition across
multiple domains, real-scenario deraining, and joint
shadow/reflection/watermark removal. Moreover, we propose a simple yet general
Blind Image Decomposition Network (BIDeN) to serve as a strong baseline for
future work. Experimental results demonstrate the tenability of our benchmarks
and the effectiveness of BIDeN. Code and project page are available.
Related papers
- Factorized Diffusion: Perceptual Illusions by Noise Decomposition [15.977340635967018]
We present a zero-shot method to control each individual component through diffusion model sampling.
For certain decompositions, our method recovers prior approaches to compositional generation and spatial control.
We show that we can extend our approach to generate hybrid images from real images.
arXiv Detail & Related papers (2024-04-17T17:59:59Z)
- MULAN: A Multi Layer Annotated Dataset for Controllable Text-to-Image Generation [54.64194935409982]
We introduce MuLAn: a novel dataset comprising over 44K MUlti-Layer-wise RGBA decompositions.
MuLAn is the first photorealistic resource providing instance decomposition and spatial information for high quality images.
We aim to encourage the development of novel generation and editing technology, in particular layer-wise solutions.
arXiv Detail & Related papers (2024-04-03T14:58:00Z)
- Strong and Controllable Blind Image Decomposition [57.682079186903195]
Blind image decomposition aims to decompose all components present in an image.
Users might want to retain certain degradations, such as watermarks, for copyright protection.
We design an architecture named controllable blind image decomposition network.
arXiv Detail & Related papers (2024-03-15T17:59:44Z)
- Neural Spline Fields for Burst Image Fusion and Layer Separation [40.9442467471977]
We propose a versatile intermediate representation: a two-layer alpha-composited image plus flow model constructed with neural spline fields.
Our method is able to jointly fuse a burst image capture into one high-resolution reconstruction and decompose it into transmission and obstruction layers.
We find that, with no post-processing steps or learned priors, our generalizable model is able to outperform existing dedicated single-image and multi-view obstruction removal approaches.
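The two-layer alpha-composited representation described above can be sketched as standard "over" compositing. This is a minimal illustration of the compositing equation only; the paper additionally models per-layer motion with neural spline fields, and all array names here are hypothetical.

```python
import numpy as np

# Illustrative two-layer model: an obstruction layer composited over
# the transmitted scene with a per-pixel opacity map.
transmission = np.full((2, 2, 3), 0.8)   # scene behind the obstruction
obstruction = np.zeros((2, 2, 3))        # e.g. a dark occluder or reflection layer
alpha = np.full((2, 2, 1), 0.25)         # obstruction opacity per pixel

# Standard "over" compositing: alpha broadcasts across the color channels.
composite = alpha * obstruction + (1.0 - alpha) * transmission
print(composite[0, 0, 0])  # 0.25 * 0.0 + 0.75 * 0.8 = 0.6 (up to float rounding)
```

Layer separation then amounts to inverting this compositing: recovering `transmission`, `obstruction`, and `alpha` from observations of `composite` across a burst.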
arXiv Detail & Related papers (2023-12-21T18:54:19Z)
- From Synthetic to Real: Image Dehazing Collaborating with Unlabeled Real Data [58.50411487497146]
We propose a novel image dehazing framework collaborating with unlabeled real data.
First, we develop a disentangled image dehazing network (DID-Net), which disentangles the feature representations into three component maps.
Then a disentangled-consistency mean-teacher network (DMT-Net) is employed to collaborate with unlabeled real data for boosting single image dehazing.
arXiv Detail & Related papers (2021-08-06T04:00:28Z)
- Beyond Monocular Deraining: Parallel Stereo Deraining Network Via Semantic Prior [103.49307603952144]
Most existing deraining algorithms use a single input image and aim to recover a clean image.
We present a Paired Rain Removal Network (PRRNet), which exploits both stereo images and semantic information.
Experiments on both monocular and the newly proposed stereo rainy datasets demonstrate that the proposed method achieves the state-of-the-art performance.
arXiv Detail & Related papers (2021-05-09T04:15:10Z)
- Exploiting Global and Local Attentions for Heavy Rain Removal on Single Images [35.596659286313766]
Heavy rain removal from a single image is the task of simultaneously eliminating rain streaks and fog.
Most existing rain removal methods do not generalize well for the heavy rain case.
We propose a novel network architecture consisting of three sub-networks to remove heavy rain from a single image.
arXiv Detail & Related papers (2021-04-16T14:08:27Z)
- Multi-Scale Progressive Fusion Network for Single Image Deraining [84.0466298828417]
Rain streaks in the air appear in various blurring degrees and resolutions due to different distances from their positions to the camera.
Similar rain patterns are visible in a rain image as well as its multi-scale (or multi-resolution) versions.
In this work, we explore the multi-scale collaborative representation for rain streaks from the perspective of input image scales and hierarchical deep features.
arXiv Detail & Related papers (2020-03-24T17:22:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.