Deep Attentional Guided Image Filtering
- URL: http://arxiv.org/abs/2112.06401v1
- Date: Mon, 13 Dec 2021 03:26:43 GMT
- Title: Deep Attentional Guided Image Filtering
- Authors: Zhiwei Zhong, Xianming Liu, Junjun Jiang, Debin Zhao, Xiangyang Ji
- Abstract summary: The guided filter is a fundamental tool in computer vision and computer graphics.
We propose an effective framework named deep attentional guided image filtering.
We show that the proposed framework compares favorably with the state-of-the-art methods in a wide range of guided image filtering applications.
- Score: 90.20699804116646
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The guided filter is a fundamental tool in computer vision and computer
graphics that aims to transfer structure information from a guidance image to a
target image. Most existing methods construct filter kernels from the guidance
alone, without considering the mutual dependency between the guidance and the
target. However, the two images typically contain significantly different edges,
so simply transferring all of the guidance's structural information to the
target would introduce various artifacts. To cope with this problem, we propose
an effective framework named deep attentional guided image filtering, the
filtering process of which can fully integrate the complementary information
contained in both images. Specifically, we propose an attentional kernel
learning module to generate dual sets of filter kernels from the guidance and
the target, respectively, and then adaptively combine them by modeling the
pixel-wise dependency between the two images. Meanwhile, we propose a
multi-scale guided image filtering module to progressively generate the
filtering result with the constructed kernels in a coarse-to-fine manner.
Correspondingly, a multi-scale fusion strategy is introduced to reuse the
intermediate results in the coarse-to-fine process. Extensive experiments show
that the proposed framework compares favorably with the state-of-the-art
methods in a wide range of guided image filtering applications, such as guided
super-resolution, cross-modality restoration, texture removal, and semantic
segmentation.
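The core idea in the abstract — building one set of kernels from the guidance and one from the target, then blending them with a pixel-wise attention weight — can be illustrated with a toy sketch. This is not the authors' network: it is a minimal NumPy approximation in which simple range kernels stand in for the learned kernels, and the attention weight (`alpha`) is a hypothetical similarity measure rather than the paper's learned module.

```python
import numpy as np

def _range_kernel(padded, ci, cj, r, sigma):
    # Normalized range kernel over a (2r+1)x(2r+1) window, weighting
    # neighbors by intensity similarity to the center pixel.
    patch = padded[ci - r:ci + r + 1, cj - r:cj + r + 1]
    w = np.exp(-((patch - padded[ci, cj]) ** 2) / (2 * sigma ** 2))
    return w / w.sum()

def attentional_guided_filter(target, guidance, r=2, sigma=0.1):
    # Toy per-pixel filter: build one kernel from the guidance and one
    # from the target, then blend them with an attention weight that
    # trusts the guidance kernel more where the two images locally agree.
    h, w = target.shape
    pad_t = np.pad(target, r, mode="reflect")
    pad_g = np.pad(guidance, r, mode="reflect")
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            ci, cj = i + r, j + r
            k_g = _range_kernel(pad_g, ci, cj, r, sigma)
            k_t = _range_kernel(pad_t, ci, cj, r, sigma)
            # Hypothetical attention weight in (0, 1]: high when the
            # guidance and target pixels agree, low when they differ.
            alpha = np.exp(-abs(float(guidance[i, j]) - float(target[i, j])))
            # Convex combination of the two normalized kernels.
            k = alpha * k_g + (1.0 - alpha) * k_t
            out[i, j] = (k * pad_t[ci - r:ci + r + 1, cj - r:cj + r + 1]).sum()
    return out
```

Because each kernel is normalized and the blend is convex, the combined kernel still sums to one, so flat regions of the target pass through unchanged while edges present in both images are preserved. The paper's framework replaces both the kernels and the attention weight with learned, multi-scale modules.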
Related papers
- Guided Image Restoration via Simultaneous Feature and Image Guided Fusion [67.30078778732998]
We propose a Simultaneous Feature and Image Guided Fusion (SFIGF) network.
It considers feature and image-level guided fusion following the guided filter (GF) mechanism.
Since guided fusion is implemented in both feature and image domains, the proposed SFIGF is expected to faithfully reconstruct both contextual and textural information.
arXiv Detail & Related papers (2023-12-14T12:15:45Z) - Improving Human-Object Interaction Detection via Virtual Image Learning [68.56682347374422]
Human-Object Interaction (HOI) detection aims to understand the interactions between humans and objects.
In this paper, we propose to alleviate the impact of such an unbalanced distribution via Virtual Image Learning (VIL).
A novel label-to-image approach, Multiple Steps Image Creation (MUSIC), is proposed to create a high-quality dataset that has a consistent distribution with real images.
arXiv Detail & Related papers (2023-08-04T10:28:48Z) - Image Completion via Dual-path Cooperative Filtering [17.62197747945094]
We propose a predictive filtering method for restoring images based on the input scene.
Deep feature-level semantic filtering is introduced to fill in missing information.
Experiments on three challenging image completion datasets show that our proposed DCF outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-04-30T03:54:53Z) - Semantic-aware Occlusion Filtering Neural Radiance Fields in the Wild [10.066261691282016]
We present a learning framework for reconstructing neural scene representations from unconstrained tourist photos.
We introduce SF-NeRF, aiming to disentangle the static and transient components with only a few images given.
We present two techniques to prevent ambiguous decomposition and noisy results of the filtering module.
arXiv Detail & Related papers (2023-03-05T11:50:34Z) - Single Stage Virtual Try-on via Deformable Attention Flows [51.70606454288168]
Virtual try-on aims to generate a photo-realistic fitting result given an in-shop garment and a reference person image.
We develop a novel Deformable Attention Flow (DAFlow) which applies the deformable attention scheme to multi-flow estimation.
Our proposed method achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-07-19T10:01:31Z) - Multi-scale Image Decomposition using a Local Statistical Edge Model [0.0]
We present a progressive image decomposition method based on a novel non-linear filter named Sub-window Variance filter.
Our method is specifically designed for image detail enhancement purpose.
arXiv Detail & Related papers (2021-05-05T09:38:07Z) - TSIT: A Simple and Versatile Framework for Image-to-Image Translation [103.92203013154403]
We introduce a simple and versatile framework for image-to-image translation.
We provide a carefully designed two-stream generative model with newly proposed feature transformations.
This allows multi-scale semantic structure information and style representation to be effectively captured and fused by the network.
A systematic study compares the proposed method with several state-of-the-art task-specific baselines, verifying its effectiveness in both perceptual quality and quantitative evaluations.
arXiv Detail & Related papers (2020-07-23T15:34:06Z) - Multi-Channel Attention Selection GANs for Guided Image-to-Image Translation [148.9985519929653]
We propose a novel model named Multi-Channel Attention Selection Generative Adversarial Network (SelectionGAN) for guided image-to-image translation.
The proposed framework and modules are unified solutions and can be applied to solve other generation tasks such as semantic image synthesis.
arXiv Detail & Related papers (2020-02-03T23:17:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.