Guided Image Restoration via Simultaneous Feature and Image Guided Fusion
- URL: http://arxiv.org/abs/2312.08853v1
- Date: Thu, 14 Dec 2023 12:15:45 GMT
- Title: Guided Image Restoration via Simultaneous Feature and Image Guided Fusion
- Authors: Xinyi Liu, Qian Zhao, Jie Liang, Hui Zeng, Deyu Meng and Lei Zhang
- Abstract summary: We propose a Simultaneous Feature and Image Guided Fusion (SFIGF) network.
It considers feature- and image-level guided fusion following the guided filter (GF) mechanism.
Since guided fusion is implemented in both the feature and image domains, the proposed SFIGF is expected to faithfully reconstruct both contextual and textural information.
- Score: 67.30078778732998
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Guided image restoration (GIR), such as guided depth map super-resolution and
pan-sharpening, aims to enhance a target image using guidance information from
another image of the same scene. Currently, joint image filtering-inspired deep
learning-based methods represent the state-of-the-art for GIR tasks. Those
methods either deal with GIR in an end-to-end way by elaborately designing
filtering-oriented deep neural network (DNN) modules, focusing on the
feature-level fusion of inputs; or explicitly make use of the traditional
joint filtering mechanism by parameterizing filtering coefficients with DNNs,
working on image-level fusion. The former are good at recovering contextual
information but tend to lose fine-grained details, while the latter better
retain textural information but may introduce content distortions. In this
work, to inherit the advantages of both methodologies while mitigating their
limitations, we propose a Simultaneous Feature and Image Guided Fusion (SFIGF)
network that simultaneously considers feature- and image-level guided fusion
following the guided filter (GF) mechanism. In the
feature domain, we connect the cross-attention (CA) with GF, and propose a
GF-inspired CA module for better feature-level fusion; in the image domain, we
fully explore the GF mechanism and design a GF-like structure for better
image-level fusion. Since guided fusion is implemented in both feature and
image domains, the proposed SFIGF is expected to faithfully reconstruct both
contextual and textural information from the sources and thus lead to better
GIR results. We apply SFIGF to four typical GIR tasks, and experimental
results on these tasks demonstrate its effectiveness and general applicability.
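For reference, the GF mechanism that SFIGF builds on is the classic guided filter of He et al., which fits a local linear model q = a*I + b between the guidance I and the target p. The sketch below is a minimal NumPy/SciPy implementation of that baseline filter only, not of the SFIGF modules; the radius r and regularizer eps are illustrative defaults.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _box(x, r):
    # local mean over a (2r+1) x (2r+1) window
    return uniform_filter(x, size=2 * r + 1, mode="reflect")

def guided_filter(I, p, r=8, eps=1e-3):
    """Classic guided filter: q = a*I + b, with a, b estimated from
    local statistics of the guidance I and the target p."""
    mean_I, mean_p = _box(I, r), _box(p, r)
    cov_Ip = _box(I * p, r) - mean_I * mean_p   # local covariance guidance/target
    var_I = _box(I * I, r) - mean_I * mean_I    # local variance of guidance
    a = cov_Ip / (var_I + eps)  # gain: large on guidance edges, small in flat areas
    b = mean_p - a * mean_I     # offset
    # average coefficients over overlapping windows, then apply the linear model
    return _box(a, r) * I + _box(b, r)

# Hypothetical usage for guided depth super-resolution: guidance is the HR
# luminance image, target is the bicubically upsampled LR depth map.
# refined = guided_filter(luma.astype(np.float64), depth_up.astype(np.float64))
```

Using, for example, a high-resolution luminance image as guidance and an upsampled depth map as target gives the guided depth map super-resolution setting mentioned in the abstract.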
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- DAF-Net: A Dual-Branch Feature Decomposition Fusion Network with Domain Adaptive for Infrared and Visible Image Fusion [21.64382683858586]
Infrared and visible image fusion aims to combine complementary information from both modalities to provide a more comprehensive scene understanding.
We propose a dual-branch feature decomposition fusion network (DAF-Net) whose domain-adaptive alignment is based on the Multi-Kernel Maximum Mean Discrepancy (MK-MMD); a sketch of this alignment term follows this entry.
By incorporating MK-MMD, the DAF-Net effectively aligns the latent feature spaces of visible and infrared images, thereby improving the quality of the fused images.
arXiv Detail & Related papers (2024-09-18T02:14:08Z)
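For context on the MK-MMD alignment in the DAF-Net summary above: multi-kernel maximum mean discrepancy measures the distance between two feature distributions under a sum of RBF kernels at several bandwidths. The sketch below is the standard (biased) MK-MMD estimator in NumPy, not DAF-Net's exact loss; the bandwidths and variable names are illustrative assumptions.

```python
import numpy as np

def mk_mmd(X, Y, gammas=(0.5, 1.0, 2.0)):
    """Squared MMD between feature sets X (n, d) and Y (m, d) under a
    multi-kernel: a sum of RBF kernels with bandwidth parameters `gammas`."""
    def sq_dists(A, B):
        # pairwise squared Euclidean distances between rows of A and B
        return ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)

    def kernel(A, B):
        d2 = sq_dists(A, B)
        return sum(np.exp(-g * d2) for g in gammas)

    n, m = len(X), len(Y)
    # biased estimator: E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return (kernel(X, X).sum() / n**2
            + kernel(Y, Y).sum() / m**2
            - 2.0 * kernel(X, Y).sum() / (n * m))

# Hypothetical usage: align pooled encoder features of the two modalities.
# feats_ir, feats_vis: (batch, channels) arrays from the two branches
# loss_align = mk_mmd(feats_ir, feats_vis)
```

Minimizing such a term pulls the infrared and visible latent feature distributions together, which is the alignment effect the summary describes.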
- Mutual-Guided Dynamic Network for Image Fusion [51.615598671899335]
We propose a novel mutual-guided dynamic network (MGDN) for image fusion, which allows for effective information utilization across different locations and inputs.
Experimental results on five benchmark datasets demonstrate that our proposed method outperforms existing methods on four image fusion tasks.
arXiv Detail & Related papers (2023-08-24T03:50:37Z)
- A Task-guided, Implicitly-searched and Meta-initialized Deep Model for Image Fusion [69.10255211811007]
We present a Task-guided, Implicitly-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z)
- LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images [98.36300655482196]
We formulate the fusion task mathematically, and establish a connection between its optimal solution and the network architecture that can implement it.
In particular, we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm producing the learnable model.
Based on this novel network architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images.
arXiv Detail & Related papers (2023-04-11T12:11:23Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative adversarial network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- MFIF-GAN: A New Generative Adversarial Network for Multi-Focus Image Fusion [29.405149234582623]
Multi-Focus Image Fusion (MFIF) is a promising technique to obtain all-in-focus images.
One of the research trends of MFIF is to avoid the defocus spread effect (DSE) around the focus/defocus boundary (FDB).
We propose a network termed MFIF-GAN to generate focus maps in which the foreground regions are correctly larger than the corresponding objects; a minimal sketch of focus-map fusion follows this entry.
arXiv Detail & Related papers (2020-09-21T09:36:34Z)
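For context on the MFIF-GAN summary above: once a focus map is predicted, fusion itself reduces to a per-pixel blend of the two sources. The sketch below shows that blend; the dilation comment illustrates why a slightly enlarged foreground mask counters the defocus spread effect. Function and variable names are hypothetical.

```python
import numpy as np

def fuse_with_focus_map(src_a, src_b, focus_map):
    """Per-pixel blend of two multi-focus sources with a focus map M in [0, 1]:
    fused = M * src_a + (1 - M) * src_b."""
    M = np.clip(focus_map.astype(np.float64), 0.0, 1.0)
    if src_a.ndim == 3:          # broadcast the map over color channels
        M = M[..., None]
    return M * src_a + (1.0 - M) * src_b

# A focus map whose foreground is slightly larger than the in-focus object
# keeps blur from the defocused source out of the boundary region (the DSE).
# from scipy.ndimage import binary_dilation
# enlarged = binary_dilation(focus_map > 0.5, iterations=3).astype(np.float64)
# fused = fuse_with_focus_map(img_near, img_far, enlarged)
```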