Recurrent Feature Reasoning for Image Inpainting
- URL: http://arxiv.org/abs/2008.03737v1
- Date: Sun, 9 Aug 2020 14:40:04 GMT
- Title: Recurrent Feature Reasoning for Image Inpainting
- Authors: Jingyuan Li, Ning Wang, Lefei Zhang, Bo Du, Dacheng Tao
- Abstract summary: Recurrent Feature Reasoning (RFR) network is mainly constructed by a plug-and-play Recurrent Feature Reasoning module and a Knowledge Consistent Attention (KCA) module.
The RFR module recurrently infers the hole boundaries of the convolutional feature maps and then uses them as clues for further inference.
To capture information from distant places in the feature map for RFR, we further develop KCA and incorporate it in RFR.
- Score: 110.24760191732905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing inpainting methods have achieved promising performance for
recovering regular or small image defects. However, filling in large continuous
holes remains difficult due to the lack of constraints for the hole center. In
this paper, we devise a Recurrent Feature Reasoning (RFR) network which is
mainly constructed by a plug-and-play Recurrent Feature Reasoning module and a
Knowledge Consistent Attention (KCA) module. Analogous to how humans solve
puzzles (i.e., first solve the easier parts and then use the results as
additional information to solve difficult parts), the RFR module recurrently
infers the hole boundaries of the convolutional feature maps and then uses them
as clues for further inference. The module progressively strengthens the
constraints for the hole center and the results become explicit. To capture
information from distant places in the feature map for RFR, we further develop
KCA and incorporate it in RFR. Empirically, we first compare the proposed
RFR-Net with existing backbones, demonstrating that RFR-Net is more efficient
(e.g., a 4% SSIM improvement for the same model size). We then place the
network in the context of the current state-of-the-art, where it exhibits
improved performance. The corresponding source code is available at:
https://github.com/jingyuanli001/RFR-Inpainting
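As a rough illustration of the idea described in the abstract, the sketch below implements a toy recurrent feature reasoning loop in PyTorch: a mask-aware convolution fills a one-pixel ring at the hole boundary and shrinks the mask, a simplified non-local attention step borrows information from distant locations while blending its score map with that of the previous iteration (a loose stand-in for KCA), and the loop repeats until the hole is covered. All names and choices here (PartialConvBlock, SimpleConsistentAttention, n_iters, the momentum blend of attention scores, the plain mean used to merge intermediate feature maps) are hypothetical simplifications, not the authors' implementation; the official RFR-Net code at the repository above uses partial-convolution-based area identification and adaptive feature merging instead.

# Minimal sketch of the recurrent feature reasoning idea (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConvBlock(nn.Module):
    # Mask-aware convolution: fills features next to the hole boundary and
    # returns a shrunken hole mask ("solve the easy boundary first").
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, feat, mask):
        # feat: (b, c, h, w); mask: (b, 1, h, w), 1 = known, 0 = hole.
        out = self.conv(feat * mask)
        kernel = torch.ones(1, 1, 3, 3, device=mask.device)
        cover = F.conv2d(mask, kernel, padding=1)    # known pixels per 3x3 window
        valid = (cover > 0).float()                  # reachable from the known area
        out = out / (cover + 1e-8) * valid           # re-normalise the filled part
        return out, valid                            # hole shrinks by one ring

class SimpleConsistentAttention(nn.Module):
    # Non-local attention whose score map is blended with the score map of the
    # previous recurrence, loosely mimicking the "knowledge consistent" idea.
    def __init__(self, momentum=0.5):
        super().__init__()
        self.momentum = momentum
        self.prev = None

    def forward(self, feat):
        b, c, h, w = feat.shape
        q = feat.flatten(2)                           # (b, c, hw)
        scores = torch.softmax(q.transpose(1, 2) @ q / c ** 0.5, dim=-1)
        if self.prev is not None:
            scores = self.momentum * scores + (1 - self.momentum) * self.prev
        self.prev = scores.detach()
        out = (q @ scores.transpose(1, 2)).view(b, c, h, w)
        return feat + out

class RecurrentFeatureReasoning(nn.Module):
    # Repeatedly infer the hole boundary of the feature map and feed the
    # result back as a clue for the next iteration.
    def __init__(self, channels, n_iters=6):
        super().__init__()
        self.pconv = PartialConvBlock(channels)
        self.attn = SimpleConsistentAttention()
        self.n_iters = n_iters

    def forward(self, feat, mask):
        self.attn.prev = None
        intermediate = []
        for _ in range(self.n_iters):
            feat, mask = self.pconv(feat, mask)       # fill one ring of the hole
            feat = self.attn(feat)                    # borrow distant information
            intermediate.append(feat)
            if mask.min() > 0:                        # hole fully covered
                break
        # The paper merges intermediate feature maps adaptively; a plain mean
        # is used here purely for illustration.
        return torch.stack(intermediate).mean(dim=0), mask

# Toy usage: a 64-channel feature map with a central 16x16 hole.
feat = torch.randn(1, 64, 32, 32)
mask = torch.ones(1, 1, 32, 32)
mask[:, :, 8:24, 8:24] = 0
out, final_mask = RecurrentFeatureReasoning(64, n_iters=10)(feat, mask)
print(out.shape, final_mask.min().item())             # torch.Size([1, 64, 32, 32]) 1.0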
Related papers
- Look-Around Before You Leap: High-Frequency Injected Transformer for Image Restoration [46.96362010335177]
In this paper, we propose HIT, a simple yet effective High-frequency Injected Transformer for image restoration.
Specifically, we design a window-wise injection module (WIM), which incorporates abundant high-frequency details into the feature map, to provide reliable references for restoring high-quality images.
In addition, we introduce a spatial enhancement unit (SEU) to preserve essential spatial relationships that may be lost due to the computations carried out across channel dimensions in the BIM.
arXiv Detail & Related papers (2024-03-30T08:05:00Z) - Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z) - Reconstruction-driven Dynamic Refinement based Unsupervised Domain Adaptation for Joint Optic Disc and Cup Segmentation [25.750583118977833]
Glaucoma is one of the leading causes of irreversible blindness.
It remains challenging to train an OD/OC segmentation model that could be deployed successfully to different healthcare centers.
We propose a novel unsupervised domain adaptation (UDA) method called Reconstruction-driven Dynamic Refinement Network (RDR-Net)
arXiv Detail & Related papers (2023-04-10T13:33:13Z) - RecRecNet: Rectangling Rectified Wide-Angle Images by Thin-Plate Spline Model and DoF-based Curriculum Learning [62.86400614141706]
We propose a new learning model, i.e., Rectangling Rectification Network (RecRecNet)
Our model can flexibly warp the source structure to the target domain and achieves an end-to-end unsupervised deformation.
Experiments show the superiority of our solution over the compared methods on both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2023-01-04T15:12:57Z) - Recursive Fusion and Deformable Spatiotemporal Attention for Video Compression Artifact Reduction [36.255863808004065]
Deep learning algorithms have been proposed to recover high-quality videos from low-quality compressed ones.
In this paper, we propose Recursive Fusion (RF) module to model the temporal dependency within a long temporal range.
We also design an efficient and effective Deformable Spatiotemporal Attention (DSTA) module to focus more effort on restoring artifact-rich areas.
arXiv Detail & Related papers (2021-08-04T15:25:27Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - NNCFR: Minimize Counterfactual Regret with Neural Networks [4.418221583366099]
This paper introduces Neural Network Counterfactual Regret Minimization (NNCFR), an improved variant of Deep CFR.
NNCFR converges faster and performs more stably than Deep CFR, and outperforms Deep CFR with respect to exploitability and head-to-head performance on test games.
arXiv Detail & Related papers (2021-05-26T04:58:36Z) - Progressively Guided Alternate Refinement Network for RGB-D Salient Object Detection [63.18846475183332]
We aim to develop an efficient and compact deep network for RGB-D salient object detection.
We propose a progressively guided alternate refinement network to refine it.
Our model outperforms existing state-of-the-art approaches by a large margin.
arXiv Detail & Related papers (2020-08-17T02:55:06Z) - Relational Deep Feature Learning for Heterogeneous Face Recognition [17.494718795454055]
We propose a graph-structured module that extracts global relational information in addition to general facial features.
The proposed method outperforms other state-of-the-art methods on five Heterogeneous Face Recognition (HFR) databases.
arXiv Detail & Related papers (2020-03-02T07:35:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.