Learning Distortion Invariant Representation for Image Restoration from
A Causality Perspective
- URL: http://arxiv.org/abs/2303.06859v2
- Date: Fri, 31 Mar 2023 08:02:01 GMT
- Title: Learning Distortion Invariant Representation for Image Restoration from
A Causality Perspective
- Authors: Xin Li, Bingchen Li, Xin Jin, Cuiling Lan, Zhibo Chen
- Abstract summary: We propose a novel training strategy for image restoration from the causality perspective.
Our method, termed Distortion Invariant representation Learning (DIL), treats each distortion type and degree as one specific confounder.
- Score: 42.10777921339209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, we have witnessed the great advancement of deep
neural networks (DNNs) in image restoration. However, a critical limitation is that
they cannot generalize well to real-world degradations with different degrees
or types. In this paper, we are the first to propose a novel training strategy
for image restoration from the causality perspective, to improve the
generalization ability of DNNs for unknown degradations. Our method, termed
Distortion Invariant representation Learning (DIL), treats each distortion type
and degree as one specific confounder, and learns the distortion-invariant
representation by eliminating the harmful confounding effect of each
degradation. We derive our DIL with the back-door criterion in causality by
modeling the interventions of different distortions from the optimization
perspective. Particularly, we introduce counterfactual distortion augmentation
to simulate the virtual distortion types and degrees as the confounders. Then,
we instantiate the intervention of each distortion with a virtual model
updating based on corresponding distorted images, and eliminate them from the
meta-learning perspective. Extensive experiments demonstrate the effectiveness
of our DIL on the generalization capability for unseen distortion types and
degrees. Our code will be available at
https://github.com/lixinustc/Causal-IR-DIL.
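The two mechanisms named in the abstract, counterfactual distortion augmentation and the virtual model update that is eliminated from a meta-learning perspective, can be pictured with a short training-step sketch. The following is an illustrative approximation only, assuming a Reptile-style interpolation toward the average of per-distortion virtually updated weights; the distortion simulators, loss, and hyper-parameters are hypothetical placeholders rather than the authors' released implementation (see the repository linked above for that).

```python
# Illustrative sketch of a DIL-style training step (NOT the official code).
# Assumptions: a Reptile-style meta update, L1 restoration loss, and two toy
# distortion families (Gaussian noise, downsample-upsample blur) standing in
# for the counterfactual distortion augmentation described in the abstract.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def add_gaussian_noise(x, sigma):
    """Counterfactual distortion: synthetic Gaussian noise of a chosen degree."""
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)


def add_downsample_blur(x, scale):
    """Counterfactual distortion: downsample then upsample to mimic low-resolution blur."""
    h, w = x.shape[-2:]
    low = F.interpolate(x, scale_factor=1.0 / scale, mode="bicubic", align_corners=False)
    return F.interpolate(low, size=(h, w), mode="bicubic", align_corners=False).clamp(0.0, 1.0)


def dil_style_step(model, clean, inner_lr=1e-4, meta_lr=0.5):
    """One step: simulate several distortion confounders, run a virtual update of the
    model on each, then move the shared weights toward the average of the virtually
    updated weights (a Reptile-style meta update)."""
    distortions = [
        lambda x: add_gaussian_noise(x, 0.05),
        lambda x: add_gaussian_noise(x, 0.15),
        lambda x: add_downsample_blur(x, 2),
        lambda x: add_downsample_blur(x, 4),
    ]
    base_state = copy.deepcopy(model.state_dict())
    virtual_states = []

    for distort in distortions:
        model.load_state_dict(base_state)           # every branch starts from the shared weights
        inner_opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        loss = F.l1_loss(model(distort(clean)), clean)
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()                             # virtual model update for this confounder
        virtual_states.append(copy.deepcopy(model.state_dict()))

    # Meta update: interpolate the shared weights toward the mean of the virtual
    # weights, so that no single distortion type or degree dominates the update.
    model.load_state_dict(base_state)
    with torch.no_grad():
        for name, param in model.named_parameters():
            mean_virtual = torch.stack([s[name] for s in virtual_states]).mean(dim=0)
            param.add_(meta_lr * (mean_virtual - param))


if __name__ == "__main__":
    net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))
    dil_style_step(net, torch.rand(2, 3, 32, 32))
    print("one DIL-style training step completed")
```

Averaging the virtual updates is the step meant to echo the back-door adjustment described in the abstract: no single distortion type or degree is allowed to dominate the direction in which the shared weights move.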
Related papers
- Taming Generative Diffusion Prior for Universal Blind Image Restoration [4.106012295148947]
BIR-D is able to perform multi-guidance blind image restoration.
It can also restore images that undergo multiple and complicated degradations, demonstrating its practical applicability.
arXiv Detail & Related papers (2024-08-21T02:19:54Z)
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
- DaliID: Distortion-Adaptive Learned Invariance for Identification Models [9.502663556403622]
We propose a methodology called Distortion-Adaptive Learned Invariance for Identification (DaliID) models.
DaliID models achieve state-of-the-art (SOTA) performance for both face recognition and person re-identification on seven benchmark datasets.
arXiv Detail & Related papers (2023-02-11T18:19:41Z)
- RecRecNet: Rectangling Rectified Wide-Angle Images by Thin-Plate Spline Model and DoF-based Curriculum Learning [62.86400614141706]
We propose a new learning model, the Rectangling Rectification Network (RecRecNet).
Our model can flexibly warp the source structure to the target domain and achieves an end-to-end unsupervised deformation.
Experiments show the superiority of our solution over the compared methods on both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2023-01-04T15:12:57Z)
- SIR: Self-supervised Image Rectification via Seeing the Same Scene from Multiple Different Lenses [82.56853587380168]
We propose a novel self-supervised image rectification (SIR) method based on an important insight: the rectified results of distorted images of the same scene captured through different lenses should be the same (see the sketch after this list).
We leverage a differentiable warping module to generate the rectified images and re-distorted images from the distortion parameters.
Our method achieves comparable or even better performance than the supervised baseline method and representative state-of-the-art methods.
arXiv Detail & Related papers (2020-11-30T08:23:25Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-driven and data-driven approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- A Deep Ordinal Distortion Estimation Approach for Distortion Rectification [62.72089758481803]
We propose a novel distortion rectification approach that can obtain more accurate parameters with higher efficiency.
We design a local-global associated estimation network that learns the ordinal distortion to approximate the realistic distortion distribution.
Considering the redundancy of distortion information, our approach uses only part of the distorted image for ordinal distortion estimation.
arXiv Detail & Related papers (2020-07-21T10:03:42Z)
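For the SIR entry above, the core self-supervision signal, that two differently distorted views of one scene should rectify to the same image, can be written down as a small consistency loss. The sketch below is only an illustration: it assumes a single radial-distortion coefficient per image and a grid_sample-based differentiable warp, and the tiny network, warp model, and loss weights are hypothetical, not the SIR authors' code.

```python
# Illustrative sketch of an SIR-style self-supervised consistency objective.
# Assumption: distortion is modeled by one radial coefficient k per image.
import torch
import torch.nn as nn
import torch.nn.functional as F


def radial_warp(img, k):
    """Differentiable radial (un)distortion: each output pixel samples from
    r_src = r_dst * (1 + k * r_dst^2) in normalized coordinates."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=img.device),
        torch.linspace(-1, 1, w, device=img.device),
        indexing="ij",
    )
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    r2 = (grid ** 2).sum(dim=-1, keepdim=True)
    src = grid * (1.0 + k.view(b, 1, 1, 1) * r2)
    return F.grid_sample(img, src, align_corners=True, padding_mode="border")


class TinyRectifier(nn.Module):
    """Predicts one distortion coefficient per image, then (approximately) undoes it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )

    def forward(self, x):
        k_hat = self.encoder(x).squeeze(-1)          # estimated distortion coefficient
        rectified = radial_warp(x, -k_hat)           # approximate inverse warp
        return rectified, k_hat


def sir_style_loss(model, view_a, view_b):
    """Two views of the same scene through different lenses should rectify to the
    same image; re-applying the estimated distortion should reproduce each input."""
    rect_a, k_a = model(view_a)
    rect_b, k_b = model(view_b)
    consistency = F.l1_loss(rect_a, rect_b)
    redistort = F.l1_loss(radial_warp(rect_a, k_a), view_a) + \
                F.l1_loss(radial_warp(rect_b, k_b), view_b)
    return consistency + 0.1 * redistort


if __name__ == "__main__":
    model = TinyRectifier()
    scene = torch.rand(1, 3, 64, 64)
    view_a = radial_warp(scene, torch.tensor([0.2]))   # same scene, two different lenses
    view_b = radial_warp(scene, torch.tensor([-0.1]))
    loss = sir_style_loss(model, view_a, view_b)
    loss.backward()
    print(float(loss))
```

The consistency term pulls the two rectified views together, while the re-distortion term keeps the predicted coefficients anchored to the observed inputs.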