Unified-Width Adaptive Dynamic Network for All-In-One Image Restoration
- URL: http://arxiv.org/abs/2401.13221v1
- Date: Wed, 24 Jan 2024 04:25:12 GMT
- Title: Unified-Width Adaptive Dynamic Network for All-In-One Image Restoration
- Authors: Yimin Xu, Nanxi Gao, Zhongyun Shan, Fei Chao, Rongrong Ji
- Abstract summary: We introduce a novel concept positing that intricate image degradation can be represented in terms of elementary degradation.
We propose the Unified-Width Adaptive Dynamic Network (U-WADN), consisting of two pivotal components: a Width Adaptive Backbone (WAB) and a Width Selector (WS)
The proposed U-WADN achieves better performance while simultaneously reducing up to 32.3% of FLOPs and providing approximately 15.7% real-time acceleration.
- Score: 50.81374327480445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In contrast to traditional image restoration methods, all-in-one image
restoration techniques are gaining increased attention for their ability to
restore images affected by diverse and unknown corruption types and levels.
However, contemporary all-in-one image
restoration methods overlook task-wise difficulties and employ the same networks to reconstruct images afflicted by
diverse degradations. This practice leads to an underestimation of the task
correlations and suboptimal allocation of computational resources. To elucidate
task-wise complexities, we introduce a novel concept positing that intricate
image degradation can be represented in terms of elementary degradation.
Building upon this foundation, we propose an innovative approach, termed the
Unified-Width Adaptive Dynamic Network (U-WADN), consisting of two pivotal
components: a Width Adaptive Backbone (WAB) and a Width Selector (WS). The WAB
incorporates several nested sub-networks with varying widths, which facilitates
the selection of the most apt computations tailored to each task, thereby
striking a balance between accuracy and computational efficiency during
runtime. For different inputs, the WS automatically selects the most
appropriate sub-network width, taking into account both task-specific and
sample-specific complexities. Extensive experiments across a variety of image
restoration tasks demonstrate that the proposed U-WADN achieves better
performance while reducing FLOPs by up to 32.3% and delivering
approximately 15.7% real-time acceleration. The code has been made available
at https://github.com/xuyimin0926/U-WADN.
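The two components described above can be illustrated with a toy sketch. The snippet below is a minimal, hypothetical illustration of the core idea only, not the paper's implementation: a "slimmable" layer whose narrower sub-networks reuse the leading channels of the full weight matrix (the nested sub-networks of the WAB), plus a selector that picks a width per input. The variance-based difficulty heuristic in `select_width` is an assumption for illustration; the paper's Width Selector is learned.

```python
import numpy as np

rng = np.random.default_rng(0)

class SlimmableLinear:
    """A linear layer whose active width can be sliced at runtime.

    The full weight matrix is shared; a narrower sub-network simply
    reuses its leading output rows, so sub-networks are nested as in
    the Width Adaptive Backbone (WAB).
    """

    def __init__(self, in_features, out_features):
        self.weight = rng.standard_normal((out_features, in_features))
        self.out_features = out_features

    def forward(self, x, width_mult):
        # Keep only the leading fraction of output channels;
        # smaller width_mult means proportionally fewer FLOPs.
        out = max(1, int(self.out_features * width_mult))
        return self.weight[:out] @ x

def select_width(x, widths=(0.25, 0.5, 0.75, 1.0)):
    """Toy stand-in for the Width Selector (WS): route "harder"
    (here, higher-variance) inputs to wider sub-networks.
    This heuristic is illustrative only."""
    difficulty = float(np.var(x))
    idx = min(int(difficulty * len(widths)), len(widths) - 1)
    return widths[idx]

layer = SlimmableLinear(8, 16)
x = rng.standard_normal(8)
w = select_width(x)
y = layer.forward(x, w)  # output width adapts to the chosen sub-network
```

Because the sub-networks are nested, the half-width output is exactly the first half of the full-width output, which is what lets one set of weights serve every width at runtime.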
Related papers
- Restorer: Solving Multiple Image Restoration Tasks with One Set of Parameters [3.0713650808646564]
We focus on designing a unified and effective solution for multiple image restoration tasks.
Based on the above purpose, we propose a Transformer network Restorer with U-Net architecture.
We show that Restorer has the potential to serve as a backbone for multiple real-world image restoration tasks.
arXiv Detail & Related papers (2024-06-18T13:18:32Z) - Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z) - Multi-task Image Restoration Guided By Robust DINO Features [98.7455921708419]
We introduce DINO-IR, a novel multi-task image restoration approach leveraging robust features extracted from DINOv2.
Our empirical analysis shows that while shallow features of DINOv2 capture rich low-level image characteristics, the deep features ensure a robust semantic representation insensitive to degradations.
arXiv Detail & Related papers (2023-12-04T06:59:55Z) - All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet)
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z) - Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z) - Deep Amended Gradient Descent for Efficient Spectral Reconstruction from Single RGB Images [42.26124628784883]
We propose a compact, efficient, and end-to-end learning-based framework, namely AGD-Net.
We first formulate the problem explicitly based on the classic gradient descent algorithm.
AGD-Net can improve the reconstruction quality by more than 1.0 dB on average.
arXiv Detail & Related papers (2021-08-12T05:54:09Z) - Multi-Stage Progressive Image Restoration [167.6852235432918]
We propose a novel synergistic design that can optimally balance these competing goals.
Our main proposal is a multi-stage architecture, that progressively learns restoration functions for the degraded inputs.
The resulting tightly interlinked multi-stage architecture, named MPRNet, delivers strong performance gains on ten datasets.
arXiv Detail & Related papers (2021-02-04T18:57:07Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.