Multi-task Image Restoration Guided By Robust DINO Features
- URL: http://arxiv.org/abs/2312.01677v2
- Date: Tue, 5 Dec 2023 17:46:12 GMT
- Title: Multi-task Image Restoration Guided By Robust DINO Features
- Authors: Xin Lin, Chao Ren, Kelvin C.K. Chan, Lu Qi, Jinshan Pan, Ming-Hsuan
Yang
- Abstract summary: We introduce DINO-IR, a novel multi-task image restoration approach leveraging robust features extracted from DINOv2.
Our empirical analysis shows that while shallow features of DINOv2 capture rich low-level image characteristics, the deep features ensure a robust semantic representation insensitive to degradations.
- Score: 98.7455921708419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task image restoration has gained significant interest due to its
inherent versatility and efficiency compared to its single-task counterpart.
Despite its potential, performance degradation is observed with an increase in
the number of tasks, primarily attributed to the distinct nature of each
restoration task. Addressing this challenge, we introduce
\mbox{\textbf{DINO-IR}}, a novel multi-task image restoration approach
leveraging robust features extracted from DINOv2. Our empirical analysis shows
that while shallow features of DINOv2 capture rich low-level image
characteristics, the deep features ensure a robust semantic representation
insensitive to degradations while preserving high-frequency contour details.
Building on these features, we devise specialized components, including a
multi-layer semantic fusion module, a DINO-Restore adaption and fusion module,
and a DINO perception contrastive loss, to integrate DINOv2 features into the
restoration paradigm. Equipped with these components, our DINO-IR
performs favorably against existing multi-task image restoration approaches in
various tasks by a large margin, indicating the superiority and necessity of
reinforcing the robust features for multi-task image restoration.
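The abstract does not spell out the DINO perception contrastive loss, but the general idea of a contrastive perceptual loss in feature space can be sketched: pull the restored image's features toward the clean image's features (positive) while pushing them away from the degraded image's features (negative). The following NumPy sketch is an illustration of that general pattern only; the function names, ratio formulation, and toy feature maps are assumptions, not the paper's actual loss.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two flattened feature maps.
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def dino_contrastive_loss(f_restored, f_clean, f_degraded, eps=1e-8):
    """Hypothetical contrastive-style perceptual loss in feature space:
    small when restored features are close to the clean (positive) features
    and far from the degraded (negative) features."""
    pos = 1.0 - cosine_sim(f_restored, f_clean)     # distance to positive
    neg = 1.0 - cosine_sim(f_restored, f_degraded)  # distance to negative
    return pos / (neg + eps)

# Toy C x H x W feature maps standing in for frozen DINOv2 outputs.
rng = np.random.default_rng(0)
clean = rng.standard_normal((8, 4, 4))
degraded = clean + 0.5 * rng.standard_normal((8, 4, 4))
restored = clean + 0.05 * rng.standard_normal((8, 4, 4))

# A restoration close to the clean target yields a much smaller loss
# than leaving the degraded input unchanged (neg distance collapses to ~0).
loss_good = dino_contrastive_loss(restored, clean, degraded)
loss_bad = dino_contrastive_loss(degraded, clean, degraded)
print(loss_good < loss_bad)  # True
```

The ratio form means the loss cannot be minimized by merely matching the clean features; the restored features must also stay distinguishable from the degraded input.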
Related papers
- UniLDiff: Unlocking the Power of Diffusion Priors for All-in-One Image Restoration [16.493990086330985]
UniLDiff is a unified framework enhanced with degradation- and detail-aware mechanisms.
We introduce a Degradation-Aware Feature Fusion (DAFF) to dynamically inject low-quality features into each denoising step.
We also design a Detail-Aware Expert Module (DAEM) in the decoder to enhance texture and fine-structure recovery.
arXiv Detail & Related papers (2025-07-31T16:02:00Z)
- Mixed Degradation Image Restoration via Local Dynamic Optimization and Conditional Embedding [67.57487747508179]
Multiple-in-one image restoration (IR) has made significant progress, aiming to handle all types of single degraded image restoration with a single model.
In this paper, we propose a novel multiple-in-one IR model that can effectively restore images with both single and mixed degradations.
arXiv Detail & Related papers (2024-11-25T09:26:34Z)
- VmambaIR: Visual State Space Model for Image Restoration [36.11385876754612]
We propose VmambaIR, which introduces State Space Models (SSMs) with linear complexity into comprehensive image restoration tasks.
VmambaIR achieves state-of-the-art (SOTA) performance with much fewer computational resources and parameters.
arXiv Detail & Related papers (2024-03-18T02:38:55Z)
- FeatUp: A Model-Agnostic Framework for Features at Any Resolution [24.4201195336725]
FeatUp is a task- and model-agnostic framework to restore lost spatial information in deep features.
We introduce two variants of FeatUp: one that guides features with high-resolution signal in a single forward pass, and one that fits an implicit model to a single image to reconstruct features at any resolution.
We show that FeatUp significantly outperforms other feature upsampling and image super-resolution approaches in class activation map generation, transfer learning for segmentation and depth prediction, and end-to-end training for semantic segmentation.
arXiv Detail & Related papers (2024-03-15T17:57:06Z)
- Unified-Width Adaptive Dynamic Network for All-In-One Image Restoration [50.81374327480445]
We introduce a novel concept positing that intricate image degradation can be represented in terms of elementary degradations.
We propose the Unified-Width Adaptive Dynamic Network (U-WADN), consisting of two pivotal components: a Width Adaptive Backbone (WAB) and a Width Selector (WS).
The proposed U-WADN achieves better performance while simultaneously reducing up to 32.3% of FLOPs and providing approximately 15.7% real-time acceleration.
arXiv Detail & Related papers (2024-01-24T04:25:12Z)
- Harnessing Diffusion Models for Visual Perception with Meta Prompts [68.78938846041767]
We propose a simple yet effective scheme to harness a diffusion model for visual perception tasks.
We introduce learnable embeddings (meta prompts) to the pre-trained diffusion models to extract proper features for perception.
Our approach achieves new performance records in depth estimation tasks on NYU depth V2 and KITTI, and in semantic segmentation task on CityScapes.
arXiv Detail & Related papers (2023-12-22T14:40:55Z)
- DRM-IR: Task-Adaptive Deep Unfolding Network for All-In-One Image Restoration [5.573836220587265]
This work proposes an efficient Dynamic Reference Modeling paradigm (DRM-IR).
DRM-IR consists of task-adaptive degradation modeling and model-based image restoring.
Experiments on multiple benchmark datasets show that our DRM-IR achieves state-of-the-art performance in All-In-One IR.
arXiv Detail & Related papers (2023-07-15T02:42:19Z)
- Super-resolution Reconstruction of Single Image for Latent features [8.857209365343646]
Single-image super-resolution (SISR) typically focuses on restoring various degraded low-resolution (LR) images to a single high-resolution (HR) image.
It is often challenging for models to simultaneously maintain high quality and rapid sampling while preserving diversity in details and texture features.
This challenge can lead to issues such as model collapse, lack of rich details and texture features in the reconstructed HR images, and excessive time consumption for model sampling.
arXiv Detail & Related papers (2022-11-16T09:37:07Z)
- Accurate and Lightweight Image Super-Resolution with Model-Guided Deep Unfolding Network [63.69237156340457]
We present and advocate an explainable approach toward SISR named the model-guided deep unfolding network (MoG-DUN).
MoG-DUN is accurate (producing fewer aliasing artifacts), computationally efficient (with reduced model parameters), and versatile (capable of handling multiple degradations).
The superiority of the proposed MoG-DUN method over existing state-of-the-art image restoration methods, including RCAN, SRDNF, and SRFBN, is substantiated by extensive experiments on several popular datasets and various degradation scenarios.
arXiv Detail & Related papers (2020-09-14T08:23:37Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
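The dual-branch-plus-gate idea above can be sketched generically: two branch outputs are blended by a per-element gate, so the network learns where to trust the base features versus the recovered features. This NumPy toy is an assumption-laden illustration of that blending pattern only; the scalar gate weights and feature shapes are invented and do not reflect the Gated Fusion Network's actual architecture.

```python
import numpy as np

def sigmoid(z):
    # Logistic function mapping any real value into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(base_feat, recovered_feat, w_base, w_rec, bias):
    """Blend two feature branches with a per-element gate in (0, 1).
    gate -> 1 favors the base branch; gate -> 0 favors the recovered one."""
    gate = sigmoid(w_base * base_feat + w_rec * recovered_feat + bias)
    return gate * base_feat + (1.0 - gate) * recovered_feat

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 8, 8))       # base-feature branch (C x H x W)
recovered = rng.standard_normal((4, 8, 8))  # recovered-feature branch

fused = gated_fusion(base, recovered, w_base=0.5, w_rec=0.5, bias=0.0)

# The fused map is a per-element convex combination of the two branches,
# so every value lies between the branches' per-element min and max.
print(fused.shape)  # (4, 8, 8)
```

Because the gate is strictly between 0 and 1, the fusion never extrapolates beyond either branch, which keeps the blended features numerically stable during training.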
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.