Pruning Overparameterized Multi-Task Networks for Degraded Web Image Restoration
- URL: http://arxiv.org/abs/2510.14463v1
- Date: Thu, 16 Oct 2025 09:04:05 GMT
- Title: Pruning Overparameterized Multi-Task Networks for Degraded Web Image Restoration
- Authors: Thomas Katraouras, Dimitrios Rafailidis
- Abstract summary: We propose a strategy for compressing multi-task image restoration models. The proposed model, namely MIR-L, utilizes an iterative pruning strategy that removes low-magnitude weights. Tests show that MIR-L retains only 10% of the trainable parameters while maintaining high image restoration performance.
- Score: 1.9336815376402718
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image quality is a critical factor in delivering visually appealing content on web platforms. However, images often suffer from degradation due to lossy operations applied by online social networks (OSNs), negatively affecting user experience. Image restoration is the process of recovering a clean high-quality image from a given degraded input. Recently, multi-task (all-in-one) image restoration models have gained significant attention, due to their ability to simultaneously handle different types of image degradations. However, these models often come with an excessively high number of trainable parameters, making them computationally inefficient. In this paper, we propose a strategy for compressing multi-task image restoration models. We aim to discover highly sparse subnetworks within overparameterized deep models that can match or even surpass the performance of their dense counterparts. The proposed model, namely MIR-L, utilizes an iterative pruning strategy that removes low-magnitude weights across multiple rounds, while resetting the remaining weights to their original initialization. This iterative process is important for the multi-task image restoration model's optimization, effectively uncovering "winning tickets" that maintain or exceed state-of-the-art performance at high sparsity levels. Experimental evaluation on benchmark datasets for the deraining, dehazing, and denoising tasks shows that MIR-L retains only 10% of the trainable parameters while maintaining high image restoration performance. Our code, datasets and pre-trained models are made publicly available at https://github.com/Thomkat/MIR-L.
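The iterative pruning loop described in the abstract follows the lottery ticket hypothesis recipe: train the dense model, prune the lowest-magnitude weights, rewind the surviving weights to their original initialization, and repeat. The sketch below illustrates that loop in PyTorch; the `train_fn` interface, the per-round pruning rate, and the choice to mask every parameter tensor (rather than convolution/linear weights only) are simplifying assumptions for illustration, not MIR-L's actual implementation.

```python
import copy
import torch

def iterative_magnitude_pruning(model, train_fn, rounds=10, prune_rate=0.2):
    """Lottery-ticket-style sketch: after each training round, prune the
    lowest-magnitude surviving weights and rewind the rest to their
    original initialization (assumed interface, not the MIR-L code)."""
    init_state = copy.deepcopy(model.state_dict())  # weights at initialization
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}

    for _ in range(rounds):
        # Train to convergence; train_fn is assumed to keep masked weights at zero.
        train_fn(model, masks)

        # Prune the bottom prune_rate fraction of the still-alive weights by magnitude.
        for name, param in model.named_parameters():
            alive = param.data[masks[name].bool()].abs()
            if alive.numel() == 0:
                continue
            threshold = torch.quantile(alive, prune_rate)
            masks[name] = torch.where(param.data.abs() > threshold,
                                      masks[name], torch.zeros_like(masks[name]))

        # Rewind the surviving weights to their original initialization.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for name, param in model.named_parameters():
                param.mul_(masks[name])

    return model, masks
```

With a per-round rate of 20%, ten rounds leave roughly 0.8^10 ≈ 10.7% of the weights, consistent with the 10% parameter retention reported in the abstract.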
Related papers
- Edit2Restore:Few-Shot Image Restoration via Parameter-Efficient Adaptation of Pre-trained Editing Models [4.573600918393017]
We show that powerful pre-trained text-conditioned image editing models can be efficiently adapted for multiple restoration tasks.
Our approach fine-tunes LoRA adapters on FLUX.1 Kontext, a state-of-the-art 12B parameter flow matching model for image-to-image translation.
arXiv Detail & Related papers (2026-01-06T19:56:16Z) - UniCoRN: Latent Diffusion-based Unified Controllable Image Restoration Network across Multiple Degradations [4.892790389883125]
We propose UniCoRN, a unified image restoration approach capable of handling multiple degradation types simultaneously.
Specifically, we uncover the potential of low-level visual cues extracted from images in guiding a controllable diffusion model.
We also introduce MetaRestore, a metalens imaging benchmark containing images with multiple degradations and artifacts.
arXiv Detail & Related papers (2025-03-20T05:42:13Z) - HAIR: Hypernetworks-based All-in-One Image Restoration [46.681872835394095]
HAIR is a hypernetwork-based, plug-and-play all-in-one image restoration method.
It generates parameters based on the input image, enabling the model to adapt dynamically to the specific degradation.
It can significantly improve the performance of existing image restoration models in a plug-and-play manner, both in single-task and All-in-One settings.
arXiv Detail & Related papers (2024-08-15T11:34:33Z) - Dynamic Pre-training: Towards Efficient and Scalable All-in-One Image Restoration [100.54419875604721]
All-in-one image restoration tackles different types of degradations with a unified model instead of having task-specific, non-generic models for each degradation.
We propose DyNet, a dynamic family of networks designed in an encoder-decoder style for all-in-one image restoration tasks.
Our DyNet can seamlessly switch between its bulkier and lightweight variants, thereby offering flexibility for efficient model deployment.
arXiv Detail & Related papers (2024-04-02T17:58:49Z) - Unified-Width Adaptive Dynamic Network for All-In-One Image Restoration [50.81374327480445]
We introduce a novel concept positing that intricate image degradation can be represented in terms of elementary degradation.
We propose the Unified-Width Adaptive Dynamic Network (U-WADN), consisting of two pivotal components: a Width Adaptive Backbone (WAB) and a Width Selector (WS).
The proposed U-WADN achieves better performance while simultaneously reducing up to 32.3% of FLOPs and providing approximately 15.7% real-time acceleration.
arXiv Detail & Related papers (2024-01-24T04:25:12Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - Learning from History: Task-agnostic Model Contrastive Learning for Image Restoration [79.04007257606862]
This paper introduces an innovative method termed 'learning from history', which dynamically generates negative samples from the target model itself.
Our approach, named Model Contrastive Learning for Image Restoration (MCLIR), rejuvenates latency models as negative models, making it compatible with diverse image restoration tasks.
arXiv Detail & Related papers (2023-09-12T07:50:54Z) - MOFA: A Model Simplification Roadmap for Image Restoration on Mobile Devices [17.54747506334433]
We propose a roadmap that can be applied to further accelerate image restoration models prior to deployment.
Our approach decreases runtime by up to 13% and reduces the number of parameters by up to 23%, while increasing PSNR and SSIM.
arXiv Detail & Related papers (2023-08-24T01:29:15Z) - Rethinking PRL: A Multiscale Progressively Residual Learning Network for Inverse Halftoning [3.632876183725243]
Inverse halftoning is a classic image restoration task, aiming to recover continuous-tone images from halftone images with only bilevel pixels.
We propose an end-to-end multiscale progressively residual learning network (MSPRL), which has a UNet architecture and takes multiscale input images.
arXiv Detail & Related papers (2023-05-27T03:37:33Z) - Restormer: Efficient Transformer for High-Resolution Image Restoration [118.9617735769827]
Convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data.
Transformers have shown significant performance gains on natural language and high-level vision tasks.
Our model, named Restoration Transformer (Restormer), achieves state-of-the-art results on several image restoration tasks.
arXiv Detail & Related papers (2021-11-18T18:59:10Z) - The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play (H-PnP) framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the proposed H-PnP-based image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.