Deep LoRA-Unfolding Networks for Image Restoration
- URL: http://arxiv.org/abs/2602.18697v1
- Date: Sat, 21 Feb 2026 02:57:48 GMT
- Title: Deep LoRA-Unfolding Networks for Image Restoration
- Authors: Xiangming Wang, Haijin Zeng, Benteng Sun, Jiezhang Cao, Kai Zhang, Qiangqiang Shen, Yongyong Chen,
- Abstract summary: We introduce generalized Deep Low-rank Adaptation (LoRA) Unfolding Networks for image restoration. LoRun introduces a novel paradigm where a single pretrained base denoiser is shared across all stages, while lightweight, stage-specific LoRA adapters are injected into the PMMs to dynamically modulate denoising behavior according to the noise level.
- Score: 44.864335449093716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep unfolding networks (DUNs), which combine conventional iterative optimization algorithms and deep neural networks into a multi-stage framework, have achieved remarkable accomplishments in Image Restoration (IR) tasks such as spectral imaging reconstruction, compressive sensing, and super-resolution. A DUN unfolds the iterative optimization steps into a stack of sequentially linked blocks. Each block consists of a Gradient Descent Module (GDM) and a Proximal Mapping Module (PMM), which, from a Bayesian perspective, is equivalent to a denoiser operating on Gaussian noise with a known level. However, existing DUNs suffer from two critical limitations: (i) their PMMs share identical architectures and denoising objectives across stages, ignoring the need for stage-specific adaptation to varying noise levels; and (ii) their chain of structurally repetitive blocks results in severe parameter redundancy and high memory consumption, hindering deployment in large-scale or resource-constrained scenarios. To address these challenges, we introduce generalized Deep Low-rank Adaptation (LoRA) Unfolding Networks for image restoration, named LoRun, harmonizing denoising objectives and adapting to different denoising levels between stages with compressed memory usage for a more efficient DUN. LoRun introduces a novel paradigm in which a single pretrained base denoiser is shared across all stages, while lightweight, stage-specific LoRA adapters are injected into the PMMs to dynamically modulate denoising behavior according to the noise level at each unfolding step. This design decouples the core restoration capability from task-specific adaptation, enabling precise control over denoising intensity without duplicating full network parameters and achieving up to an $N$-times parameter reduction for an $N$-stage DUN with on-par or better performance. Extensive experiments conducted on three IR tasks validate the efficiency of our method.
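The unfolding scheme the abstract describes (a GDM gradient step followed by a PMM denoiser, with one frozen base weight shared across stages and a per-stage low-rank update) can be sketched in a toy linear setting. This is a minimal illustration, not the paper's implementation: `lora_unfold`, the identity measurement operator, and the linear stand-in denoiser are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def lora_unfold(y, A, W_base, adapters, step=0.1):
    """N-stage deep unfolding: each stage runs a gradient step (GDM) and then
    a shared linear 'denoiser' (PMM) modulated by a stage-specific low-rank
    (LoRA-style) update W_base + B_s @ A_s."""
    x = y.copy()
    for B_s, A_s in adapters:
        x = x - step * A.T @ (A @ x - y)   # GDM: gradient of 0.5 * ||A x - y||^2
        x = (W_base + B_s @ A_s) @ x       # PMM: frozen base weight + stage LoRA
    return x

dim, rank, n_stages = 8, 2, 4
W_base = np.eye(dim)                       # stand-in for the pretrained base denoiser
adapters = [(0.01 * rng.standard_normal((dim, rank)),   # B_s: dim x rank
             0.01 * rng.standard_normal((rank, dim)))   # A_s: rank x dim
            for _ in range(n_stages)]
A = np.eye(dim)                            # toy measurement operator
y = rng.standard_normal(dim)
x_hat = lora_unfold(y, A, W_base, adapters)

# Parameter accounting behind the "up to N-times reduction" claim:
full_copies = n_stages * dim * dim                  # one full denoiser per stage
shared_lora = dim * dim + n_stages * 2 * dim * rank # one base + N low-rank adapters
```

With `rank` much smaller than `dim`, `shared_lora` stays close to a single denoiser's size, which is the source of the parameter savings relative to duplicating the PMM at every stage.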
Related papers
- Subtractive Modulative Network with Learnable Periodic Activations [59.89799070130572]
We propose a novel, parameter-efficient Implicit Neural Representation architecture inspired by classical subtractive synthesis. Our SMN achieves a PSNR of $40+$ dB on two image datasets, comparing favorably against state-of-the-art methods in terms of both reconstruction accuracy and parameter efficiency.
arXiv Detail & Related papers (2026-02-18T10:20:50Z) - Deep Lightweight Unrolled Network for High Dynamic Range Modulo Imaging [19.49437461280304]
Modulo-Imaging (MI) offers a promising alternative for expanding the dynamic range of images by resetting the signal intensity when it reaches the saturation level. We introduce the Scaling Equi term that facilitates self-tuning, thereby enabling the model to adapt to new images outside the original distribution.
arXiv Detail & Related papers (2026-01-18T18:22:38Z) - OTARo: Once Tuning for All Precisions toward Robust On-Device LLMs [21.55040910903597]
OTARo is a novel method that enables on-device Large Language Models to flexibly switch quantization precisions.<n>It achieves consistently strong and robust performance for all precisions.
arXiv Detail & Related papers (2025-11-17T08:56:27Z) - HAD: Hierarchical Asymmetric Distillation to Bridge Spatio-Temporal Gaps in Event-Based Object Tracking [80.07224739976911]
RGB cameras excel at capturing rich texture with high resolution, whereas event cameras offer exceptional temporal resolution and a high dynamic range.
arXiv Detail & Related papers (2025-10-22T13:15:13Z) - Iterative Low-rank Network for Hyperspectral Image Denoising [16.26671997491784]
Hyperspectral image (HSI) denoising is a crucial preprocessing step for subsequent tasks. It is generally challenging to adequately use such physical properties for effective denoising while preserving image details. This paper introduces a novel iterative low-rank network (ILRNet) to address these challenges.
arXiv Detail & Related papers (2025-08-30T04:34:43Z) - ASMR: Activation-sharing Multi-resolution Coordinate Networks For Efficient Inference [6.005712471509875]
Coordinate network or implicit neural representation (INR) is a fast-emerging method for encoding natural signals.
We propose the Activation-Sharing Multi-Resolution (ASMR) coordinate network that combines multi-resolution coordinate decomposition with hierarchical modulations.
We show that ASMR can reduce the MAC of a vanilla SIREN model by up to 500x while achieving an even higher reconstruction quality than its SIREN baseline.
arXiv Detail & Related papers (2024-05-20T22:35:34Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution iteration to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale.
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
A simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model in this work.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z) - Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks [82.18396309806577]
We propose a novel activation quantizer, referred to as Dynamic Dual Trainable Bounds (DDTB)
Our DDTB exhibits significant performance improvements in ultra-low precision.
For example, our DDTB achieves a 0.70dB PSNR increase on Urban100 benchmark when quantizing EDSR to 2-bit and scaling up output images to x4.
arXiv Detail & Related papers (2022-03-08T04:26:18Z) - Efficient Low-Latency Speech Enhancement with Mobile Audio Streaming Networks [6.82469220191368]
We propose Mobile Audio Streaming Networks (MASnet) for efficient low-latency speech enhancement.
MASnet processes linear-scale spectrograms, transforming successive noisy frames into complex-valued ratio masks.
arXiv Detail & Related papers (2020-08-17T12:18:34Z)
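The complex-valued ratio mask that MASnet predicts can be illustrated with a toy oracle version (sizes and the oracle construction are illustrative assumptions; the network itself estimates the mask from noisy frames):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complex spectrograms: 4 time frames x 16 frequency bins.
clean = rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16))
noise = 0.3 * (rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16)))
noisy = clean + noise

# The ideal (oracle) complex ratio mask scales both magnitude and phase
# per time-frequency bin; applying it recovers the clean spectrogram.
mask = clean / noisy
enhanced = mask * noisy
```

A speech-enhancement model approximates this mask from the noisy input alone; the oracle form above just shows why an element-wise complex mask is expressive enough to undo additive noise per bin.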
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.