ShiftLUT: Spatial Shift Enhanced Look-Up Tables for Efficient Image Restoration
- URL: http://arxiv.org/abs/2603.00906v2
- Date: Tue, 03 Mar 2026 17:01:47 GMT
- Title: ShiftLUT: Spatial Shift Enhanced Look-Up Tables for Efficient Image Restoration
- Authors: Xiaolong Zeng, Yitong Yu, Shiyao Xiong, Jinhua Hao, Ming Sun, Chao Zhou, Bin Wang
- Abstract summary: ShiftLUT is a novel framework that attains the largest receptive field among all LUT-based methods while maintaining high efficiency. Compared to the previous state-of-the-art method TinyLUT, ShiftLUT achieves a 3.8$\times$ larger receptive field and improves average PSNR by over 0.21 dB.
- Score: 8.845117852325997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Look-Up Table (LUT) based methods have emerged as a promising direction for efficient image restoration. Recent LUT-based methods improve performance by expanding the receptive field; however, they inevitably introduce extra computational and storage overhead, which hinders deployment on edge devices. To address this issue, we propose ShiftLUT, a novel framework that attains the largest receptive field among all LUT-based methods while maintaining high efficiency. Our key insight lies in three complementary components. First, a Learnable Spatial Shift (LSS) module is introduced to expand the receptive field by applying learnable, channel-wise spatial offsets to feature maps. Second, we propose an asymmetric dual-branch architecture that allocates more computation to the information-dense branch, substantially reducing inference latency without compromising restoration quality. Finally, we incorporate a feature-level LUT compression strategy called Error-bounded Adaptive Sampling (EAS) to minimize storage overhead. Compared to the previous state-of-the-art method TinyLUT, ShiftLUT achieves a 3.8$\times$ larger receptive field and improves average PSNR by over 0.21 dB across multiple standard benchmarks, while maintaining a small storage size and inference time. The code is available at: https://github.com/Sailor-t/ShiftLUT .
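The LSS idea in the abstract — expanding the receptive field by giving each channel its own spatial offset before a purely pointwise fusion — can be sketched in a few lines. This is an illustrative NumPy toy under stated assumptions, not the paper's implementation: the function name and the fixed integer offsets are made up here (in ShiftLUT the offsets are learnable and trained end-to-end).

```python
import numpy as np

def channelwise_shift(feat, offsets):
    """Shift each channel of a (C, H, W) feature map by its own
    (dy, dx) spatial offset, zero-padding at the borders.

    feat    : (C, H, W) array
    offsets : list of (dy, dx) integer offsets, one per channel
    """
    C, H, W = feat.shape
    out = np.zeros_like(feat)
    for c, (dy, dx) in enumerate(offsets):
        # Destination region inside the output, and the matching source region.
        ys, ye = max(0, dy), min(H, H + dy)
        xs, xe = max(0, dx), min(W, W + dx)
        out[c, ys:ye, xs:xe] = feat[c, ys - dy:ye - dy, xs - dx:xe - dx]
    return out

# A 2-channel map: after shifting, a pointwise (1x1) operation across
# channels at one location also sees each pixel's left and top neighbours,
# i.e. the effective receptive field grows without any larger kernels.
feat = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
shifted = channelwise_shift(feat, [(0, 1), (1, 0)])
```

The design point this illustrates is that the shift itself is parameter-free at inference time, which is why it adds receptive field without adding LUT storage.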
Related papers
- LoR-LUT: Learning Compact 3D Lookup Tables via Low-Rank Residuals [8.420640298306237]
LoR-LUT is a unified low-rank formulation for compact and interpretable 3D lookup table (LUT) generation. LoR-LUT is trained on the MIT-Adobe FiveK dataset. An interactive visualization tool, termed LoR-LUT Viewer, transforms an input image into the LUT-adjusted output image.
arXiv Detail & Related papers (2026-02-26T04:28:35Z)
- Unleashing Degradation-Carrying Features in Symmetric U-Net: Simpler and Stronger Baselines for All-in-One Image Restoration [52.82397287366076]
All-in-one image restoration aims to handle diverse degradations (e.g., noise, blur, adverse weather) within a unified framework. In this work, we reveal a critical insight: well-crafted feature extraction inherently encodes degradation-carrying information. Our symmetric design preserves intrinsic degradation signals robustly, enabling simple additive fusion in skip connections.
arXiv Detail & Related papers (2025-12-11T12:20:31Z)
- Lightweight and Fast Real-time Image Enhancement via Decomposition of the Spatial-aware Lookup Tables [22.15777751379876]
Image enhancement methods based on 3D lookup tables (3D LUTs) efficiently reduce both model size and runtime. However, the 3D LUT methods have a limitation due to their lack of spatial information. We propose a method for generating image-adaptive LUTs by focusing on the redundant parts of the tables.
arXiv Detail & Related papers (2025-08-22T06:28:24Z)
- FFT-based Dynamic Subspace Selection for Low-Rank Adaptive Optimization of Large Language Models [49.397861654088636]
We propose a two-step procedure to approximate SVD/QR-based gradient projections into lower-dimensional spaces. We show that our strategy achieves faster runtime and reduces memory usage by up to $25\%$ across different model sizes.
arXiv Detail & Related papers (2025-05-23T14:37:00Z)
- AutoLUT: LUT-Based Image Super-Resolution with Automatic Sampling and Adaptive Residual Learning [39.17438080141985]
We introduce two plug-and-play modules designed to capture and leverage pixel information effectively in Look-Up Table (LUT) based super-resolution networks. Our method achieves significant performance improvements on both MuLUT and SPF-LUT while maintaining similar storage sizes.
arXiv Detail & Related papers (2025-03-03T14:09:36Z)
- ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by Kronecker product to Aggregate Low Rank Experts. Thanks to the artful design, ALoRE maintains negligible extra parameters and can be effortlessly merged into the frozen backbone.
arXiv Detail & Related papers (2024-12-11T12:31:30Z)
- Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement [29.675650285351768]
Machine unlearning (MU) has emerged to enhance the privacy and trustworthiness of deep neural networks.
Approximate MU is a practical method for large-scale models.
We propose a fast-slow parameter update strategy to implicitly approximate the up-to-date salient unlearning direction.
arXiv Detail & Related papers (2024-09-29T15:17:33Z)
- SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning [63.93193829913252]
We propose an innovative METL strategy called SHERL for resource-limited scenarios.
In the early route, intermediate outputs are consolidated via an anti-redundancy operation.
In the late route, utilizing minimal late pre-trained layers could alleviate the peak demand on memory overhead.
arXiv Detail & Related papers (2024-07-10T10:22:35Z)
- AugUndo: Scaling Up Augmentations for Monocular Depth Completion and Estimation [51.143540967290114]
We propose a method that unlocks a wide range of previously-infeasible geometric augmentations for unsupervised depth computation and estimation.
This is achieved by reversing, or "undo"-ing, geometric transformations applied to the coordinates of the output depth, warping the depth map back to the original reference frame.
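The undo step described above — augment the input, predict, then invert the transform on the prediction so supervision happens in the original reference frame — can be sketched with a horizontal flip standing in for any invertible geometric augmentation. This is a toy sketch: `dummy_depth_net` is a placeholder of my own, not part of AugUndo.

```python
import numpy as np

def dummy_depth_net(img):
    # Placeholder for a real monocular depth model; any deterministic
    # per-pixel function works for illustrating the undo step.
    return img * 0.5

img = np.arange(12, dtype=np.float32).reshape(3, 4)
aug = np.flip(img, axis=1)        # geometric augmentation of the input
pred = dummy_depth_net(aug)       # prediction lives in the augmented frame
undone = np.flip(pred, axis=1)    # invert the transform on the output depth
# `undone` now aligns pixel-for-pixel with the original `img`.
```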
arXiv Detail & Related papers (2023-10-15T05:15:45Z)
- Toward DNN of LUTs: Learning Efficient Image Restoration with Multiple Look-Up Tables [47.15181829317732]
High-definition screens on edge devices stimulate a strong demand for efficient image restoration algorithms.
The size of a single look-up table grows exponentially with the increase of its indexing capacity.
We propose a universal method to construct multiple LUTs like a neural network, termed MuLUT.
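The exponential growth mentioned above is easy to quantify: a table indexed by $k$ input pixels, each quantized to $2^b$ sampled levels, must store $(2^b)^k$ entries. The numbers below are illustrative choices of mine, not figures from the paper.

```python
# Entry count of a LUT indexed by k pixels at 2^b sampled levels each.
def lut_entries(k_pixels, bits_per_pixel):
    levels = 2 ** bits_per_pixel
    return levels ** k_pixels

# With 4-bit sampling, each extra indexed pixel multiplies storage by 16.
sizes = {k: lut_entries(k, 4) for k in (2, 3, 4)}
```

This is why MuLUT-style methods compose several small tables instead of enlarging one table's index: the former grows linearly in the number of tables, the latter exponentially in the index size.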
arXiv Detail & Related papers (2023-03-25T16:00:33Z)
- Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning [73.75457731689858]
We develop a computation-efficient yet accurate network based on the proposed attentive auxiliary features (A$^2$F) for SISR.
Experimental results on large-scale datasets demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.