Fast Unsupervised Tensor Restoration via Low-rank Deconvolution
- URL: http://arxiv.org/abs/2406.10679v1
- Date: Sat, 15 Jun 2024 16:04:49 GMT
- Title: Fast Unsupervised Tensor Restoration via Low-rank Deconvolution
- Authors: David Reixach, Josep Ramon Morros
- Abstract summary: Low-rank Deconvolution (LRD) has appeared as a new multi-dimensional representation model that enjoys important efficiency and flexibility properties.
We ask ourselves if this analytical model can compete against Deep Learning (DL) frameworks like Deep Image Prior (DIP) or Blind-Spot Networks (BSN) in the task of signal restoration.
- Score: 0.09208007322096533
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Low-rank Deconvolution (LRD) has appeared as a new multi-dimensional representation model that enjoys important efficiency and flexibility properties. In this work we ask ourselves if this analytical model can compete against Deep Learning (DL) frameworks like Deep Image Prior (DIP) or Blind-Spot Networks (BSN) and other classical methods in the task of signal restoration. More specifically, we propose to extend LRD with differential regularization. This approach allows us to easily incorporate Total Variation (TV) and integral priors into the formulation, leading to considerable performance gains on signal restoration tasks such as image denoising and video enhancement, while at the same time benefiting from its small computational cost.
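The abstract describes incorporating a Total Variation (TV) prior through differential regularization. As a generic illustration only (this is not the paper's LRD formulation, just a minimal sketch of what a TV prior does in a restoration objective), a smoothed-TV denoiser minimizing 0.5*||x - y||^2 + lam * TV(x) by gradient descent might look like:

```python
import numpy as np

def tv_denoise(y, lam=0.15, step=0.02, iters=400, eps=1e-3):
    """Gradient descent on a smoothed total-variation objective:
    0.5 * ||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps).
    Illustrative only; parameter values are arbitrary choices."""
    x = y.copy()
    for _ in range(iters):
        # forward differences with replicated boundary (last diff is 0)
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        px, py = dx / mag, dy / mag
        # backward differences implement the adjoint (a discrete divergence)
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        # objective gradient: data-fidelity term minus lam * div(p)
        x -= step * ((x - y) - lam * div)
    return x
```

The smoothing constant `eps` keeps the TV gradient Lipschitz so plain gradient descent is stable; the paper's actual method instead folds such priors into the LRD optimization.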
Related papers
- Boosting Image Restoration via Priors from Pre-trained Models [54.83907596825985]
We learn an additional lightweight module called Pre-Train-Guided Refinement Module (PTG-RM) to refine restoration results of a target restoration network with OSF.
PTG-RM effectively enhances restoration performance of various models across different tasks, including low-light enhancement, deraining, deblurring, and denoising.
arXiv Detail & Related papers (2024-03-11T15:11:57Z) - HIR-Diff: Unsupervised Hyperspectral Image Restoration Via Improved Diffusion Models [38.74983301496911]
Hyperspectral image (HSI) restoration aims at recovering clean images from degraded observations.
Existing model-based methods have limitations in accurately modeling the complex image characteristics.
This paper proposes an unsupervised HSI restoration framework with a pre-trained diffusion model (HIR-Diff).
arXiv Detail & Related papers (2024-02-24T17:15:05Z) - JoReS-Diff: Joint Retinex and Semantic Priors in Diffusion Model for Low-light Image Enhancement [69.6035373784027]
Low-light image enhancement (LLIE) has achieved promising performance by employing conditional diffusion models.
Previous methods may neglect the importance of a sufficient formulation of task-specific condition strategy.
We propose JoReS-Diff, a novel approach that incorporates Retinex- and semantic-based priors as the additional pre-processing condition.
arXiv Detail & Related papers (2023-12-20T08:05:57Z) - FRDiff : Feature Reuse for Universal Training-free Acceleration of Diffusion Models [16.940023904740585]
We introduce an advanced acceleration technique that leverages the temporal redundancy inherent in diffusion models.
Reusing feature maps with high temporal similarity opens up a new opportunity to save computation resources without compromising output quality.
arXiv Detail & Related papers (2023-12-06T14:24:26Z) - Multi-task Image Restoration Guided By Robust DINO Features [88.74005987908443]
We propose DINO-IR, a multi-task image restoration approach leveraging robust features extracted from DINOv2.
We first propose a pixel-semantic fusion (PSF) module to dynamically fuse DINOv2's shallow features.
By formulating these modules into a unified deep model, we propose a DINO perception contrastive loss to constrain the model training.
arXiv Detail & Related papers (2023-12-04T06:59:55Z) - VQ-NeRF: Vector Quantization Enhances Implicit Neural Representations [25.88881764546414]
VQ-NeRF is an efficient pipeline for enhancing implicit neural representations via vector quantization.
We present an innovative multi-scale NeRF sampling scheme that concurrently optimizes the NeRF model at both compressed and original scales.
We incorporate a semantic loss function to improve the geometric fidelity and semantic coherence of our 3D reconstructions.
arXiv Detail & Related papers (2023-10-23T01:41:38Z) - Random Weight Factorization Improves the Training of Continuous Neural Representations [1.911678487931003]
Continuous neural representations have emerged as a powerful and flexible alternative to classical discretized representations of signals.
We propose random weight factorization as a simple drop-in replacement for parameterizing and initializing conventional linear layers.
We show how this factorization alters the underlying loss landscape and effectively enables each neuron in the network to learn using its own self-adaptive learning rate.
arXiv Detail & Related papers (2022-10-03T23:48:48Z) - Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising the perceptual quality of LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z) - DeepRLS: A Recurrent Network Architecture with Least Squares Implicit Layers for Non-blind Image Deconvolution [15.986942312624]
We study the problem of non-blind image deconvolution.
We propose a novel recurrent network architecture that leads to very competitive restoration results of high image quality.
arXiv Detail & Related papers (2021-12-10T13:16:51Z) - A Deep-Unfolded Reference-Based RPCA Network For Video Foreground-Background Separation [86.35434065681925]
This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA).
Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames.
Experimentation using the moving MNIST dataset shows that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
arXiv Detail & Related papers (2020-10-02T11:40:09Z) - Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into conventional SISR algorithms, and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.