Revisiting Implicit Neural Representations in Low-Level Vision
- URL: http://arxiv.org/abs/2304.10250v1
- Date: Thu, 20 Apr 2023 12:19:27 GMT
- Title: Revisiting Implicit Neural Representations in Low-Level Vision
- Authors: Wentian Xu and Jianbo Jiao
- Abstract summary: Implicit Neural Representation (INR) has been emerging in computer vision in recent years.
We are interested in its effectiveness in low-level vision problems such as image restoration.
In this work, we revisit INR and investigate its application in low-level image restoration tasks.
- Score: 20.3578908524788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit Neural Representation (INR) has been emerging in computer vision in
recent years. It has been shown to be effective in parameterising continuous
signals such as dense 3D models from discrete image data, e.g. the neural
radiance field (NeRF). However, INR is under-explored in 2D image processing
tasks. Considering the basic definition and the structure of INR, we are
interested in its effectiveness in low-level vision problems such as image
restoration. In this work, we revisit INR and investigate its application in
low-level image restoration tasks including image denoising, super-resolution,
inpainting, and deblurring. Extensive experimental evaluations suggest the
superior performance of INR in several low-level vision tasks with limited
resources, outperforming its counterparts by over 2dB. Code and models are
available at https://github.com/WenTXuL/LINR
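The linked LINR repository is the reference implementation; as a rough illustration of the idea behind INR-based restoration, the sketch below fits a coordinate MLP to a single noisy image and reads the network output back as the restored image. The architecture (Fourier-feature encoding, ReLU MLP), the step count, and the helper names CoordMLP and denoise are illustrative assumptions, not the authors' LINR design.

```python
# Minimal sketch (not the authors' LINR code): fit a coordinate MLP to one
# noisy image; the network's bias towards smooth signals plus early stopping
# suppresses the noise. All hyperparameters here are illustrative guesses.
import math
import torch
import torch.nn as nn


class CoordMLP(nn.Module):
    """Maps a 2D pixel coordinate in [-1, 1]^2 to an RGB value."""

    def __init__(self, hidden=256, layers=4, num_freqs=10):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 4 * num_freqs                  # sin/cos Fourier features of (x, y)
        blocks = []
        for _ in range(layers):
            blocks += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        blocks.append(nn.Linear(hidden, 3))
        self.net = nn.Sequential(*blocks)

    def encode(self, xy):
        freqs = 2.0 ** torch.arange(self.num_freqs, device=xy.device) * math.pi
        proj = xy[..., None] * freqs            # (N, 2, F)
        return torch.cat([proj.sin(), proj.cos()], dim=-1).flatten(1)

    def forward(self, xy):
        return self.net(self.encode(xy))


def denoise(noisy, steps=2000, lr=1e-3):
    """noisy: (H, W, 3) float tensor in [0, 1]; returns the INR reconstruction."""
    H, W, _ = noisy.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = noisy.reshape(-1, 3)
    model = CoordMLP()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):                      # early stopping is the only regulariser
        opt.zero_grad()
        loss = ((model(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model(coords).reshape(H, W, 3).clamp(0, 1)
```

In this zero-shot setting the only inputs are the degraded image and its coordinate grid; super-resolution and inpainting follow the same single-image fitting loop by querying the fitted MLP at coordinates that were never supervised.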
Related papers
- Single-Layer Learnable Activation for Implicit Neural Representation (SL$^{2}$A-INR) [6.572456394600755]
Implicit Neural Representation (INR), which leverages a neural network to transform coordinate inputs into corresponding attributes, has driven significant advances in vision-related domains.
We propose SL$^{2}$A-INR with a single-layer learnable activation function, improving on the effectiveness of traditional ReLU-based networks.
Our method achieves superior performance across diverse tasks, including image representation, 3D shape reconstruction, single image super-resolution, CT reconstruction, and novel view synthesis.
arXiv Detail & Related papers (2024-09-17T02:02:15Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - Deep Learning on Implicit Neural Representations of Shapes [14.596732196310978]
Implicit Neural Representations (INRs) have emerged as a powerful tool to continuously encode a variety of signals.
In this paper, we propose inr2vec, a framework that can compute a compact latent representation for an input INR in a single inference pass.
We verify that inr2vec can effectively embed the 3D shapes represented by the input INRs and show how the produced embeddings can be fed into deep learning pipelines.
arXiv Detail & Related papers (2023-02-10T18:55:49Z) - Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which largely increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer but more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z) - Rethinking Implicit Neural Representations for Vision Learners [27.888990902915626]
Implicit Neural Representations (INRs) are a powerful tool for parameterizing continuous signals in computer vision.
Existing INR methods suffer from two problems: 1) narrow theoretical definitions of INRs that are inapplicable to high-level tasks; 2) a lack of the representational capability of deep networks.
We propose an innovative Implicit Neural Representation Network (INRN), which is the first study of INRs to tackle both low-level and high-level tasks.
arXiv Detail & Related papers (2022-11-22T06:23:04Z) - Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators applied directly on the INR (a minimal autograd sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-10-17T06:29:07Z) - Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based implicit networks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes up to 2 orders of magnitude faster while using up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z) - Zero-shot Blind Image Denoising via Implicit Neural Representations [77.79032012459243]
We propose an alternative denoising strategy that leverages the architectural inductive bias of implicit neural representations (INRs).
We show that our method outperforms existing zero-shot denoising methods under an extensive set of low-noise or real-noise scenarios.
arXiv Detail & Related papers (2022-04-05T12:46:36Z) - Convolutional versus Self-Organized Operational Neural Networks for Real-World Blind Image Denoising [25.31981236136533]
We tackle the real-world blind image denoising problem by employing, for the first time, a deep Self-ONN.
Deep Self-ONNs consistently achieve superior results with performance gains of up to 1.76dB in PSNR.
arXiv Detail & Related papers (2021-03-04T14:49:17Z) - Attention Based Real Image Restoration [48.933507352496726]
Deep convolutional neural networks perform better on images containing synthetic degradations than on real-world degraded photographs.
This paper proposes a novel single-stage blind real image restoration network (R$2$Net).
arXiv Detail & Related papers (2020-04-26T04:21:49Z)
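As referenced in the Signal Processing for Implicit Neural Representations entry above, one appeal of operating on an INR rather than on a pixel grid is that differential operators come directly from automatic differentiation. The sketch below is a generic illustration of that idea only; the TinyINR model, its sine activations, and the image_gradient helper are assumptions, not the INSP-Net architecture.

```python
# Generic illustration (not the INSP-Net method): with a continuous INR f(x, y),
# spatial derivatives are obtained by autograd instead of finite differences
# on a discretised image.
import torch
import torch.nn as nn


class TinyINR(nn.Module):
    """A small SIREN-style network mapping (x, y) to a scalar intensity."""

    def __init__(self, hidden=64):
        super().__init__()
        self.l1 = nn.Linear(2, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.l3 = nn.Linear(hidden, 1)

    def forward(self, xy):
        h = torch.sin(30.0 * self.l1(xy))    # sine activations keep derivatives smooth
        h = torch.sin(self.l2(h))
        return self.l3(h)


def image_gradient(inr, coords):
    """Return df/dx and df/dy of the INR at the query coordinates."""
    coords = coords.clone().requires_grad_(True)
    values = inr(coords)
    grads, = torch.autograd.grad(values.sum(), coords, create_graph=True)
    return grads                              # shape (N, 2)


inr = TinyINR()
pts = torch.rand(5, 2) * 2 - 1                # query points in [-1, 1]^2
print(image_gradient(inr, pts).shape)         # torch.Size([5, 2])
```

Higher-order operators (e.g. a Laplacian) follow by differentiating the gradient again, which is what makes signal processing directly in the continuous domain attractive.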