Rethinking Implicit Neural Representations for Vision Learners
- URL: http://arxiv.org/abs/2211.12040v2
- Date: Wed, 23 Nov 2022 01:56:01 GMT
- Title: Rethinking Implicit Neural Representations for Vision Learners
- Authors: Yiran Song, Qianyu Zhou, Lizhuang Ma
- Abstract summary: Implicit Neural Representations (INRs) are a powerful way to parameterize continuous signals in computer vision.
Existing INR methods suffer from two problems: 1) the narrow theoretical definition of INRs is inapplicable to high-level tasks; 2) INRs lack the representation capabilities of deep networks.
We propose an innovative Implicit Neural Representation Network (INRN), the first study of INRs to tackle both low-level and high-level tasks.
- Score: 27.888990902915626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit Neural Representations (INRs) are a powerful way to parameterize
continuous signals in computer vision. However, almost all INR methods are
limited to low-level tasks, e.g., image/video compression, super-resolution,
and image generation. How to extend INRs to high-level tasks and deep
networks remains under-explored. Existing INR methods suffer from two
problems: 1) the narrow theoretical definition of INRs is inapplicable to
high-level tasks; 2) INRs lack the representation capabilities of deep networks.
Motivated by these observations, we reformulate the definition of INRs from a
novel perspective and propose an innovative Implicit Neural Representation
Network (INRN), the first study of INRs to tackle both low-level and
high-level tasks. Specifically, we present three key designs for the basic blocks
in INRN, along with two stacking strategies and their corresponding loss
functions. Extensive experiments and analysis on both a low-level task (image
fitting) and high-level vision tasks (image classification, object detection,
instance segmentation) demonstrate the effectiveness of the proposed method.
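To make the low-level "image fitting" setup referenced above concrete, the sketch below shows a generic coordinate MLP trained to map pixel coordinates to colors for a single image. This is only an illustration of the standard INR recipe under our own assumptions, not the authors' INRN; the paper's block designs, stacking strategies, and losses are not reproduced, and all names here are ours.

```python
# Minimal, generic coordinate-MLP sketch of "image fitting" with an INR.
# NOT the proposed INRN; a plain ReLU MLP is used purely for illustration.
import torch
import torch.nn as nn

class TinyINR(nn.Module):
    def __init__(self, hidden=256, depth=4):
        super().__init__()
        dims = [2] + [hidden] * depth + [3]            # (x, y) -> (r, g, b)
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, coords):                          # coords in [-1, 1]^2
        return torch.sigmoid(self.net(coords))          # RGB in [0, 1]

def fit_image(image, steps=2000, lr=1e-4):
    """Overfit a TinyINR to one H x W x 3 image tensor with values in [0, 1]."""
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = image.reshape(-1, 3)
    model = TinyINR()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(coords), target)
        loss.backward()
        opt.step()
    return model                                        # query at any (x, y)
```

Once fitted, the network itself is the image representation: it can be queried at arbitrary, even off-grid, coordinates, which is what makes INRs attractive for tasks like super-resolution.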
Related papers
- Single-Layer Learnable Activation for Implicit Neural Representation (SL$^{2}$A-INR) [6.572456394600755]
Implicit Neural Representation (INR), which leverages a neural network to transform coordinate inputs into corresponding attributes, has driven significant advances in vision-related domains.
We propose SL$^{2}$A-INR with a single-layer learnable activation function, improving on traditional ReLU-based MLPs.
Our method performs well across diverse tasks, including image representation, 3D shape reconstruction, single-image super-resolution, CT reconstruction, and novel view synthesis.
arXiv Detail & Related papers (2024-09-17T02:02:15Z) - Joint Admission Control and Resource Allocation of Virtual Network Embedding via Hierarchical Deep Reinforcement Learning [69.00997996453842]
We propose a hierarchical deep reinforcement learning approach, HRL-ACRA, to learn a joint admission control and resource allocation policy for virtual network embedding.
We show that HRL-ACRA outperforms state-of-the-art baselines in terms of both the acceptance ratio and long-term average revenue.
arXiv Detail & Related papers (2024-06-25T07:42:30Z) - One-stage Low-resolution Text Recognition with High-resolution Knowledge Transfer [53.02254290682613]
Current solutions for low-resolution text recognition typically rely on a two-stage pipeline.
We propose an efficient and effective knowledge distillation framework to achieve multi-level knowledge transfer.
Experiments show that the proposed one-stage pipeline significantly outperforms super-resolution-based two-stage frameworks.
arXiv Detail & Related papers (2023-08-05T02:33:45Z) - Revisiting Implicit Neural Representations in Low-Level Vision [20.3578908524788]
Implicit Neural Representation (INR) has gained increasing attention in computer vision in recent years.
We are interested in its effectiveness in low-level vision problems such as image restoration.
In this work, we revisit INR and investigate its application in low-level image restoration tasks.
arXiv Detail & Related papers (2023-04-20T12:19:27Z) - Deep Learning on Implicit Neural Representations of Shapes [14.596732196310978]
Implicit Neural Representations (INRs) have emerged as a powerful tool for continuously encoding a variety of signals.
In this paper, we propose inr2vec, a framework that can compute a compact latent representation for an input INR in a single inference pass.
We verify that inr2vec can effectively embed the 3D shapes represented by the input INRs and show how the produced embeddings can be fed into deep learning pipelines.
arXiv Detail & Related papers (2023-02-10T18:55:49Z) - RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have achieved remarkable performance in single-image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z) - Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multimedia data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, built from differential operators defined directly on the INR (a toy autograd sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-10-17T06:29:07Z) - Disentangled High Quality Salient Object Detection [8.416690566816305]
We propose a novel deep learning framework for high-resolution salient object detection (SOD).
It disentangles the task into a low-resolution saliency classification network (LRSCN) and a high-resolution refinement network (HRRN).
arXiv Detail & Related papers (2021-08-08T02:14:15Z) - Hierarchical Deep CNN Feature Set-Based Representation Learning for Robust Cross-Resolution Face Recognition [59.29808528182607]
Cross-resolution face recognition (CRFR) is important in intelligent surveillance and biometric forensics.
Existing shallow-learning and deep-learning methods focus on mapping high-resolution (HR) and low-resolution (LR) face pairs into a joint feature space.
In this study, we aim to fully exploit the multi-level deep convolutional neural network (CNN) feature set for robust CRFR.
arXiv Detail & Related papers (2021-03-25T14:03:42Z) - Iterative Network for Image Super-Resolution [69.07361550998318]
Single-image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into conventional SISR algorithms and proposes a substantially different approach based on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
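As a companion to the "Signal Processing for Implicit Neural Representations" entry above, the toy sketch below illustrates, under our own assumptions rather than the INSP-Net design itself, why operators such as spatial gradients can be applied to an INR directly through automatic differentiation instead of on a discretized grid.

```python
# Toy sketch (our assumption, not the INSP-Net architecture): an INR is a
# differentiable function of its query coordinates, so signal-processing
# quantities such as spatial gradients can be obtained with autograd
# rather than by operating on a discretized pixel grid.
import torch

def inr_spatial_gradient(model, coords):
    """Gradient of the summed INR output w.r.t. the query coordinates.

    For a scalar-output INR this is exactly the spatial gradient of the
    represented signal at `coords`; `model` can be any coordinate MLP,
    e.g. the TinyINR sketch shown earlier.
    """
    coords = coords.clone().requires_grad_(True)
    out = model(coords).sum()                  # scalar so autograd.grad applies
    (grad,) = torch.autograd.grad(out, coords)
    return grad                                # same shape as coords
```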
This list is automatically generated from the titles and abstracts of the papers on this site.