DINER: Disorder-Invariant Implicit Neural Representation
- URL: http://arxiv.org/abs/2211.07871v1
- Date: Tue, 15 Nov 2022 03:34:24 GMT
- Title: DINER: Disorder-Invariant Implicit Neural Representation
- Authors: Shaowen Xie, Hao Zhu, Zhen Liu, Qi Zhang, You Zhou, Xun Cao, Zhan Ma
- Abstract summary: Implicit neural representation (INR) characterizes the attributes of a signal as a function of corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER) by augmenting a traditional INR backbone with a hash-table.
- Score: 33.10256713209207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representation (INR) characterizes the attributes of a signal
as a function of corresponding coordinates, and has emerged as a powerful tool for
solving inverse problems. However, the capacity of INR is limited by the
spectral bias in network training. In this paper, we find that this
frequency-related problem can be largely alleviated by re-arranging the
coordinates of the input signal, for which we propose the disorder-invariant
implicit neural representation (DINER), which augments a traditional INR
backbone with a hash-table. Given discrete signals sharing the same histogram of
attributes but different arrangement orders, the hash-table projects their
coordinates into the same distribution, so the mapped signal can be
better modeled by the subsequent INR network, significantly alleviating the
spectral bias. Experiments not only demonstrate that DINER generalizes across
different INR backbones (MLP vs. SIREN) and various tasks
(image/video representation, phase retrieval, and refractive index recovery),
but also show its superiority over state-of-the-art algorithms in both
quality and speed.
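The hash-table-plus-backbone pipeline described in the abstract can be sketched in a few lines. This is an illustrative reading of the abstract, not the authors' code: the table entries and MLP weights are random stand-ins for parameters that would normally be trained jointly.

```python
import numpy as np

# DINER-style forward pass (illustrative sketch, assumed interface): a
# learnable hash-table maps each discrete coordinate index to a new
# low-dimensional coordinate, which a small MLP backbone then maps to the
# signal attribute. Here both the table and the MLP are random, untrained.

rng = np.random.default_rng(0)

N, D, H = 64, 2, 32   # signal samples, mapped-coordinate dim, hidden width

hash_table = rng.normal(size=(N, D))          # one learnable entry per sample
W1 = rng.normal(size=(D, H)); b1 = np.zeros(H)
W2 = rng.normal(size=(H, 1)); b2 = np.zeros(1)

def diner_forward(indices):
    """Look up mapped coordinates, then run the INR backbone on them."""
    x = hash_table[indices]                   # (B, D) mapped coordinates
    h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer, (B, H)
    return h @ W2 + b2                        # (B, 1) predicted attributes

out = diner_forward(np.arange(N))
print(out.shape)
```

Note how the "disorder-invariant" property falls out of the design: reordering the signal samples only permutes the hash-table rows, so the distribution of mapped coordinates, and hence the function the backbone must fit, is unchanged.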
Related papers
- Single-Layer Learnable Activation for Implicit Neural Representation (SL$^{2}$A-INR) [6.572456394600755]
Implicit Neural Representation (INR), which leverages a neural network to transform coordinate inputs into corresponding attributes, has driven significant advances in vision-related domains.
We propose SL$^{2}$A-INR, which uses a single-layer learnable activation function, improving on traditional ReLU-based networks.
Our method performs superiorly across diverse tasks, including image representation, 3D shape reconstruction, single-image super-resolution, CT reconstruction, and novel view synthesis.
arXiv Detail & Related papers (2024-09-17T02:02:15Z)
- Learning Transferable Features for Implicit Neural Representations [37.12083836826336]
Implicit neural representations (INRs) have demonstrated success in a variety of applications, including inverse problems and neural rendering.
We introduce a new INR training framework, STRAINER, that learns transferable features for fitting INRs to new signals.
We evaluate STRAINER on multiple in-domain and out-of-domain signal fitting tasks and inverse problems.
arXiv Detail & Related papers (2024-09-15T00:53:44Z)
- Conv-INR: Convolutional Implicit Neural Representation for Multimodal Visual Signals [2.7195102129095003]
Implicit neural representation (INR) has recently emerged as a promising paradigm for signal representations.
This paper proposes Conv-INR, the first INR model fully based on convolution.
arXiv Detail & Related papers (2024-06-06T16:52:42Z)
- Locality-Aware Generalizable Implicit Neural Representation [54.93702310461174]
Generalizable implicit neural representation (INR) enables a single continuous function to represent multiple data instances.
We propose a novel framework for generalizable INR that combines a transformer encoder with a locality-aware INR decoder.
Our framework significantly outperforms previous generalizable INRs and validates the usefulness of the locality-aware latents for downstream tasks.
arXiv Detail & Related papers (2023-10-09T11:26:58Z)
- Disorder-invariant Implicit Neural Representation [32.510321385245774]
Implicit neural representation (INR) characterizes the attributes of a signal as a function of corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER) by augmenting a traditional INR backbone with a hash-table.
arXiv Detail & Related papers (2023-04-03T09:28:48Z)
- Regularize implicit neural representation by itself [48.194276790352006]
This paper proposes a regularizer called Implicit Neural Representation Regularizer (INRR) to improve the generalization ability of the Implicit Neural Representation (INR).
The proposed INRR is based on learned Dirichlet Energy (DE) that measures similarities between rows/columns of the matrix.
The paper also reveals a series of properties derived from INRR, including momentum-like behavior in the convergence trajectory and multi-scale similarity.
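The Dirichlet Energy mentioned above has a compact form. This is a hedged sketch of one standard formulation (my reading of the blurb, not necessarily the paper's exact definition): given a learned similarity matrix A over the rows of the represented matrix X, the energy tr(Xᵀ L X) with graph Laplacian L = deg(A) − A penalizes large differences between rows marked similar.

```python
import numpy as np

# Dirichlet-Energy regularizer sketch (assumed formulation): similar rows of
# X, as judged by the similarity matrix A, are pulled together. Here A is a
# random stand-in for the matrix that INRR would learn.

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 5))           # matrix currently fitted by the INR
A = np.abs(rng.normal(size=(8, 8)))   # stand-in for *learned* similarities
A = (A + A.T) / 2                     # symmetrise

def dirichlet_energy(X, A):
    L = np.diag(A.sum(axis=1)) - A    # combinatorial graph Laplacian
    return float(np.trace(X.T @ L @ X))

print(dirichlet_energy(X, A))
```

A useful identity for intuition: tr(Xᵀ L X) equals ½ Σᵢⱼ Aᵢⱼ ‖xᵢ − xⱼ‖², i.e. a similarity-weighted sum of squared row differences, which is always non-negative for non-negative A.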
arXiv Detail & Related papers (2023-03-27T04:11:08Z) - Modality-Agnostic Variational Compression of Implicit Neural
Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data, parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z) - Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multimedia data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
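Why differential operators apply directly to an INR: a network is a closed-form function of its input coordinates, so derivatives exist analytically and no discretized grid is needed. A minimal sketch under my own assumptions (a one-hidden-layer SIREN-like network, not INSP-Net's architecture), checked against a finite difference:

```python
import numpy as np

# Differential operator on an INR (illustrative sketch): for
# f(x) = W2 @ sin(W1 * x + b1), d/dx f is available in closed form via the
# chain rule, so the operator acts on the continuous representation itself.

rng = np.random.default_rng(2)
W1 = rng.normal(size=(16, 1)); b1 = rng.normal(size=(16, 1))
W2 = rng.normal(size=(1, 16))

def f(x):
    return (W2 @ np.sin(W1 * x + b1)).item()

def df(x):
    # chain rule: d/dx sin(W1 x + b1) = cos(W1 x + b1) * W1
    return (W2 @ (np.cos(W1 * x + b1) * W1)).item()

x0, eps = 0.3, 1e-5
fd = (f(x0 + eps) - f(x0 - eps)) / (2 * eps)   # finite-difference check
print(abs(df(x0) - fd))
```

In practice frameworks with automatic differentiation play the role of the hand-written chain rule here; the point is only that the operator never touches a sampled grid.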
arXiv Detail & Related papers (2022-10-17T06:29:07Z) - Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based implicit networks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes two orders of magnitude faster with up to 98% less input data.
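The dictionary idea above can be pictured as a shared bank of small basis networks mixed by per-signal coefficients. This is a hedged sketch under my own assumptions (random tiny ReLU experts, scalar coordinates), not the paper's mixture-of-expert training scheme:

```python
import numpy as np

# Neural-Implicit-Dictionary sketch (assumed structure): a shared bank of E
# coordinate-based "expert" basis networks is evaluated at each coordinate,
# and a new signal is represented by only its mixing coefficients alpha.

rng = np.random.default_rng(3)
E, H = 4, 8                                  # experts, hidden width per expert

W1 = rng.normal(size=(E, H)); b1 = rng.normal(size=(E, H))
w2 = rng.normal(size=(E, H))

def experts(x):
    """Evaluate all E basis networks at scalar coordinate x -> shape (E,)."""
    h = np.maximum(W1 * x + b1, 0.0)         # (E, H) ReLU hidden units
    return (w2 * h).sum(axis=1)              # (E,) one scalar per expert

alpha = rng.normal(size=(E,))                # per-signal coefficients (learned)

def nid(x):
    return float(alpha @ experts(x))

print(nid(0.5))
```

Fitting a new signal then means optimizing only `alpha` while the expert bank stays fixed, which is where the claimed data and training efficiency would come from.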
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained on multi-channel data generated from the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.