Disorder-invariant Implicit Neural Representation
- URL: http://arxiv.org/abs/2304.00837v1
- Date: Mon, 3 Apr 2023 09:28:48 GMT
- Title: Disorder-invariant Implicit Neural Representation
- Authors: Hao Zhu, Shaowen Xie, Zhen Liu, Fengyi Liu, Qi Zhang, You Zhou, Yi
Lin, Zhan Ma, Xun Cao
- Abstract summary: Implicit neural representation (INR) characterizes the attributes of a signal as a function of corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER) by augmenting a hash-table to a traditional INR backbone.
- Score: 32.510321385245774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representation (INR) characterizes the attributes of a signal
as a function of the corresponding coordinates, and has emerged as a powerful
tool for solving inverse problems. However, the expressive power of INR is
limited by the spectral bias in network training. In this paper, we find that
this frequency-related problem can be largely alleviated by re-arranging the
coordinates of the input signal, for which we propose the disorder-invariant
implicit neural representation (DINER) by augmenting a hash-table to a
traditional INR backbone. Given discrete signals sharing the same histogram of
attributes but different arrangement orders, the hash-table projects their
coordinates into the same distribution, so that the mapped signal can be
better modeled by the subsequent INR network, significantly alleviating the
spectral bias. Furthermore, the expressive power of the DINER is
determined by the width of the hash-table. Different widths correspond to
different geometrical elements in the attribute space, \textit{e.g.}, a 1D
curve, a 2D curved-plane and a 3D curved-volume when the width is set to $1$,
$2$ and $3$, respectively. The more area these geometrical elements cover, the
stronger the expressive power. Experiments not only reveal the generalization of the DINER
expressive power. Experiments not only reveal the generalization of the DINER
for different INR backbones (MLP vs. SIREN) and various tasks (image/video
representation, phase retrieval, refractive index recovery, and neural radiance
field optimization) but also show the superiority over the state-of-the-art
algorithms both in quality and speed. \textit{Project page:}
\url{https://ezio77.github.io/DINER-website/}
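To make the mechanism above concrete, here is a minimal, hypothetical PyTorch sketch of the DINER idea: a full-resolution learnable hash-table maps each discrete coordinate index to a low-dimensional mapped coordinate, which a small SIREN-style backbone then decodes. All names, sizes and training details are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    """Sine activation used in SIREN-style backbones."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * x)

class DINERSketch(nn.Module):
    """Illustrative DINER-style model: a full-resolution learnable hash-table
    maps each discrete coordinate index to a low-dimensional coordinate, which
    is then decoded by a small INR backbone (MLP with sine activations)."""
    def __init__(self, num_coords, table_width=2, hidden=64, out_dim=3):
        super().__init__()
        # One learnable entry per input coordinate (the "hash-table").
        self.table = nn.Parameter(torch.empty(num_coords, table_width).uniform_(-1, 1))
        self.backbone = nn.Sequential(
            nn.Linear(table_width, hidden), Sine(),
            nn.Linear(hidden, hidden), Sine(),
            nn.Linear(hidden, out_dim),
        )
    def forward(self, idx):
        # idx: (B,) integer indices of the original coordinates.
        mapped = self.table[idx]          # (B, table_width) learned coordinates
        return self.backbone(mapped)      # (B, out_dim) predicted attributes

# Toy usage: fit a 64x64 RGB image by indexing its pixels.
if __name__ == "__main__":
    H, W = 64, 64
    target = torch.rand(H * W, 3)                     # stand-in signal
    model = DINERSketch(num_coords=H * W, table_width=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    idx = torch.arange(H * W)
    for _ in range(100):
        opt.zero_grad()
        loss = ((model(idx) - target) ** 2).mean()
        loss.backward()
        opt.step()
```

In this sketch, setting table_width to 1, 2 or 3 plays the role of the 1D, 2D and 3D geometrical elements discussed in the abstract.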
Related papers
- FINER++: Building a Family of Variable-periodic Functions for Activating Implicit Neural Representation [39.116375158815515]
Implicit Neural Representation (INR) is causing a revolution in the field of signal processing.
INR techniques suffer from the "frequency"-specified spectral bias and capacity-convergence gap.
We propose the FINER++ framework by extending existing periodic/non-periodic activation functions to variable-periodic ones.
arXiv Detail & Related papers (2024-07-28T09:24:57Z)
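As a rough illustration of the variable-periodic idea (not the exact FINER++ formulation), one can scale the phase of a sine activation by a term that grows with the input magnitude, so different neurons cover different frequencies. The sin(w0·(|x|+1)·x) form below is an assumption for illustration.

```python
import torch
import torch.nn as nn

class VariablePeriodicSine(nn.Module):
    """Sketch of a variable-periodic activation in the spirit of FINER++:
    the local frequency grows with |x|, so different neurons can cover
    different parts of the spectrum. The exact form used in the paper may
    differ; this is only an illustration."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * (torch.abs(x) + 1.0) * x)

layer = nn.Sequential(nn.Linear(2, 64), VariablePeriodicSine())
print(layer(torch.rand(8, 2)).shape)  # torch.Size([8, 64])
```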
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Locality-Aware Generalizable Implicit Neural Representation [54.93702310461174]
Generalizable implicit neural representation (INR) enables a single continuous function to represent multiple data instances.
We propose a novel framework for generalizable INR that combines a transformer encoder with a locality-aware INR decoder.
Our framework significantly outperforms previous generalizable INRs and validates the usefulness of the locality-aware latents for downstream tasks.
arXiv Detail & Related papers (2023-10-09T11:26:58Z)
- NeuRBF: A Neural Fields Representation with Adaptive Radial Basis Functions [93.02515761070201]
We present a novel type of neural fields that uses general radial bases for signal representation.
Our method builds upon general radial bases with flexible kernel position and shape, which have higher spatial adaptivity and can more closely fit target signals.
When applied to neural radiance field reconstruction, our method achieves state-of-the-art rendering quality, with small model size and comparable training speed.
arXiv Detail & Related papers (2023-09-27T06:32:05Z)
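A toy sketch of the radial-basis idea follows: learnable centers and widths define Gaussian bases whose responses mix per-basis features, which a tiny decoder maps to the output. The kernel choice, feature mixing and adaptivity scheme here are assumptions for illustration, not NeuRBF's actual design.

```python
import torch
import torch.nn as nn

class RBFFieldSketch(nn.Module):
    """Toy radial-basis-function field: basis functions with learnable centers
    and inverse widths produce responses that blend per-basis features, which
    a small decoder maps to the output signal."""
    def __init__(self, num_bases=256, in_dim=2, feat_dim=16, out_dim=3):
        super().__init__()
        self.centers = nn.Parameter(torch.rand(num_bases, in_dim) * 2 - 1)
        self.inv_width = nn.Parameter(torch.ones(num_bases) * 10.0)
        self.features = nn.Parameter(torch.randn(num_bases, feat_dim) * 0.1)
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, out_dim))
    def forward(self, x):
        # x: (B, in_dim) coordinates in [-1, 1]
        d2 = ((x[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)  # (B, N)
        w = torch.exp(-self.inv_width[None, :] ** 2 * d2)               # Gaussian RBF
        feat = w @ self.features                                        # (B, feat_dim)
        return self.decoder(feat)

print(RBFFieldSketch()(torch.rand(4, 2) * 2 - 1).shape)  # torch.Size([4, 3])
```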
- SPDER: Semiperiodic Damping-Enabled Object Representation [7.4297019016687535]
We present a neural network architecture designed to naturally learn a positional embedding.
The proposed architecture, SPDER, is a simple MLP that uses an activation function composed of a sinusoid multiplied by a sublinear function.
Our results indicate that SPDERs speed up training by 10x and converge to losses 1,500-50,000x lower than those of the state-of-the-art for image representation.
arXiv Detail & Related papers (2023-06-27T06:49:40Z)
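Based only on the description above, a plausible sketch of such an activation is a sine damped by a sublinear term; the sqrt(|x|) choice below is an assumption for illustration, not necessarily the exact function used in SPDER.

```python
import torch
import torch.nn as nn

class SPDERActivation(nn.Module):
    """Sketch of a semiperiodic, damping-enabled activation: a sinusoid
    multiplied by a sublinear function (here sqrt(|x|), an illustrative
    choice)."""
    def forward(self, x):
        return torch.sin(x) * torch.sqrt(torch.abs(x) + 1e-8)

mlp = nn.Sequential(nn.Linear(2, 64), SPDERActivation(),
                    nn.Linear(64, 64), SPDERActivation(),
                    nn.Linear(64, 3))
print(mlp(torch.rand(8, 2)).shape)  # torch.Size([8, 3])
```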
- Modality-Agnostic Variational Compression of Implicit Neural Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z)
- DINER: Disorder-Invariant Implicit Neural Representation [33.10256713209207]
Implicit neural representation (INR) characterizes the attributes of a signal as a function of corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER) by augmenting a hash-table to a traditional INR backbone.
arXiv Detail & Related papers (2022-11-15T03:34:24Z)
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations via processing on their discretized instance.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
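The notion of applying differential operators directly to an INR can be illustrated with automatic differentiation: the toy snippet below computes the Laplacian of a coordinate MLP, a continuous analogue of filtering the represented signal. It is a generic autograd sketch, not the INSP-Net architecture.

```python
import torch
import torch.nn as nn

# A tiny INR (coordinate MLP); only meant to illustrate applying a
# differential operator to a continuous representation.
inr = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def laplacian(f, coords):
    """Compute the Laplacian of a scalar INR f at the given coordinates
    using automatic differentiation."""
    coords = coords.clone().requires_grad_(True)
    y = f(coords)
    grad = torch.autograd.grad(y.sum(), coords, create_graph=True)[0]  # (B, 2)
    lap = 0.0
    for i in range(coords.shape[1]):
        lap = lap + torch.autograd.grad(grad[:, i].sum(), coords,
                                        create_graph=True)[0][:, i]
    return lap  # (B,)

print(laplacian(inr, torch.rand(16, 2)).shape)  # torch.Size([16])
```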
- Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based implicit networks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes about 2 orders of magnitude faster while using up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
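A hypothetical sketch of the dictionary idea: a coordinate-conditioned gate softly combines a bank of small coordinate-based networks, so that the ensemble spans a richer function space. The gating scheme and sizes are assumptions, not the paper's mixture-of-expert training procedure.

```python
import torch
import torch.nn as nn

class ImplicitDictionarySketch(nn.Module):
    """Toy neural implicit dictionary: a soft gate over a bank of small
    coordinate-based networks ("experts") whose outputs are blended per
    query point. Illustrative only."""
    def __init__(self, num_experts=8, in_dim=2, hidden=32, out_dim=3):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, out_dim))
            for _ in range(num_experts)
        ])
        self.gate = nn.Sequential(nn.Linear(in_dim, num_experts), nn.Softmax(dim=-1))
    def forward(self, x):
        # x: (B, in_dim) coordinates
        weights = self.gate(x)                                   # (B, E)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, out_dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)         # (B, out_dim)

print(ImplicitDictionarySketch()(torch.rand(5, 2)).shape)  # torch.Size([5, 3])
```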
This list is automatically generated from the titles and abstracts of the papers in this site.