A Structured Dictionary Perspective on Implicit Neural Representations
- URL: http://arxiv.org/abs/2112.01917v1
- Date: Fri, 3 Dec 2021 14:00:52 GMT
- Title: A Structured Dictionary Perspective on Implicit Neural Representations
- Authors: Gizem Yüce, Guillermo Ortiz-Jiménez, Beril Besbinar, Pascal Frossard
- Abstract summary: We show that most INR families are analogous to structured signal dictionaries whose atoms are integer harmonics of the set of initial mapping frequencies.
We explore the inductive bias of INRs by exploiting recent results about the empirical neural tangent kernel (NTK).
Our results make it possible to design and tune novel INR architectures, but they can also be of interest to the wider deep learning theory community.
- Score: 47.35227614605095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Propelled by new designs that make it possible to circumvent the spectral bias,
implicit neural representations (INRs) have recently emerged as a promising
alternative to classical discretized representations of signals. Nevertheless,
despite their practical success, we still lack a proper theoretical
characterization of how INRs represent signals. In this work, we aim to fill
this gap, and we propose a novel unified perspective to theoretically analyse
INRs. Leveraging results from harmonic analysis and deep learning theory, we
show that most INR families are analogous to structured signal dictionaries
whose atoms are integer harmonics of the set of initial mapping frequencies.
This structure allows INRs to express signals with an exponentially increasing
frequency support using a number of parameters that only grows linearly with
depth. Afterwards, we explore the inductive bias of INRs by exploiting recent
results about the empirical neural tangent kernel (NTK). Specifically, we show
that the eigenfunctions of the NTK can be seen as dictionary atoms whose inner
product with the target signal determines the final performance of their
reconstruction. In this regard, we reveal that meta-learning the initialization
has a reshaping effect on the NTK analogous to dictionary learning, building
dictionary atoms as a combination of the examples seen during meta-training.
Our results make it possible to design and tune novel INR architectures, but they
can also be of interest to the wider deep learning theory community.
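To make the dictionary picture concrete, the following minimal sketch (not code from the paper; the frequencies, layer widths, and squaring nonlinearity are illustrative assumptions) checks numerically that a network built from a sinusoidal input mapping followed by a polynomial activation only places energy at integer combinations of the initial mapping frequencies:

```python
# Minimal sketch, assuming a Fourier-feature encoding with frequencies {1, 3} and a
# degree-2 polynomial activation; these choices are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
N = 1024
x = np.arange(N) / N                                   # 1-D coordinates on [0, 1)

freqs = np.array([1.0, 3.0])                           # initial mapping frequencies
phases = 2 * np.pi * x[:, None] * freqs[None, :]
gamma = np.concatenate([np.sin(phases), np.cos(phases)], axis=1)  # sinusoidal encoding

W1 = rng.standard_normal((gamma.shape[1], 16))
W2 = rng.standard_normal((16, 1))
hidden = (gamma @ W1) ** 2                             # polynomial activation -> products of sinusoids
y = (hidden @ W2).ravel()                              # scalar network output per coordinate

spectrum = np.abs(np.fft.rfft(y - y.mean()))
active = np.nonzero(spectrum > 1e-6 * spectrum.max())[0]
print("active frequencies (cycles per signal):", active)  # only sums/differences of {1, 3}: 2, 4, 6
```

The output spectrum is supported only on the sum and difference harmonics of the encoding frequencies, and stacking further polynomial or sinusoidal layers multiplies the reachable harmonic order, which is the mechanism behind the exponential growth of frequency support with linearly many parameters described above.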
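For the NTK part of the argument, the standard linearized-training result for kernel regression (stated here as generic background, not quoted from the paper) makes the dictionary reading of the eigenfunctions explicit:

```latex
% NTK-regime gradient flow on the squared loss, assuming kernel eigenpairs
% (\lambda_i, \phi_i), target signal f^*, learning rate \eta, and zero network
% output at initialization (a standard simplifying assumption):
f_t(x) \;\approx\; \sum_i \bigl(1 - e^{-\eta \lambda_i t}\bigr)\,
       \langle f^*, \phi_i \rangle\, \phi_i(x)
```

Components of the target that align with large-eigenvalue eigenfunctions are fitted first, so the inner products $\langle f^*, \phi_i \rangle$ govern reconstruction quality; reshaping the eigenfunctions, for instance by meta-learning the initialization, then plays the role of learning a dictionary adapted to the signal class.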
Related papers
- Joint Diffusion Processes as an Inductive Bias in Sheaf Neural Networks [14.224234978509026]
Sheaf Neural Networks (SNNs) naturally extend Graph Neural Networks (GNNs).
We propose two novel sheaf learning approaches that provide a more intuitive understanding of the involved structure maps.
In our evaluation, we show the limitations of the real-world benchmarks used so far on SNNs.
arXiv Detail & Related papers (2024-07-30T07:17:46Z)
- A Sampling Theory Perspective on Activations for Implicit Neural Representations [73.6637608397055]
Implicit Neural Representations (INRs) have gained popularity for encoding signals as compact, differentiable entities.
We conduct a comprehensive analysis of these activations from a sampling theory perspective.
Our investigation reveals that sinc activations, previously unused in conjunction with INRs, are theoretically optimal for signal encoding.
arXiv Detail & Related papers (2024-02-08T05:52:45Z)
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
- Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the NTK has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of the NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Convolutional Dictionary Learning by End-To-End Training of Iterative Neural Networks [3.6280929178575994]
In this work, we construct an INN which can be used as a supervised and physics-informed online convolutional dictionary learning algorithm.
We show that the proposed INN improves over two conventional model-agnostic training methods and also yields competitive results compared to a deep INN.
arXiv Detail & Related papers (2022-06-09T12:15:38Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the $\Pi$-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of the higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)