Implicit Neural Representations and the Algebra of Complex Wavelets
- URL: http://arxiv.org/abs/2310.00545v1
- Date: Sun, 1 Oct 2023 02:01:28 GMT
- Title: Implicit Neural Representations and the Algebra of Complex Wavelets
- Authors: T. Mitchell Roddenberry, Vishwanath Saragadam, Maarten V. de Hoop,
Richard G. Baraniuk
- Abstract summary: Implicit neural representations (INRs) have arisen as useful methods for representing signals on Euclidean domains.
By parameterizing an image as a multilayer perceptron (MLP) on Euclidean space, INRs effectively represent signals in a way that couples spatial and spectral features of the signal, a coupling that is not obvious in the usual discrete representation.
- Score: 36.311212480600794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit neural representations (INRs) have arisen as useful methods for
representing signals on Euclidean domains. By parameterizing an image as a
multilayer perceptron (MLP) on Euclidean space, INRs effectively represent
signals in a way that couples spatial and spectral features of the signal, a
coupling that is not obvious in the usual discrete representation. This paves
the way for
continuous signal processing and machine learning approaches that were not
previously possible. Although INRs using sinusoidal activation functions have
been studied in terms of Fourier theory, recent works have shown the advantage
of using wavelets instead of sinusoids as activation functions, due to their
ability to simultaneously localize in both frequency and space. In this work,
we analyze such INRs and demonstrate how they resolve high-frequency features
of signals from coarse approximations formed in the first layer of the MLP. This
leads to multiple prescriptions for the design of INR architectures, including
the use of complex wavelets, decoupling of low and band-pass approximations,
and initialization schemes based on the singularities of the desired signal.
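
To make these prescriptions concrete, here is a minimal sketch (in PyTorch; not the authors' code) of an INR whose layers use complex Gabor wavelet activations. The hyperparameters omega0 and s0 and all layer sizes are illustrative assumptions; the real first layer followed by complex-valued later layers mirrors the coarse-to-fine picture described in the abstract.

```python
import torch
import torch.nn as nn

def gabor(z, omega0=10.0, s0=10.0):
    # Complex Gabor wavelet activation: a sinusoid windowed by a Gaussian,
    # so each hidden unit is localized in both space and frequency.
    return torch.exp(1j * omega0 * z - (s0 * z.abs()) ** 2)

class WaveletINR(nn.Module):
    """Minimal complex-wavelet INR sketch.

    The first layer lays down a coarse wavelet approximation of the
    signal; subsequent complex-valued layers combine those atoms into
    progressively higher-frequency detail.
    """
    def __init__(self, in_features=2, hidden=256, depth=3, out_features=1):
        super().__init__()
        self.first = nn.Linear(in_features, hidden)
        self.body = nn.ModuleList(
            nn.Linear(hidden, hidden, dtype=torch.cfloat)
            for _ in range(depth - 1)
        )
        self.head = nn.Linear(hidden, out_features, dtype=torch.cfloat)

    def forward(self, coords):
        h = gabor(self.first(coords))   # coarse first-layer approximation
        for layer in self.body:
            h = gabor(layer(h))         # resolve finer features
        return self.head(h).real        # real-valued signal estimate

# Usage: fit an image by regressing pixel values on coordinates in [-1, 1]^2.
model = WaveletINR()
coords = torch.rand(1024, 2) * 2 - 1
pred = model(coords)                    # shape (1024, 1)
```

The Gaussian window in gabor() is what gives each unit joint space-frequency localization; removing it leaves a purely sinusoidal network of the Siren type listed below.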
Related papers
- FreSh: Frequency Shifting for Accelerated Neural Representation Learning [11.175745750843484]
Implicit Neural Representations (INRs) have recently gained attention as a powerful approach for continuously representing signals such as images, videos, and 3D shapes using multilayer perceptrons (MLPs).
These MLPs are known to exhibit a low-frequency bias, limiting their ability to capture high-frequency details accurately.
We propose frequency shifting (or FreSh) to align the frequency spectrum of the initial output with that of the target signal.
arXiv Detail & Related papers (2024-10-07T14:05:57Z)
- Synergistic Integration of Coordinate Network and Tensorial Feature for Improving Neural Radiance Fields from Sparse Inputs [26.901819636977912]
We propose a method that integrates a multi-plane representation with a coordinate-based network, which is known for its strong bias toward low-frequency signals.
We demonstrate that our proposed method outperforms baseline models for both static and dynamic NeRFs with sparse inputs.
arXiv Detail & Related papers (2024-05-13T15:42:46Z)
- A Sampling Theory Perspective on Activations for Implicit Neural Representations [73.6637608397055]
Implicit Neural Representations (INRs) have gained popularity for encoding signals as compact, differentiable entities.
We conduct a comprehensive analysis of commonly used INR activation functions from a sampling theory perspective.
Our investigation reveals that sinc activations, previously unused in conjunction with INRs, are theoretically optimal for signal encoding (a sketch of a sinc-activated layer follows this list).
arXiv Detail & Related papers (2024-02-08T05:52:45Z)
- DINER: Disorder-Invariant Implicit Neural Representation [33.10256713209207]
An implicit neural representation (INR) characterizes the attributes of a signal as a function of the corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER), which augments a traditional INR backbone with a hash table.
arXiv Detail & Related papers (2022-11-15T03:34:24Z)
- Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue for representing general signals.
The current approach, however, is difficult to scale to a large number of signals or to a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z)
- Multi-Head ReLU Implicit Neural Representation Networks [3.04585143845864]
A novel multi-head multilayer perceptron (MLP) structure is presented for implicit neural representation (INR).
We show that the proposed model does not suffer from the spectral bias of conventional ReLU networks and has superior representation capabilities.
arXiv Detail & Related papers (2021-10-07T13:27:35Z)
- Modulated Periodic Activations for Generalizable Local Functional Representations [113.64179351957888]
We present a new representation that generalizes to multiple instances and achieves state-of-the-art fidelity.
Our approach produces general functional representations of images, videos and shapes, and achieves higher reconstruction quality than prior works that are optimized for a single signal.
arXiv Detail & Related papers (2021-04-08T17:59:04Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain, using a combination of sine functions, for training BNNs.
Experiments on several benchmark datasets and neural architectures illustrate that binary networks learned with our method achieve state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
- Implicit Neural Representations with Periodic Activation Functions [109.2353097792111]
Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm.
We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
We show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations, the Poisson equation, and the Helmholtz and wave equations (a minimal Siren sketch follows this list).
arXiv Detail & Related papers (2020-06-17T05:13:33Z)
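
For the sampling-theory entry above, a sinc-activated layer is straightforward to write down. The sketch below is a hypothetical reading of that idea, assuming the activation is applied as sinc(omega0 * (Wx + b)); omega0 and the layer sizes are assumptions, not values taken from that paper.

```python
import torch
import torch.nn as nn

class SincLayer(nn.Module):
    """Hypothetical sinc-activated INR layer: sinc(omega0 * (Wx + b))."""
    def __init__(self, in_features, out_features, omega0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.omega0 = omega0

    def forward(self, x):
        # torch.sinc is the normalized sinc, sin(pi*t)/(pi*t), with sinc(0) = 1.
        return torch.sinc(self.omega0 * self.linear(x))
```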
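
For the Siren entry above, here is a minimal sketch of a sinusoidal INR with the layer-wise uniform initialization proposed in that paper (first layer uniform in [-1/n, 1/n], later layers in [-sqrt(6/n)/omega0, sqrt(6/n)/omega0]); omega0 = 30 is the paper's default, while the network sizes here are arbitrary.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Applies sin(omega0 * (Wx + b)) with Siren-style weight initialization."""
    def __init__(self, in_features, out_features, omega0=30.0, first=False):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.omega0 = omega0
        with torch.no_grad():
            # The first layer spans the input range; later layers keep the
            # pre-activation distribution stable across depth.
            bound = 1 / in_features if first else math.sqrt(6 / in_features) / omega0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))

class Siren(nn.Module):
    def __init__(self, in_features=2, hidden=256, depth=3, out_features=1):
        super().__init__()
        layers = [SineLayer(in_features, hidden, first=True)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
        layers += [nn.Linear(hidden, out_features)]  # linear output head
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)
```

Because sin is smooth, a Siren's derivatives are themselves Siren-like networks, which is what makes the boundary value problems above tractable.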