Implicit Neural Representations with Periodic Activation Functions
- URL: http://arxiv.org/abs/2006.09661v1
- Date: Wed, 17 Jun 2020 05:13:33 GMT
- Title: Implicit Neural Representations with Periodic Activation Functions
- Authors: Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B.
Lindell, Gordon Wetzstein
- Abstract summary: Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm.
We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
We show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations, the Poisson equation, and the Helmholtz and wave equations.
- Score: 109.2353097792111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicitly defined, continuous, differentiable signal representations
parameterized by neural networks have emerged as a powerful paradigm, offering
many possible benefits over conventional representations. However, current
network architectures for such implicit neural representations are incapable of
modeling signals with fine detail, and fail to represent a signal's spatial and
temporal derivatives, despite the fact that these are essential to many
physical signals defined implicitly as the solution to partial differential
equations. We propose to leverage periodic activation functions for implicit
neural representations and demonstrate that these networks, dubbed sinusoidal
representation networks or Sirens, are ideally suited for representing complex
natural signals and their derivatives. We analyze Siren activation statistics
to propose a principled initialization scheme and demonstrate the
representation of images, wavefields, video, sound, and their derivatives.
Further, we show how Sirens can be leveraged to solve challenging boundary
value problems, such as particular Eikonal equations (yielding signed distance
functions), the Poisson equation, and the Helmholtz and wave equations. Lastly,
we combine Sirens with hypernetworks to learn priors over the space of Siren
functions.
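To make the architecture concrete, the sketch below shows a Siren in PyTorch: each hidden layer applies sin(omega_0 * (W x + b)), and the weights follow the principled initialization the paper derives from the activation statistics (first-layer weights uniform in [-1/n, 1/n], hidden-layer weights uniform in [-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0], with omega_0 = 30). The class names, network sizes, and the fitting example are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a Siren (sinusoidal representation network) in PyTorch.
# Class names and hyperparameters are illustrative; the weight ranges follow
# the initialization scheme proposed in the paper.
import math
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features          # first layer: U(-1/n, 1/n)
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0  # hidden layers
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))


class Siren(nn.Module):
    def __init__(self, in_features=2, hidden=256, depth=3, out_features=1):
        super().__init__()
        layers = [SineLayer(in_features, hidden, is_first=True)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
        layers += [nn.Linear(hidden, out_features)]  # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)


# Example: map 2D coordinates in [-1, 1]^2 to signal values. Because the
# representation is differentiable in its input, spatial derivatives are
# available directly via autograd.
coords = torch.rand(1024, 2) * 2 - 1
coords.requires_grad_(True)
model = Siren()
values = model(coords)
grads = torch.autograd.grad(values.sum(), coords, create_graph=True)[0]
```

The same autograd call that produces `grads` above is what allows losses to be placed on spatial or temporal derivatives of the output, e.g. for the Eikonal, Poisson, Helmholtz, or wave problems mentioned in the abstract.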
Related papers
- Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent [8.347295051171525]
We show that gradient noise creates a systematic interplay of parameters $\theta$ along the degenerate direction to a unique, initialization-independent fixed point $\theta^*$.
These points are referred to as noise equilibria because, at these points, noise contributions from different directions are balanced and aligned.
We show that the balance and alignment of gradient noise can serve as a novel alternative mechanism for explaining important phenomena such as progressive sharpening/flattening and representation formation within neural networks.
arXiv Detail & Related papers (2024-02-11T13:00:04Z) - A Sampling Theory Perspective on Activations for Implicit Neural Representations [73.6637608397055]
Implicit Neural Representations (INRs) have gained popularity for encoding signals as compact, differentiable entities.
We conduct a comprehensive analysis of these activations from a sampling theory perspective.
Our investigation reveals that sinc activations, previously unused in conjunction with INRs, are theoretically optimal for signal encoding (an illustrative sketch of a sinc-activated layer appears after this list).
arXiv Detail & Related papers (2024-02-08T05:52:45Z) - Wave Physics-informed Matrix Factorizations [8.64018020390058]
In many applications that involve a signal propagating through physical media, the dynamics of the signal must satisfy constraints imposed by the wave equation.
Here we propose a matrix factorization technique that decomposes the dynamics of the signal into a sum of components.
We establish theoretical connections between wave learning and filtering theory in signal processing.
arXiv Detail & Related papers (2023-12-21T05:27:16Z) - Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned through unsupervised learning rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - Implicit Neural Representations and the Algebra of Complex Wavelets [36.311212480600794]
Implicit neural representations (INRs) have arisen as useful methods for representing signals on Euclidean domains.
By parameterizing an image as a multilayer perceptron (MLP) on Euclidean space, INRs effectively couple spatial and spectral features of the signal in a way that is not obvious in the usual discrete representation.
arXiv Detail & Related papers (2023-10-01T02:01:28Z) - Harmonic (Quantum) Neural Networks [10.31053131199922]
Harmonic functions are abundant in nature, appearing in limiting cases of Maxwell's and the Navier-Stokes equations, as well as the heat and the wave equations.
Despite their ubiquity and relevance, there have been few attempts to incorporate inductive biases towards harmonic functions in machine learning contexts.
We show effective means of representing harmonic functions in neural networks and extend such results to quantum neural networks.
arXiv Detail & Related papers (2022-12-14T19:13:59Z) - Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z) - Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue of representing general signals.
The current approach, however, is difficult to scale to a large number of signals or a large data set.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z) - Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory [110.99247009159726]
Temporal-difference and Q-learning play a key role in deep reinforcement learning, where they are empowered by expressive nonlinear function approximators such as neural networks.
In particular, temporal-difference learning converges when the function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise.
arXiv Detail & Related papers (2020-06-08T17:25:22Z)
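As a purely illustrative companion to the sampling-theory entry above (which argues that sinc activations are theoretically optimal for signal encoding), here is one plausible way to use a sinc nonlinearity in an INR layer; the form sinc(omega * (W x + b)) and the scale omega are assumptions made for this sketch, not that paper's construction.

```python
# Illustrative only: an INR-style layer with a sinc nonlinearity.
# The form sinc(omega * (W x + b)) and the value of omega are assumptions.
import torch
import torch.nn as nn


class SincLayer(nn.Module):
    def __init__(self, in_features, out_features, omega=6.0):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        # torch.sinc is the normalized sinc: sin(pi * z) / (pi * z), with sinc(0) = 1.
        return torch.sinc(self.omega * self.linear(x))


layer = SincLayer(2, 64)
out = layer(torch.rand(8, 2))  # shape: (8, 64)
```

Swapping the nonlinearity leaves the rest of an INR pipeline (coordinate sampling, reconstruction loss, autograd-based derivatives) unchanged.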
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.