Implicit Neural Representations with Periodic Activation Functions
- URL: http://arxiv.org/abs/2006.09661v1
- Date: Wed, 17 Jun 2020 05:13:33 GMT
- Title: Implicit Neural Representations with Periodic Activation Functions
- Authors: Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B.
Lindell, Gordon Wetzstein
- Abstract summary: Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm.
We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
We show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations, the Poisson equation, and the Helmholtz and wave equations.
- Score: 109.2353097792111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicitly defined, continuous, differentiable signal representations
parameterized by neural networks have emerged as a powerful paradigm, offering
many possible benefits over conventional representations. However, current
network architectures for such implicit neural representations are incapable of
modeling signals with fine detail, and fail to represent a signal's spatial and
temporal derivatives, despite the fact that these are essential to many
physical signals defined implicitly as the solution to partial differential
equations. We propose to leverage periodic activation functions for implicit
neural representations and demonstrate that these networks, dubbed sinusoidal
representation networks or Sirens, are ideally suited for representing complex
natural signals and their derivatives. We analyze Siren activation statistics
to propose a principled initialization scheme and demonstrate the
representation of images, wavefields, video, sound, and their derivatives.
Further, we show how Sirens can be leveraged to solve challenging boundary
value problems, such as particular Eikonal equations (yielding signed distance
functions), the Poisson equation, and the Helmholtz and wave equations. Lastly,
we combine Sirens with hypernetworks to learn priors over the space of Siren
functions.
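The abstract describes two ingredients: sine activations with a frequency factor, and a layer-wise uniform initialization chosen from the activation statistics. A minimal NumPy sketch of a Siren-style forward pass is below; the layer sizes are illustrative, and while the bounds (U(-1/n, 1/n) for the first layer, U(-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0) thereafter, with omega_0 = 30) follow the paper's described scheme, the helper names are assumptions, not the authors' reference code.

```python
import numpy as np

def init_siren_layer(fan_in, fan_out, is_first, omega_0=30.0, rng=None):
    """Uniform init: U(-1/n, 1/n) for the first layer,
    U(-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0) for the rest."""
    rng = rng if rng is not None else np.random.default_rng(0)
    bound = 1.0 / fan_in if is_first else np.sqrt(6.0 / fan_in) / omega_0
    W = rng.uniform(-bound, bound, size=(fan_in, fan_out))
    b = rng.uniform(-bound, bound, size=fan_out)
    return W, b

def siren_forward(x, layers, omega_0=30.0):
    """Apply sin(omega_0 * (x @ W + b)) at every layer but the last,
    which stays linear so outputs are not confined to [-1, 1]."""
    for W, b in layers[:-1]:
        x = np.sin(omega_0 * (x @ W + b))
    W, b = layers[-1]
    return x @ W + b

rng = np.random.default_rng(42)
sizes = [2, 64, 64, 1]  # e.g. (x, y) coordinates -> grayscale value
layers = [init_siren_layer(m, n, is_first=(i == 0), rng=rng)
          for i, (m, n) in enumerate(zip(sizes[:-1], sizes[1:]))]

coords = rng.uniform(-1, 1, size=(8, 2))  # normalized pixel coordinates
out = siren_forward(coords, layers)
print(out.shape)  # (8, 1)
```

Because sin is smooth, the spatial derivatives of such a network exist to all orders, which is what lets Sirens supervise on gradients and Laplacians when solving the boundary value problems listed above.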
Related papers
- A Sampling Theory Perspective on Activations for Implicit Neural
Representations [73.6637608397055]
Implicit Neural Representations (INRs) have gained popularity for encoding signals as compact, differentiable entities.
We conduct a comprehensive analysis of these activations from a sampling theory perspective.
Our investigation reveals that sinc activations, previously unused in conjunction with INRs, are theoretically optimal for signal encoding.
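As a hypothetical sketch of what a sinc-activated INR layer could look like, the snippet below uses NumPy's normalized sinc; the scale factor and initialization are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def sinc_layer(x, W, b, scale=6.0):
    # np.sinc is the normalized sinc: sin(pi*t) / (pi*t), with sinc(0) = 1;
    # `scale` is an assumed frequency factor, analogous to omega_0 in Sirens.
    return np.sinc(scale * (x @ W + b))

rng = np.random.default_rng(0)
W = rng.normal(scale=1.0 / np.sqrt(2), size=(2, 16))
b = np.zeros(16)
x = rng.uniform(-1, 1, size=(4, 2))  # coordinate inputs
y = sinc_layer(x, W, b)
print(y.shape)  # (4, 16)
```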
arXiv Detail & Related papers (2024-02-08T05:52:45Z) - Wave Physics-informed Matrix Factorizations [8.64018020390058]
In many applications that involve a signal propagating through physical media, the dynamics of the signal must satisfy constraints imposed by the wave equation.
Here we propose a matrix factorization technique that decomposes the signal dynamics into a sum of components consistent with the wave equation.
We establish theoretical connections between wave learning and filtering theory in signal processing.
arXiv Detail & Related papers (2023-12-21T05:27:16Z) - INCODE: Implicit Neural Conditioning with Prior Knowledge Embeddings [4.639495398851869]
Implicit Neural Representations (INRs) have revolutionized signal representation by leveraging neural networks to provide continuous and smooth representations of complex data.
We introduce INCODE, a novel approach that enhances the control of the sinusoidal-based activation function in INRs using deep prior knowledge.
Our approach not only excels in representation, but also extends its prowess to tackle complex tasks such as audio, image, and 3D shape reconstructions.
arXiv Detail & Related papers (2023-10-28T23:16:49Z) - Discrete, compositional, and symbolic representations through attractor
dynamics [61.58042831010077]
We show that imposing structure in the symbolic space can produce compositionality in the attractor-supported representation space of rich sensory inputs.
We argue that our model exhibits an information-bottleneck process that is thought to play a role in conscious experience.
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - Implicit Neural Representations and the Algebra of Complex Wavelets [36.311212480600794]
Implicit neural representations (INRs) have arisen as useful methods for representing signals on Euclidean domains.
By parameterizing an image as a multilayer perceptron (MLP) on Euclidean space, INRs represent signals in a way that couples spatial and spectral features of the signal, a coupling that is not obvious in the usual discrete representation.
arXiv Detail & Related papers (2023-10-01T02:01:28Z) - Harmonic (Quantum) Neural Networks [10.31053131199922]
Harmonic functions are abundant in nature, appearing in limiting cases of Maxwell's and the Navier-Stokes equations, as well as the heat and wave equations.
Despite their ubiquity and relevance, there have been few attempts to incorporate inductive biases towards harmonic functions in machine learning contexts.
We show effective means of representing harmonic functions in neural networks and extend such results to quantum neural networks.
arXiv Detail & Related papers (2022-12-14T19:13:59Z) - Simple initialization and parametrization of sinusoidal networks via
their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z) - Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue of representing general signals.
The current approach is difficult to scale to a large number of signals or a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z) - Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory [110.99247009159726]
Temporal-difference and Q-learning play a key role in deep reinforcement learning, where they are empowered by expressive nonlinear function approximators such as neural networks.
In particular, temporal-difference learning converges when the function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise.
arXiv Detail & Related papers (2020-06-08T17:25:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.