Understanding Sinusoidal Neural Networks
- URL: http://arxiv.org/abs/2212.01833v2
- Date: Mon, 11 Sep 2023 17:02:33 GMT
- Title: Understanding Sinusoidal Neural Networks
- Authors: Tiago Novello
- Abstract summary: We investigate the structure and representation capacity of sinusoidal MLPs - multilayer perceptron networks that use sine as the activation function.
These neural networks have become fundamental in representing common signals in computer graphics.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we investigate the structure and representation capacity of
sinusoidal MLPs - multilayer perceptron networks that use sine as the
activation function. These neural networks (known as neural fields) have become
fundamental in representing common signals in computer graphics, such as
images, signed distance functions, and radiance fields. This success can be
primarily attributed to two key properties of sinusoidal MLPs: smoothness and
compactness. These functions are smooth because they arise from the composition
of affine maps with the sine function. This work provides theoretical results
that justify the compactness property of sinusoidal MLPs, together with control
mechanisms for the definition and training of these networks.
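To make the setting concrete, the sketch below (an illustration only, not the paper's code; the layer widths, the frequency scale omega0, and the initialization scheme are assumptions) builds a small sinusoidal MLP in NumPy. Every layer is an affine map followed by the sine, so the resulting function is smooth by construction.

```python
# Minimal sinusoidal MLP forward pass (illustrative; sizes, omega0, and the
# SIREN-style initialization heuristic are assumptions, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

def init_sinusoidal_mlp(in_dim=1, hidden=32, out_dim=1, omega0=30.0):
    """Affine layers; the first one is scaled by omega0 so the input
    neurons cover a useful range of frequencies."""
    W0 = omega0 * rng.uniform(-1, 1, (hidden, in_dim))
    b0 = rng.uniform(-np.pi, np.pi, hidden)
    W1 = rng.uniform(-1, 1, (hidden, hidden)) / np.sqrt(hidden)
    b1 = rng.uniform(-np.pi, np.pi, hidden)
    W2 = rng.uniform(-1, 1, (out_dim, hidden)) / np.sqrt(hidden)
    b2 = np.zeros(out_dim)
    return (W0, b0), (W1, b1), (W2, b2)

def sinusoidal_mlp(x, params):
    """f(x) = W2 @ sin(W1 @ sin(W0 @ x + b0) + b1) + b2 : smooth by
    construction, since it composes affine maps with the sine."""
    (W0, b0), (W1, b1), (W2, b2) = params
    h = np.sin(W0 @ x + b0)   # input sinusoidal neurons (harmonic dictionary)
    h = np.sin(W1 @ h + b1)   # hidden sinusoidal neurons
    return W2 @ h + b2        # final affine layer

params = init_sinusoidal_mlp()
print(sinusoidal_mlp(np.array([0.5]), params))
```

The first layer here plays the role of the harmonic dictionary described in the next paragraph.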
We propose to study a sinusoidal MLP by expanding it as a harmonic sum.
First, we observe that its first layer can be seen as a harmonic dictionary,
which we call the input sinusoidal neurons. A hidden layer then combines this
dictionary using an affine map and modulates the outputs with the sine, which
results in a special dictionary of sinusoidal neurons. We prove that each of
these sinusoidal neurons expands as a harmonic sum producing a large number of
new frequencies expressed as integer linear combinations of the input
frequencies. Thus, each hidden neuron produces the same frequencies, and the
corresponding amplitudes are completely determined by the hidden affine map. We
also provide an upper bound and a way of sorting these amplitudes that can
control the resulting approximation, allowing us to truncate the corresponding
series. Finally, we present applications for training and initialization of
sinusoidal MLPs. Additionally, we show that if the input neurons are periodic,
then the entire network will be periodic with the same period. We relate these
periodic networks to the Fourier series representation.
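As an illustration of this harmonic expansion (not the paper's code; the weight a, the bias b, and the use of SciPy are assumptions), the classical Jacobi-Anger identity makes the one-input case explicit: a hidden neuron g(x) = sin(a*sin(x) + b) expands into harmonics at integer multiples of the input frequency, with amplitudes given by Bessel functions J_k(a) of the hidden weight. With several input frequencies, the same mechanism yields integer linear combinations of those frequencies. The sketch compares the FFT of one period of g against the Bessel prediction:

```python
# Numerical check (illustrative, assumes SciPy is available): a single hidden
# sinusoidal neuron g(x) = sin(a*sin(x) + b) expands as a harmonic sum whose
# amplitudes are Bessel functions of the hidden weight a (Jacobi-Anger).
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_k

a, b = 1.7, 0.4                     # example hidden weight and bias
N = 4096
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
g = np.sin(a * np.sin(x) + b)       # one hidden sinusoidal neuron, period 2*pi

# Empirical amplitude of harmonic k from the FFT of one period.
spectrum = np.abs(np.fft.rfft(g)) * 2.0 / N

for k in range(1, 8):
    # Predicted amplitude: 2*|J_k(a)|, weighted by cos(b) for odd k
    # and by sin(b) for even k.
    predicted = 2 * abs(jv(k, a)) * (abs(np.cos(b)) if k % 2 else abs(np.sin(b)))
    print(f"k={k}: fft={spectrum[k]:.6f}  bessel={predicted:.6f}")
```

Because the amplitudes J_k(a) decay rapidly with k, the resulting harmonic series can be sorted and truncated, which is the kind of control mechanism discussed above.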
Related papers
- Recurrent Neural Networks Learn to Store and Generate Sequences using Non-Linear Representations [54.17275171325324]
We present a counterexample to the Linear Representation Hypothesis (LRH).
When trained to repeat an input token sequence, neural networks learn to represent the token at each position with a particular order of magnitude, rather than a direction.
These findings strongly indicate that interpretability research should not be confined to the LRH.
arXiv Detail & Related papers (2024-08-20T15:04:37Z) - Taming the Frequency Factory of Sinusoidal Networks [0.9968037829925942]
This work investigates the structure and representation capacity of sinusoidal neural networks, which have recently shown promising results in encoding low-dimensional signals.
We use this novel identity to initialize the input neurons, which work as a sampling of the signal spectrum.
We also note that each hidden neuron produces the same frequencies with amplitudes completely determined by the hidden weights.
arXiv Detail & Related papers (2024-07-30T18:24:46Z) - Generative Kaleidoscopic Networks [2.321684718906739]
We utilize this property of neural networks to design a dataset kaleidoscope, termed 'Generative Kaleidoscopic Networks'.
We observed this phenomenon to various degrees for other deep learning architectures such as CNNs, Transformers, and U-Nets.
arXiv Detail & Related papers (2024-02-19T02:48:40Z) - Implicit Neural Representation of Tileable Material Textures [1.1203075575217447]
We explore sinusoidal neural networks to represent periodic tileable textures.
We prove that the compositions of sinusoidal layers generate only integer frequencies with period $P$.
Our proposed neural implicit representation is compact and enables efficient reconstruction of high-resolution textures.
arXiv Detail & Related papers (2024-02-03T16:44:25Z) - Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
arXiv Detail & Related papers (2023-03-09T10:08:34Z) - Parallel Hybrid Networks: an interplay between quantum and classical
neural networks [0.0]
We introduce a new, interpretable class of hybrid quantum neural networks that pass the inputs of the dataset in parallel.
We demonstrate this claim on two synthetic datasets sampled from periodic distributions with added protrusions as noise.
arXiv Detail & Related papers (2023-03-06T15:45:28Z) - Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Estimating Multiplicative Relations in Neural Networks [0.0]
We will use properties of logarithmic functions to propose a pair of activation functions which can translate products into linear expressions and learn using backpropagation.
We will try to generalize this approach for some complex arithmetic functions and test the accuracy on a distribution disjoint from the training set.
arXiv Detail & Related papers (2020-10-28T14:28:24Z) - Connecting Weighted Automata, Tensor Networks and Recurrent Neural
Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z) - Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with
an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A \leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z)