Tuning the Frequencies: Robust Training for Sinusoidal Neural Networks
- URL: http://arxiv.org/abs/2407.21121v3
- Date: Fri, 04 Apr 2025 02:54:01 GMT
- Title: Tuning the Frequencies: Robust Training for Sinusoidal Neural Networks
- Authors: Tiago Novello, Diana Aldana, Andre Araujo, Luiz Velho
- Abstract summary: We introduce a theoretical framework that explains the capacity property of sinusoidal networks. We show how its layer compositions produce a large number of new frequencies expressed as integer combinations of the input frequencies. Our method, referred to as TUNER, greatly improves the stability and convergence of sinusoidal INR training, leading to detailed reconstructions.
- Score: 1.5124439914522694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sinusoidal neural networks have been shown effective as implicit neural representations (INRs) of low-dimensional signals, due to their smoothness and high representation capacity. However, initializing and training them remain empirical tasks which lack a deeper understanding to guide the learning process. To fill this gap, our work introduces a theoretical framework that explains the capacity property of sinusoidal networks and offers robust control mechanisms for initialization and training. Our analysis is based on a novel amplitude-phase expansion of the sinusoidal multilayer perceptron, showing how its layer compositions produce a large number of new frequencies expressed as integer combinations of the input frequencies. This relationship can be directly used to initialize the input neurons, as a form of spectral sampling, and to bound the network's spectrum while training. Our method, referred to as TUNER (TUNing sinusoidal nEtwoRks), greatly improves the stability and convergence of sinusoidal INR training, leading to detailed reconstructions, while preventing overfitting.
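To make the frequency-generation claim concrete, here is an illustrative single-neuron version of the amplitude-phase expansion, written with the classical Jacobi-Anger identity; the paper's exact notation and constants may differ. A hidden neuron that receives first-layer features $\sin(\omega_i \cdot x + \varphi_i)$ with amplitudes $a_i$ and bias $b$ expands as

$$ \sin\Big(b + \sum_{i=1}^{n} a_i \sin(\omega_i \cdot x + \varphi_i)\Big) = \sum_{\mathbf{k} \in \mathbb{Z}^n} \Big(\prod_{i=1}^{n} J_{k_i}(a_i)\Big) \sin\Big(b + \sum_{i=1}^{n} k_i\,(\omega_i \cdot x + \varphi_i)\Big), $$

where $J_{k}$ is the Bessel function of the first kind. Every generated frequency is an integer combination $\sum_i k_i \omega_i$ of the first-layer frequencies, and because $J_{k}(a)$ decays rapidly once $|k|$ exceeds $|a|$, small hidden amplitudes keep the effective spectrum in a bounded band; this is the lever behind the spectral-sampling initialization and the training-time spectrum bound described in the abstract.

The sketch below shows one way such control might look in code for a SIREN-style coordinate network; the class, hyperparameters (max_freq, bound), and the clamping rule are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SinusoidalINR(nn.Module):
    """Sketch of a sinusoidal MLP for coordinates in [-1, 1]^in_dim.
    First-layer weights are sampled as integer frequencies up to max_freq
    (a simple form of spectral sampling); hidden weights are clamped so the
    integer-combination frequencies created by layer composition stay in a
    bounded band. Names and constants are illustrative, not TUNER's scheme."""

    def __init__(self, in_dim=2, hidden_dim=256, out_dim=1,
                 n_hidden=2, max_freq=64, bound=3.0):
        super().__init__()
        self.bound = bound
        self.first = nn.Linear(in_dim, hidden_dim)
        with torch.no_grad():
            # Spectral sampling of the input layer: integer frequencies
            # (scaled by pi so periods tile [-1, 1]) and random phases.
            freqs = torch.randint(-max_freq, max_freq + 1, (hidden_dim, in_dim))
            self.first.weight.copy_(torch.pi * freqs.float())
            self.first.bias.uniform_(-torch.pi, torch.pi)
        self.hidden = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(n_hidden))
        self.out = nn.Linear(hidden_dim, out_dim)

    def bound_spectrum(self):
        # Small hidden amplitudes make the Bessel coefficients of high-order
        # integer combinations negligible, bounding the generated spectrum.
        with torch.no_grad():
            cap = self.bound / self.hidden[0].weight.shape[1]
            for layer in self.hidden:
                layer.weight.clamp_(-cap, cap)

    def forward(self, x):
        h = torch.sin(self.first(x))
        for layer in self.hidden:
            h = torch.sin(layer(h))
        return self.out(h)

# Usage: call model.bound_spectrum() after each optimizer step to keep
# hidden amplitudes, and hence the represented frequencies, in check.
```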
Related papers
- FreSh: Frequency Shifting for Accelerated Neural Representation Learning [11.175745750843484]
Implicit Neural Representations (INRs) have recently gained attention as a powerful approach for continuously representing signals such as images, videos, and 3D shapes using multilayer perceptrons (MLPs).
However, MLPs are known to exhibit a low-frequency bias, limiting their ability to capture high-frequency details accurately.
We propose frequency shifting (or FreSh) to align the frequency spectrum of the initial output with that of the target signal.
arXiv Detail & Related papers (2024-10-07T14:05:57Z) - Deep Oscillatory Neural Network [4.2586023009901215]
We propose a brain-inspired deep neural network model known as the Deep Oscillatory Neural Network (DONN), which is designed to have oscillatory internal dynamics.
The performance of the proposed models is either comparable or superior to published results on the same data sets.
arXiv Detail & Related papers (2024-05-06T06:17:16Z) - Sliding down the stairs: how correlated latent variables accelerate learning with neural networks [8.107431208836426]
We show that correlations between latent variables along directions encoded in different input cumulants speed up learning from higher-order correlations.
Our results are confirmed in simulations of two-layer neural networks.
arXiv Detail & Related papers (2024-04-12T17:01:25Z) - Generative Kaleidoscopic Networks [2.321684718906739]
We utilize an over-generalization property of neural networks to design a dataset kaleidoscope, termed 'Generative Kaleidoscopic Networks'.
We observed this phenomenon to varying degrees for other deep learning architectures such as CNNs, Transformers, and U-Nets.
arXiv Detail & Related papers (2024-02-19T02:48:40Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree
Spectral Bias of Neural Networks [79.28094304325116]
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
We propose a new scalable functional regularization scheme that aids the neural network to learn higher degree frequencies.
arXiv Detail & Related papers (2023-05-16T20:06:01Z) - Parallel Hybrid Networks: an interplay between quantum and classical
neural networks [0.0]
We introduce a new, interpretable class of hybrid quantum neural networks that pass the inputs of the dataset in parallel.
We demonstrate this claim on two synthetic datasets sampled from periodic distributions with added protrusions as noise.
arXiv Detail & Related papers (2023-03-06T15:45:28Z) - Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - Understanding Sinusoidal Neural Networks [0.0]
We investigate the structure and representation capacity of sinusoidal networks, i.e., multilayer perceptron networks that use sine as the activation function.
These neural networks have become fundamental in representing common signals in computer graphics.
arXiv Detail & Related papers (2022-12-04T14:50:22Z) - Simple initialization and parametrization of sinusoidal networks via
their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z) - Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z) - Win the Lottery Ticket via Fourier Analysis: Frequencies Guided Network
Pruning [50.232218214751455]
Optimal network pruning is a non-trivial task; mathematically, it is an NP-hard problem.
In this paper, we investigate the Magnitude-Based Pruning (MBP) scheme and analyze it from a novel perspective.
We also propose a novel two-stage pruning approach, where one stage is to obtain the topological structure of the pruned network and the other stage is to retrain the pruned network to recover the capacity.
arXiv Detail & Related papers (2022-01-30T03:42:36Z) - Conditioning Trick for Training Stable GANs [70.15099665710336]
We propose a conditioning trick, called difference departure from normality, applied to the generator network in response to instability issues during GAN training.
We force the generator to get closer to the departure from normality function of real samples computed in the spectral domain of Schur decomposition.
arXiv Detail & Related papers (2020-10-12T16:50:22Z) - The Surprising Simplicity of the Early-Time Learning Dynamics of Neural
Networks [43.860358308049044]
In this work, we show that these common perceptions can be completely false in the early phase of learning.
We argue that this surprising simplicity can persist in networks with more layers and with convolutional architectures.
arXiv Detail & Related papers (2020-06-25T17:42:49Z) - Applications of Koopman Mode Analysis to Neural Networks [52.77024349608834]
We consider the training process of a neural network as a dynamical system acting on the high-dimensional weight space.
We show how the Koopman spectrum can be used to determine the number of layers required for the architecture.
We also show how using Koopman modes we can selectively prune the network to speed up the training procedure.
arXiv Detail & Related papers (2020-06-21T11:00:04Z)