Improving Accuracy and Efficiency of Implicit Neural Representations: Making SIREN a WINNER
- URL: http://arxiv.org/abs/2509.12980v1
- Date: Tue, 16 Sep 2025 11:41:13 GMT
- Title: Improving Accuracy and Efficiency of Implicit Neural Representations: Making SIREN a WINNER
- Authors: Hemanth Chandravamsi, Dhanush V. Shenoy, Steven H. Frankel
- Abstract summary: We identify and address a fundamental limitation of sinusoidal representation networks (SIRENs). In extreme cases, when the network's frequency support misaligns with the target spectrum, a 'spectral bottleneck' is observed. We propose WINNER - Weight Initialization with Noise for Neural Representations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We identify and address a fundamental limitation of sinusoidal representation networks (SIRENs), a class of implicit neural representations. SIRENs (Sitzmann et al., 2020), when not initialized appropriately, can struggle to fit signals that fall outside their frequency support. In extreme cases, when the network's frequency support misaligns with the target spectrum, a 'spectral bottleneck' phenomenon is observed, where the model yields a near-zero output and fails to recover even the frequency components that are within its representational capacity. To overcome this, we propose WINNER - Weight Initialization with Noise for Neural Representations. WINNER perturbs the uniformly initialized weights of a base SIREN with Gaussian noise, whose scales are adaptively determined by the spectral centroid of the target signal. Similar to random Fourier embeddings, this mitigates 'spectral bias' without introducing additional trainable parameters. Our method achieves state-of-the-art audio fitting and significant gains in image and 3D shape fitting tasks over the base SIREN. Beyond signal fitting, WINNER suggests new avenues in adaptive, target-aware initialization strategies for optimizing deep neural network training. For code and data, visit cfdlabtechnion.github.io/siren_square/.
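The abstract pins the mechanism down enough to sketch: compute the target's spectral centroid from its FFT, then add centroid-scaled Gaussian noise on top of SIREN's standard uniform initialization. A minimal PyTorch sketch follows; the centroid-to-noise-scale mapping (`alpha`, normalization by the Nyquist frequency) and the per-layer application are illustrative assumptions, not the paper's exact formula.

```python
# Sketch of the WINNER idea: uniform SIREN init + Gaussian perturbation
# whose scale tracks the spectral centroid of the target signal.
import numpy as np
import torch
import torch.nn as nn

def spectral_centroid(signal: np.ndarray, sample_rate: float) -> float:
    """Magnitude-weighted mean frequency of the signal's spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float((freqs * spectrum).sum() / (spectrum.sum() + 1e-12))

def winner_init_(linear: nn.Linear, centroid: float, nyquist: float,
                 omega_0: float = 30.0, alpha: float = 1.0) -> None:
    """Standard SIREN uniform init plus centroid-scaled Gaussian perturbation."""
    bound = np.sqrt(6.0 / linear.in_features) / omega_0  # SIREN hidden-layer bound
    noise_std = alpha * bound * (centroid / nyquist)     # assumed centroid->scale mapping
    with torch.no_grad():
        linear.weight.uniform_(-bound, bound)
        linear.weight.add_(noise_std * torch.randn_like(linear.weight))

# Toy usage: a 1 kHz tone sampled at 16 kHz.
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
target = np.sin(2 * np.pi * 1000.0 * t)
layer = nn.Linear(256, 256)
winner_init_(layer, spectral_centroid(target, sr), nyquist=sr / 2)
```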
Related papers
- FUTON: Fourier Tensor Network for Implicit Neural Representations [56.48739018255443]
Implicit neural representations (INRs) have emerged as powerful tools for encoding signals, yet dominant designs often suffer from slow convergence, overfitting to noise, and poor extrapolation. We introduce FUTON, which models signals as generalized Fourier series whose coefficients are parameterized by a low-rank tensor decomposition.
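A generic sketch of that idea for a 2D signal: a cosine Fourier basis whose coefficient matrix is rank-factorized. The basis choice, the 2D restriction, and the plain matrix factorization are illustrative assumptions, not FUTON's actual architecture.

```python
# Generalized Fourier-series INR with a low-rank coefficient matrix.
import math
import torch
import torch.nn as nn

class LowRankFourier2D(nn.Module):
    """f(x, y) = sum_{k1,k2} C[k1,k2] cos(2*pi*k1*x) cos(2*pi*k2*y), C low-rank."""
    def __init__(self, n_freq=32, rank=4):
        super().__init__()
        self.u = nn.Parameter(0.01 * torch.randn(n_freq, rank))  # C = u @ v.T
        self.v = nn.Parameter(0.01 * torch.randn(n_freq, rank))
        self.register_buffer("k", torch.arange(n_freq).float())

    def forward(self, xy):                                  # xy: (N, 2) in [0, 1]^2
        bx = torch.cos(2 * math.pi * xy[:, :1] * self.k)    # (N, K) basis in x
        by = torch.cos(2 * math.pi * xy[:, 1:] * self.k)    # (N, K) basis in y
        return ((bx @ (self.u @ self.v.T)) * by).sum(-1, keepdim=True)

pred = LowRankFourier2D()(torch.rand(8, 2))                 # -> shape (8, 1)
```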
arXiv Detail & Related papers (2026-02-13T19:31:44Z) - Adaptive Training of INRs via Pruning and Densification [6.759337697337581]
We introduce AIRe, an adaptive training scheme that refines implicit neural representations over the course of optimization. Our method uses a neuron pruning mechanism to avoid redundancy and input frequency densification to improve representation capacity. Code and pretrained models will be released for public use.
arXiv Detail & Related papers (2025-10-27T23:52:46Z) - Spectral Bottleneck in Deep Neural Networks: Noise is All You Need [0.0]
We study the challenge of fitting high-frequency-dominant signals susceptible to spectral bottleneck. To effectively fit any target signal irrespective of its frequency content, we propose a generalized target perturbation scheme. We show that the noise scales can provide control over the spectra of network activations and the eigenbasis of the empirical neural tangent kernel.
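The target-perturbation idea can be illustrated minimally: adding broadband noise to a high-frequency-dominant target places energy at every frequency bin, giving the network something to latch onto early in training. The additive form and the scale `sigma` are assumptions for illustration; the paper's scheme generalizes this.

```python
# Toy illustration: noise gives a narrowband target a broadband spectral floor.
import numpy as np

t = np.linspace(0, 1, 8192, endpoint=False)
target = np.sin(2 * np.pi * 3000 * t)            # energy concentrated at 3 kHz
sigma = 0.1                                      # assumed noise scale
perturbed = target + sigma * np.random.randn(t.size)

spec_clean = np.abs(np.fft.rfft(target))
spec_noisy = np.abs(np.fft.rfft(perturbed))
# Low-frequency bins of the clean target are ~0; the perturbed target's are not.
print(spec_clean[:10].max(), spec_noisy[:10].max())
```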
arXiv Detail & Related papers (2025-09-09T22:16:24Z) - FLAIR: Frequency- and Locality-Aware Implicit Neural Representations [13.614373731196272]
Implicit Neural Representations (INRs) leverage neural networks to map coordinates to corresponding signals, enabling continuous and compact representations. Existing INRs lack frequency selectivity, spatial localization, and sparse representations, leading to an over-reliance on redundant signal components. We propose FLAIR (Frequency- and Locality-Aware Implicit Neural Representations), which incorporates two key innovations.
arXiv Detail & Related papers (2025-08-19T06:06:04Z) - STAF: Sinusoidal Trainable Activation Functions for Implicit Neural Representation [7.2888019138115245]
Implicit Neural Representations (INRs) have emerged as a powerful framework for modeling continuous signals. The spectral bias of ReLU-based networks is a well-established limitation, restricting their ability to capture fine-grained details in target signals. We introduce Sinusoidal Trainable Activation Functions (STAF). STAF inherently modulates its frequency components, allowing for self-adaptive spectral learning.
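One way to read "sinusoidal trainable activation functions" is an activation built from a small sum of sines with learnable amplitudes, frequencies, and phases. The parameterization below is a guess at the general shape, not STAF's published form.

```python
# A sinusoidal activation whose spectrum is learned jointly with the network.
import torch
import torch.nn as nn

class TrainableSine(nn.Module):
    """phi(x) = sum_i a_i * sin(f_i * x + p_i) with learnable a, f, p."""
    def __init__(self, num_terms=4):
        super().__init__()
        self.amp = nn.Parameter(torch.ones(num_terms) / num_terms)
        self.freq = nn.Parameter(torch.arange(1, num_terms + 1).float())
        self.phase = nn.Parameter(torch.zeros(num_terms))

    def forward(self, x):
        # Broadcast (..., 1) against (num_terms,) -> (..., num_terms), then sum.
        return (self.amp * torch.sin(self.freq * x.unsqueeze(-1) + self.phase)).sum(-1)

act = TrainableSine()
y = act(torch.randn(16, 64))      # output keeps the input shape: (16, 64)
```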
arXiv Detail & Related papers (2025-02-02T18:29:33Z) - Tuning the Frequencies: Robust Training for Sinusoidal Neural Networks [1.5124439914522694]
We introduce a theoretical framework that explains the capacity property of sinusoidal networks. We show how their layer compositions produce a large number of new frequencies, expressed as integer combinations of the input frequencies. Our method, referred to as TUNER, greatly improves the stability and convergence of sinusoidal INR training, leading to detailed reconstructions.
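The integer-combination claim is easy to verify numerically: composing two sinusoids as sin(a·sin(ωx)) contains only odd harmonics of ω, by the Jacobi-Anger expansion. A short check, independent of TUNER's actual method:

```python
# Composing sinusoidal layers generates integer multiples of the input frequency.
import numpy as np

x = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
w = 5                                    # input frequency (cycles per period)
composed = np.sin(2.0 * np.sin(w * x))   # a sinusoid fed through a sinusoidal layer

spectrum = np.abs(np.fft.rfft(composed)) / len(x)
print(np.nonzero(spectrum > 1e-3)[0])    # -> [ 5 15 25]: odd multiples of w
```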
arXiv Detail & Related papers (2024-07-30T18:24:46Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
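The hash-coding encoder is in the spirit of multiresolution hash encodings (Instant-NGP style). Below is a single-level, nearest-vertex sketch; the table size, feature width, and resolution are illustrative assumptions, and the real encoder is multiresolution with trilinear interpolation.

```python
# Single-level learnable hash encoding of 3D points (nearest vertex only).
import torch
import torch.nn as nn

class HashEncoder(nn.Module):
    def __init__(self, table_size=2**14, feat_dim=2, resolution=64):
        super().__init__()
        self.table = nn.Parameter(1e-4 * torch.randn(table_size, feat_dim))
        self.T, self.res = table_size, resolution
        # Spatial-hash primes as used in Instant-NGP.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, xyz):                                  # xyz: (N, 3) in [0, 1]^3
        grid = (xyz.clamp(0, 1) * (self.res - 1)).long()     # integer vertex coords
        h = grid * self.primes                               # (N, 3), int64
        idx = (h[:, 0] ^ h[:, 1] ^ h[:, 2]) % self.T         # XOR hash -> table index
        return self.table[idx]                               # (N, feat_dim) features

feats = HashEncoder()(torch.rand(100, 3))   # features to feed the attenuation MLP
```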
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue for representing general signals.
The current approach, however, is difficult to scale to a large number of signals or a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z) - Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs.
Experiments on several benchmark datasets and neural architectures show that binary networks trained with our method achieve state-of-the-art accuracy.
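The idea can be sketched with the Fourier series of the sign function, sign(x) ≈ (4/π)·Σ_{odd k ≤ n} sin(kx)/k, whose truncated derivative serves as a surrogate gradient. The truncation order and the straight-through wiring below are assumptions, not the paper's exact estimator.

```python
# Exact sign in the forward pass; truncated-Fourier derivative in the backward.
import math
import torch

class FourierSign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, n=7):
        ctx.save_for_backward(x)
        ctx.n = n
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # d/dx [(4/pi) sum_{odd k<=n} sin(kx)/k] = (4/pi) sum_{odd k<=n} cos(kx)
        grad = sum(torch.cos(k * x) for k in range(1, ctx.n + 1, 2))
        return grad_out * (4.0 / math.pi) * grad, None

x = torch.randn(4, requires_grad=True)
FourierSign.apply(x).sum().backward()
print(x.grad)                      # surrogate gradient of sign at x
```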
arXiv Detail & Related papers (2021-03-01T08:25:26Z) - Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z) - Conditioning Trick for Training Stable GANs [70.15099665710336]
We propose a conditioning trick, called difference departure from normality, applied to the generator network in response to instability issues during GAN training.
We force the generator to get closer to the departure-from-normality function of real samples, computed in the spectral domain via the Schur decomposition.
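Henrici's departure from normality is computable from a Schur decomposition A = QTQ*: it is the Frobenius norm of the strictly upper-triangular part of T. A minimal computation of that quantity (not the GAN conditioning itself):

```python
# Departure from normality of a matrix via its (complex) Schur form.
import numpy as np
from scipy.linalg import schur

A = np.random.randn(8, 8)
T, Q = schur(A, output="complex")            # A = Q T Q^H, T upper triangular
departure = np.linalg.norm(np.triu(T, k=1))  # Frobenius norm of strict upper part
print(departure)                             # 0 iff A is normal
```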
arXiv Detail & Related papers (2020-10-12T16:50:22Z) - Applications of Koopman Mode Analysis to Neural Networks [52.77024349608834]
We consider the training process of a neural network as a dynamical system acting on the high-dimensional weight space.
We show how the Koopman spectrum can be used to determine the number of layers required for the architecture.
We also show how Koopman modes can be used to selectively prune the network and speed up the training procedure.
arXiv Detail & Related papers (2020-06-21T11:00:04Z)