SBS: Enhancing Parameter-Efficiency of Neural Representations for Neural Networks via Spectral Bias Suppression
- URL: http://arxiv.org/abs/2509.07373v1
- Date: Tue, 09 Sep 2025 03:48:57 GMT
- Title: SBS: Enhancing Parameter-Efficiency of Neural Representations for Neural Networks via Spectral Bias Suppression
- Authors: Qihu Xie, Yuan Li, Yi Kang
- Abstract summary: Implicit neural representations have been extended to represent convolutional neural network weights via neural representation for neural networks. Standard multi-layer perceptrons used in neural representation for neural networks exhibit a pronounced spectral bias. We propose SBS, a parameter-efficient enhancement to neural representation for neural networks that suppresses spectral bias.
- Score: 6.410718573605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representations have recently been extended to represent convolutional neural network weights via neural representation for neural networks, offering promising parameter compression benefits. However, standard multi-layer perceptrons used in neural representation for neural networks exhibit a pronounced spectral bias, hampering their ability to reconstruct high-frequency details effectively. In this paper, we propose SBS, a parameter-efficient enhancement to neural representation for neural networks that suppresses spectral bias using two techniques: (1) a unidirectional ordering-based smoothing that improves kernel smoothness in the output space, and (2) unidirectional ordering-based smoothing-aware random Fourier features that adaptively modulate the frequency bandwidth of input encodings based on layer-wise parameter count. Extensive evaluations on various ResNet models with the CIFAR-10, CIFAR-100, and ImageNet datasets demonstrate that SBS achieves significantly better reconstruction accuracy with fewer parameters than state-of-the-art methods.
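To make technique (2) concrete, here is a minimal sketch of random Fourier features whose bandwidth grows with the layer-wise parameter count. The logarithmic scaling rule, function names, and shapes are illustrative assumptions, not the authors' implementation:

```python
import math
import torch

def adaptive_rff_encoding(coords, num_params, num_features=128, base_sigma=1.0):
    """Hypothetical sketch of parameter-count-aware random Fourier features.

    The paper modulates the frequency bandwidth of input encodings by the
    layer-wise parameter count; here we simply scale the Gaussian bandwidth
    sigma with log(num_params) as an assumed rule, not the authors' exact one.
    """
    in_dim = coords.shape[-1]
    # Layers with more parameters get a broader frequency band (assumption).
    sigma = base_sigma * math.log(max(num_params, 2))
    B = torch.randn(in_dim, num_features) * sigma   # random frequency matrix
    proj = 2 * math.pi * coords @ B
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

# Example: encode (layer, filter, channel) coordinates for one conv layer.
coords = torch.rand(16, 3)                 # 16 kernel coordinates in [0, 1]^3
enc = adaptive_rff_encoding(coords, num_params=4096)
print(enc.shape)                           # torch.Size([16, 256])
```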
Related papers
- Adaptive Training of INRs via Pruning and Densification [6.759337697337581]
We introduce AIRe, an adaptive training scheme that refines implicit neural representations over the course of optimization. Our method uses a neuron pruning mechanism to avoid redundancy and input frequency densification to improve representation capacity. Code and pretrained models will be released for public use.
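Since the abstract leaves the pruning criterion unspecified, the sketch below illustrates one plausible neuron-pruning step for a two-layer INR MLP: rank hidden units by the L1 norm of their outgoing weights and rebuild the layers around the survivors. The importance score and keep ratio are assumptions, not the AIRe mechanism:

```python
import torch
import torch.nn as nn

def prune_inr_neurons(fc1, fc2, keep_ratio=0.8):
    """Illustrative neuron pruning for an INR MLP (not the AIRe release).

    Hidden units of fc1 are scored by the L1 norm of their outgoing weights
    in fc2; the strongest `keep_ratio` fraction is kept (assumed criterion).
    """
    importance = fc2.weight.abs().sum(dim=0)        # one score per hidden unit
    k = max(1, int(keep_ratio * importance.numel()))
    keep = importance.topk(k).indices.sort().values
    new_fc1 = nn.Linear(fc1.in_features, k)
    new_fc2 = nn.Linear(k, fc2.out_features)
    with torch.no_grad():
        new_fc1.weight.copy_(fc1.weight[keep])      # keep surviving rows
        new_fc1.bias.copy_(fc1.bias[keep])
        new_fc2.weight.copy_(fc2.weight[:, keep])   # and their outgoing columns
        new_fc2.bias.copy_(fc2.bias)
    return new_fc1, new_fc2

fc1, fc2 = nn.Linear(2, 64), nn.Linear(64, 3)
fc1, fc2 = prune_inr_neurons(fc1, fc2, keep_ratio=0.5)   # 64 -> 32 hidden units
```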
arXiv Detail & Related papers (2025-10-27T23:52:46Z)
- Encoding Optimization for Low-Complexity Spiking Neural Network Equalizers in IM/DD Systems [49.34817254755008]
We propose a reinforcement learning-based algorithm to optimize spiking neural networks (SNNs). Applied to an SNN-based equalizer and demapper in an IM/DD system, the method improves performance while reducing computational load and network size.
arXiv Detail & Related papers (2025-08-19T12:32:13Z)
- Training Neural Networks by Optimizing Neuron Positions [39.682133213072554]
We propose a parameter-efficient neural architecture where neurons are embedded in Euclidean space. During training, their positions are optimized and synaptic weights are determined as the inverse of the spatial distance between connected neurons. These distance-dependent wiring rules replace traditional learnable weight matrices and significantly reduce the number of parameters while introducing a biologically inspired inductive bias.
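A minimal sketch of such a layer follows, assuming weights are computed as the inverse of the Euclidean distance between learnable input and output neuron positions; the embedding dimension and epsilon stabilizer are illustrative choices:

```python
import torch
import torch.nn as nn

class DistanceLayer(nn.Module):
    """Sketch of a layer whose weights are the inverse distance between
    learnable neuron positions in Euclidean space (assumed form of the
    paper's distance-dependent wiring rule)."""

    def __init__(self, n_in, n_out, dim=3, eps=1e-6):
        super().__init__()
        self.pos_in = nn.Parameter(torch.randn(n_in, dim))
        self.pos_out = nn.Parameter(torch.randn(n_out, dim))
        self.eps = eps

    def forward(self, x):
        # Pairwise distances between output and input neuron positions.
        dist = torch.cdist(self.pos_out, self.pos_in)   # (n_out, n_in)
        weight = 1.0 / (dist + self.eps)                # inverse-distance weights
        return x @ weight.t()

layer = DistanceLayer(8, 4)
y = layer(torch.randn(2, 8))   # positions, not an 8x4 weight matrix, are learned
```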
arXiv Detail & Related papers (2025-06-16T12:26:13Z)
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval. A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed. The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- A Tunable Despeckling Neural Network Stabilized via Diffusion Equation [15.996302571895045]
Adversarial attacks can be used as a criterion for judging the adaptability of neural networks to real data. We propose a tunable, regularized neural network framework that unrolls a shallow denoising neural network block and a diffusion regularization block into a single network for end-to-end training.
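As a rough illustration of this unrolled design, the sketch below alternates a shallow learned denoising block with an explicit diffusion (Laplacian) step; the block sizes, step size, and stencil are assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Discrete 2-D Laplacian stencil used as the diffusion operator.
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

class UnrolledDespeckler(nn.Module):
    """Sketch of unrolling a shallow denoiser and a diffusion step into one
    end-to-end trainable network (illustrative, not the paper's design)."""

    def __init__(self, iters=4, dt=0.1):
        super().__init__()
        self.denoise = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
        self.iters, self.dt = iters, dt

    def forward(self, x):
        for _ in range(self.iters):
            x = x - self.denoise(x)                    # shallow denoising block
            # Explicit diffusion step: x <- x + dt * Laplacian(x).
            x = x + self.dt * F.conv2d(x, LAPLACIAN.to(x), padding=1)
        return x

net = UnrolledDespeckler()
out = net(torch.randn(1, 1, 64, 64))
```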
arXiv Detail & Related papers (2024-11-24T17:08:43Z)
- Residual resampling-based physics-informed neural network for neutron diffusion equations [7.105073499157097]
The neutron diffusion equation plays a pivotal role in the analysis of nuclear reactors.
Traditional PINN approaches often utilize a fully connected network (FCN) architecture.
R2-PINN, the proposed residual resampling-based PINN, effectively overcomes the limitations inherent in current methods, providing more accurate and robust solutions for neutron diffusion equations.
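The abstract does not spell out the resampling rule, so the following sketch shows a generic residual-based resampling step often used in PINN training: draw candidate collocation points, score them by PDE residual magnitude, and keep the worst offenders. The function names and the toy residual are hypothetical:

```python
import torch
import torch.nn as nn

def resample_by_residual(model, residual_fn, n_candidates=4096, n_keep=512):
    """Generic residual-based resampling of collocation points, illustrating
    the idea behind R2-PINN rather than its exact scheme."""
    x = torch.rand(n_candidates, 2)                 # candidates in a 2-D domain
    r = residual_fn(model, x).abs().squeeze(-1)     # residual magnitude per point
    keep = r.topk(n_keep).indices                   # focus on high-residual points
    return x[keep]

# Toy residual just to make the sketch executable; a real PINN would
# differentiate the network output via autograd to form the PDE residual.
model = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
def toy_residual(model, x):
    return model(x)

pts = resample_by_residual(model, toy_residual)
print(pts.shape)    # torch.Size([512, 2])
```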
arXiv Detail & Related papers (2024-06-23T13:49:31Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
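A minimal sketch of this parameters-as-graph encoding follows, assuming nodes are neurons carrying bias features and edges carry weights; the paper's actual featurization and equivariance machinery are richer:

```python
import torch
import torch.nn as nn

def mlp_to_graph(mlp):
    """Encode an MLP's parameters as a computational graph: one node per
    neuron (bias as node feature), one edge per connection (weight as edge
    feature). Illustrative featurization, not the paper's exact scheme."""
    node_feats, edge_index, edge_feats, offset = [], [], [], 0
    layers = [m for m in mlp if isinstance(m, nn.Linear)]
    node_feats += [0.0] * layers[0].in_features     # input neurons: zero bias
    for lin in layers:
        n_in, n_out = lin.in_features, lin.out_features
        node_feats += lin.bias.tolist()             # one node per output neuron
        for j in range(n_out):
            for i in range(n_in):
                edge_index.append((offset + i, offset + n_in + j))
                edge_feats.append(lin.weight[j, i].item())
        offset += n_in
    return (torch.tensor(node_feats),
            torch.tensor(edge_index).t(),           # (2, num_edges)
            torch.tensor(edge_feats))

nodes, edges, weights = mlp_to_graph(nn.Sequential(nn.Linear(2, 4), nn.Linear(4, 1)))
```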
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- SynA-ResNet: Spike-driven ResNet Achieved through OR Residual Connection [10.702093960098104]
Spiking Neural Networks (SNNs) have garnered substantial attention in brain-like computing for their biological fidelity and the capacity to execute energy-efficient spike-driven operations.
We propose a novel training paradigm that first accumulates a large amount of redundant information through an OR Residual Connection (ORRC).
We then filter out the redundant information using the Synergistic Attention (SynA) module, which promotes feature extraction in the backbone while suppressing the influence of noise and useless features in the shortcuts.
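For binary spike tensors, an OR residual connection can be sketched as an element-wise logical OR of the two paths, keeping the merged signal binary; a minimal illustration under that assumption:

```python
import torch

def or_residual(spikes_main, spikes_shortcut):
    """Sketch of an OR Residual Connection (ORRC) for binary spike maps:
    element-wise logical OR instead of addition, so the merged output stays
    binary and fires whenever either path fires."""
    return torch.logical_or(spikes_main.bool(), spikes_shortcut.bool()).float()

a = (torch.rand(1, 4, 8, 8) > 0.8).float()   # sparse binary spikes
b = (torch.rand(1, 4, 8, 8) > 0.8).float()
out = or_residual(a, b)                      # still binary {0, 1}
```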
arXiv Detail & Related papers (2023-11-11T13:36:27Z)
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
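For reference, the SGDM update whose dynamics the paper studies through the NTK can be written in its textbook heavy-ball form (the paper's exact parameterization may differ):

```latex
% SGDM update: \beta is the momentum coefficient, \eta the learning rate.
v_{t+1} = \beta\, v_t - \eta\, \nabla_{\theta} L(\theta_t), \qquad
\theta_{t+1} = \theta_t + v_{t+1}
```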
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP (McCulloch-Pitts) model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
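As a rough sketch of the idea, the function below perturbs randomly selected activations instead of zeroing them as dropout does; the sampling probability and noise-magnitude rule are simplified assumptions, not Disout's exact formulation:

```python
import torch

def feature_distortion(x, p=0.1, alpha=1.0, training=True):
    """Illustrative feature-map distortion in the spirit of Disout: add
    random perturbations at randomly chosen positions rather than zeroing
    activations. Simplified assumption, not the paper's formulation."""
    if not training:
        return x
    mask = (torch.rand_like(x) < p).float()          # positions to distort
    noise = alpha * x.std() * torch.randn_like(x)    # perturbation magnitude
    return x + mask * noise

feat = torch.randn(8, 16, 32, 32)
out = feature_distortion(feat, p=0.1)
```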
arXiv Detail & Related papers (2020-02-23T13:59:13Z)