FM-SIREN & FM-FINER: Nyquist-Informed Frequency Multiplier for Implicit Neural Representation with Periodic Activation
- URL: http://arxiv.org/abs/2509.23438v2
- Date: Tue, 30 Sep 2025 17:13:25 GMT
- Authors: Mohammed Alsakabi, Wael Mobeirek, John M. Dolan, Ozan K. Tonguz
- Abstract summary: We propose FM-SIREN and FM-FINER, which assign Nyquist-informed, neuron-specific frequency multipliers to periodic activations. This simple yet principled modification reduces the redundancy of features by nearly 50% and consistently improves signal reconstruction across diverse INR tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing periodic activation-based implicit neural representation (INR) networks, such as SIREN and FINER, suffer from hidden feature redundancy, where neurons within a layer capture overlapping frequency components due to the use of a fixed frequency multiplier. This redundancy limits the expressive capacity of multilayer perceptrons (MLPs). Drawing inspiration from classical signal processing methods such as the Discrete Sine Transform (DST), we propose FM-SIREN and FM-FINER, which assign Nyquist-informed, neuron-specific frequency multipliers to periodic activations. Unlike existing approaches, our design introduces frequency diversity without requiring hyperparameter tuning or additional network depth. This simple yet principled modification reduces the redundancy of features by nearly 50% and consistently improves signal reconstruction across diverse INR tasks, including fitting 1D audio, 2D image and 3D shape, and synthesis of neural radiance fields (NeRF), outperforming their baseline counterparts while maintaining efficiency.
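The core idea described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the authors' code: it assumes a DST-like linear ramp of per-neuron frequency multipliers up to a SIREN-style maximum of 30 (the exact multiplier schedule in FM-SIREN may differ), and compares feature redundancy against a fixed-multiplier layer using a crude off-diagonal correlation proxy.

```python
import numpy as np

def fm_siren_layer(x, weights, omega_max=30.0):
    """Hypothetical FM-SIREN-style first layer.

    x: (batch, in_dim); weights: (in_dim, n_neurons).
    Neuron k uses its own multiplier omega_k = omega_max * (k + 1) / n_neurons,
    mirroring the k*pi/N frequency grid of the Discrete Sine Transform,
    instead of the single fixed multiplier SIREN applies to every neuron.
    """
    n = weights.shape[1]
    omegas = omega_max * (np.arange(1, n + 1) / n)  # neuron-specific multipliers
    return np.sin(omegas * (x @ weights))           # broadcast over neurons

def mean_offdiag_corr(h):
    """Crude redundancy proxy: mean |correlation| between distinct neuron outputs."""
    c = np.corrcoef(h.T)
    return np.abs(c - np.diag(np.diag(c))).mean()

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, (256, 1))
w = rng.normal(0.0, 1.0, (1, 64))

fixed = np.sin(30.0 * (x @ w))   # SIREN-style: one multiplier shared by all neurons
varied = fm_siren_layer(x, w)    # frequency-diverse variant

print("fixed :", mean_offdiag_corr(fixed))
print("varied:", mean_offdiag_corr(varied))
```

The correlation proxy is only a rough stand-in for the redundancy measure used in the paper; it simply makes the frequency-diversity mechanism concrete.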
Related papers
- Subtractive Modulative Network with Learnable Periodic Activations [59.89799070130572]
We propose a novel, parameter-efficient Implicit Neural Representation architecture inspired by classical subtractive synthesis. Our SMN achieves a PSNR of 40+ dB on two image datasets, comparing favorably against state-of-the-art methods in terms of both reconstruction accuracy and parameter efficiency.
arXiv Detail & Related papers (2026-02-18T10:20:50Z) - Dendritic Resonate-and-Fire Neuron for Effective and Efficient Long Sequence Modeling [66.0841376808143]
Dendritic Resonate-and-Fire (RF) neurons can efficiently extract frequency components from input signals and encode them into spike trains. However, RF neurons exhibit limited effective memory capacity and a trade-off between energy efficiency and training speed on complex tasks. We propose a Dendritic Resonate-and-Fire (D-RF) model, which explicitly incorporates a multi-dendritic and soma architecture.
arXiv Detail & Related papers (2025-09-21T18:15:45Z) - FLAIR: Frequency- and Locality-Aware Implicit Neural Representations [13.614373731196272]
Implicit Neural Representations (INRs) leverage neural networks to map coordinates to corresponding signals, enabling continuous and compact representations. Existing INRs lack frequency selectivity, spatial localization, and sparse representations, leading to an over-reliance on redundant signal components. We propose FLAIR (Frequency- and Locality-Aware Implicit Representations), which incorporates two key innovations.
arXiv Detail & Related papers (2025-08-19T06:06:04Z) - Neuromorphic Wireless Split Computing with Resonate-and-Fire Neurons [69.73249913506042]
This paper investigates a wireless split computing architecture that employs resonate-and-fire (RF) neurons to process time-domain signals directly. By resonating at tunable frequencies, RF neurons extract time-localized spectral features while maintaining low spiking activity. Experimental results show that the proposed RF-SNN architecture achieves comparable accuracy to conventional LIF-SNNs and ANNs.
arXiv Detail & Related papers (2025-06-24T21:14:59Z) - FADPNet: Frequency-Aware Dual-Path Network for Face Super-Resolution [70.61549422952193]
Face super-resolution (FSR) under limited computational costs remains an open problem. Existing approaches typically treat all facial pixels equally, resulting in suboptimal allocation of computational resources. We propose FADPNet, a Frequency-Aware Dual-Path Network that decomposes facial features into low- and high-frequency components.
arXiv Detail & Related papers (2025-06-17T02:33:42Z) - Cross-Frequency Implicit Neural Representation with Self-Evolving Parameters [52.574661274784916]
Implicit neural representation (INR) has emerged as a powerful paradigm for visual data representation. We propose a self-evolving cross-frequency INR using the Haar wavelet transform (termed CF-INR), which decouples data into four frequency components and employs INRs in the wavelet space. We evaluate CF-INR on a variety of visual data representation and recovery tasks, including image regression, inpainting, denoising, and cloud removal.
arXiv Detail & Related papers (2025-04-15T07:14:35Z) - FANeRV: Frequency Separation and Augmentation based Neural Representation for Video [32.35716293561769]
We present a Frequency Separation and Augmentation based Neural Representation for video (FANeRV). FANeRV explicitly separates input frames into high- and low-frequency components using the discrete wavelet transform. A specially designed gated network effectively fuses these frequency components for optimal reconstruction.
arXiv Detail & Related papers (2025-04-09T10:19:35Z) - Tuning the Frequencies: Robust Training for Sinusoidal Neural Networks [1.5124439914522694]
We introduce a theoretical framework that explains the capacity property of sinusoidal networks. We show how its layer compositions produce a large number of new frequencies expressed as integer combinations of the input frequencies. Our method, referred to as TUNER, greatly improves the stability and convergence of sinusoidal INR training, leading to detailed reconstructions.
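The claim that composing sinusoidal layers generates new frequencies at integer combinations of the input frequencies can be checked numerically. The sketch below (our own illustration, not the TUNER method) relies on the Jacobi-Anger expansion: sin(a·sin(ωx)) contains only the odd harmonics ω, 3ω, 5ω, ..., so composing two sine "layers" with base frequency 3 should place energy at bins 3, 9, 15 of the Fourier spectrum and none at even multiples like bin 6.

```python
import numpy as np

# Sample one full period so FFT bins align exactly with integer frequencies.
N = 4096
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Two composed sinusoidal "layers": inner frequency 3, outer amplitude 2.
y = np.sin(2.0 * np.sin(3.0 * x))

# Normalized magnitude spectrum; bin k corresponds to frequency k cycles/period.
spectrum = np.abs(np.fft.rfft(y)) / N

# Energy appears at 3, 9, 15, ... even though the input contained only frequency 3;
# even multiples of 3 (e.g. bin 6) stay empty, as Jacobi-Anger predicts.
print("bin 3 :", spectrum[3])
print("bin 9 :", spectrum[9])
print("bin 15:", spectrum[15])
print("bin 6 :", spectrum[6])
```

The bin-3 and bin-9 magnitudes correspond to the Bessel coefficients J1(2) and J3(2) of the expansion, which is why the harmonic amplitudes decay with order.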
arXiv Detail & Related papers (2024-07-30T18:24:46Z) - FINER++: Building a Family of Variable-periodic Functions for Activating Implicit Neural Representation [39.116375158815515]
Implicit Neural Representation (INR) is causing a revolution in the field of signal processing.
INR techniques suffer from the "frequency"-specified spectral bias and capacity-convergence gap.
We propose the FINER++ framework by extending existing periodic/non-periodic activation functions to variable-periodic ones.
arXiv Detail & Related papers (2024-07-28T09:24:57Z) - Spatial Annealing for Efficient Few-shot Neural Rendering [73.49548565633123]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing regularized NeRF (SANeRF). By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot neural rendering methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z) - FINER: Flexible spectral-bias tuning in Implicit NEural Representation by Variable-periodic Activation Functions [40.80112550091512]
Implicit Neural Representation is causing a revolution in the field of signal processing.
Current INR techniques suffer from a restricted capability to tune their supported frequency set.
We propose variable-periodic activation functions, for which we propose FINER.
We demonstrate the capabilities of FINER in the contexts of 2D image fitting, 3D signed distance field representation, and 5D neural fields radiance optimization.
arXiv Detail & Related papers (2023-12-05T02:23:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.