Optimizing Rank for High-Fidelity Implicit Neural Representations
- URL: http://arxiv.org/abs/2512.14366v1
- Date: Tue, 16 Dec 2025 12:52:30 GMT
- Title: Optimizing Rank for High-Fidelity Implicit Neural Representations
- Authors: Julian McGinnis, Florian A. Hölzl, Suprosanna Shit, Florentin Bieder, Paul Friedrich, Mark Mühlau, Björn Menze, Daniel Rueckert, Benedikt Wiestler,
- Abstract summary: Implicit Neural Representations (INRs) based on vanilla Multi-Layer Perceptrons (MLPs) are widely believed to be incapable of representing high-frequency content. In this paper, we challenge the notion that the low-frequency bias of vanilla MLPs is an intrinsic, architectural limitation to learning high-frequency content.
- Score: 18.87164151867775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit Neural Representations (INRs) based on vanilla Multi-Layer Perceptrons (MLPs) are widely believed to be incapable of representing high-frequency content. This has directed research efforts towards architectural interventions, such as coordinate embeddings or specialized activation functions, to represent high-frequency signals. In this paper, we challenge the notion that the low-frequency bias of vanilla MLPs is an intrinsic, architectural limitation to learn high-frequency content, but instead a symptom of stable rank degradation during training. We empirically demonstrate that regulating the network's rank during training substantially improves the fidelity of the learned signal, rendering even simple MLP architectures expressive. Extensive experiments show that using optimizers like Muon, with high-rank, near-orthogonal updates, consistently enhances INR architectures even beyond simple ReLU MLPs. These substantial improvements hold across a diverse range of domains, including natural and medical images, and novel view synthesis, with up to 9 dB PSNR improvements over the previous state-of-the-art. Our project page, which includes code and experimental results, is available at: (https://muon-inrs.github.io).
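The abstract ties high-frequency fidelity to the network's stable rank and to optimizers such as Muon that produce high-rank, near-orthogonal updates. As a minimal illustration (not the authors' implementation), stable rank can be computed from singular values, and a Newton-Schulz-style iteration, the kind of orthogonalization Muon-like optimizers apply to gradient updates, pushes a matrix's singular values toward 1:

```python
import numpy as np

def stable_rank(W):
    """Stable rank ||W||_F^2 / ||W||_2^2: a smooth, scale-invariant
    lower bound on rank(W) that can degrade during training."""
    s = np.linalg.svd(W, compute_uv=False)
    return float(np.sum(s ** 2) / s[0] ** 2)

def orthogonalize(G, steps=10):
    """Newton-Schulz iteration X <- 1.5 X - 0.5 X X^T X, which drives
    all singular values toward 1 (a near-orthogonal, high-rank update),
    similar in spirit to what Muon applies to raw gradients."""
    X = G / (np.linalg.norm(G) + 1e-12)  # ensure spectral norm <= 1
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X
```

A rank-one weight matrix has stable rank 1 regardless of its norm, while an orthogonal matrix attains the maximum, equal to its dimension.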
Related papers
- FUTON: Fourier Tensor Network for Implicit Neural Representations [56.48739018255443]
Implicit neural representations (INRs) have emerged as powerful tools for encoding signals, yet dominant designs often suffer from slow convergence, overfitting to noise, and poor extrapolation. We introduce FUTON, which models signals as generalized Fourier series whose coefficients are parameterized by a low-rank tensor decomposition.
arXiv Detail & Related papers (2026-02-13T19:31:44Z) - Adaptive Training of INRs via Pruning and Densification [6.759337697337581]
We introduce AIRe, an adaptive training scheme that refines implicit neural representations over the course of optimization. Our method uses a neuron pruning mechanism to avoid redundancy and input frequency densification to improve representation capacity. Code and pretrained models will be released for public use.
arXiv Detail & Related papers (2025-10-27T23:52:46Z) - Moving Beyond Diffusion: Hierarchy-to-Hierarchy Autoregression for fMRI-to-Image Reconstruction [65.67001243986981]
We propose MindHier, a coarse-to-fine fMRI-to-image reconstruction framework built on scale-wise autoregressive modeling. MindHier achieves superior semantic fidelity, 4.67x faster inference, and more deterministic results than diffusion-based baselines.
arXiv Detail & Related papers (2025-10-25T15:40:07Z) - High-Frequency First: A Two-Stage Approach for Improving Image INR [13.070432644808806]
Implicit Neural Representations (INRs) have emerged as a powerful alternative to traditional pixel-based formats. A key challenge lies in the spectral bias of neural networks, which tend to favor low-frequency components while struggling to capture high-frequency details. We introduce a two-stage training strategy where a neighbor-aware soft mask adaptively assigns higher weights to pixels with strong local variations.
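The neighbor-aware soft mask described above can be sketched as follows, under our own simplifying assumption (not necessarily the paper's) that "local variation" is approximated by the image gradient magnitude:

```python
import numpy as np

def soft_mask(img, alpha=4.0):
    """Per-pixel training weights: pixels with strong local variation
    (approximated here by gradient magnitude) get larger weights.
    alpha controls how much high-frequency regions are emphasized."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)
    w = 1.0 + alpha * mag / (mag.max() + 1e-12)
    return w / w.mean()  # normalize so the average weight is 1
```

On a constant image the mask is uniform; near edges the (normalized) weights exceed those in flat regions.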
arXiv Detail & Related papers (2025-08-21T13:56:36Z) - FADPNet: Frequency-Aware Dual-Path Network for Face Super-Resolution [70.61549422952193]
Face super-resolution (FSR) under limited computational costs remains an open problem. Existing approaches typically treat all facial pixels equally, resulting in suboptimal allocation of computational resources. We propose FADPNet, a Frequency-Aware Dual-Path Network that decomposes facial features into low- and high-frequency components.
arXiv Detail & Related papers (2025-06-17T02:33:42Z) - STAF: Sinusoidal Trainable Activation Functions for Implicit Neural Representation [7.2888019138115245]
Implicit Neural Representations (INRs) have emerged as a powerful framework for modeling continuous signals. The spectral bias of ReLU-based networks is a well-established limitation, restricting their ability to capture fine-grained details in target signals. We introduce Sinusoidal Trainable Activation Functions (STAF), which inherently modulate their frequency components, allowing for self-adaptive spectral learning.
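One way to realize a trainable sinusoidal activation (an illustrative sketch with our own parameter names, not the STAF formulation) is a small sum of sinusoids whose amplitudes, frequencies, and phases are the trainable parameters:

```python
import numpy as np

class TrainableSinActivation:
    """y(x) = sum_k a_k * sin(w_k * x + p_k), with a, w, p trainable
    (they would be updated by backprop in a real framework)."""
    def __init__(self, n_terms=3, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=n_terms)            # amplitudes
        self.w = rng.uniform(1.0, 30.0, n_terms)     # frequencies
        self.p = rng.uniform(0.0, np.pi, n_terms)    # phases
    def __call__(self, x):
        x = np.asarray(x, dtype=float)[..., None]
        return np.sum(self.a * np.sin(self.w * x + self.p), axis=-1)
```

Because the frequencies w_k are learned rather than fixed, the activation can adapt its spectrum to the target signal, which is the property such papers attribute to trainable periodic activations.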
arXiv Detail & Related papers (2025-02-02T18:29:33Z) - SL$^{2}$A-INR: Single-Layer Learnable Activation for Implicit Neural Representation [10.079590259767306]
Implicit Neural Representation (INR), which leverages a neural network to transform coordinate inputs into corresponding attributes, has driven significant advances in vision-related domains. We show that these challenges can be alleviated by introducing a novel approach to INR architecture. Specifically, we propose SL$^{2}$A-INR, a hybrid network that combines a single-layer learnable activation function with an MLP that uses traditional ReLU activations.
arXiv Detail & Related papers (2024-09-17T02:02:15Z) - Conv-INR: Convolutional Implicit Neural Representation for Multimodal Visual Signals [2.7195102129095003]
Implicit neural representation (INR) has recently emerged as a promising paradigm for signal representations.
This paper proposes Conv-INR, the first INR model fully based on convolution.
arXiv Detail & Related papers (2024-06-06T16:52:42Z) - Coordinate-Aware Modulation for Neural Fields [11.844561374381575]
We propose a novel way of exploiting both MLPs and grid representations in neural fields.
We suggest Coordinate-Aware Modulation (CAM), which modulates the network's parameters using scale and shift features extracted from the grid representations.
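The scale-and-shift modulation described above can be sketched as follows; nearest-neighbor grid lookup over a 1-D grid is our simplification for brevity (in practice the grid features would be interpolated):

```python
import numpy as np

def cam_modulate(h, coords, scale_grid, shift_grid):
    """Modulate hidden features h with per-location scale/shift features
    looked up from 1-D grids at normalized coords in [0, 1)."""
    n = scale_grid.shape[0]
    idx = np.clip((np.asarray(coords) * n).astype(int), 0, n - 1)
    return scale_grid[idx] * h + shift_grid[idx]
```

Conceptually this lets a cheap grid carry location-dependent detail while the shared network stays small.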
arXiv Detail & Related papers (2023-11-25T10:42:51Z) - FAMLP: A Frequency-Aware MLP-Like Architecture For Domain Generalization [73.41395947275473]
We propose a novel frequency-aware architecture, in which the domain-specific features are filtered out in the transformed frequency domain.
Experiments on three benchmarks demonstrate significant performance gains, outperforming the state-of-the-art methods by margins of 3%, 4%, and 9%, respectively.
arXiv Detail & Related papers (2022-03-24T07:26:29Z) - Uncovering the Over-smoothing Challenge in Image Super-Resolution: Entropy-based Quantification and Contrastive Optimization [67.99082021804145]
We propose an explicit solution to the COO problem, called Detail Enhanced Contrastive Loss (DECLoss).
DECLoss utilizes the clustering property of contrastive learning to directly reduce the variance of the potential high-resolution distribution.
We evaluate DECLoss on multiple super-resolution benchmarks and demonstrate that it improves the perceptual quality of PSNR-oriented models.
arXiv Detail & Related papers (2022-01-04T08:30:09Z) - Functional Regularization for Reinforcement Learning via Learned Fourier Features [98.90474131452588]
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis.
We show that it improves the sample efficiency of both state-based and image-based RL.
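The Fourier-basis embedding described above can be sketched as a projection through a frequency matrix followed by sin/cos features; here the matrix is fixed for illustration, whereas in the paper it is learned end-to-end:

```python
import numpy as np

def fourier_features(x, B):
    """gamma(x) = [sin(2*pi*Bx), cos(2*pi*Bx)]: embeds d-dim inputs x
    (shape [n, d]) through a frequency matrix B (shape [m, d]) into a
    2m-dim feature space before they reach the policy/value network."""
    proj = 2.0 * np.pi * np.asarray(x) @ np.asarray(B).T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)
```

This is the same family of embeddings used to combat spectral bias in INRs, repurposed as an input layer for RL networks.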
arXiv Detail & Related papers (2021-12-06T18:59:52Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep-learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.