The Inlet Rank Collapse in Implicit Neural Representations: Diagnosis and Unified Remedy
- URL: http://arxiv.org/abs/2602.01526v1
- Date: Mon, 02 Feb 2026 01:38:19 GMT
- Title: The Inlet Rank Collapse in Implicit Neural Representations: Diagnosis and Unified Remedy
- Authors: Jianqiao Zheng, Hemanth Saratchandran, Simon Lucey
- Abstract summary: Implicit Neural Representations (INRs) have revolutionized continuous signal modeling, yet they struggle to recover fine-grained details within finite training budgets. We introduce a structural diagnostic framework to identify the ``Inlet Rank Collapse'', a phenomenon where the low-dimensional input coordinates fail to span the high-dimensional embedding space. We derive a Rank-Expanding Initialization, a minimalist remedy that ensures the representation rank scales with the layer width without architectural modifications or computational overhead.
- Score: 30.776360295485762
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Implicit Neural Representations (INRs) have revolutionized continuous signal modeling, yet they struggle to recover fine-grained details within finite training budgets. While empirical techniques, such as positional encoding (PE), sinusoidal activations (SIREN), and batch normalization (BN), effectively mitigate this, their theoretical justifications are predominantly post hoc, focusing on the global NTK spectrum only after modifications are applied. In this work, we reverse this paradigm by introducing a structural diagnostic framework. By performing a layer-wise decomposition of the NTK, we mathematically identify the ``Inlet Rank Collapse'': a phenomenon where the low-dimensional input coordinates fail to span the high-dimensional embedding space, creating a fundamental rank deficiency at the first layer that acts as an expressive bottleneck for the entire network. This framework provides a unified perspective to re-interpret PE, SIREN, and BN as different forms of rank restoration. Guided by this diagnosis, we derive a Rank-Expanding Initialization, a minimalist remedy that ensures the representation rank scales with the layer width without architectural modifications or computational overhead. Our results demonstrate that this principled remedy enables standard MLPs to achieve high-fidelity reconstructions, proving that the key to empowering INRs lies in the structural optimization of the initial rank propagation to effectively populate the latent space.
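The first-layer bottleneck the abstract describes can be illustrated numerically. The sketch below is a hypothetical illustration, not the paper's code: it compares the rank of plain first-layer pre-activations on 2-D coordinates (which is capped by the input dimension, regardless of width) against the same layer fed positionally encoded inputs. The width, frequency count, and tolerance are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# N 2-D coordinates (typical INR input for an image).
n = 1024
coords = rng.uniform(-1.0, 1.0, size=(n, 2))

width = 256
W = rng.normal(0.0, 1.0 / np.sqrt(2), size=(2, width))  # standard fan-in init
b = np.zeros(width)

def numerical_rank(M, tol=1e-8):
    """Count singular values above a relative tolerance."""
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > tol * s[0]).sum())

# Pre-activations of a plain first layer: every feature is an affine map of
# a 2-D point, so the n x width feature matrix lies in a subspace of
# dimension at most 2 + 1 no matter how wide the layer is -- the "inlet"
# bottleneck the abstract diagnoses.
plain = coords @ W + b
print(numerical_rank(plain))  # 2 here (bias is zero); at most 3 otherwise

# Positional encoding lifts the coordinates before the first layer; the
# encoded inputs span far more directions, so the first-layer features can
# actually use the available width.
freqs = 2.0 ** np.arange(6)                      # 6 octaves per axis
ang = coords[:, :, None] * freqs[None, None, :]  # shape (n, 2, 6)
pe = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1).reshape(n, -1)
W_pe = rng.normal(0.0, 1.0 / np.sqrt(pe.shape[1]), size=(pe.shape[1], width))
print(numerical_rank(pe @ W_pe))  # much larger than the plain-input rank
```

The paper's Rank-Expanding Initialization reportedly achieves a similar rank expansion through the weight initialisation alone, without the encoding step shown here.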
Related papers
- The Key to State Reduction in Linear Attention: A Rank-based Perspective [8.006873922525275]
Recent empirical results indicate that the hidden state of trained linear attention models often exhibits a low-rank structure. We provide a theoretical analysis of the role of rank in linear attention, revealing that low effective rank can affect retrieval error by amplifying query noise. In addition to these theoretical insights, we conjecture that the low-rank states can be substantially reduced post-training.
arXiv Detail & Related papers (2026-02-04T18:39:38Z) - A new initialisation to Control Gradients in Sinusoidal Neural network [9.341735544356167]
We propose a new initialisation for networks with sinusoidal activation functions such as SIREN. Controlling both gradients and targeting vanishing pre-activations helps prevent the emergence of inappropriate frequencies during estimation. The new initialisation consistently outperforms state-of-the-art methods across a wide range of reconstruction tasks.
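The summary above does not specify the proposed scheme. For context, the standard SIREN initialisation it builds on (Sitzmann et al., 2020) is sketched below; the bounds and the default frequency `omega0 = 30` are the published baseline values, not the new method:

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer_init(fan_in, fan_out, is_first=False, omega0=30.0):
    """Baseline SIREN weight initialisation (Sitzmann et al., 2020).

    First layer: uniform in [-1/fan_in, 1/fan_in].
    Hidden layers: uniform in [-sqrt(6/fan_in)/omega0, sqrt(6/fan_in)/omega0],
    chosen so that sin(omega0 * pre-activation) keeps activations well-spread
    across depth. The paper summarised above proposes a different scheme.
    """
    if is_first:
        bound = 1.0 / fan_in
    else:
        bound = np.sqrt(6.0 / fan_in) / omega0
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))

W0 = siren_layer_init(2, 256, is_first=True)  # coordinate input layer
W1 = siren_layer_init(256, 256)               # hidden layer
# Forward pass applies sin(omega0 * (x @ W + b)) at each layer.
```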
arXiv Detail & Related papers (2025-12-06T13:23:03Z) - Attention Saturation and Gradient Suppression at Inflection Layers: Diagnosing and Mitigating Bottlenecks in Transformer Adaptation [0.0]
Pre-trained Transformers often exhibit over-confidence in source patterns and difficulty in forming new target-domain patterns during fine-tuning. We formalize the mechanism of output saturation leading to gradient suppression through standard cross-entropy and softmax analysis. We propose a diagnose-first, inject-light fine-tuning strategy: selectively inserting LoRA adapters at inflection layers to restore suppressed backward signals.
arXiv Detail & Related papers (2025-11-02T04:32:41Z) - Feedback Alignment Meets Low-Rank Manifolds: A Structured Recipe for Local Learning [7.034739490820967]
Training deep neural networks (DNNs) with backpropagation (BP) achieves state-of-the-art accuracy but requires global error propagation and full parameterization. Direct Feedback Alignment (DFA) enables local, parallelizable updates with lower memory requirements. We propose a structured local learning framework that operates directly on a low-rank manifold.
arXiv Detail & Related papers (2025-10-29T15:03:46Z) - Moving Beyond Diffusion: Hierarchy-to-Hierarchy Autoregression for fMRI-to-Image Reconstruction [65.67001243986981]
We propose MindHier, a coarse-to-fine fMRI-to-image reconstruction framework built on scale-wise autoregressive modeling. MindHier achieves superior semantic fidelity, 4.67x faster inference, and more deterministic results than the diffusion-based baselines.
arXiv Detail & Related papers (2025-10-25T15:40:07Z) - Neural Collapse under Gradient Flow on Shallow ReLU Networks for Orthogonally Separable Data [52.737775129027575]
We show that gradient flow on a two-layer ReLU network for classifying orthogonally separable data provably exhibits Neural Collapse (NC). We reveal the role of the implicit bias of the training dynamics in facilitating the emergence of NC.
arXiv Detail & Related papers (2025-10-24T01:36:19Z) - Eigen Neural Network: Unlocking Generalizable Vision with Eigenbasis [5.486667906157719]
Eigen Neural Network (ENN) is a novel architecture that re-parameterizes each layer's weights in a layer-shared, learned orthonormal eigenbasis. When integrated with standard BP, ENN consistently outperforms state-of-the-art methods on large-scale image classification benchmarks.
arXiv Detail & Related papers (2025-08-02T06:33:58Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - Non-local Meets Global: An Iterative Paradigm for Hyperspectral Image Restoration [66.68541690283068]
We propose a unified paradigm combining the spatial and spectral properties for hyperspectral image restoration.
The proposed paradigm benefits from non-local spatial denoising while keeping computational complexity low.
Experiments on HSI denoising, compressed reconstruction, and inpainting tasks, with both simulated and real datasets, demonstrate its superiority.
arXiv Detail & Related papers (2020-10-24T15:53:56Z) - Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into conventional SISR algorithms, and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z) - Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
arXiv Detail & Related papers (2020-04-20T18:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.