Structured Transformations for Stable and Interpretable Neural Computation
- URL: http://arxiv.org/abs/2508.00127v1
- Date: Thu, 31 Jul 2025 19:26:45 GMT
- Title: Structured Transformations for Stable and Interpretable Neural Computation
- Authors: Saleh Nikooroo, Thomas Engel
- Abstract summary: We introduce a reformulation of layer-level transformations that departs from the standard unconstrained affine paradigm. Our formulation encourages internal consistency and supports stable information flow across depth. We show that models constructed with these structured transformations exhibit improved gradient conditioning, reduced sensitivity to perturbations, and layer-wise robustness.
- Score: 1.2064681974642195
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite their impressive performance, contemporary neural networks often lack structural safeguards that promote stable learning and interpretable behavior. In this work, we introduce a reformulation of layer-level transformations that departs from the standard unconstrained affine paradigm. Each transformation is decomposed into a structured linear operator and a residual corrective component, enabling more disciplined signal propagation and improved training dynamics. Our formulation encourages internal consistency and supports stable information flow across depth, while remaining fully compatible with standard learning objectives and backpropagation. Through a series of synthetic and real-world experiments, we demonstrate that models constructed with these structured transformations exhibit improved gradient conditioning, reduced sensitivity to perturbations, and layer-wise robustness. We further show that these benefits persist across architectural scales and training regimes. This study serves as a foundation for a more principled class of neural architectures that prioritize stability and transparency, offering new tools for reasoning about learning behavior without sacrificing expressive power.
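The abstract fixes the decomposition but not the particular structured operator. As a minimal, hypothetical PyTorch sketch, assume the structured part is an orthogonal matrix (norm-preserving, hence well-conditioned) and the residual corrective component is a small low-rank term; the class name `StructuredLayer` and all hyperparameters below are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class StructuredLayer(nn.Module):
    """One layer y = (Q + R) x + b, where Q is a structured (here: orthogonal)
    operator and R is a residual corrective term (here: low rank).
    Illustrative sketch only; the paper does not specify this choice."""
    def __init__(self, dim: int, rank: int = 4, residual_scale: float = 0.1):
        super().__init__()
        # Structured core: constrained to be orthogonal, so it preserves
        # norms and keeps the layer's Jacobian well conditioned.
        self.core = orthogonal(nn.Linear(dim, dim, bias=False))
        # Residual correction: low-rank factors U V^T, kept small at init.
        self.u = nn.Parameter(torch.randn(dim, rank) * residual_scale / rank**0.5)
        self.v = nn.Parameter(torch.randn(dim, rank) * residual_scale / rank**0.5)
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.core(x) + (x @ self.v) @ self.u.t() + self.bias

# Usage: stack a few layers and check the end-to-end map stays well scaled.
net = nn.Sequential(*[nn.Sequential(StructuredLayer(64), nn.GELU()) for _ in range(8)])
x = torch.randn(32, 64)
print(net(x).std())  # remains near the input scale thanks to the orthogonal cores
```

Stacking such layers keeps activation scale roughly constant with depth, which is one concrete way to read the paper's claim of stable information flow.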
Related papers
- Understanding Learning Dynamics Through Structured Representations [1.2064681974642195]
This paper investigates how internal structural choices shape the behavior of learning systems. We analyze how these structures influence gradient flow, spectral sensitivity, and fixed-point behavior. Rather than prescribing fixed templates, we emphasize principles of tractable design that can steer learning behavior in interpretable ways.
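The snippet mentions spectral sensitivity; one generic way to probe it (a standard diagnostic, not the paper's own analysis) is to estimate the largest singular value of a model's input-output Jacobian by power iteration, using one JVP and one VJP per step:

```python
import torch

def jacobian_spectral_norm(f, x: torch.Tensor, iters: int = 20) -> float:
    """Estimate the largest singular value of the Jacobian df/dx at x by
    power iteration on J^T J, using forward- and reverse-mode products."""
    u = torch.randn_like(x)
    u = u / u.norm()
    for _ in range(iters):
        _, v = torch.autograd.functional.jvp(f, x, u)   # v = J u
        _, u = torch.autograd.functional.vjp(f, x, v)   # u = J^T v
        u = u / (u.norm() + 1e-12)
    _, v = torch.autograd.functional.jvp(f, x, u)
    return v.norm().item()

# e.g. measure one layer's sensitivity at a random input:
x = torch.randn(64)
print(jacobian_spectral_norm(torch.nn.Linear(64, 64), x))
```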
arXiv Detail & Related papers (2025-08-04T07:15:57Z)
- Hierarchical Feature-level Reverse Propagation for Post-Training Neural Networks [24.442592456755698]
End-to-end autonomous driving has emerged as a dominant paradigm, yet its highly entangled black-box models pose challenges in terms of interpretability and safety assurance. This paper proposes a hierarchical and decoupled post-training framework tailored for pretrained neural networks.
arXiv Detail & Related papers (2025-06-08T15:19:03Z)
- Model Hemorrhage and the Robustness Limits of Large Language Models [119.46442117681147]
Large language models (LLMs) demonstrate strong performance across natural language processing tasks, yet undergo significant performance degradation when modified for deployment. We define this phenomenon as model hemorrhage: performance decline caused by parameter alterations and architectural changes.
arXiv Detail & Related papers (2025-03-31T10:16:03Z)
- Latent Convergence Modulation in Large Language Models: A Novel Approach to Iterative Contextual Realignment [0.0]
A structured modulation mechanism was introduced to regulate hidden state transitions. Lattice adjustments contributed to reductions in perplexity fluctuations, entropy variance, and lexical instability.
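The abstract is terse about the mechanism itself; one plausible, purely illustrative form of a structured modulation that regulates hidden-state transitions is a learned gate that damps each update. All module names below are invented:

```python
import torch
import torch.nn as nn

class ModulatedTransition(nn.Module):
    """Hypothetical gated modulation of a hidden-state transition: the move
    from h to f(h) is damped by a learned gate so successive states change
    smoothly. One possible reading of the abstract, not the paper's method."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.f = nn.Linear(dim, dim)
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        g = self.gate(h)  # in (0, 1): how far to move toward the new state
        return (1 - g) * h + g * torch.tanh(self.f(h))

h_next = ModulatedTransition()(torch.randn(4, 128))
```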
arXiv Detail & Related papers (2025-02-10T09:46:33Z)
- Hierarchical Contextual Manifold Alignment for Structuring Latent Representations in Large Language Models [7.798982346197703]
The organization of latent token representations plays a crucial role in determining the stability, generalization, and contextual consistency of language models. A hierarchical alignment method was applied to token embeddings without altering core model weights. Experimental evaluations demonstrated improvements in rare token retrieval, adversarial robustness, and long-range dependency tracking.
arXiv Detail & Related papers (2025-02-06T04:01:27Z)
- Super Level Sets and Exponential Decay: A Synergistic Approach to Stable Neural Network Training [0.0]
We develop a dynamic learning rate algorithm that integrates exponential decay and advanced anti-overfitting strategies.
We prove that the superlevel sets of the loss function, as influenced by our adaptive learning rate, are always connected.
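The snippet does not give the algorithm in closed form; its exponential-decay ingredient alone corresponds to the standard schedule eta_t = eta_0 * gamma^t, shown here in PyTorch (the paper's adaptive, anti-overfitting components are omitted):

```python
import torch

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
# eta_t = eta_0 * gamma^t: plain exponential decay of the step size.
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)

x, y = torch.randn(64, 10), torch.randn(64, 1)
for epoch in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    sched.step()  # decay the learning rate once per epoch
```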
arXiv Detail & Related papers (2024-09-25T09:27:17Z)
- Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations [75.14793516745374]
We propose to strengthen the structural inductive bias of a Transformer by intermediate pre-training.
Our experiments confirm that this helps with few-shot learning of syntactic tasks such as chunking.
Our analysis shows that the intermediate pre-training leads to attention heads that keep track of which syntactic transformation needs to be applied to which token.
arXiv Detail & Related papers (2024-07-05T14:29:44Z)
- Normalization and effective learning rates in reinforcement learning [52.59508428613934]
Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature.
We show that normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate.
We propose to make the learning rate schedule explicit with a simple reparameterization which we call Normalize-and-Project.
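A schematic reading of that reparameterization (details and scope differ in the paper): when normalization layers make the network's output invariant to the scale of its weights, growing weight norms silently shrink the effective step size, so projecting weights back to a fixed norm after each update makes the schedule explicit again. A hypothetical helper:

```python
import torch

@torch.no_grad()
def project_to_norm(model: torch.nn.Module, target_norm: float = 1.0) -> None:
    """Hypothetical post-step projection: rescale each weight matrix to a
    fixed norm. If normalization makes the function scale-invariant in the
    weights, this leaves the computed function unchanged while stopping the
    implicit decay of the effective learning rate."""
    for p in model.parameters():
        if p.dim() > 1:  # weight matrices only; skip biases and norm gains
            p.mul_(target_norm / (p.norm() + 1e-12))

# usage, once per optimization step:
# loss.backward(); opt.step(); project_to_norm(model)
```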
arXiv Detail & Related papers (2024-07-01T20:58:01Z)
- Latent Traversals in Generative Models as Potential Flows [113.4232528843775]
We propose to model latent structures with a learned dynamic potential landscape.
Inspired by physics, optimal transport, and neuroscience, these potential landscapes are learned as physically realistic partial differential equations.
Our method achieves trajectories that are both qualitatively and quantitatively more disentangled than those of state-of-the-art baselines.
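As a caricature of the idea (the paper learns physically realistic PDE landscapes; none of that machinery appears here), a latent traversal can be written as explicit Euler steps down the gradient of a learned scalar potential:

```python
import torch
import torch.nn as nn

# A learned scalar potential Phi over latent space; traversals follow its
# negative gradient field, as in a potential flow. Architecture is invented.
potential = nn.Sequential(nn.Linear(128, 256), nn.Softplus(), nn.Linear(256, 1))

def traverse(z: torch.Tensor, steps: int = 10, step_size: float = 0.1):
    """Integrate z along -grad Phi(z) with explicit Euler steps."""
    path = [z]
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        (grad,) = torch.autograd.grad(potential(z).sum(), z)
        z = z - step_size * grad
        path.append(z.detach())
    return path

path = traverse(torch.randn(4, 128))  # a trajectory of latent codes
```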
arXiv Detail & Related papers (2023-04-25T15:53:45Z)
- A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA can consistently enhance various network backbones.
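A stripped-down illustration of the sharing idea (the LSTM component mentioned in the abstract is omitted, and all module names are invented): a single attention module is reused at every depth, exploiting the observation that per-layer attention maps are strongly correlated anyway.

```python
import torch
import torch.nn as nn

class SharedAttentionBackbone(nn.Module):
    """Toy backbone in which one attention module is shared by every block.
    Sketch only; the paper's DIA additionally routes the shared module
    through a long short-term memory component."""
    def __init__(self, dim: int = 64, depth: int = 6, heads: int = 4):
        super().__init__()
        self.shared_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim), nn.GELU())
            for _ in range(depth)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            a, _ = self.shared_attn(x, x, x)  # same weights at every depth
            x = block(x + a)
        return x

out = SharedAttentionBackbone()(torch.randn(2, 16, 64))
```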
arXiv Detail & Related papers (2022-10-27T13:24:08Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
- Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
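In sketch form (training with the paper's objective is omitted, and the architecture below is invented), the learned perturbation set is the image of an L2 ball in latent space under a conditional generator:

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Hypothetical conditional generator g(z, x). The perturbation set of x
    is S(x) = { g(z, x) : ||z||_2 <= eps }, the image of a latent ball."""
    def __init__(self, latent_dim: int = 8, x_dim: int = 784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + x_dim, 256), nn.ReLU(),
            nn.Linear(256, x_dim), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Additive perturbation for simplicity; the paper's need not be.
        return x + self.net(torch.cat([z, x], dim=-1))

def sample_from_set(g: PerturbationGenerator, x: torch.Tensor,
                    eps: float = 1.0, latent_dim: int = 8) -> torch.Tensor:
    """Draw a random element of S(x) by projecting z into the eps-ball."""
    z = torch.randn(x.size(0), latent_dim)
    z = z * (eps / z.norm(dim=-1, keepdim=True)).clamp(max=1.0)
    return g(z, x)

x_pert = sample_from_set(PerturbationGenerator(), torch.rand(16, 784))
```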
arXiv Detail & Related papers (2020-07-16T16:39:54Z)