Developmental Symmetry-Loss: A Free-Energy Perspective on Brain-Inspired Invariance Learning
- URL: http://arxiv.org/abs/2512.10984v2
- Date: Mon, 15 Dec 2025 20:56:17 GMT
- Title: Developmental Symmetry-Loss: A Free-Energy Perspective on Brain-Inspired Invariance Learning
- Authors: Arif Dönmez
- Abstract summary: We propose Symmetry-Loss, a brain-inspired algorithmic principle. We show how Symmetry-Loss operationalizes a Free-Energy-like objective for representation learning. The result is a general computational mechanism linking developmental learning in the brain with principled representation learning in artificial systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Symmetry-Loss, a brain-inspired algorithmic principle that enforces invariance and equivariance through a differentiable constraint derived from environmental symmetries. The framework models learning as the iterative refinement of an effective symmetry group, paralleling developmental processes in which cortical representations align with the world's structure. By minimizing structural surprise, i.e., deviations from symmetry consistency, Symmetry-Loss operationalizes a Free-Energy-like objective for representation learning. This formulation bridges predictive-coding and group-theoretic perspectives, showing how efficient, stable, and compositional representations can emerge from symmetry-based self-organization. The result is a general computational mechanism linking developmental learning in the brain with principled representation learning in artificial systems.
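As a minimal illustration of the abstract's idea, a symmetry-consistency penalty can be sketched as an invariance loss averaged over a sampled group action. Everything below (the toy encoder, the cyclic-shift group, the function names) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # Toy encoder: a linear map followed by tanh.
    return np.tanh(W @ x)

def symmetry_loss(x, W, shifts=(1, 2, 3)):
    """Structural surprise: mean squared deviation of the representation
    from invariance under cyclic shifts (a stand-in for an environmental
    symmetry group)."""
    z = encoder(x, W)
    penalty = 0.0
    for s in shifts:
        z_g = encoder(np.roll(x, s), W)
        penalty += np.mean((z - z_g) ** 2)
    return penalty / len(shifts)

x = rng.normal(size=8)
W = rng.normal(size=(4, 8))
print(symmetry_loss(x, W))       # positive for a generic encoder

# A shift-invariant encoder (here: averaging) drives the loss to zero.
W_inv = np.ones((4, 8)) / 8.0
print(symmetry_loss(x, W_inv))   # ~0.0
```

Minimizing such a penalty pushes the representation toward invariance under the assumed group; an equivariant variant would compare `encoder(g·x)` against `ρ(g)·encoder(x)` instead.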
Related papers
- Loss Landscape Geometry and the Learning of Symmetries: Or, What Influence Functions Reveal About Robust Generalization [0.14201057456467273]
We study how neural emulators internalize physical symmetries by introducing an influence-based diagnostic. This quantity probes the local geometry of the learned loss landscape. We show that orbit-wise gradient coherence provides the mechanism for learning to generalize over symmetry transformations.
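One plausible reading of "orbit-wise gradient coherence" is the mean pairwise cosine similarity of loss gradients across a symmetry orbit. The sketch below is a guess at that construction, using a linear model and a cyclic-shift orbit; none of these choices come from the paper itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss_grad(w, x, y):
    # Gradient of a squared-error loss for a linear model w·x.
    return 2.0 * (w @ x - y) * x

def orbit_coherence(w, x, y, group=(0, 1, 2, 3)):
    """Mean pairwise cosine similarity of loss gradients across a
    symmetry orbit (cyclic shifts of the input; the label is assumed
    invariant under the group action)."""
    grads = [loss_grad(w, np.roll(x, g), y) for g in group]
    sims = []
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            gi, gj = grads[i], grads[j]
            sims.append(gi @ gj / (np.linalg.norm(gi) * np.linalg.norm(gj)))
    return float(np.mean(sims))

w = rng.normal(size=6)
x = rng.normal(size=6)
print(orbit_coherence(w, x, y=0.5))   # a value in [-1, 1]
```

High coherence means gradient steps computed on one orbit element also reduce the loss on its symmetric copies, which is the intuition the summary points at.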
arXiv Detail & Related papers (2026-01-28T02:14:01Z)
- Reinforcement Learning Using known Invariances [54.91261509214309]
This paper develops a theoretical framework for incorporating known group symmetries into kernel-based reinforcement learning. We show that symmetry-aware RL methods achieve significantly better performance than their standard kernel counterparts.
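A standard way to build symmetry into a kernel method, which may be in the spirit of this paper, is to average a base kernel over a finite group action, yielding an invariant kernel. The shift group and RBF base kernel below are illustrative choices, not the paper's setting:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # Base RBF kernel.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def invariant_kernel(x, y, group_size=4):
    """Symmetry-aware kernel: average the base kernel over a finite
    group action (cyclic shifts here). Group averaging is a textbook
    construction for invariant kernels."""
    return np.mean([rbf(x, np.roll(y, g)) for g in range(group_size)])

x = np.array([1.0, 0.0, 0.0, 0.0])
# The invariant kernel cannot distinguish y from any shifted copy of y:
print(invariant_kernel(x, np.roll(x, 2)))
print(invariant_kernel(x, x))   # equal to the line above
```

Value functions estimated with such a kernel automatically respect the symmetry, which is one route to the sample-efficiency gains the summary mentions.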
arXiv Detail & Related papers (2025-11-05T13:56:14Z)
- Partial Symmetry Enforced Attention Decomposition (PSEAD): A Group-Theoretic Framework for Equivariant Transformers in Biological Systems [0.0]
This research introduces the Theory of Partial Symmetry Enforced Attention Decomposition (PSEAD). We formalize the concept of local permutation subgroup actions on windows of biological data, proving that under such actions, the attention mechanism naturally decomposes into a direct sum of irreducible components. This work lays the groundwork for a new generation of biologically informed, symmetry-aware artificial intelligence models.
arXiv Detail & Related papers (2025-07-20T10:44:31Z)
- Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [73.18052192964349]
We develop a theoretical framework that explains how discrete symbolic structures can emerge naturally from continuous neural network training dynamics. By lifting neural parameters to a measure space and modeling training as Wasserstein gradient flow, we show that under geometric constraints, the parameter measure $\mu_t$ undergoes two concurrent phenomena.
arXiv Detail & Related papers (2025-06-26T22:40:30Z)
- Symmetry From Scratch: Group Equivariance as a Supervised Learning Task [1.8570740863168362]
In machine-learning datasets with symmetries, the prevailing way to retain compatibility with symmetry breaking has been to relax equivariant architectural constraints.
We introduce symmetry-cloning, a method for inducing equivariance in machine learning models.
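One way to read "symmetry-cloning" is as distillation: an unconstrained student is trained, purely as a supervised task, on input-output pairs from an equivariant teacher, and inherits the teacher's equivariance. The sketch below uses a shift-equivariant linear teacher (circular convolution) and a least-squares student; these specifics are assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6

def teacher(x):
    # A shift-equivariant "teacher": circular convolution with a fixed kernel.
    k = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0])
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

# Cloning as a supervised learning task: fit an unconstrained linear
# student to the teacher's input-output pairs.
X = rng.normal(size=(200, d))
Y = np.array([teacher(x) for x in X])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # student: x -> x @ W

# The student inherits the teacher's equivariance: shifting the input
# shifts the output.
x = rng.normal(size=d)
err = np.max(np.abs(np.roll(x, 1) @ W - np.roll(x @ W, 1)))
print(err)   # ~0
```

The point of the construction is that equivariance arrives through the data rather than through a hard architectural constraint, so it can also be learned only approximately when the data breaks the symmetry.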
arXiv Detail & Related papers (2024-10-05T00:44:09Z)
- Bayesian Unsupervised Disentanglement of Anatomy and Geometry for Deep Groupwise Image Registration [59.062085785106234]
This article presents a general Bayesian learning framework for multi-modal groupwise image registration. We propose a novel hierarchical variational auto-encoding architecture to realise the inference procedure of the latent variables. Experiments were conducted to validate the proposed framework on four different datasets of cardiac, brain, and abdominal medical images.
arXiv Detail & Related papers (2024-01-04T08:46:39Z)
- Towards Information Theory-Based Discovery of Equivariances [1.2289361708127875]
The presence of symmetries imposes a stringent set of constraints on a system.
In parallel, principled models of complexity-constrained learning and behaviour make increasing use of information-theoretic methods.
We propose a novel variant of the Information Bottleneck principle, which has served as a productive basis for many principled studies of learning and information-constrained adaptive behaviour.
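For context, the standard Information Bottleneck objective that this variant builds on compresses the input $X$ into a representation $Z$ while preserving information about a relevance variable $Y$:

```latex
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)
```

where $\beta > 0$ sets the compression-relevance trade-off; how the cited work modifies this objective to expose equivariances is detailed in the paper itself.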
arXiv Detail & Related papers (2023-10-25T11:19:40Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, acquired through unsupervised learning rather than pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- Symmetry Induces Structure and Constraint of Learning [0.0]
We unveil the importance of loss-function symmetries in shaping, if not deciding, the learning behavior of machine learning models.
Common instances of mirror symmetries in deep learning include rescaling, rotation, and permutation symmetry.
We show that the theoretical framework can explain intriguing phenomena, such as the loss of plasticity and various collapse phenomena in neural networks.
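The rescaling symmetry mentioned above is easy to make concrete: because ReLU is positively homogeneous, scaling one layer up and the next layer down leaves a network's function, and hence its loss, unchanged. The toy two-layer network below is an illustration of that known fact, not code from the cited paper:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def net(x, W1, W2):
    # Two-layer ReLU network: f(x) = W2 · relu(W1 · x).
    return W2 @ relu(W1 @ x)

rng = np.random.default_rng(3)
W1 = rng.normal(size=(5, 4))
W2 = rng.normal(size=(2, 5))
x = rng.normal(size=4)

# Rescaling symmetry: scaling the first layer by λ > 0 and the second
# by 1/λ leaves the function unchanged, since relu(λz) = λ·relu(z).
lam = 3.7
out_a = net(x, W1, W2)
out_b = net(x, lam * W1, W2 / lam)
print(np.max(np.abs(out_a - out_b)))   # ~0
```

Such symmetries carve flat directions into the loss landscape, which is the structural feature the paper connects to plasticity loss and collapse phenomena.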
arXiv Detail & Related papers (2023-09-29T02:21:31Z)
- Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance [72.50214227616728]
Interpretability methods are valuable only if their explanations faithfully describe the explained model.
We consider neural networks whose predictions are invariant under a specific symmetry group.
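For a model that is invariant under a group, a faithful explanation should transform consistently with the input: explaining a permuted input should yield the permuted explanation. The score below is an illustrative check of that property using a finite-difference gradient explanation on a toy permutation-invariant model; it is not the paper's exact metric:

```python
import numpy as np

def model(x):
    # A permutation-invariant model: depends only on sum and sum of squares.
    return float(np.sum(x) ** 2 + np.sum(x ** 2))

def grad_explanation(x, eps=1e-5):
    # Simple gradient-based explanation via central finite differences.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (model(x + e) - model(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(4)
x = rng.normal(size=5)
perm = rng.permutation(5)

# Equivariance check: explaining the permuted input should equal the
# permuted explanation (score near 1.0 = faithful under the symmetry).
e_perm = grad_explanation(x[perm])
e_base = grad_explanation(x)[perm]
score = 1.0 - np.max(np.abs(e_perm - e_base))
print(score)   # close to 1.0
```

An interpretability method that scores poorly on such a check is attributing importance in a way the model itself cannot distinguish, which is the failure mode the paper targets.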
arXiv Detail & Related papers (2023-04-13T17:59:03Z)
- On discrete symmetries of robotics systems: A group-theoretic and data-driven analysis [38.92081817503126]
We study discrete morphological symmetries of dynamical systems.
These symmetries arise from the presence of one or more planes/axes of symmetry in the system's morphology.
We exploit these symmetries using data augmentation and $G$-equivariant neural networks.
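Of the two strategies named, data augmentation is the simpler: for each training sample, add its image under the morphological symmetry. The sketch below assumes a reflection symmetry with invariant labels; in the equivariant case (e.g. joint torques), the labels would be transformed too. The function and its defaults are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def augment_with_group(X, Y, reflect=lambda x: x[::-1]):
    """Symmetry-aware data augmentation: for a reflection symmetry of
    the system, append the reflected copy of every sample. Labels are
    assumed invariant here; equivariant labels would be reflected too."""
    X_aug = np.concatenate([X, np.array([reflect(x) for x in X])])
    Y_aug = np.concatenate([Y, Y])
    return X_aug, Y_aug

X = rng.normal(size=(10, 4))
Y = rng.integers(0, 2, size=10)
X_aug, Y_aug = augment_with_group(X, Y)
print(X_aug.shape, Y_aug.shape)   # (20, 4) (20,)
```

The $G$-equivariant-network alternative bakes the same constraint into the architecture instead of the dataset, trading augmentation cost for restricted weight structure.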
arXiv Detail & Related papers (2023-02-21T04:10:16Z)
- On the Importance of Asymmetry for Siamese Representation Learning [53.86929387179092]
Siamese networks are conceptually symmetric with two parallel encoders.
We study the importance of asymmetry by explicitly distinguishing the two encoders within the network.
We find the improvements from asymmetric designs generalize well to longer training schedules, multiple other frameworks and newer backbones.
arXiv Detail & Related papers (2022-04-01T17:57:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.