On Broken Symmetry in Cognition
- URL: http://arxiv.org/abs/2303.06047v2
- Date: Thu, 12 Jun 2025 13:43:32 GMT
- Title: On Broken Symmetry in Cognition
- Authors: Xin Li
- Abstract summary: This paper argues that both cognitive evolution and development unfold via symmetry-breaking transitions. First, spatial symmetry is broken through bilateral body plans and neural codes like grid and place cells. Second, reinforcement learning introduces temporal asymmetry by favoring future rewards. Third, goal-directed simulation breaks symmetry between internal self-models and the external world.
- Score: 5.234742752529437
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Cognition is not passive data accumulation but the active resolution of uncertainty through symmetry breaking. This paper argues that both cognitive evolution and development unfold via sequential symmetry-breaking transitions that disrupt innate regularities across space, time, self, and representation. First, spatial symmetry is broken through bilateral body plans and neural codes like grid and place cells, which privilege egocentric orientation and localized encoding. Second, reinforcement learning introduces temporal asymmetry by favoring future rewards, establishing a directional flow of inference. Third, goal-directed simulation breaks spatiotemporal symmetry between internal self-models and the external world, enabling embodied inference and solving the combinatorial search problem. Fourth, social cognition via mentalizing and imitation breaks the symmetry between minds, allowing agents to infer others' beliefs. Finally, language imposes a linear, recursive structure onto unordered thought, breaking expressive symmetry through syntax and grammar. These asymmetries are unified by the Context-Content Uncertainty Principle (CCUP), which frames cognition as a cyclical entropy-minimizing process. At the core lies the principle of structure-before-specificity: ambiguous input is first mapped onto stable latent structures before being bound to specific instances. This promotes generalization, reduces sample complexity, and prevents overfitting. Inverting inference, from content back to context, further breaks the curse of dimensionality by constraining inference to goal-consistent manifolds. Thus, symmetry breaking is not incidental but the foundational mechanism by which cognition organizes, stabilizes, and scales intelligent behavior in an uncertain and dynamic world.
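The abstract's "structure-before-specificity" principle (ambiguous input is first mapped onto stable latent structures before being bound to specific instances, reducing uncertainty) can be illustrated with a minimal toy sketch. The prototypes, the softmax assignment, and all numbers below are illustrative assumptions, not the authors' model:

```python
# Toy sketch of structure-before-specificity: a noisy input is softly
# assigned to a small set of stable latent prototypes, and the entropy
# of the assignment drops relative to the uninformed prior.
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def structure_posterior(x, prototypes, beta=4.0):
    """Soft assignment of x to prototypes via a softmax over negative
    squared distances; larger beta gives sharper assignments."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    logits = -beta * d2
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(0)
prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
x = prototypes[1] + 0.1 * rng.normal(size=2)  # noisy instance of prototype 1

prior = np.full(3, 1.0 / 3.0)             # maximal uncertainty over structures
post = structure_posterior(x, prototypes)  # after mapping onto structure

print(entropy(prior), entropy(post))
```

Mapping the ambiguous input onto the latent structure strictly reduces entropy here, which is the entropy-minimizing direction the CCUP framing describes.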
Related papers
- Generalized Linear Mode Connectivity for Transformers [87.32299363530996]
A striking phenomenon is linear mode connectivity (LMC), where independently trained models can be connected by low- or zero-loss paths. Prior work has predominantly focused on neuron re-ordering through permutations, but such approaches are limited in scope. We introduce a unified framework that captures four symmetry classes: permutations, semi-permutations, transformations, and general invertible maps. This generalization enables, for the first time, the discovery of low- and zero-barrier linear paths between independently trained Vision Transformers and GPT-2 models.
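The LMC check itself is simple: interpolate between two trained parameter vectors and measure the loss barrier along the path. The convex quadratic below is a toy stand-in (an assumption for illustration, where a zero barrier is guaranteed and all minima coincide), not the nonconvex Transformer setting the paper addresses:

```python
# Minimal sketch of measuring the loss barrier along a linear path
# between two independently trained parameter vectors.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 4))
b = rng.normal(size=8)

def loss(w):
    """Convex quadratic toy loss 0.5 * ||A w - b||^2."""
    r = A @ w - b
    return 0.5 * float(r @ r)

def train(w, steps=500, lr=0.01):
    """Plain gradient descent from a given initialization."""
    for _ in range(steps):
        w = w - lr * (A.T @ (A @ w - b))
    return w

# Two "independently trained" solutions from different random inits.
w1 = train(rng.normal(size=4))
w2 = train(rng.normal(size=4))

# Loss along the linear path w(t) = (1 - t) w1 + t w2.
ts = np.linspace(0.0, 1.0, 11)
path = [loss((1 - t) * w1 + t * w2) for t in ts]
barrier = max(path) - max(loss(w1), loss(w2))
print(f"loss barrier along linear path: {barrier:.6f}")
```

In the nonconvex setting, a nonzero barrier is the default, and the paper's symmetry classes are used to align the two models so that the barrier along the aligned path collapses.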
arXiv Detail & Related papers (2025-06-28T01:46:36Z) - Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [73.18052192964349]
We develop a theoretical framework that explains how discrete symbolic structures can emerge naturally from continuous neural network training dynamics. By lifting neural parameters to a measure space and modeling training as Wasserstein gradient flow, we show that under geometric constraints, the parameter measure $\mu_t$ undergoes two concurrent phenomena.
arXiv Detail & Related papers (2025-06-26T22:40:30Z) - On Context-Content Uncertainty Principle [5.234742752529437]
We develop a layered computational framework that derives operational principles from the Context-Content Uncertainty Principle. At the base level, CCUP formalizes inference as directional entropy minimization, establishing a variational gradient that favors content-first structuring. We present formal equivalence theorems, a dependency lattice among principles, and computational simulations demonstrating the efficiency gains of CCUP-aligned inference.
arXiv Detail & Related papers (2025-06-25T17:21:19Z) - Translation symmetry restoration in integrable systems: the noninteracting case [0.16385815610837165]
We study translation symmetry restoration in integrable systems. In particular, we consider non-interacting spinless fermions on the lattice prepared in non-equilibrium states invariant under $\nu>1$ lattice shifts. We show that, differently from random unitary circuits where symmetry restoration occurs abruptly for times proportional to the subsystem size, here symmetry is restored smoothly and over timescales of the order of the subsystem size squared.
arXiv Detail & Related papers (2025-06-17T14:11:31Z) - Self-Organizing Graph Reasoning Evolves into a Critical State for Continuous Discovery Through Structural-Semantic Dynamics [0.0]
We show how agentic graph reasoning systems spontaneously evolve toward a critical state that sustains continuous semantic discovery. We identify a subtle yet robust regime in which semantic entropy dominates over structural entropy. Our findings provide practical strategies for engineering intelligent systems with intrinsic capacities for long-term discovery and adaptation.
arXiv Detail & Related papers (2025-03-24T16:30:37Z) - BrainMAP: Learning Multiple Activation Pathways in Brain Networks [77.15180533984947]
We introduce a novel framework BrainMAP to learn Multiple Activation Pathways in Brain networks.
Our framework enables explanatory analyses of crucial brain regions involved in tasks.
arXiv Detail & Related papers (2024-12-23T09:13:35Z) - Exceptional Points and Stability in Nonlinear Models of Population Dynamics having $\mathcal{PT}$ symmetry [49.1574468325115]
We analyze models governed by the replicator equation of evolutionary game theory and related Lotka-Volterra systems of population dynamics. We study the emergence of exceptional points in two cases: (a) when the governing symmetry properties are tied to global properties of the models, and (b) when these symmetries emerge locally around stationary states.
arXiv Detail & Related papers (2024-11-19T02:15:59Z) - NeuroBind: Towards Unified Multimodal Representations for Neural Signals [20.02503060795981]
We present NeuroBind, a representation that unifies multiple brain signal types, including EEG, fMRI, calcium imaging, and spiking data.
This approach holds significant potential for advancing neuroscience research, improving AI systems, and developing neuroprosthetics and brain-computer interfaces.
arXiv Detail & Related papers (2024-07-19T04:42:52Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Bayesian Theory of Consciousness as Exchangeable Emotion-Cognition Inference [5.234742752529437]
This paper proposes a unified framework in which consciousness emerges as a cycle-consistent, affectively anchored inference process. We formalize emotion as a low-dimensional structural prior and cognition as a specificity-instantiating update. This emotion-cognition cycle minimizes joint uncertainty by aligning emotionally weighted priors with context-sensitive cognitive appraisals.
arXiv Detail & Related papers (2024-05-17T17:06:19Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Learning to Act through Evolution of Neural Diversity in Random Neural Networks [9.387749254963595]
In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
arXiv Detail & Related papers (2023-05-25T11:33:04Z) - Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs and here, we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
arXiv Detail & Related papers (2023-05-02T22:15:54Z) - Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks [20.99799416963467]
In the human brain, neuronal diversity is an enabling factor for all kinds of biological intelligent behaviors.
In this Primer, we first discuss the preliminaries of biological neuronal diversity and the characteristics of information transmission and processing in a biological neuron.
arXiv Detail & Related papers (2023-01-23T02:23:45Z) - Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) - Entanglement-enabled symmetry-breaking orders [0.0]
A spontaneous symmetry-breaking order is conventionally described by a tensor-product wave-function of some few-body clusters.
We discuss a type of symmetry-breaking orders, dubbed entanglement-enabled symmetry-breaking orders, which cannot be realized by any tensor-product state.
arXiv Detail & Related papers (2022-07-18T18:00:00Z) - Simultaneous Transport Evolution for Minimax Equilibria on Measures [48.82838283786807]
Min-max optimization problems arise in several key machine learning setups, including adversarial learning and generative modeling.
In this work we focus instead on finding mixed equilibria, and consider the associated lifted problem in the space of probability measures.
By adding entropic regularization, our main result establishes global convergence towards the global equilibrium.
arXiv Detail & Related papers (2022-02-14T02:23:16Z) - POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchy populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Localisation in quasiperiodic chains: a theory based on convergence of local propagators [68.8204255655161]
We present a theory of localisation in quasiperiodic chains with nearest-neighbour hoppings, based on the convergence of local propagators.
Analysing the convergence of these continued fractions, localisation or its absence can be determined, yielding in turn the critical points and mobility edges.
Results are exemplified by analysing the theory for three quasiperiodic models covering a range of behaviour.
arXiv Detail & Related papers (2021-02-18T16:19:52Z) - A Graph Neural Network Framework for Causal Inference in Brain Networks [0.3392372796177108]
A central question in neuroscience is how self-organizing dynamic interactions in the brain emerge on their relatively static backbone.
We present a graph neural network (GNN) framework to describe functional interactions based on structural anatomical layout.
We show that GNNs are able to capture long-term dependencies in data and also scale up to the analysis of large-scale networks.
arXiv Detail & Related papers (2020-10-14T15:01:21Z) - On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic to nonconservative and in particular dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
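The dissipative-Hamiltonian view in the last entry above can be made concrete with heavy-ball momentum, which discretizes the dissipative dynamics $\dot q = p$, $\dot p = -\gamma p - \nabla f(q)$. The quadratic objective and step sizes below are illustrative assumptions, not the paper's construction:

```python
# Sketch: momentum gradient descent as a splitting discretization of a
# dissipative Hamiltonian system (exponential momentum decay + gradient
# kick, then position drift). Toy objective f(q) = q^2.
import numpy as np

def grad_f(q):
    return 2.0 * q  # gradient of f(q) = q^2

q, p = 3.0, 0.0     # initial position and momentum
h, gamma = 0.1, 1.0  # step size and friction coefficient
for _ in range(200):
    p = np.exp(-gamma * h) * p - h * grad_f(q)  # dissipate, then kick
    q = q + h * p                               # drift
print(q)  # approaches the minimizer q* = 0
```

The friction term is what turns the conservative (symplectic) dynamics into an optimizer: without it the trajectory would oscillate around the minimum forever instead of converging.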
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.