Structured Knowledge Accumulation: The Principle of Entropic Least Action in Forward-Only Neural Learning
- URL: http://arxiv.org/abs/2504.03214v1
- Date: Fri, 04 Apr 2025 07:00:27 GMT
- Title: Structured Knowledge Accumulation: The Principle of Entropic Least Action in Forward-Only Neural Learning
- Authors: Bouarfa Mahi Quantiota
- Abstract summary: We introduce two core concepts: the Tensor Net function and the characteristic time property of neural learning. By understanding learning as a time-based process, we open new directions for building efficient, robust, and biologically-inspired AI systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper aims to extend the Structured Knowledge Accumulation (SKA) framework recently proposed by \cite{mahi2025ska}. We introduce two core concepts: the Tensor Net function and the characteristic time property of neural learning. First, we reinterpret the learning rate as a time step in a continuous system. This transforms neural learning from discrete optimization into continuous-time evolution. We show that learning dynamics remain consistent when the product of learning rate and iteration steps stays constant. This reveals a time-invariant behavior and identifies an intrinsic timescale of the network. Second, we define the Tensor Net function as a measure that captures the relationship between decision probabilities, entropy gradients, and knowledge change. Additionally, we define its zero-crossing as the equilibrium state between decision probabilities and entropy gradients. We show that the convergence of entropy and knowledge flow provides a natural stopping condition, replacing arbitrary thresholds with an information-theoretic criterion. We also establish that SKA dynamics satisfy a variational principle based on the Euler-Lagrange equation. These findings extend SKA into a continuous and self-organizing learning model. The framework links computational learning with physical systems that evolve by natural laws. By understanding learning as a time-based process, we open new directions for building efficient, robust, and biologically-inspired AI systems.
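To make the abstract's two core ideas concrete, here is a minimal, illustrative Python (NumPy) sketch: it treats the learning rate as a time step, so runs with a constant product of learning rate and iteration count trace out (approximately) the same trajectory, and it monitors an ad-hoc "net"-like alignment between decision probabilities and knowledge change, using its zero-crossing as a stopping signal. The forward-only update rule, the `net` measure, and the toy data are assumptions made for illustration; they are not the paper's exact SKA or Tensor Net definitions.

```python
import numpy as np

# Illustrative sketch (not the paper's exact formulation): eta acts as a time step,
# and an ad-hoc "net"-like alignment between decision probabilities and knowledge
# change is monitored for a zero-crossing as a stopping signal.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                        # toy inputs (assumed)
y = (X @ rng.normal(size=8) > 0).astype(float)       # toy binary targets (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def run(eta, n_steps):
    """One forward-only style run in which eta plays the role of a time step."""
    w, prev_net = np.zeros(X.shape[1]), None
    for k in range(n_steps):
        z = X @ w                                    # knowledge (pre-activation)
        D = sigmoid(z)                               # decision probabilities
        dw = X.T @ ((y - D) * D * (1 - D)) / len(X)  # local correction (assumed rule)
        net = np.mean((2 * D - 1) * (X @ (eta * dw)))  # alignment with knowledge change
        if prev_net is not None and prev_net * net < 0:
            print(f"net zero-crossing at t = {eta * k:.3f}; stopping")
            break                                    # entropy/knowledge-flow style stop
        prev_net = net
        w += eta * dw
    return w

# Runs with eta * n_steps held constant follow (nearly) the same continuous trajectory:
w_coarse = run(eta=0.05, n_steps=200)    # "physical time" t = 10
w_fine   = run(eta=0.005, n_steps=2000)  # same t = 10, finer discretization
print("max |w_coarse - w_fine| =", float(np.abs(w_coarse - w_fine).max()))
```

If the zero-crossing never fires on this toy data, both runs simply complete their steps; the point is only that the stopping signal is information-driven rather than an arbitrary loss threshold, and that the trajectory depends on the product of learning rate and step count rather than on either factor alone.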
Related papers
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z) - Structured Knowledge Accumulation: An Autonomous Framework for Layer-Wise Entropy Reduction in Neural Learning [0.0]
We introduce the Structured Knowledge Accumulation (SKA) framework, which reinterprets entropy as a dynamic, layer-wise measure of knowledge alignment in neural networks. SKA defines entropy in terms of knowledge vectors and their influence on decision probabilities across multiple layers. This approach provides a scalable, biologically plausible alternative to gradient-based learning, bridging information theory and artificial intelligence.
arXiv Detail & Related papers (2025-03-18T06:14:20Z) - A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Emergent learning in physical systems as feedback-based aging in a glassy landscape [0.0]
We show that the learning dynamics resembles an aging process, where the system relaxes in response to repeated application of the feedback boundary forces.
We also observe that the square root of the mean-squared error as a function of epoch takes on a non-exponential form, which is a typical feature of glassy systems.
arXiv Detail & Related papers (2023-09-08T15:24:55Z) - IF2Net: Innately Forgetting-Free Networks for Continual Learning [49.57495829364827]
Continual learning can incrementally absorb new concepts without interfering with previously learned knowledge.
Motivated by the characteristics of neural networks, we investigate how to design an Innately Forgetting-Free Network (IF2Net).
IF2Net allows a single network to inherently learn unlimited mapping rules without being told task identities at test time.
arXiv Detail & Related papers (2023-06-18T05:26:49Z) - On the Dynamics of Learning Time-Aware Behavior with Recurrent Neural Networks [2.294014185517203]
We introduce a family of supervised learning tasks dependent on hidden temporal variables.
We train RNNs to emulate temporal flipflops that emphasize the need for time-awareness over long-term memory.
We show that these RNNs learn to switch between periodic orbits that encode time modulo the period of the transition rules.
arXiv Detail & Related papers (2023-06-12T14:01:30Z) - ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Statistical mechanics of continual learning: variational principle and mean-field potential [1.559929646151698]
We focus on continual learning in single-layered and multi-layered neural networks of binary weights.
A variational Bayesian learning setting is proposed, where the neural networks are trained in a field-space.
Weight uncertainty is naturally incorporated, and modulates synaptic resources among tasks.
Our proposed frameworks also connect to elastic weight consolidation, weight-uncertainty learning, and neuroscience inspired metaplasticity.
arXiv Detail & Related papers (2022-12-06T09:32:45Z) - Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory [110.99247009159726]
Temporal-difference and Q-learning play a key role in deep reinforcement learning, where they are empowered by expressive nonlinear function approximators such as neural networks.
In particular, temporal-difference learning converges when the function approximator is linear in a feature representation that is fixed throughout learning, and possibly diverges otherwise (a minimal linear TD(0) sketch of this regime appears after this list).
arXiv Detail & Related papers (2020-06-08T17:25:22Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations (a minimal LTC-style cell sketch appears after this list).
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
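For the temporal-difference entry above, the following is a minimal linear TD(0) sketch on a toy Markov reward process, illustrating the regime in which TD is known to converge: the value estimate is linear in a fixed feature representation. The chain, features, and step size are illustrative assumptions; with fewer features than states, the TD fixed point is only a projection of the true values, so the estimate and the exact values agree approximately rather than exactly.

```python
import numpy as np

# Linear TD(0) on a toy Markov reward process with a *fixed* feature map.
rng = np.random.default_rng(1)
n_states, gamma, alpha = 5, 0.9, 0.05
P = rng.dirichlet(np.ones(n_states), size=n_states)   # random transition matrix
r = rng.normal(size=n_states)                          # expected reward for leaving each state
Phi = rng.normal(size=(n_states, 3))                   # fixed feature map phi(s)

theta = np.zeros(3)
s = 0
for _ in range(100_000):
    s_next = rng.choice(n_states, p=P[s])
    td_error = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += alpha * td_error * Phi[s]                 # semi-gradient TD(0) update
    s = s_next

v_hat = Phi @ theta                                        # learned linear value estimate
v_true = np.linalg.solve(np.eye(n_states) - gamma * P, r)  # exact values of the chain
print("TD estimate:", np.round(v_hat, 2))
print("true values:", np.round(v_true, 2))
```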
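And for the liquid time-constant entry, a minimal sketch of a single LTC-style cell: a leaky linear first-order state whose effective time constant is modulated by a nonlinear, input-dependent gate, integrated here with a simple semi-implicit step. Parameter names, sizes, and the toy input signal are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

# LTC-style cell: dx/dt = -x/tau + f(x, u) * (A - x), with f a nonlinear gate.
rng = np.random.default_rng(2)
n_in, n_hidden, dt, tau, A = 3, 4, 0.1, 1.0, 1.0
W_in = rng.normal(size=(n_hidden, n_in)) * 0.5
W_rec = rng.normal(size=(n_hidden, n_hidden)) * 0.1
b = np.zeros(n_hidden)

def gate(x, u):
    # nonlinear, input-dependent gate f(x, u); sigmoid keeps it positive and bounded
    return 1.0 / (1.0 + np.exp(-(W_rec @ x + W_in @ u + b)))

def ltc_step(x, u):
    # one semi-implicit (fused) Euler step of the first-order ODE above
    f = gate(x, u)
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

x = np.zeros(n_hidden)
for t in range(50):
    u = np.array([np.sin(0.2 * t), np.cos(0.2 * t), 1.0])  # toy input signal
    x = ltc_step(x, u)
print("final hidden state:", np.round(x, 3))
```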