Information Physics of Intelligence: Unifying Logical Depth and Entropy under Thermodynamic Constraints
- URL: http://arxiv.org/abs/2511.19156v2
- Date: Sat, 29 Nov 2025 08:37:59 GMT
- Title: Information Physics of Intelligence: Unifying Logical Depth and Entropy under Thermodynamic Constraints
- Authors: Jianfeng Xu, Zeyan Li
- Abstract summary: We propose a theoretical framework that treats information processing as an enabling mapping from ontological states to carrier states. We introduce a novel metric, Derivation Entropy, which quantifies the effective work required to compute a target state from a given logical depth. Our findings suggest that the minimization of Derivation Entropy is a governing principle for the evolution of both biological and artificial intelligence.
- Score: 7.411478588468014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid scaling of artificial intelligence models has revealed a fundamental tension between model capacity (storage) and inference efficiency (computation). While classical information theory focuses on transmission and storage limits, it lacks a unified physical framework to quantify the thermodynamic costs of generating information from compressed laws versus retrieving it from memory. In this paper, we propose a theoretical framework that treats information processing as an enabling mapping from ontological states to carrier states. We introduce a novel metric, Derivation Entropy, which quantifies the effective work required to compute a target state from a given logical depth. By analyzing the interplay between Shannon entropy (storage) and computational complexity (time/energy), we demonstrate the existence of a critical phase transition point. Below this threshold, memory retrieval is thermodynamically favorable; above it, generative computation becomes the optimal strategy. This "Energy-Time-Space" conservation law provides a physical explanation for the efficiency of generative models and offers a rigorous mathematical bound for designing next-generation, energy-efficient AI architectures. Our findings suggest that the minimization of Derivation Entropy is a governing principle for the evolution of both biological and artificial intelligence.
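The phase transition described in the abstract can be illustrated with a toy cost model. This is a hypothetical sketch, not the paper's actual Derivation Entropy formula: the constants, the Landauer-style retrieval bound, and the linear per-step generation cost below are all assumptions made for illustration.

```python
import math

# Toy cost model (hypothetical): compare the energy cost of retrieving a
# target state from memory against recomputing it from a compressed law.

K_B_T = 1.0       # thermal energy scale, arbitrary units (assumption)
E_ACCESS = 0.05   # hypothetical per-bit memory-access overhead

def retrieval_cost(stored_bits: float) -> float:
    """Landauer-style lower bound (kT ln 2 per bit) plus access overhead."""
    return stored_bits * (K_B_T * math.log(2) + E_ACCESS)

def generation_cost(logical_depth: int, e_step: float = 0.2) -> float:
    """Energy to recompute the state: depth sequential steps at e_step each."""
    return logical_depth * e_step

def crossover_depth(stored_bits: float, e_step: float = 0.2) -> float:
    """Logical depth at which the cheaper strategy flips: the toy analogue
    of the paper's critical phase transition point."""
    return retrieval_cost(stored_bits) / e_step

# For a state occupying 1000 bits, report which strategy is cheaper on
# either side of the crossover.
d_star = crossover_depth(1000)
for d in (int(d_star) - 100, int(d_star) + 100):
    cheaper = "generate" if generation_cost(d) < retrieval_cost(1000) else "retrieve"
    print(f"depth={d}: {cheaper}")
```

Under these assumed cost functions the crossover is a single point; the paper's "Energy-Time-Space" conservation law presumably yields a richer boundary, but the qualitative picture is the same: one regime favors lookup, the other favors recomputation.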
Related papers
- Toward a Physical Theory of Intelligence [0.016144088896423884]
We present a theory of intelligence grounded in irreversible information processing in systems constrained by conservation laws. An intelligent system is modelled as a coupled agent-environment process whose evolution transforms information into goal-directed work.
arXiv Detail & Related papers (2025-12-22T20:40:27Z)
- Erasure cost of a quantum process: A thermodynamic meaning of the dynamical min-entropy [1.1827829754757404]
We investigate the thermodynamic costs associated with erasing (and preparing) quantum processes. We focus on the adversarial erasure cost of the reduced dynamics. This insight bridges thermodynamics, information theory, and the fundamental limits of quantum computation.
arXiv Detail & Related papers (2025-06-05T17:53:17Z)
- Entropy-Based Block Pruning for Efficient Large Language Models [81.18339597023187]
We propose an entropy-based pruning strategy to enhance efficiency while maintaining performance. Empirical analysis reveals that the entropy of hidden representations decreases in the early blocks but progressively increases across most subsequent blocks.
arXiv Detail & Related papers (2025-04-04T03:42:34Z)
- Thermodynamic bounds on energy use in Deep Neural Networks [0.0]
We show that Deep Neural Networks (DNNs) implemented on analog physical substrates can operate under markedly different thermodynamic constraints. We distinguish between two classes of analog systems: dynamic and quasi-static. Our results suggest that while analog implementations can outperform digital ones during inference, the thermodynamic cost of training scales similarly in both paradigms.
arXiv Detail & Related papers (2025-03-13T02:35:07Z)
- Informational Embodiment: Computational role of information structure in codes and robots [48.00447230721026]
We present an information-theoretic (IT) account of how the precision of sensors, the accuracy of motors, their placement, and the body geometry shape the information structure in robots and computational codes.
We envision the robot's body as a physical communication channel through which information is conveyed, in and out, despite intrinsic noise and material limitations.
We introduce a special class of efficient codes from IT that reach the Shannon limit in information capacity while offering error correction, robustness against noise, and parsimony.
arXiv Detail & Related papers (2024-08-23T09:59:45Z)
- Topology Optimization of Random Memristors for Input-Aware Dynamic SNN [44.38472635536787]
We introduce pruning optimization for input-aware dynamic memristive spiking neural networks (PRIME).
Signal representation-wise, PRIME employs leaky integrate-and-fire neurons to emulate the brain's inherent spiking mechanism.
For reconfigurability, inspired by the brain's dynamic adjustment of computational depth, PRIME employs an input-aware dynamic early stop policy.
arXiv Detail & Related papers (2024-07-26T09:35:02Z)
- Solving reaction dynamics with quantum computing algorithms [42.408991654684876]
We study quantum algorithms for response functions, relevant for describing different reactions governed by linear response. We focus on nuclear-physics applications and consider a qubit-efficient mapping on the lattice, which can efficiently represent the large volumes required for realistic scattering simulations.
arXiv Detail & Related papers (2024-03-30T00:21:46Z)
- Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We demonstrate the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z)
- Energy-frugal and Interpretable AI Hardware Design using Learning Automata [5.514795777097036]
A new machine learning algorithm, called the Tsetlin machine, has been proposed.
In this paper, we investigate methods of energy-frugal artificial intelligence hardware design.
We show that frugal resource allocation can provide decisive energy reduction while also achieving robust and interpretable learning.
arXiv Detail & Related papers (2023-05-19T15:11:18Z)
- Energy Transformer [64.22957136952725]
Our work combines aspects of three promising paradigms in machine learning, namely, attention mechanism, energy-based models, and associative memory.
We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function.
arXiv Detail & Related papers (2023-02-14T18:51:22Z)
- Quantum Foundations of Classical Reversible Computing [0.0]
Reversible computing is capable of circumventing the thermodynamic limits to the energy efficiency of the conventional, non-reversible digital paradigm.
We use the framework of Gorini-Kossakowski-Sudarshan-Lindblad dynamics (a.k.a. Lindbladians) with multiple states, incorporating recent results from resource theory, full counting statistics, and reversible thermodynamics.
We also outline a research plan for identifying the fundamental minimum energy dissipation of computing machines as a function of speed.
arXiv Detail & Related papers (2021-04-30T19:53:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.