"Efficient Complexity": a Constrained Optimization Approach to the Evolution of Natural Intelligence
- URL: http://arxiv.org/abs/2410.13881v2
- Date: Sat, 28 Dec 2024 07:30:30 GMT
- Title: "Efficient Complexity": a Constrained Optimization Approach to the Evolution of Natural Intelligence
- Authors: Serge Dolgikh
- Abstract summary: A fundamental question at the intersection of information theory, biophysics, bioinformatics and thermodynamics concerns the principles and processes that guide the development of natural intelligence in natural environments, where information about external stimuli may not be available a priori. A novel approach to describing the information processes of natural learning is proposed in the framework of constrained optimization. Non-trivial conclusions on the relationships between the complexity, variability and efficiency of the structure, or architecture, of learning models, drawn from the proposed formalism, can explain the effectiveness of neural networks as collaborative groups of small intelligent units in biological and artificial intelligence.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fundamental question at the intersection of information theory, biophysics, bioinformatics and thermodynamics concerns the principles and processes that guide the development of natural intelligence in natural environments, where information about external stimuli may not be available a priori. A novel approach to describing the information processes of natural learning is proposed in the framework of constrained optimization, where the objective function, the information entropy shared between the internal states of the system and the states of the external environment, is maximized under the natural constraints of memory, computing power, energy and other essential resources. In this framework, the progress of natural intelligence can be interpreted as a strategy for approximating solutions of the optimization problem via a traversal over the network of extrema of the objective function under the natural constraints examined and described here. Non-trivial conclusions on the relationships between the complexity, variability and efficiency of the structure, or architecture, of learning models, drawn from the proposed formalism, can explain the effectiveness of neural networks as collaborative groups of small intelligent units in biological and artificial intelligence.
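Read as a constrained program, the setup above admits a compact sketch; the notation below (internal states S_theta, environment states X, resource costs g_i with budgets b_i) is assumed here for illustration and is not taken verbatim from the paper:

```latex
% Hedged sketch of the constrained program described in the abstract.
% S_theta: internal states of the learner; X: states of the environment;
% g_i, b_i: resource costs and budgets (memory, compute, energy, ...).
\max_{\theta}\; I\!\left(S_{\theta};\, X\right)
\qquad \text{s.t.} \qquad g_i(\theta) \le b_i, \quad i = 1, \dots, k
```

The associated Lagrangian, $\mathcal{L}(\theta, \lambda) = I(S_\theta; X) - \sum_i \lambda_i \left(g_i(\theta) - b_i\right)$, makes the "traversal over the extrema network" reading concrete: learning moves between stationary points of $\mathcal{L}$ rather than solving the program exactly.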
Related papers
- Structured Knowledge Accumulation: An Autonomous Framework for Layer-Wise Entropy Reduction in Neural Learning [0.0]
We introduce the Structured Knowledge Accumulation (SKA) framework, which reinterprets entropy as a dynamic, layer-wise measure of knowledge alignment in neural networks.
SKA defines entropy in terms of knowledge vectors and their influence on decision probabilities across multiple layers.
This approach provides a scalable, biologically plausible alternative to gradient-based learning, bridging information theory and artificial intelligence.
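As a rough illustration of a layer-wise entropy of decision probabilities (the variable names and the sigmoid link are assumptions made for this sketch, not SKA's actual definitions):

```python
import numpy as np

def layer_entropy(z: np.ndarray) -> float:
    """Binary decision entropy (nats) of one layer's pre-activations z."""
    p = 1.0 / (1.0 + np.exp(-z))  # decision probabilities
    eps = 1e-12                    # numerical safety
    h = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return float(h.mean())

# Entropy profile across the layers of a small random network.
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 16))      # batch of inputs
for layer in range(3):
    w = rng.normal(scale=0.5, size=(x.shape[1], 16))
    x = np.tanh(x @ w)             # propagate one layer
    print(f"layer {layer}: H = {layer_entropy(x):.3f}")
```

Tracking how this per-layer entropy falls during training is the kind of gradient-free, layer-local signal the SKA framing points at.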
arXiv Detail & Related papers (2025-03-18T06:14:20Z)
- Equation discovery framework EPDE: Towards a better equation discovery [50.79602839359522]
We enhance the EPDE algorithm, an evolutionary optimization-based discovery framework.
Our approach generates terms using fundamental building blocks such as elementary functions and individual differentials.
We validate our algorithm's noise resilience and overall performance by comparing its results with those from the state-of-the-art equation discovery framework SINDy.
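A minimal sketch of the building-block idea, generating candidate PDE terms as products of elementary functions and derivatives (a generic illustration of the approach, not the EPDE API):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)
u = np.sin(x)                                # observed field u(x)

# Elementary building blocks: the field, its derivatives, simple functions.
u_x = np.gradient(u, x)
u_xx = np.gradient(u_x, x)
blocks = {"u": u, "u_x": u_x, "u_xx": u_xx, "sin(u)": np.sin(u)}

# Candidate terms: single blocks plus pairwise products.
candidates = dict(blocks)
names = list(blocks)
for i, a in enumerate(names):
    for b in names[i:]:
        candidates[f"{a}*{b}"] = blocks[a] * blocks[b]

print(len(candidates), "candidate terms, e.g.:", sorted(candidates)[:4])
# An evolutionary search would then score sparse combinations of these
# terms by how well they close an equation such as u_xx + u = 0.
```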
arXiv Detail & Related papers (2024-12-28T15:58:44Z)
- What should a neuron aim for? Designing local objective functions based on information theory [41.39714023784306]
We show how self-organized artificial neurons can be achieved by designing bio-inspired local learning goals.
These goals are parameterized using a recent extension of information theory, Partial Information Decomposition.
Our work advances a principled information-theoretic foundation for local learning strategies.
arXiv Detail & Related papers (2024-12-03T14:45:46Z)
- Rethinking Cognition: Morphological Info-Computation and the Embodied Paradigm in Life and Artificial Intelligence [1.14219428942199]
This study aims to place Lorenzo Magnani's Eco-Cognitive Computationalism within the broader context of current work on information, computation, and cognition.
We model cognition as a web of concurrent morphological computations, driven by processes of self-assembly, self-organisation, and autopoiesis across physical, chemical, and biological domains.
arXiv Detail & Related papers (2024-12-01T10:04:53Z)
- GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework that integrates parametric and non-parametric memories.
Our method facilitates a more logical and step-wise reasoning approach akin to experts' problem-solving, rather than gold answer retrieval.
arXiv Detail & Related papers (2024-10-11T03:05:06Z)
- The Origin and Evolution of Information Handling [0.6963971634605796]
An information-first approach integrates Hofmeyr's (F, A)-systems with temporal parametrization and multiscale causality.
Our model traces the evolution of information handling from simple reaction networks that recognize regular languages to self-replicating chemical systems with memory and anticipatory capabilities.
arXiv Detail & Related papers (2024-04-05T19:35:38Z)
- Reasoning Algorithmically in Graph Neural Networks [1.8130068086063336]
We aim to integrate the structured, rule-based reasoning of algorithms with the adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
arXiv Detail & Related papers (2024-02-21T12:16:51Z)
- Nature-Inspired Local Propagation [68.63385571967267]
Natural learning processes rely on mechanisms where data representation and learning are intertwined in such a way as to respect locality.
We show that the algorithmic interpretation of the derived "laws of learning", which takes the structure of Hamiltonian equations, reduces to Backpropagation when the speed of propagation goes to infinity.
This opens the door to machine learning on fully online information, based on replacing Backpropagation with the proposed local algorithm.
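For orientation, the canonical Hamiltonian structure invoked here is the generic pair of equations below; the paper's specific Hamiltonian and the infinite-propagation-speed limit that recovers Backpropagation are not reproduced in this sketch:

```latex
\dot{q} = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial q}
```

with $q$ standing in for the network's state (activations and weights) and $p$ for the associated costate variables that carry the local learning signal.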
arXiv Detail & Related papers (2024-02-04T21:43:37Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in neural terms.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Balancing Explainability-Accuracy of Complex Models [8.402048778245165]
We introduce a new approach for complex models based on correlation impact.
We propose approaches for both scenarios of independent features and dependent features.
We provide an upper bound of the complexity of our proposed approach for the dependent features.
arXiv Detail & Related papers (2023-05-23T14:20:38Z)
- Scalable Coupling of Deep Learning with Logical Reasoning [0.0]
We introduce a scalable neural architecture and loss function dedicated to learning the constraints and criteria of NP-hard reasoning problems.
Our loss function solves one of the main limitations of Besag's pseudo-log-likelihood, enabling the learning of high energies.
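For reference, Besag's pseudo-log-likelihood replaces an intractable joint likelihood with a product of per-variable conditionals; a minimal Ising-model version is sketched below (illustrative of the classical objective, not this paper's loss):

```python
import numpy as np

def pseudo_log_likelihood(s: np.ndarray, J: np.ndarray, h: np.ndarray) -> float:
    """Besag's PLL for an Ising model with spins s in {-1,+1},
    symmetric couplings J (zero diagonal) and local fields h."""
    field = J @ s + h  # local field felt by each spin
    # log P(s_i | s_-i) = -log(1 + exp(-2 * s_i * field_i)), summed over i
    return float(np.sum(-np.log1p(np.exp(-2.0 * s * field))))

rng = np.random.default_rng(1)
n = 8
J = rng.normal(scale=0.1, size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.1, size=n)
s = rng.choice([-1.0, 1.0], size=n)
print("PLL:", pseudo_log_likelihood(s, J, h))
```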
arXiv Detail & Related papers (2023-05-12T17:09:34Z)
- Building artificial neural circuits for domain-general cognition: a primer on brain-inspired systems-level architecture [0.0]
We provide an overview of the hallmarks endowing biological neural networks with the functionality needed for flexible cognition.
As machine learning models become more complex, these principles may provide valuable directions in an otherwise vast space of possible architectures.
arXiv Detail & Related papers (2023-03-21T18:36:17Z)
- Objective discovery of dominant dynamical processes with intelligible machine learning [0.0]
We present a formal definition in which the identification of dynamical regimes is formulated as an optimization problem.
We propose an unsupervised learning framework which eliminates the need for a priori knowledge and ad hoc definitions.
Our method is a step towards unbiased data exploration that allows serendipitous discovery within dynamical systems.
arXiv Detail & Related papers (2021-06-21T20:57:23Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
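A toy version of the latent-intervention idea: move a point in latent space just past a linear decision boundary and decode it back to feature space. The linear encoder, decoder, and classifier here are stand-ins assumed for this sketch, not the CEILS machinery:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 3))      # decoder: latent (3) -> features (5)
enc = np.linalg.pinv(A)          # encoder as the decoder's pseudo-inverse
w, b = rng.normal(size=3), -0.2  # linear classifier in latent space

x = rng.normal(size=5)           # factual instance
z = enc @ x
score = w @ z + b                # negative => undesired outcome

# Intervene: minimal shift along w that flips the sign of the score.
if score < 0:
    z_cf = z + (-score + 1e-3) * w / (w @ w)
    x_cf = A @ z_cf              # decoded counterfactual in feature space
    print("old score:", round(float(score), 3),
          "new score:", round(float(w @ z_cf + b), 3))
    print("counterfactual features:", np.round(x_cf, 2))
```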
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that also extend to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)