The brain as a probabilistic transducer: an evolutionarily plausible
network architecture for knowledge representation, computation, and behavior
- URL: http://arxiv.org/abs/2112.13388v1
- Date: Sun, 26 Dec 2021 14:37:47 GMT
- Title: The brain as a probabilistic transducer: an evolutionarily plausible
network architecture for knowledge representation, computation, and behavior
- Authors: Joseph Y. Halpern and Arnon Lotem
- Abstract summary: We offer a general theoretical framework for brain and behavior that is evolutionarily and computationally plausible.
The brain in our abstract model is a network of nodes and edges. Both nodes and edges in our network have weights and activation levels.
By specifying the innate (genetic) components of the network, we show how evolution could endow the network with initial adaptive rules and goals that are then enriched through learning.
- Score: 14.505867475659274
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We offer a general theoretical framework for brain and behavior that is
evolutionarily and computationally plausible. The brain in our abstract model
is a network of nodes and edges. Although it has some similarities to standard
neural network models, as we show, there are some significant differences. Both
nodes and edges in our network have weights and activation levels. They act as
probabilistic transducers that use a set of relatively simple rules to
determine how activation levels and weights are affected by input, generate
output, and affect each other. We show that these simple rules enable a
learning process that allows the network to represent increasingly complex
knowledge, and simultaneously to act as a computing device that facilitates
planning, decision-making, and the execution of behavior. By specifying the
innate (genetic) components of the network, we show how evolution could endow
the network with initial adaptive rules and goals that are then enriched
through learning. We demonstrate how the developing structure of the network
(which determines what the brain can do and how well) is critically affected by
the co-evolved coordination between the mechanisms affecting the distribution
of data input and those determining the learning parameters (used in the
programs run by nodes and edges). Finally, we consider how the model accounts
for various findings in the field of learning and decision making, how it can
address some challenging problems in mind and behavior, such as those related
to setting goals and self-control, and how it can help understand some
cognitive disorders.
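To make the abstract's architecture concrete, here is a minimal, hypothetical sketch (not from the paper, which gives no pseudocode) of a network in which both nodes and edges carry a slowly changing weight and a fast-changing activation level, and both are updated by simple local, probabilistic rules. The class names and parameters below (`TransducerNetwork`, `decay`, `learning_rate`, the Hebbian-style weight bump) are illustrative assumptions standing in for the authors' actual rules.

```python
import random

# Illustrative sketch only: the update rules below are simple placeholders,
# not the rules specified in the paper.

class Node:
    def __init__(self, name, weight=1.0):
        self.name = name
        self.weight = weight        # long-term importance (changes slowly)
        self.activation = 0.0       # short-term activation level

class Edge:
    def __init__(self, src, dst, weight=0.1):
        self.src, self.dst = src, dst
        self.weight = weight        # association strength (changes slowly)
        self.activation = 0.0       # short-term activation of the edge itself

class TransducerNetwork:
    """Nodes and edges both act as probabilistic transducers: activation
    spreads stochastically, and an edge that successfully transmits
    activation is strengthened (a Hebbian-style placeholder rule)."""

    def __init__(self, decay=0.5, learning_rate=0.05):
        self.nodes, self.edges = {}, []
        self.decay = decay                  # assumed per-step activation decay
        self.learning_rate = learning_rate  # assumed weight-update step size

    def add_node(self, name):
        self.nodes[name] = Node(name)

    def connect(self, a, b, weight=0.1):
        self.edges.append(Edge(self.nodes[a], self.nodes[b], weight))

    def step(self, sensory_input):
        # 1. Inject external input into the corresponding nodes.
        for name, strength in sensory_input.items():
            self.nodes[name].activation += strength
        # 2. Each edge probabilistically transmits activation; the transmission
        #    probability grows with its weight and its source's activation.
        for e in self.edges:
            p = min(1.0, e.weight * e.src.activation)
            if random.random() < p:
                e.activation = e.src.activation * e.weight
                e.dst.activation += e.activation
                # 3. Successful transmission strengthens the edge (placeholder
                #    learning rule, not the paper's).
                e.weight += self.learning_rate * e.src.activation
        # 4. Activations decay toward rest; weights persist as long-term memory.
        for n in self.nodes.values():
            n.activation *= self.decay

# Usage: a tiny stimulus -> food -> approach chain that strengthens with
# repeated pairing of the stimulus input.
net = TransducerNetwork()
for name in ("stimulus", "food", "approach"):
    net.add_node(name)
net.connect("stimulus", "food")
net.connect("food", "approach")
for _ in range(20):
    net.step({"stimulus": 1.0})
print({(e.src.name, e.dst.name): round(e.weight, 3) for e in net.edges})
```

The only point of the sketch is the division of labour the abstract relies on: activation levels change quickly with input and drive behaviour, while weights change slowly and accumulate learned structure.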
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Meta Neural Coordination [0.0]
Meta-learning aims to develop algorithms that can learn from other learning algorithms to adapt to new and changing environments.
Uncertainty in the predictions of conventional deep neural networks highlights the partial predictability of the world.
We discuss the potential advancements required to build biologically-inspired machine intelligence.
arXiv Detail & Related papers (2023-05-20T06:06:44Z)
- The Neural Race Reduction: Dynamics of Abstraction in Gated Networks [12.130628846129973]
We introduce the Gated Deep Linear Network framework that schematizes how pathways of information flow impact learning dynamics.
We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning.
Our work gives rise to general hypotheses relating neural architecture to learning and provides a mathematical approach towards understanding the design of more complex architectures.
arXiv Detail & Related papers (2022-07-21T12:01:03Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that dimensionality and quasi-orthogonality of neural networks' feature space may jointly serve as a network's performance discriminants.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces. (An illustrative numerical sketch of these two measures appears after this list.)
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Invariance, encodings, and generalization: learning identity effects with neural networks [0.0]
We provide a framework in which we can rigorously prove that algorithms satisfying simple criteria cannot make the correct inference.
We then show that a broad class of learning algorithms including deep feedforward neural networks trained via gradient-based algorithms satisfy our criteria.
In some broader circumstances we are able to provide adversarial examples that the network necessarily classifies incorrectly.
arXiv Detail & Related papers (2021-01-21T01:28:15Z)
- Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View [0.0]
We study how homeomorphism affects the learned representation of a malware traffic dataset.
Our results suggest that although the details of learned representations and the specific coordinate system defined over the manifold of all parameters differ slightly, the functional approximations are the same.
arXiv Detail & Related papers (2020-09-16T15:37:44Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Generalizing Outside the Training Set: When Can Neural Networks Learn Identity Effects? [1.2891210250935143]
We show that a class of algorithms including deep neural networks with standard architecture and training with backpropagation can generalize to novel inputs.
We demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs.
arXiv Detail & Related papers (2020-05-09T01:08:07Z)
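Picking up the forward reference in the "Quasi-orthogonality and intrinsic dimensions" entry above: the sketch below computes two commonly used proxies for those quantities, the mean absolute off-diagonal cosine similarity of a feature matrix (quasi-orthogonality) and the participation-ratio estimate of intrinsic dimension. These are standard definitions chosen for illustration; that paper's exact measures are not given here and may differ.

```python
import numpy as np

def quasi_orthogonality(features):
    """Mean absolute off-diagonal cosine similarity between feature vectors.
    Values near 0 indicate a quasi-orthogonal feature set."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    gram = normed @ normed.T
    off_diag = gram[~np.eye(len(gram), dtype=bool)]
    return np.abs(off_diag).mean()

def intrinsic_dimension(features):
    """Participation-ratio estimate of intrinsic dimension:
    (sum of covariance eigenvalues)^2 / sum of squared eigenvalues."""
    centered = features - features.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    eigvals = np.clip(eigvals, 0, None)   # guard against small negative values
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Example on a random feature matrix (n_samples x n_features): random Gaussian
# features are nearly orthogonal and spread over many dimensions.
rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 512))
print(quasi_orthogonality(feats))   # small value, close to 0
print(intrinsic_dimension(feats))   # large value, many dimensions used
```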