Invariants for neural automata
- URL: http://arxiv.org/abs/2302.02149v1
- Date: Sat, 4 Feb 2023 11:40:40 GMT
- Title: Invariants for neural automata
- Authors: Jone Uria-Albizuri, Giovanni Sirio Carmantini, Peter beim Graben,
Serafim Rodrigues
- Abstract summary: We develop a formal framework for the investigation of symmetries and invariants of neural automata under different encodings.
Our work could be of substantial importance for related regression studies of real-world measurements with neurosymbolic processors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational modeling of neurodynamical systems often deploys neural
networks and symbolic dynamics. A particular way of combining these approaches
within a framework called vector symbolic architectures leads to neural
automata. An interesting research direction we have pursued under this
framework has been to consider mapping symbolic dynamics onto neurodynamics,
represented as neural automata. This representation theory enables us to ask
questions such as how the brain implements Turing computations.
Specifically, in this representation theory, neural automata result from the
assignment of symbols and symbol strings to numbers, known as Gödel encoding.
Under this assignment, symbolic computation becomes represented by trajectories
of state vectors in a real phase space, which allows for statistical correlation
analyses with real-world measurements and experimental data. However, these
assignments are usually completely arbitrary. Hence, it makes sense to ask
which aspects of the dynamics observed under such a representation are
intrinsic to the dynamics and which are not. In this study,
we develop a formally rigorous mathematical framework for the investigation of
symmetries and invariants of neural automata under different encodings. As a
central concept we define patterns of equality for such systems. We consider
different macroscopic observables, such as the mean activation level of the
neural network, and ask for their invariance properties. Our main result shows
that only step functions that are defined over those patterns of equality are
invariant under recodings, while the mean activation is not. Our work could be
of substantial importance for related regression studies of real-world
measurements with neurosymbolic processors, helping to avoid confounding results
that are dependent on a particular encoding rather than intrinsic to the dynamics.
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
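For context, AKOrN builds on the classical Kuramoto model of coupled phase oscillators. The sketch below shows the textbook update rule, not the paper's exact neuron: each unit is a phase that synchronizes with its neighbours rather than thresholding a weighted sum.

```python
import numpy as np

# Classical Kuramoto phase dynamics (the standard model AKOrN draws on;
# not the paper's exact formulation):
#   d(theta_i)/dt = omega_i + (K / N) * sum_j sin(theta_j - theta_i)
def kuramoto_step(theta, omega, coupling, dt=0.01):
    diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + coupling * np.sin(diff).mean(axis=1))

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=8)    # initial phases
omega = rng.normal(size=8)                   # natural frequencies

for _ in range(1000):
    theta = kuramoto_step(theta, omega, coupling=2.0)

# Order parameter r in [0, 1]; values near 1 indicate synchronization.
print(abs(np.exp(1j * theta).mean()))
```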
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Neural Symbolic Regression of Complex Network Dynamics [28.356824329954495]
We propose Physically Inspired Neural Dynamics Symbolic Regression (PI-NDSR) to automatically learn the symbolic expression of dynamics.
We evaluate our method on synthetic datasets generated by various dynamics and real datasets on disease spreading.
arXiv Detail & Related papers (2024-10-15T02:02:30Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Inferring Inference [7.11780383076327]
We develop a framework for inferring canonical distributed computations from large-scale neural activity patterns.
We simulate recordings for a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model.
Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
arXiv Detail & Related papers (2023-10-04T22:12:11Z) - Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned through unsupervised learning rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
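As a rough illustration of what a fuzzy, continuous relaxation of logic looks like, here is a standard product t-norm sketch, not LOGICSEG's exact formulation: truth values in [0, 1] make logical rules differentiable, so a violated rule can act as a training loss.

```python
# Standard product t-norm relaxation of Boolean logic (illustrative;
# not the paper's exact grounding). Truth values live in [0, 1].
def f_and(a, b): return a * b              # t-norm for AND
def f_or(a, b):  return a + b - a * b      # t-conorm for OR
def f_not(a):    return 1.0 - a
def f_implies(a, b): return f_or(f_not(a), b)

# Hypothetical rule: "if a pixel is 'dog' it must also be 'animal'".
p_dog, p_animal = 0.9, 0.7                 # predicted class probabilities
truth = f_implies(p_dog, p_animal)         # degree to which the rule holds
loss = 1.0 - truth                         # penalize violated rules
print(truth, loss)
```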
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
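The underlying symmetry is easy to verify directly. The following sketch, illustrative rather than code from the paper, permutes the hidden units of a small MLP together with the matching weight rows and columns and checks that the network's function is unchanged:

```python
import numpy as np

# Hypothetical two-layer MLP: y = W2 @ relu(W1 @ x + b1) + b2.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # 3 inputs, 4 hidden
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)   # 2 outputs

def forward(W1, b1, W2, b2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Permute the hidden neurons: reorder the rows of W1 and b1 and the
# matching columns of W2. Hidden units have no inherent order, so the
# input-output function is exactly the same.
perm = np.array([2, 0, 3, 1])
x = rng.normal(size=3)
y_original = forward(W1, b1, W2, b2, x)
y_permuted = forward(W1[perm], b1[perm], W2[:, perm], b2, x)
print(np.allclose(y_original, y_permuted))  # True
```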
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Representational dissimilarity metric spaces for stochastic neural
networks [4.229248343585332]
Quantifying similarity between neural representations is a perennial problem in deep learning and neuroscience research.
We generalize shape metrics to quantify differences in representations.
We find that neurobiological representations of oriented visual gratings and naturalistic scenes respectively resemble untrained and trained deep network representations.
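For readers unfamiliar with shape metrics, the sketch below shows the classical deterministic starting point that the paper generalizes to stochastic networks: an orthogonal Procrustes distance between activation matrices, which is zero for representations related by rotation.

```python
import numpy as np

# Orthogonal Procrustes shape distance between two representations
# (conditions x neurons). A classical metric, shown here only as the
# deterministic baseline the paper builds on.
def procrustes_distance(X, Y):
    X = X - X.mean(0); Y = Y - Y.mean(0)       # center
    X /= np.linalg.norm(X); Y /= np.linalg.norm(Y)  # unit Frobenius norm
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    # min over rotations Q of ||X - Y Q||_F, via the singular values
    return np.sqrt(max(2.0 - 2.0 * s.sum(), 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))  # random rotation
print(procrustes_distance(X, X @ Q))            # ~0: same "shape"
```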
arXiv Detail & Related papers (2022-11-21T17:32:40Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Formalising the Use of the Activation Function in Neural Inference [0.0]
We discuss how a spike in a biological neurone belongs to a particular class of phase transitions in statistical physics.
We show that the artificial neurone is, mathematically, a mean field model of biological neural membrane dynamics.
This allows us to treat selective neural firing in an abstract way, and formalise the role of the activation function in perceptron learning.
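The mean-field intuition can be illustrated with a standard calculation, not the paper's own derivation: averaging an all-or-nothing spike over membrane noise yields a smooth firing rate, and with logistic noise the average is exactly the logistic sigmoid.

```python
import numpy as np

# Averaging a Heaviside spike threshold over membrane noise gives a
# smooth activation (standard illustration of the mean-field view).
v = np.linspace(-5, 5, 11)                       # membrane potentials
noise = np.random.default_rng(0).logistic(size=(100_000, 1))
firing_rate = (v + noise > 0).mean(axis=0)       # empirical mean field
sigmoid = 1 / (1 + np.exp(-v))                   # exact for logistic noise
print(np.max(np.abs(firing_rate - sigmoid)))     # small sampling error
```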
arXiv Detail & Related papers (2021-02-02T19:42:21Z) - A Neural Dynamic Model based on Activation Diffusion and a
Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory has a very close relation with the problem of representation in artificial intelligence.
A computational model was proposed to simulate the network of neurons in the brain and how they process information.
arXiv Detail & Related papers (2020-11-27T01:34:08Z)