Invariants for neural automata
- URL: http://arxiv.org/abs/2302.02149v1
- Date: Sat, 4 Feb 2023 11:40:40 GMT
- Title: Invariants for neural automata
- Authors: Jone Uria-Albizuri, Giovanni Sirio Carmantini, Peter beim Graben,
Serafim Rodrigues
- Abstract summary: We develop a formal framework for the investigation of symmetries and invariants of neural automata under different encodings.
Our work could be of substantial importance for related regression studies of real-world measurements with neurosymbolic processors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational modeling of neurodynamical systems often deploys neural
networks and symbolic dynamics. A particular way of combining these approaches
within a framework called vector symbolic architectures leads to neural
automata. An interesting research direction we have pursued under this
framework has been to consider mapping symbolic dynamics onto neurodynamics,
represented as neural automata. This representation theory enables us to ask
questions such as how the brain implements Turing computations.
Specifically, in this representation theory, neural automata result from the
assignment of symbols and symbol strings to numbers, known as Gödel encoding.
Under this assignment, symbolic computation is represented by trajectories
of state vectors in a real phase space, which allows for statistical
correlation analyses with real-world measurements and experimental data.
However, these
assignments are usually completely arbitrary. Hence, it makes sense to ask
which aspects of the dynamics observed under such a representation are
intrinsic to the dynamics and which are not. In this study,
we develop a formally rigorous mathematical framework for the investigation of
symmetries and invariants of neural automata under different encodings. As a
central concept we define patterns of equality for such systems. We consider
different macroscopic observables, such as the mean activation level of the
neural network, and ask for their invariance properties. Our main result shows
that only step functions that are defined over those patterns of equality are
invariant under recodings, while the mean activation is not. Our work could be
of substantial importance for related regression studies of real-world
measurements with neurosymbolic processors, helping to avoid confounding
results that depend on a particular encoding and are not intrinsic to the
dynamics.
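To make the abstract's central claim concrete, here is a minimal sketch (ours, not the authors' code; the alphabet, the two codings, and the observables are illustrative assumptions). It Gödel-encodes a short symbol string under two different symbol-to-digit assignments and checks that a mean-activation-style observable changes under the recoding, while an observable defined over the pattern of equality does not:

```python
# Illustrative sketch: Goedel-encode a symbol string as a base-b
# expansion in [0, 1), then compare two observables under a recoding
# (a permuted symbol-to-digit assignment).

def goedel_encode(string, coding):
    """x = sum_i coding[s_i] * b**-(i+1), with b the alphabet size."""
    b = len(coding)
    return sum(coding[s] * b ** -(i + 1) for i, s in enumerate(string))

def equality_pattern(string):
    """Pairs of positions holding equal symbols. This depends only on
    which symbols coincide, so it is unchanged by any injective
    recoding of the alphabet."""
    n = len(string)
    return frozenset((i, j) for i in range(n) for j in range(i + 1, n)
                     if string[i] == string[j])

string = "abca"
coding1 = {"a": 0, "b": 1, "c": 2}   # one arbitrary Goedel numbering
coding2 = {"a": 2, "b": 0, "c": 1}   # a recoding of the same alphabet

x1 = goedel_encode(string, coding1)  # 5/27  ~ 0.185
x2 = goedel_encode(string, coding2)  # 59/81 ~ 0.728

# A mean-activation-style observable (here, the encoded value itself)
# is NOT invariant: the same symbolic state maps to different numbers.
assert x1 != x2

# A step function defined over the pattern of equality IS invariant:
# it sees only which symbols coincide, not how they were numbered.
assert equality_pattern(string) == frozenset({(0, 3)})
```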
Related papers
- Emergent Symbol-like Number Variables in Artificial Neural Networks [34.388552536773034]
We show that artificial neural models do indeed develop analogs of interchangeable, mutable, latent number variables.
We then show how the symbol-like variables change over the course of training, finding a strong correlation between the models' task performance and the alignment of their symbol-like representations.
Finally, we show that in all cases, some degree of gradience exists in these neural symbols, highlighting the difficulty of finding simple, interpretable symbolic stories of how neural networks perform numeric tasks.
arXiv Detail & Related papers (2025-01-10T18:03:46Z)
- Compositional Generalization Across Distributional Shifts with Sparse Tree Operations [77.5742801509364]
We introduce a unified neurosymbolic architecture called the Differentiable Tree Machine.
We significantly increase the model's efficiency through the use of sparse vector representations of symbolic structures.
We enable its application beyond the restricted set of tree2tree problems to the more general class of seq2seq problems.
arXiv Detail & Related papers (2024-12-18T17:20:19Z)
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning.
We introduce Artificial Kuramoto Oscillatory Neurons, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms.
We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty, and reasoning.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Neural Symbolic Regression of Complex Network Dynamics [28.356824329954495]
We propose Physically Inspired Neural Dynamics Symbolic Regression (PI-NDSR) to automatically learn the symbolic expression of dynamics.
We evaluate our method on synthetic datasets generated by various dynamics and real datasets on disease spreading.
arXiv Detail & Related papers (2024-10-15T02:02:30Z)
- Inferring Inference [7.11780383076327]
We develop a framework for inferring canonical distributed computations from large-scale neural activity patterns.
We simulate recordings for a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model.
Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
arXiv Detail & Related papers (2023-10-04T22:12:11Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences, which reflect the semanticity and compositionality characteristic of symbolic systems through unsupervised learning, rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
Via fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order (a minimal sketch of this symmetry appears after this list).
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Representational dissimilarity metric spaces for stochastic neural networks [4.229248343585332]
Quantifying similarity between neural representations is a perennial problem in deep learning and neuroscience research.
We generalize shape metrics to quantify differences in representations.
We find that neurobiological representations of oriented visual gratings and naturalistic scenes respectively resemble untrained and trained deep network representations.
arXiv Detail & Related papers (2022-11-21T17:32:40Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory has a very close relation with the problem of representation in artificial intelligence.
A computational model is proposed to simulate the network of neurons in the brain and how they process information.
arXiv Detail & Related papers (2020-11-27T01:34:08Z)
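The weight-space permutation symmetry noted in the Permutation Equivariant Neural Functionals entry above can be verified in a few lines. This sketch (ours, purely illustrative; the layer sizes and seed are arbitrary) permutes the hidden units of a small MLP together with the adjacent weight matrices and confirms that the network function is unchanged:

```python
# Illustrative check: permuting the hidden neurons of a feedforward
# network, while permuting the rows/columns of the adjacent weight
# matrices to match, leaves the input-output function unchanged.
import numpy as np

rng = np.random.default_rng(0)

# A 2-layer MLP: x -> relu(W1 @ x + b1) -> W2 @ h
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2 = rng.normal(size=(2, 5))

def mlp(x, W1, b1, W2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

# Relabel the 5 hidden neurons with a random permutation.
perm = rng.permutation(5)
W1p, b1p = W1[perm], b1[perm]   # permute rows of W1, entries of b1
W2p = W2[:, perm]               # permute columns of W2 to match

x = rng.normal(size=3)
assert np.allclose(mlp(x, W1, b1, W2), mlp(x, W1p, b1p, W2p))
```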