Supervised and Unsupervised protocols for hetero-associative neural networks
- URL: http://arxiv.org/abs/2505.18796v1
- Date: Sat, 24 May 2025 17:15:55 GMT
- Title: Supervised and Unsupervised protocols for hetero-associative neural networks
- Authors: Andrea Alessandrelli, Adriano Barra, Andrea Ladiana, Andrea Lepre, Federico Ricci-Tersenghi
- Abstract summary: This paper introduces a learning framework for Three-Directional Associative Memory (TAM) models, extending the classical Hebbian paradigm to both supervised and unsupervised protocols within a hetero-associative setting. These neural networks consist of three interconnected layers of binary neurons interacting via generalized Hebbian synaptic couplings that allow learning, storage and retrieval of structured triplets of patterns.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a learning framework for Three-Directional Associative Memory (TAM) models, extending the classical Hebbian paradigm to both supervised and unsupervised protocols within a hetero-associative setting. These neural networks consist of three interconnected layers of binary neurons interacting via generalized Hebbian synaptic couplings that allow learning, storage and retrieval of structured triplets of patterns. Relying on statistical-mechanical techniques for glassy systems (mainly replica theory and Guerra interpolation), we analyze the emergent computational properties of these networks, trained on random (Rademacher) datasets, at the replica-symmetric level of description: we obtain a set of self-consistency equations for the order parameters that quantify the critical dataset sizes (i.e. the thresholds for learning) and describe the retrieval performance of these networks, highlighting the differences between supervised and unsupervised protocols. Numerical simulations validate our theoretical findings and demonstrate that this picture of TAMs remains robust when the networks are trained on structured datasets. In particular, this study provides insights into the cooperative interplay of layers, beyond that of the neurons within each layer, with potential implications for the optimal design of artificial neural network architectures.
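For intuition about the architecture, here is a minimal, self-contained Python/NumPy sketch of a three-layer hetero-associative memory in the spirit of a TAM: three binary layers coupled pairwise by Hebbian couplings built from stored triplets, plus toy supervised and unsupervised coupling constructions from noisy examples. The layer sizes, the zero-temperature update schedule, and the exact forms of `J_unsup` and `J_sup` below are illustrative assumptions in the style of Hebbian learning from structured datasets, not the couplings derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scale: three layers of binary (+/-1) neurons storing K triplets.
# NOTE: all concrete choices below are assumptions for illustration only.
Na, Nb, Nc, K = 120, 100, 80, 5

# Archetype triplets (xi, eta, phi) with Rademacher (+/-1) entries.
xi  = rng.choice([-1.0, 1.0], size=(K, Na))
eta = rng.choice([-1.0, 1.0], size=(K, Nb))
phi = rng.choice([-1.0, 1.0], size=(K, Nc))

# Pairwise Hebbian couplings linking the three layers.
J_ab = np.einsum('ki,kj->ij', xi,  eta) / np.sqrt(Na * Nb)   # layer a <-> b
J_bc = np.einsum('kj,kl->jl', eta, phi) / np.sqrt(Nb * Nc)   # layer b <-> c
J_ca = np.einsum('kl,ki->li', phi, xi)  / np.sqrt(Nc * Na)   # layer c <-> a

def sgn(h):
    """Sign with the zero-field convention sgn(0) = +1."""
    return np.where(h >= 0, 1.0, -1.0)

def retrieve(cue_a, steps=20):
    """Zero-temperature dynamics: cue layer a and relax all three layers."""
    a = cue_a.copy()
    b = sgn(a @ J_ab)                  # field on b from a
    c = sgn(b @ J_bc)                  # field on c from b
    for _ in range(steps):
        a = sgn(J_ab @ b + c @ J_ca)   # field on a from b and c
        b = sgn(a @ J_ab + J_bc @ c)   # field on b from a and c
        c = sgn(b @ J_bc + J_ca @ a)   # field on c from a and b
    return a, b, c

# Hetero-association: cue with a 15%-corrupted xi^0 and check that the other
# two layers recover the associated eta^0 and phi^0 (overlaps close to 1).
cue = xi[0] * rng.choice([1.0, -1.0], p=[0.85, 0.15], size=Na)
a, b, c = retrieve(cue)
print("overlaps:", a @ xi[0] / Na, b @ eta[0] / Nb, c @ phi[0] / Nc)

# Supervised vs unsupervised Hebbian learning from M noisy examples per
# archetype (sketched for the a-b couplings; r sets the dataset quality).
M, r = 50, 0.7
def examples(arch):
    """M noisy copies of each archetype: each bit flipped with prob (1-r)/2."""
    flips = rng.choice([1.0, -1.0], p=[(1 + r) / 2, (1 - r) / 2],
                       size=(M,) + arch.shape)
    return arch[None] * flips
ex_xi, ex_eta = examples(xi), examples(eta)

# Unsupervised: accumulate products of raw example pairs directly.
J_unsup = np.einsum('mki,mkj->ij', ex_xi, ex_eta) / (M * np.sqrt(Na * Nb))
# Supervised: average the examples of each archetype first, then take products.
J_sup = np.einsum('ki,kj->ij', ex_xi.mean(0), ex_eta.mean(0)) / np.sqrt(Na * Nb)
```

With the noiseless couplings above, the cued layer cleans up the corrupted pattern while the remaining layers reconstruct the other two members of the triplet; swapping `J_ab` for `J_sup` or `J_unsup` lets one probe, at toy scale, how dataset size `M` and quality `r` affect retrieval under the two protocols.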
Related papers
- Concept-Guided Interpretability via Neural Chunking
We show that neural networks exhibit patterns in their raw population activity that mirror regularities in the training data. We propose three methods to extract these emerging entities, complementing each other based on label availability and dimensionality. Our work points to a new direction for interpretability, one that harnesses both cognitive principles and the structure of naturalistic data.
arXiv Detail & Related papers (2025-05-16T13:49:43Z)
- Relational Composition in Neural Networks: A Survey and Call to Action
Many neural nets appear to represent data as linear combinations of "feature vectors".
We argue that this success is incomplete without an understanding of relational composition.
arXiv Detail & Related papers (2024-07-19T20:50:57Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards the practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Structured Neural Networks for Density Estimation and Causal Inference
Injecting structure into neural networks enables learning functions that satisfy invariances with respect to subsets of inputs.
We propose the Structured Neural Network (StrNN), which injects structure through masking pathways in a neural network.
arXiv Detail & Related papers (2023-11-03T20:15:05Z)
- Memorization With Neural Nets: Going Beyond the Worst Case
In practice, deep neural networks are often able to easily interpolate their training data. We introduce a simple randomized algorithm that constructs an interpolating three-layer neural network in polynomial time. We obtain guarantees that are independent of the number of samples and hence move beyond worst-case memorization capacity bounds.
arXiv Detail & Related papers (2023-09-30T10:06:05Z)
- Dense Hebbian neural networks: a replica symmetric picture of supervised learning
We consider dense associative neural networks trained with supervision by a teacher.
We investigate their computational capabilities analytically, via the statistical mechanics of spin glasses, and numerically, via Monte Carlo simulations.
arXiv Detail & Related papers (2022-11-25T13:37:47Z)
- Data-driven emergence of convolutional structure in neural networks
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Learning Interpretable Models for Coupled Networks Under Domain Constraints
We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
arXiv Detail & Related papers (2021-04-19T06:23:31Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Correlator Convolutional Neural Networks: An Interpretable Architecture for Image-like Quantum Matter Data
We develop a network architecture that discovers features in the data which are directly interpretable in terms of physical observables.
Our approach lends itself well to the construction of simple, end-to-end interpretable architectures.
arXiv Detail & Related papers (2020-11-06T17:04:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.