Simplicial Hopfield networks
- URL: http://arxiv.org/abs/2305.05179v1
- Date: Tue, 9 May 2023 05:23:04 GMT
- Title: Simplicial Hopfield networks
- Authors: Thomas F Burns, Tomoki Fukai
- Abstract summary: We extend Hopfield networks by adding setwise connections and embedding these connections in a simplicial complex.
We show that our simplicial Hopfield networks increase memory storage capacity.
We also test analogous modern continuous Hopfield networks, offering a potentially promising avenue for improving the attention mechanism in Transformer models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hopfield networks are artificial neural networks which store memory patterns
on the states of their neurons by choosing recurrent connection weights and
update rules such that the energy landscape of the network forms attractors
around the memories. How many stable, sufficiently-attracting memory patterns
can we store in such a network using $N$ neurons? The answer depends on the
choice of weights and update rule. Inspired by setwise connectivity in biology,
we extend Hopfield networks by adding setwise connections and embedding these
connections in a simplicial complex. Simplicial complexes are higher
dimensional analogues of graphs which naturally represent collections of
pairwise and setwise relationships. We show that our simplicial Hopfield
networks increase memory storage capacity. Surprisingly, even when connections
are limited to a small random subset of equivalent size to an all-pairwise
network, our networks still outperform their pairwise counterparts. Such
scenarios include non-trivial simplicial topology. We also test analogous
modern continuous Hopfield networks, offering a potentially promising avenue
for improving the attention mechanism in Transformer models.
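To make the setup concrete: with the classical Hebbian rule, a pairwise network of this kind reliably stores only about $0.14N$ random patterns before retrieval degrades. Below is a minimal NumPy sketch (not the authors' code) of a pairwise Hopfield network extended with third-order couplings on a random set of triangles, in the spirit of the setwise connections described above; the triangle sampling, coupling scale, and update schedule are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 64, 5                       # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Pairwise Hebbian weights: W_ij = (1/N) * sum_mu xi^mu_i xi^mu_j, zero diagonal.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

# Hypothetical setwise (2-simplex) couplings on a random subset of triangles:
# J_ijk = (1/N^2) * sum_mu xi^mu_i xi^mu_j xi^mu_k  (illustrative scaling).
triangles = [tuple(sorted(rng.choice(N, size=3, replace=False))) for _ in range(4 * N)]
T = {t: (patterns[:, t[0]] * patterns[:, t[1]] * patterns[:, t[2]]).sum() / N**2
     for t in triangles}

def local_field(s, i):
    """Pairwise field plus contributions from every triangle containing neuron i."""
    h = W[i] @ s
    for (a, b, c), J in T.items():
        if i == a:
            h += J * s[b] * s[c]
        elif i == b:
            h += J * s[a] * s[c]
        elif i == c:
            h += J * s[a] * s[b]
    return h

def recall(probe, sweeps=5):
    """Asynchronous sign updates; each flip lowers the pairwise-plus-setwise energy."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if local_field(s, i) >= 0 else -1
    return s

# Retrieve pattern 0 from a corrupted probe (20% of bits flipped).
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
probe[flip] *= -1
print("overlap:", recall(probe) @ patterns[0] / N)   # close to 1.0 on success
```
The printed overlap is near 1.0 when retrieval succeeds; the paper's claim is that spending a fixed connection budget on a mixture of pairwise and setwise couplings, rather than on pairwise couplings alone, raises the number of patterns retrievable this way.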
Related papers
- Dense Associative Memory Through the Lens of Random Features [48.17520168244209]
Dense Associative Memories are high-storage-capacity variants of Hopfield networks.
We show that a network built from random features closely approximates the energy function and dynamics of conventional Dense Associative Memories (a retrieval sketch for this family of models appears after this list).
arXiv Detail & Related papers (2024-10-31T17:10:57Z) - Towards Explaining Hypercomplex Neural Networks [6.543091030789653]
Hypercomplex neural networks are gaining increasing interest in the deep learning community.
In this paper, we propose inherently interpretable PHNNs and quaternion-like networks.
We draw insights into how this unique branch of neural models operates.
arXiv Detail & Related papers (2024-03-26T17:58:07Z) - Parameter-Efficient Masking Networks [61.43995077575439]
Advanced network designs often contain a large number of repetitive structures (e.g., the Transformer).
In this study, we are the first to investigate the representative potential of fixed random weights with limited unique values by learning masks.
It leads to a new paradigm for model compression to diminish the model size.
arXiv Detail & Related papers (2022-10-13T03:39:03Z) - Deep Architecture Connectivity Matters for Its Convergence: A
Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z) - Recurrent neural networks that generalize from examples and optimize by
dreaming [0.0]
We introduce a generalized Hopfield network where pairwise couplings between neurons are built according to Hebb's prescription for on-line learning.
The network is exposed only to a dataset of noisy examples of each pattern.
Remarkably, the sleeping (dreaming) mechanisms consistently and significantly reduce the dataset size required to generalize correctly.
arXiv Detail & Related papers (2022-04-17T08:40:54Z) - BScNets: Block Simplicial Complex Neural Networks [79.81654213581977]
Simplicial neural networks (SNNs) have recently emerged as the newest direction in graph learning.
We present Block Simplicial Complex Neural Networks (BScNets) model for link prediction.
BScNets outperforms state-of-the-art models by a significant margin while maintaining low costs.
arXiv Detail & Related papers (2021-12-13T17:35:54Z) - Hierarchical Associative Memory [2.66512000865131]
Associative Memories or Modern Hopfield Networks have many appealing properties.
They can do pattern completion, store a large number of memories, and can be described using a recurrent neural network.
This paper addresses a gap in the literature, describing a fully recurrent model of associative memory with an arbitrarily large number of layers.
arXiv Detail & Related papers (2021-07-14T01:38:40Z) - New Insights on Learning Rules for Hopfield Networks: Memory and
Objective Function Minimisation [1.7006003864727404]
We take a new look at learning rules, exhibiting them as descent-type algorithms for various cost functions.
We discuss the role of biases (the external inputs) in the learning process in Hopfield networks.
arXiv Detail & Related papers (2020-10-04T03:02:40Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of routing every input through the same fixed path, DG-Net aggregates features dynamically at each node, which gives the network greater representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)