Sparse and Structured Hopfield Networks
- URL: http://arxiv.org/abs/2402.13725v2
- Date: Tue, 4 Jun 2024 22:04:40 GMT
- Title: Sparse and Structured Hopfield Networks
- Authors: Saul Santos, Vlad Niculae, Daniel McNamee, Andre F. T. Martins
- Abstract summary: We provide a unified framework for sparse Hopfield networks by establishing a link with Fenchel-Young losses.
We reveal a connection between loss margins, sparsity, and exact memory retrieval.
Experiments on multiple instance learning and text rationalization demonstrate the usefulness of our approach.
- Score: 14.381907888022612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern Hopfield networks have enjoyed recent interest due to their connection to attention in transformers. Our paper provides a unified framework for sparse Hopfield networks by establishing a link with Fenchel-Young losses. The result is a new family of Hopfield-Fenchel-Young energies whose update rules are end-to-end differentiable sparse transformations. We reveal a connection between loss margins, sparsity, and exact memory retrieval. We further extend this framework to structured Hopfield networks via the SparseMAP transformation, which can retrieve pattern associations instead of a single pattern. Experiments on multiple instance learning and text rationalization demonstrate the usefulness of our approach.
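One member of this family is obtained by swapping softmax for sparsemax in the retrieval step, so the query attends to only a few stored patterns and retrieval can be exact when the distribution collapses onto a single one. A minimal NumPy sketch of such a sparse update (an illustration of the idea, not the authors' implementation):

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of z onto the probability simplex
    (Martins & Astudillo, 2016). Returns a sparse probability vector."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, z.size + 1)
    support = 1.0 + ks * z_sorted > cumsum     # coordinates kept in the support
    k = ks[support][-1]
    tau = (cumsum[k - 1] - 1.0) / k            # threshold subtracted from the scores
    return np.maximum(z - tau, 0.0)

def sparse_hopfield_update(X, q, beta=1.0):
    """One retrieval step: the query q attends over the stored patterns
    (rows of X) with a sparse distribution, then reads out their convex
    combination. A one-hot distribution gives exact retrieval."""
    p = sparsemax(beta * (X @ q))              # sparse "attention" over memories
    return X.T @ p

# Toy usage: store three random patterns and query with a corrupted copy of one.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 8))                # 3 stored patterns of dimension 8
q = X[1] + 0.1 * rng.standard_normal(8)        # noisy version of pattern 1
p = sparsemax(4.0 * (X @ q))
print("attention over memories:", p)           # typically one-hot -> exact retrieval of X[1]
```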
Related papers
- Hopfield-Fenchel-Young Networks: A Unified Framework for Associative Memory Retrieval [25.841394444834933]
Associative memory models, such as Hopfield networks, have garnered renewed interest due to advancements in memory capacity and connections with self-attention in transformers.
In this work, we introduce a unified framework-Hopfield-Fenchel-Young networks-which generalizes these models to a broader family of energy functions.
arXiv Detail & Related papers (2024-11-13T13:13:07Z)
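For reference, the Fenchel-Young loss underlying both works is the standard construction of Blondel et al. (2020); the choice of regularizer \Omega determines the transformation used in the update rule:

```latex
% Fenchel-Young loss generated by a regularizer \Omega (standard definition):
%   \Omega = negative Shannon entropy  ->  softmax (dense retrieval)
%   \Omega = \tfrac{1}{2}\|\cdot\|^2   ->  sparsemax (sparse retrieval)
L_{\Omega}(\theta; y) \;=\; \Omega^{*}(\theta) + \Omega(y) - \langle \theta, y \rangle,
\qquad
\hat{y}_{\Omega}(\theta) \;=\; \nabla \Omega^{*}(\theta)
  \;=\; \operatorname*{arg\,max}_{p \,\in\, \operatorname{dom}\Omega}\;
        \langle \theta, p \rangle - \Omega(p).
```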
- Nonparametric Modern Hopfield Models [12.160725212848137]
We present a nonparametric construction for deep learning compatible modern Hopfield models.
The key contribution stems from interpreting the memory storage and retrieval processes in modern Hopfield models.
We introduce sparse-structured modern Hopfield models with sub-quadratic complexity.
arXiv Detail & Related papers (2024-04-05T05:46:20Z)
- STanHop: Sparse Tandem Hopfield Model for Memory-Enhanced Time Series Prediction [13.815793371488613]
We present a novel Hopfield-based neural network block, which sparsely learns and stores both temporal and cross-series representations.
In essence, STanHop sequentially learns temporal and cross-series representations using two tandem sparse Hopfield layers.
We show that our framework achieves a tighter memory retrieval error than its dense counterpart without sacrificing memory capacity.
arXiv Detail & Related papers (2023-12-28T20:26:23Z)
- From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks [68.8204255655161]
We propose a new type of continuous-time control system, called AutoencODE, based on a controlled field that drives the dynamics.
We show that many architectures can be recovered in regions where the loss function is locally convex.
arXiv Detail & Related papers (2023-07-05T13:26:17Z)
- Simplicial Hopfield networks [0.0]
We extend Hopfield networks by adding setwise connections and embedding these connections in a simplicial complex.
We show that our simplicial Hopfield networks increase memory storage capacity.
We also test analogous modern continuous Hopfield networks, offering a potentially promising avenue for improving the attention mechanism in Transformer models.
arXiv Detail & Related papers (2023-05-09T05:23:04Z)
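To make "setwise connections" concrete, the sketch below scores a binary state with pairwise plus triplet couplings; the triplet list and weights are hypothetical placeholders, and the paper's actual energies are defined over a general simplicial complex:

```python
import numpy as np

def simplicial_energy(s, W2, triplets, w3):
    """Hopfield-type energy with pairwise couplings W2 plus setwise (triplet)
    couplings over the 2-simplices listed in `triplets` (illustrative only)."""
    pairwise = -0.5 * s @ W2 @ s
    setwise = -sum(w * s[i] * s[j] * s[k] for (i, j, k), w in zip(triplets, w3))
    return pairwise + setwise

# Toy usage with +/-1 states, one pairwise connection, and two hand-picked triangles.
s = np.array([1, -1, 1, 1, -1], dtype=float)
W2 = np.zeros((5, 5)); W2[0, 2] = W2[2, 0] = 1.0
triplets = [(0, 2, 3), (1, 3, 4)]              # placeholder 2-simplices
w3 = [0.5, 0.25]                               # placeholder setwise weights
print(simplicial_energy(s, W2, triplets, w3))
```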
- Semantic Correspondence with Transformers [68.37049687360705]
We propose Cost Aggregation with Transformers (CATs) to find dense correspondences between semantically similar images.
We include appearance affinity modelling to disambiguate the initial correlation maps, as well as multi-level aggregation.
We conduct experiments to demonstrate the effectiveness of the proposed model over the latest methods and provide extensive ablation studies.
arXiv Detail & Related papers (2021-06-04T14:39:03Z)
- InverseForm: A Loss Function for Structured Boundary-Aware Segmentation [80.39674800972182]
We present a novel boundary-aware loss term for semantic segmentation using an inverse-transformation network.
This plug-in loss term complements the cross-entropy loss in capturing boundary transformations.
We analyze the quantitative and qualitative effects of our loss function on three indoor and outdoor segmentation benchmarks.
arXiv Detail & Related papers (2021-04-06T18:52:45Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of the connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Hopfield Networks is All You Need [8.508381229662907]
We introduce a modern Hopfield network with continuous states and a corresponding update rule.
The new Hopfield network can store exponentially many patterns (in the dimension of the associative space), retrieve a pattern with one update, and achieve exponentially small retrieval errors.
We demonstrate the broad applicability of the Hopfield layers across various domains.
arXiv Detail & Related papers (2020-07-16T17:52:37Z)
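Their update rule is the softmax-based retrieval that the sparse variants above generalize; a minimal NumPy sketch with stored patterns as rows of X (an illustration, not the authors' code):

```python
import numpy as np

def modern_hopfield_update(X, q, beta=8.0):
    """One update of the continuous modern Hopfield network: softmax attention
    of the query q over the stored patterns (rows of X), then a weighted
    read-out. With well-separated patterns, one update retrieves the nearest."""
    scores = beta * (X @ q)
    p = np.exp(scores - scores.max())
    p /= p.sum()                                 # softmax over stored patterns
    return X.T @ p

# Toy usage: a noisy query lands very close to the nearest stored pattern.
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 16))
q = X[2] + 0.1 * rng.standard_normal(16)
print(np.linalg.norm(modern_hopfield_update(X, q) - X[2]))   # small retrieval error
```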
- MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning [90.20563679417567]
This paper introduces MDP homomorphic networks for deep reinforcement learning.
MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP.
We show that such networks converge faster than unstructured networks on CartPole, a grid world and Pong.
arXiv Detail & Related papers (2020-06-30T15:38:37Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)