Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
- URL: http://arxiv.org/abs/2404.03828v2
- Date: Wed, 26 Jun 2024 20:50:18 GMT
- Title: Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
- Authors: Jerry Yao-Chieh Hu, Pei-Hsuan Chang, Robin Luo, Hong-Yu Chen, Weijian Li, Wei-Po Wang, Han Liu
- Abstract summary: We introduce an Outlier-Efficient Modern Hopfield Model (termed $\mathrm{OutEffHop}$) and use it to address the outlier inefficiency problem of training gigantic transformer-based models.
Our main contribution is a novel associative memory model facilitating \textit{outlier-efficient} associative memory retrievals.
$\mathrm{OutEffHop}$ achieves an average reduction of 22+% in average kurtosis and 26+% in the maximum infinity norm of model outputs.
- Score: 10.972020273638066
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce an Outlier-Efficient Modern Hopfield Model (termed $\mathrm{OutEffHop}$) and use it to address the outlier inefficiency problem of training gigantic transformer-based models. Our main contribution is a novel associative memory model facilitating \textit{outlier-efficient} associative memory retrievals. Interestingly, this memory model manifests a model-based interpretation of an outlier-efficient attention mechanism (${\rm Softmax}_1$): it is an approximation of the memory retrieval process of $\mathrm{OutEffHop}$. Methodologically, this allows us to introduce novel outlier-efficient Hopfield layers as powerful alternatives to traditional attention mechanisms, with superior post-quantization performance. Theoretically, the Outlier-Efficient Modern Hopfield Model retains and improves the desirable properties of standard modern Hopfield models, including fixed point convergence and exponential storage capacity. Empirically, we demonstrate the efficacy of the proposed model across large-scale transformer-based and Hopfield-based models (including BERT, OPT, ViT, and STanHop-Net), benchmarking against state-of-the-art methods like $\mathtt{Clipped\_Softmax}$ and $\mathtt{Gated\_Attention}$. Notably, $\mathrm{OutEffHop}$ achieves an average reduction of 22+\% in average kurtosis and 26+\% in the maximum infinity norm of model outputs across four models. Code is available at \href{https://github.com/MAGICS-LAB/OutEffHop}{GitHub}; models are on \href{https://huggingface.co/collections/magicslabnu/outeffhop-6610fcede8d2cda23009a98f}{Hugging Face Hub}; future updates are on \href{https://arxiv.org/abs/2404.03828}{arXiv}.
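The mechanism behind ${\rm Softmax}_1$ is simple enough to state in a few lines. Below is a minimal NumPy sketch (an illustration of the formula, not the authors' released implementation, which lives in the linked GitHub repository): the only change from standard attention is an extra 1 in the softmax denominator, so a head can assign near-zero total weight instead of being forced to spread a full unit of probability mass over irrelevant tokens.

```python
import numpy as np

def softmax_1(x, axis=-1):
    """Softmax_1(x)_i = exp(x_i) / (1 + sum_j exp(x_j)).

    The extra 1 acts like an always-present zero logit, so rows may sum
    to less than one; heads can effectively abstain, which is what makes
    the mechanism outlier-efficient.
    """
    # Shift by max(x, 0) for numerical stability; the implicit zero logit
    # must be shifted consistently, hence the exp(-m) term below.
    m = np.maximum(x.max(axis=axis, keepdims=True), 0.0)
    e = np.exp(x - m)
    return e / (np.exp(-m) + e.sum(axis=axis, keepdims=True))

def outlier_efficient_attention(Q, K, V):
    """Scaled dot-product attention with Softmax_1 in place of softmax."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax_1(scores) @ V

# Toy usage: 4 queries over 6 keys/values of dimension 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(outlier_efficient_attention(Q, K, V).shape)  # (4, 8)
```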
Related papers
- TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters [102.1116808722299]
We introduce TokenFormer, a scalable architecture for scaling Transformers.
By treating model parameters as tokens, we replace all the linear projections in Transformers.
Our model scales from 124M to 1.4B parameters by incrementally adding new key-value parameter pairs.
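The scaling recipe is concrete enough to sketch. The toy class below replaces a dense linear projection with attention over learnable key-value parameter tokens, so growing the model means appending rows; note this is a hedged simplification using plain softmax, whereas TokenFormer's actual Pattention layer uses a modified normalization that makes growth better behaved.

```python
import numpy as np

rng = np.random.default_rng(0)

class TokenizedProjection:
    """Sketch: a d_in -> d_out map realized as cross-attention from the
    input to n learnable key-value "parameter tokens" (cf. Pattention)."""

    def __init__(self, d_in, d_out, n_tokens):
        self.K = rng.normal(size=(n_tokens, d_in)) / np.sqrt(d_in)
        self.V = rng.normal(size=(n_tokens, d_out)) / np.sqrt(n_tokens)

    def __call__(self, X):                        # X: (batch, d_in)
        scores = X @ self.K.T                     # similarity to parameter keys
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)
        return w @ self.V                         # (batch, d_out)

    def grow(self, n_new):
        """Scale the layer by appending key-value parameter pairs;
        near-zero keys and zero values soften the new tokens' impact."""
        self.K = np.vstack([self.K, 1e-3 * rng.normal(size=(n_new, self.K.shape[1]))])
        self.V = np.vstack([self.V, np.zeros((n_new, self.V.shape[1]))])

proj = TokenizedProjection(d_in=16, d_out=16, n_tokens=8)
x = rng.normal(size=(2, 16))
y0 = proj(x)
proj.grow(8)                                      # 8 -> 16 parameter tokens
print(y0.shape, np.abs(proj(x) - y0).max())       # output shifts, capacity grows
```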
arXiv Detail & Related papers (2024-10-30T16:19:00Z)
- Nonparametric Modern Hopfield Models [12.160725212848137]
We present a nonparametric construction for deep learning compatible modern Hopfield models.
Key contribution stems from interpreting the memory storage and retrieval processes in modern Hopfield models.
We introduce \textit{sparse-structured} modern Hopfield models with sub-quadratic complexity.
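To make the sub-quadratic claim concrete, here is one hedged way to realize a sparse-structured retrieval step, restricting each query's softmax to a top-$k$ candidate set (the paper's nonparametric construction is more general than this sketch):

```python
import numpy as np

def sparse_hopfield_retrieve(query, memory, beta=4.0, k=8):
    """One retrieval step over only the k most similar stored patterns.

    After candidate selection each step costs O(k d) instead of touching
    all n patterns; pairing this with an approximate-nearest-neighbor
    index is what yields sub-quadratic overall complexity.
    """
    sims = memory @ query                    # (n,) similarities
    top = np.argpartition(sims, -k)[-k:]     # top-k candidate set
    w = np.exp(beta * (sims[top] - sims[top].max()))
    w /= w.sum()
    return w @ memory[top]                   # sparse convex combination

M = np.random.default_rng(1).normal(size=(1000, 64))
print(sparse_hopfield_retrieve(M[0] + 0.05, M).shape)  # (64,)
```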
arXiv Detail & Related papers (2024-04-05T05:46:20Z)
- Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models [5.929540708452128]
We propose a two-stage memory retrieval dynamics for modern Hopfield models.
Key contribution is a learnable feature map $\Phi$ which transforms the Hopfield energy function into kernel space.
It utilizes the stored memory patterns as learning data to enhance memory capacity across all modern Hopfield models.
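The retrieval dynamics are easy to sketch: similarities between the query and stored patterns are computed through a feature map before the usual softmax update. The tanh random-feature map below is a hypothetical stand-in for the learned $\Phi$, and the paper's two-stage procedure is collapsed into a single loop for brevity.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kernelized_retrieval(query, memory, Phi, beta=1.0, steps=3):
    """Modern-Hopfield retrieval with similarities taken in Phi-space:
    z <- M^T softmax(beta * Phi(M) Phi(z))."""
    z = query
    for _ in range(steps):
        sims = Phi(memory) @ Phi(z)           # (n,) kernel similarities
        z = memory.T @ softmax(beta * sims)   # convex combination of patterns
    return z

rng = np.random.default_rng(1)
d, n, m = 16, 10, 32
W = rng.normal(size=(d, m)) / np.sqrt(d)      # hypothetical learned map weights
Phi = lambda x: np.tanh(x @ W)
M = rng.normal(size=(n, d))                   # stored patterns as rows
z = kernelized_retrieval(M[0] + 0.1 * rng.normal(size=d), M, Phi, beta=8.0)
print(int(np.argmax(M @ z)))                  # ideally 0: pattern 0 recovered
```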
arXiv Detail & Related papers (2024-04-04T23:05:30Z)
- Towards Model-Agnostic Posterior Approximation for Fast and Accurate Variational Autoencoders [22.77397537980102]
We show that we can compute a deterministic, model-agnostic posterior approximation (MAPA) of the true model's posterior.
We present preliminary results on low-dimensional synthetic data that (1) MAPA captures the trend of the true posterior, and (2) our MAPA-based inference performs better density estimation with less computation than baselines.
arXiv Detail & Related papers (2024-03-13T20:16:21Z)
- On Computational Limits of Modern Hopfield Models: A Fine-Grained Complexity Analysis [12.72277128564391]
We investigate the computational limits of the memory retrieval dynamics of modern Hopfield models.
We establish an upper bound criterion for the norm of input query patterns and memory patterns.
We prove its memory retrieval error bound and exponential memory capacity.
arXiv Detail & Related papers (2024-02-07T01:58:21Z)
- Precision-Recall Divergence Optimization for Generative Modeling with GANs and Normalizing Flows [54.050498411883495]
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the \textit{PR-divergences}.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
arXiv Detail & Related papers (2023-05-30T10:07:17Z)
- Storage and Learning phase transitions in the Random-Features Hopfield Model [9.489398590336643]
The Hopfield model is a paradigmatic model of neural networks that has been analyzed for many decades in the statistical physics, neuroscience, and machine learning communities.
Inspired by the manifold hypothesis in machine learning, we propose and investigate a generalization of the standard setting that we name Random-Features Hopfield Model.
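A toy version of the setting makes the generalization concrete: instead of i.i.d. patterns, each stored pattern is a signed combination of a smaller dictionary of random features, mimicking data on a low-dimensional manifold (the generation details below are illustrative, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, P = 200, 20, 30                  # neurons, random features, patterns

features = rng.choice([-1.0, 1.0], size=(D, N))
coeffs = rng.normal(size=(P, D))
patterns = np.sign(coeffs @ features)  # each pattern mixes the D features

# Hebbian storage and one synchronous recall sweep from a corrupted cue.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)
probe = patterns[0].copy()
probe[: N // 10] *= -1                 # flip 10% of the bits
recalled = np.sign(J @ probe)

# The storage/learning transition concerns which overlap wins as P grows:
# the network may retrieve stored patterns or the underlying features.
print("overlap with stored pattern:", recalled @ patterns[0] / N)
print("max overlap with a feature:", np.abs(features @ recalled).max() / N)
```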
arXiv Detail & Related papers (2023-03-29T17:39:21Z)
- Heterogenous Ensemble of Models for Molecular Property Prediction [55.91865861896012]
We propose a method for considering different modalities of molecules.
We ensemble these models with a HuberRegressor.
This yields a winning solution to the 2nd edition of the OGB Large-Scale Challenge (2022).
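The final ensembling step is plain scikit-learn. Everything below except the HuberRegressor meta-model is a synthetic stand-in (the winning solution trains modality-specific deep models), so treat it as a sketch of the stacking pattern rather than the actual pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(3)
# Two hypothetical modality representations of the same 500 molecules.
X_a, X_b = rng.normal(size=(500, 32)), rng.normal(size=(500, 16))
y = X_a[:, 0] + 0.5 * X_b[:, 1] + 0.1 * rng.normal(size=500)

# One base model per modality (stand-ins for the deep models).
m_a = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_a[:400], y[:400])
m_b = GradientBoostingRegressor(random_state=0).fit(X_b[:400], y[:400])

# Robust meta-model over the base predictions; in practice use
# out-of-fold predictions here to avoid leakage.
stack = lambda lo, hi: np.column_stack(
    [m_a.predict(X_a[lo:hi]), m_b.predict(X_b[lo:hi])]
)
meta = HuberRegressor().fit(stack(0, 400), y[:400])
print("held-out MAE:", np.abs(meta.predict(stack(400, 500)) - y[400:]).mean())
```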
arXiv Detail & Related papers (2022-11-20T17:25:26Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
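For intuition, here is the naive baseline such minimax analyses improve upon: learn a linear model, then uniformly quantize its weights to a $b$-bit grid and watch the risk degrade as the budget shrinks (an illustrative sketch, not the paper's algorithm):

```python
import numpy as np

def quantize_uniform(w, bits):
    """Round each weight to the nearest of 2**bits evenly spaced levels."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    return lo + np.round((w - lo) / scale) * scale

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.05 * rng.normal(size=200)

w_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # learned linear model
for b in (2, 4, 8):
    mse = np.mean((X @ quantize_uniform(w_hat, b) - y) ** 2)
    print(f"{b}-bit weights -> MSE {mse:.4f}")
```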
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- Neural Pseudo-Label Optimism for the Bank Loan Problem [78.66533961716728]
We study a class of classification problems best exemplified by the \emph{bank loan problem}, where a lender only observes an applicant's true label (repayment) after approving the loan, so the model's own decisions determine its future training data.
In the case of linear models, this issue can be addressed by adding optimism directly into the model predictions.
We present Pseudo-Label Optimism (PLOT), a conceptually and computationally simple method for this setting applicable to Deep Neural Networks.
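The idea is easy to state in code: before scoring a point the current model would reject, temporarily append it with an optimistic positive pseudo-label, refit, and read off the refit model's probability. The logistic-regression sketch below captures this loop in miniature (the paper applies it to deep networks with additional care):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def optimistic_score(X, y, x_new):
    """Refit with (x_new, label=1) added as an optimistic pseudo-label,
    then return the refit model's acceptance probability for x_new."""
    X_aug = np.vstack([X, x_new[None, :]])
    y_aug = np.append(y, 1)
    clf = LogisticRegression().fit(X_aug, y_aug)
    return clf.predict_proba(x_new[None, :])[0, 1]

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)                  # "repaid" iff feature 0 positive
x_query = np.array([-3.0, 0.0])                # far from past accepted loans

plain = LogisticRegression().fit(X, y).predict_proba(x_query[None, :])[0, 1]
print(f"plain={plain:.3f}  optimistic={optimistic_score(X, y, x_query):.3f}")
# The optimistic score is higher, nudging the policy to accept and
# thereby observe the true label instead of locking in early mistakes.
```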
arXiv Detail & Related papers (2021-12-03T22:46:31Z)
- Model-Based Reinforcement Learning with Value-Targeted Regression [48.92439657407732]
We focus on finite-horizon episodic RL where the transition model $P$ belongs to a known family of models $\mathcal{P}$.
We derive a regret bound which, in the special case of linear mixture models, takes the form $\tilde{\mathcal{O}}(d\sqrt{H^3 T})$.
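For the linear-mixture special case, the regression step itself fits in a few lines: realized next-state values $V(s')$ are regressed onto the features $\langle P_j(\cdot\mid s,a), V\rangle$ to recover the mixture weights $\theta$. The sketch below (sizes and the fixed $V$ are illustrative) shows only this estimation step, not the full optimistic algorithm:

```python
import numpy as np

rng = np.random.default_rng(6)
S, A, d, T = 6, 3, 4, 2000                 # states, actions, mixture size, samples

# Linear-mixture MDP: P(.|s,a) = sum_j theta_j * P_j(.|s,a).
P_basis = rng.dirichlet(np.ones(S), size=(d, S, A))   # (d, S, A, S) kernels
theta_true = rng.dirichlet(np.ones(d))
P = np.einsum("j,jkas->kas", theta_true, P_basis)     # true transition kernel

V = rng.normal(size=S)                     # a fixed value-function estimate

X_feat, y = [], []
for _ in range(T):
    s, a = rng.integers(S), rng.integers(A)
    s_next = rng.choice(S, p=P[s, a])
    X_feat.append(P_basis[:, s, a] @ V)    # features <P_j(.|s,a), V>
    y.append(V[s_next])                    # realized V(s'), unbiased for <P, V>

theta_hat = np.linalg.lstsq(np.array(X_feat), np.array(y), rcond=None)[0]
print("theta recovery error:", np.linalg.norm(theta_hat - theta_true))
```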
arXiv Detail & Related papers (2020-06-01T17:47:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.