Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
- URL: http://arxiv.org/abs/2404.03828v2
- Date: Wed, 26 Jun 2024 20:50:18 GMT
- Title: Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
- Authors: Jerry Yao-Chieh Hu, Pei-Hsuan Chang, Robin Luo, Hong-Yu Chen, Weijian Li, Wei-Po Wang, Han Liu
- Abstract summary: We introduce an Outlier-Efficient Modern Hopfield Model (termed $\mathrm{OutEffHop}$).
Our main contribution is a novel associative memory model facilitating \textit{outlier-efficient} associative memory retrievals.
$\mathrm{OutEffHop}$ achieves an average reduction of 22+% in average kurtosis and 26+% in the maximum infinity norm of model outputs.
- Score: 10.972020273638066
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce an Outlier-Efficient Modern Hopfield Model (termed $\mathrm{OutEffHop}$) and use it to address the outlier inefficiency problem of {training} gigantic transformer-based models. Our main contribution is a novel associative memory model facilitating \textit{outlier-efficient} associative memory retrievals. Interestingly, this memory model manifests a model-based interpretation of an outlier-efficient attention mechanism (${\rm Softmax}_1$): it is an approximation of the memory retrieval process of $\mathrm{OutEffHop}$. Methodologically, this allows us to introduce novel outlier-efficient Hopfield layers as powerful alternatives to traditional attention mechanisms, with superior post-quantization performance. Theoretically, the Outlier-Efficient Modern Hopfield Model retains and improves the desirable properties of standard modern Hopfield models, including fixed point convergence and exponential storage capacity. Empirically, we demonstrate the efficacy of the proposed model across large-scale transformer-based and Hopfield-based models (including BERT, OPT, ViT, and STanHop-Net), benchmarking against state-of-the-art methods like $\mathtt{Clipped\_Softmax}$ and $\mathtt{Gated\_Attention}$. Notably, $\mathrm{OutEffHop}$ achieves an average reduction of 22+\% in average kurtosis and 26+\% in the maximum infinity norm of model outputs across four models. Code is available at \href{https://github.com/MAGICS-LAB/OutEffHop}{GitHub}; models are on \href{https://huggingface.co/collections/magicslabnu/outeffhop-6610fcede8d2cda23009a98f}{Hugging Face Hub}; future updates are on \href{https://arxiv.org/abs/2404.03828}{arXiv}.
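For concreteness, here is a minimal NumPy sketch (not the authors' released code; that is in the linked GitHub repository) of the pieces the abstract names: the outlier-efficient attention $\mathrm{Softmax}_1$, a few-step Hopfield-style retrieval that uses it, and the two outlier diagnostics reported (kurtosis and maximum infinity norm). The $\mathrm{Softmax}_1$ definition below, which adds 1 to the softmax denominator so a head can assign near-zero mass everywhere, is the commonly cited one and is an assumption here; the full $\mathrm{OutEffHop}$ energy and update differ in detail.

```python
import numpy as np

def softmax(x, axis=-1):
    """Standard softmax: always distributes a full unit of attention mass."""
    x = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def softmax_1(x, axis=-1):
    """Softmax_1: a 1 is added to the denominator, so the output can be
    (near-)zero everywhere instead of being forced to commit."""
    m = np.maximum(np.max(x, axis=axis, keepdims=True), 0.0)  # stability shift
    e = np.exp(x - m)
    return e / (np.exp(-m) + e.sum(axis=axis, keepdims=True))

def hopfield_retrieve(query, memories, beta=1.0, steps=1, attn=softmax_1):
    """Few-step modern-Hopfield-style retrieval x <- Xi^T attn(beta * Xi x);
    with attn=softmax_1 this sketches the retrieval process that Softmax_1
    attention is described as approximating."""
    x = query
    for _ in range(steps):
        x = memories.T @ attn(beta * (memories @ x))
    return x

# Outlier diagnostics of the kind reported in the abstract.
def excess_kurtosis(a):
    a = a - a.mean()
    return float(np.mean(a ** 4) / np.mean(a ** 2) ** 2 - 3.0)

def max_inf_norm(a):
    return float(np.abs(a).max())

rng = np.random.default_rng(0)
Xi = rng.standard_normal((16, 8))            # 16 stored patterns of dimension 8
out = hopfield_retrieve(rng.standard_normal(8), Xi, beta=2.0, steps=3)
print(excess_kurtosis(out), max_inf_norm(out))
```

Because the extra 1 lets attention heads "opt out" rather than dump mass onto a few tokens, activations stay better behaved, which is what the kurtosis and infinity-norm numbers quantify.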
Related papers
- MM-RLHF: The Next Step Forward in Multimodal LLM Alignment [59.536850459059856]
We introduce MM-RLHF, a dataset containing $\mathbf{120k}$ fine-grained, human-annotated preference comparison pairs.
We propose several key innovations to improve the quality of reward models and the efficiency of alignment algorithms.
Our approach is rigorously evaluated across $\mathbf{10}$ distinct dimensions and $\mathbf{27}$ benchmarks.
arXiv Detail & Related papers (2025-02-14T18:59:51Z) - FuXi-$α$: Scaling Recommendation Model with Feature Interaction Enhanced Transformer [81.12174905444229]
Recent advancements have shown that expanding sequential recommendation models to large-scale recommendation models can be an effective strategy.
We propose a new model called FuXi-$\alpha$ to address these issues.
Our model outperforms existing models, with its performance continuously improving as the model size increases.
arXiv Detail & Related papers (2025-02-05T09:46:54Z) - TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters [102.1116808722299]
We introduce TokenFormer, a natively scalable architecture for Transformers.
By treating model parameters as tokens, we replace all the linear projections in Transformers with token-parameter attention (see the sketch after this entry).
Our model scales from 124M to 1.4B parameters by incrementally adding new key-value parameter pairs.
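A rough, hedged NumPy sketch of the "parameters as tokens" idea (the class name, initialization, and zero-init growth below are illustrative choices, not TokenFormer's actual API): a linear projection is replaced by attention from input tokens to learnable key/value parameter tokens, so scaling the model amounts to appending new parameter rows without disturbing the trained ones.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class TokenParamLayer:
    """Hypothetical sketch: a linear projection replaced by attention from
    input tokens to a set of learnable key/value parameter tokens."""
    def __init__(self, d_in, d_out, n_params, rng):
        self.keys = rng.standard_normal((n_params, d_in)) / np.sqrt(d_in)
        self.values = rng.standard_normal((n_params, d_out)) / np.sqrt(d_in)

    def __call__(self, x):                      # x: (n_tokens, d_in)
        scores = x @ self.keys.T                # attend to parameter tokens
        return softmax(scores) @ self.values    # (n_tokens, d_out)

    def grow(self, n_new, rng):
        """Scale up by appending key-value parameter pairs; already-trained
        rows are untouched (zero-init of new values is an illustrative choice
        so fresh tokens barely change the output at first)."""
        d_in, d_out = self.keys.shape[1], self.values.shape[1]
        new_keys = rng.standard_normal((n_new, d_in)) / np.sqrt(d_in)
        self.keys = np.vstack([self.keys, new_keys])
        self.values = np.vstack([self.values, np.zeros((n_new, d_out))])

rng = np.random.default_rng(0)
layer = TokenParamLayer(d_in=8, d_out=16, n_params=32, rng=rng)
y = layer(rng.standard_normal((4, 8)))
layer.grow(16, rng)     # a larger model, reusing the old parameters as-is
```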
arXiv Detail & Related papers (2024-10-30T16:19:00Z) - Nonparametric Modern Hopfield Models [12.160725212848137]
We present a nonparametric construction for deep learning compatible modern Hopfield models.
The key contribution stems from interpreting the memory storage and retrieval processes in modern Hopfield models as a nonparametric regression problem.
We introduce \textit{sparse-structured} modern Hopfield models with sub-quadratic complexity.
arXiv Detail & Related papers (2024-04-05T05:46:20Z) - Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models [5.929540708452128]
We propose a two-stage memory retrieval dynamics for modern Hopfield models.
The key contribution is a learnable feature map $\Phi$ that transforms the Hopfield energy function into kernel space.
It utilizes the stored memory patterns as learning data to enhance memory capacity across all modern Hopfield models.
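As a loose sketch of the idea (not the paper's exact two-stage dynamics): a feature map $\Phi$, learned from the stored patterns themselves, is applied to both query and memories before the usual softmax-based retrieval, so separation between memories happens in the induced kernel space while retrieval still returns original patterns. The linear-plus-tanh $\Phi$ below is a stand-in for the learned map.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def phi(z, W):
    """Stand-in for the learnable feature map Phi; in the paper it is learned
    using the stored memory patterns as training data."""
    return np.tanh(W @ z)

def kernelized_retrieve(query, memories, W, beta=1.0):
    """Score patterns in Phi-space, then mix the *original* stored patterns."""
    scores = np.array([phi(m, W) @ phi(query, W) for m in memories])
    p = softmax(beta * scores)
    return memories.T @ p

rng = np.random.default_rng(0)
memories = rng.standard_normal((10, 6))          # 10 stored patterns of dim 6
W = rng.standard_normal((12, 6)) / np.sqrt(6)    # stand-in parameters of Phi
print(kernelized_retrieve(rng.standard_normal(6), memories, W, beta=2.0))
```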
arXiv Detail & Related papers (2024-04-04T23:05:30Z) - On Computational Limits of Modern Hopfield Models: A Fine-Grained Complexity Analysis [12.72277128564391]
We investigate the computational limits of the memory retrieval dynamics of modern Hopfield models.
We establish an upper bound criterion for the norm of input query patterns and memory patterns.
We prove its memory retrieval error bound and exponential memory capacity.
arXiv Detail & Related papers (2024-02-07T01:58:21Z) - Storage and Learning phase transitions in the Random-Features Hopfield Model [9.489398590336643]
The Hopfield model is a paradigmatic model of neural networks that has been analyzed for many decades in the statistical physics, neuroscience, and machine learning communities.
Inspired by the manifold hypothesis in machine learning, we propose and investigate a generalization of the standard setting that we name Random-Features Hopfield Model.
arXiv Detail & Related papers (2023-03-29T17:39:21Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
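For intuition only, here is a small sketch of the setup (ordinary least squares plus a uniform quantizer; this is not the paper's minimax-optimal algorithm): fit a linear model from noisy measurements, quantize its coefficients to $b$ bits, and observe how the estimation error moves with $b$.

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares estimate of the linear model from noisy measurements."""
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def uniform_quantize(theta, bits, radius):
    """Illustrative uniform quantizer on [-radius, radius] with 2**bits levels;
    the paper characterizes how few bits suffice before minimax risk degrades."""
    step = 2 * radius / (2 ** bits - 1)
    return np.clip(np.round(theta / step) * step, -radius, radius)

rng = np.random.default_rng(0)
d, n = 8, 200
theta_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ theta_star + 0.1 * rng.standard_normal(n)

theta_hat = fit_linear(X, y)
for b in (2, 4, 8):
    q = uniform_quantize(theta_hat, b, radius=np.abs(theta_hat).max())
    print(b, np.linalg.norm(q - theta_star))    # error vs. bit budget
```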
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - Neural Pseudo-Label Optimism for the Bank Loan Problem [78.66533961716728]
We study a class of classification problems best exemplified by the \emph{bank loan problem}.
In the case of linear models, this issue can be addressed by adding optimism directly into the model predictions.
We present Pseudo-Label Optimism (PLOT), a conceptually and computationally simple method for this setting applicable to Deep Neural Networks.
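A toy illustration of the pseudo-label optimism idea (logistic regression standing in for the deep networks the paper targets; the function name and threshold logic here are illustrative): the candidate point is temporarily appended to the training set with an optimistic positive label, the model is refit, and the refit model's confidence on that point drives the accept/reject decision.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def optimistic_score(X_train, y_train, x_query):
    """Refit with the query point pseudo-labeled as positive; the refit
    model's confidence on that point is the optimistic acceptance score."""
    X_aug = np.vstack([X_train, x_query[None, :]])
    y_aug = np.concatenate([y_train, [1]])          # optimistic pseudo-label
    clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
    return clf.predict_proba(x_query[None, :])[0, 1]

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + 0.3 * rng.standard_normal(200) > 0).astype(int)
score = optimistic_score(X, y, rng.standard_normal(5))
print(score)   # accept (and thus observe the true label) if score > threshold
```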
arXiv Detail & Related papers (2021-12-03T22:46:31Z) - Model-Based Reinforcement Learning with Value-Targeted Regression [48.92439657407732]
We focus on finite-horizon episodic RL where the transition model $P$ belongs to a known family of models $\mathcal{P}$.
We derive a regret bound which, in the special case of linear mixture models, takes the form $\tilde{\mathcal{O}}(d\sqrt{H^3 T})$.
arXiv Detail & Related papers (2020-06-01T17:47:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.