A phase transition between positional and semantic learning in a
solvable model of dot-product attention
- URL: http://arxiv.org/abs/2402.03902v1
- Date: Tue, 6 Feb 2024 11:13:54 GMT
- Title: A phase transition between positional and semantic learning in a
solvable model of dot-product attention
- Authors: Hugo Cui, Freya Behrens, Florent Krzakala, Lenka Zdeborová
- Abstract summary: We show how a dot-product attention layer learns a positional attention matrix and a semantic attention matrix.
For an algorithmic task, we experimentally show how the same simple architecture can learn using either the positional or semantic mechanism.
- Score: 20.83573496458023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate how a dot-product attention layer learns a positional
attention matrix (with tokens attending to each other based on their respective
positions) and a semantic attention matrix (with tokens attending to each other
based on their meaning). For an algorithmic task, we experimentally show how
the same simple architecture can learn to implement a solution using either the
positional or semantic mechanism. On the theoretical side, we study the
learning of a non-linear self-attention layer with trainable tied and low-rank
query and key matrices. In the asymptotic limit of high-dimensional data and a
comparably large number of training samples, we provide a closed-form
characterization of the global minimum of the non-convex empirical loss
landscape. We show that this minimum corresponds to either a positional or a
semantic mechanism and evidence an emergent phase transition from the former to
the latter with increasing sample complexity. Finally, we compare the
dot-product attention layer to a linear positional baseline, and show that it
outperforms the latter using the semantic mechanism, provided it has access to
sufficient data.
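To make the theoretical setup concrete, the sketch below (in PyTorch) shows a single non-linear dot-product attention layer whose query and key weights are tied and low-rank, in the spirit of the model described above. This is a minimal illustration under stated assumptions: the class name, the value projection, and the choice of adding positional information to the inputs are illustrative, not the authors' exact parameterization.

```python
import torch
import torch.nn as nn


class TiedLowRankAttention(nn.Module):
    """Single dot-product attention layer with tied, low-rank query/key weights.

    Minimal sketch: queries and keys share one trainable matrix W of rank
    r << d, so the attention score between tokens x_i and x_j is
    (x_i W)(x_j W)^T. The value projection and all names are assumptions,
    not the paper's exact parameterization.
    """

    def __init__(self, d: int, r: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d, r) / d ** 0.5)  # tied query/key weights
        self.value = nn.Linear(d, d, bias=False)              # assumed value map

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, L, d) token embeddings, assumed to already carry
        # positional information so a purely positional solution is expressible.
        q = x @ self.W                                         # (batch, L, r)
        scores = q @ q.transpose(-1, -2) / q.shape[-1] ** 0.5  # tied dot-product scores
        attn = torch.softmax(scores, dim=-1)                   # (batch, L, L) attention matrix
        return attn @ self.value(x)                            # attended token representations


if __name__ == "__main__":
    layer = TiedLowRankAttention(d=64, r=2)
    tokens = torch.randn(8, 16, 64)   # 8 sequences of length 16
    print(layer(tokens).shape)        # torch.Size([8, 16, 64])
```

In this setting, the learned W determines whether the resulting attention matrix tracks token positions or token semantics, which is the dichotomy behind the phase transition characterized in the abstract.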
Related papers
- Transformers Learn Faster with Semantic Focus [57.97235825738412]
We study sparse transformers in terms of learnability and generalization. We find that input-dependent sparse attention models appear to converge faster and generalize better than standard attention models.
arXiv Detail & Related papers (2025-06-17T01:19:28Z) - Adaptive Token Boundaries: Integrating Human Chunking Mechanisms into Multimodal LLMs [0.0]
This research presents a systematic investigation into the parallels between human cross-modal chunking mechanisms and token representation methodologies. We propose a novel framework for dynamic cross-modal tokenization that incorporates adaptive boundaries, hierarchical representations, and alignment mechanisms grounded in cognitive science principles.
arXiv Detail & Related papers (2025-05-03T09:14:24Z) - Hallucination Detection in LLMs with Topological Divergence on Attention Graphs [64.74977204942199]
Hallucination, i.e., generating factually incorrect content, remains a critical challenge for large language models. We introduce TOHA, a TOpology-based HAllucination detector in the RAG setting.
arXiv Detail & Related papers (2025-04-14T10:06:27Z) - In-Context Linear Regression Demystified: Training Dynamics and Mechanistic Interpretability of Multi-Head Softmax Attention [52.159541540613915]
We study how multi-head softmax attention models are trained to perform in-context learning on linear data.
Our results reveal that in-context learning ability emerges from the trained transformer as an aggregated effect of its architecture and the underlying data distribution.
arXiv Detail & Related papers (2025-03-17T02:00:49Z) - Will Pre-Training Ever End? A First Step Toward Next-Generation Foundation MLLMs via Self-Improving Systematic Cognition [86.21199607040147]
Self-Improving cognition (SIcog) is a self-learning framework for constructing next-generation foundation multimodal large language models (MLLMs).
We introduce Chain-of-Description, a step-by-step visual understanding method, and integrate structured chain-of-thought (CoT) reasoning to support in-depth multimodal reasoning.
Extensive experiments demonstrate that SIcog produces next-generation foundation MLLMs with substantially improved multimodal cognition.
arXiv Detail & Related papers (2025-03-16T00:25:13Z) - Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z) - On Understanding Attention-Based In-Context Learning for Categorical Data [49.40350941996942]
We develop a network composed of attention blocks, with each block employing a self-attention layer followed by a cross-attention layer, with associated skip connections. This model can exactly perform multi-step functional gradient descent for in-context inference with categorical observations.
arXiv Detail & Related papers (2024-05-27T15:03:21Z) - Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion [2.8948274245812335]
We investigate the implicit regularization of matrix factorization for solving matrix completion problems.
We empirically discover that the connectivity of observed data plays a crucial role in the implicit bias.
Our work reveals the intricate interplay between data connectivity, training dynamics, and implicit regularization in matrix factorization models.
arXiv Detail & Related papers (2024-05-22T15:12:14Z) - Linear Noise Approximation Assisted Bayesian Inference on Mechanistic Model of Partially Observed Stochastic Reaction Network [2.325005809983534]
This paper develops an efficient Bayesian inference approach for a partially observed enzymatic stochastic reaction network (SRN).
An interpretable linear noise approximation (LNA) metamodel is proposed to approximate the likelihood of observations.
An efficient posterior sampling approach is developed by utilizing the gradients of the derived likelihood to speed up the convergence of Markov Chain Monte Carlo.
arXiv Detail & Related papers (2024-05-05T01:54:21Z) - On the Optimization and Generalization of Multi-head Attention [28.33164313549433]
We investigate the potential optimization and generalization advantages of using multiple attention heads.
We derive convergence and generalization guarantees for gradient-descent training of a single-layer multi-head self-attention model.
arXiv Detail & Related papers (2023-10-19T12:18:24Z) - Unsupervised and supervised learning of interacting topological phases
from single-particle correlation functions [0.0]
We show that unsupervised and supervised machine learning techniques are able to predict phases of a non-exactly solvable model when trained on data of a solvable model.
In particular, we employ a training set made by single-particle correlation functions of a non-interacting quantum wire.
We show that both the principal component analysis and the convolutional neural networks trained on the data of the non-interacting model can identify the topological phases of the interacting model with a high degree of accuracy.
arXiv Detail & Related papers (2022-02-18T16:02:29Z) - Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can be used to also update a given observational Markov equivalence class into the interventional Markov equivalence class.
arXiv Detail & Related papers (2021-11-15T03:16:56Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New
Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize the recovery of latent variables and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - On the Dynamics of Training Attention Models [30.85940880569692]
We study the dynamics of training a simple attention-based classification model using gradient descent.
We prove that training must converge to attending to the discriminative words when the attention output is classified by a linear classifier.
arXiv Detail & Related papers (2020-11-19T18:55:30Z) - Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success is still unclear. We show that heavy tails commonly arise in the optimization parameters due to multiplicative noise. A detailed analysis is conducted describing the key factors, including step size and data, with similar behaviour observed on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.