QLENS: Towards A Quantum Perspective of Language Transformers
- URL: http://arxiv.org/abs/2510.11963v1
- Date: Mon, 13 Oct 2025 21:53:05 GMT
- Title: QLENS: Towards A Quantum Perspective of Language Transformers
- Authors: Aditya Gupta, Kirandeep Kaur, Vinayak Gupta
- Abstract summary: We present QLENS, a novel attempt to develop a physics-based perspective on the Transformer generation process. Under QLENS, a Transformer is studied by converting its latent activations into a state vector in a Hilbert space derived from the model's output units. The model's final probability distribution is obtained by applying the Born rule to the end state using a specific measurement operator.
- Score: 14.34282709638159
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In natural language processing, current methods for understanding Transformers are successful at identifying intermediate predictions during a model's inference. However, these approaches function as limited diagnostic checkpoints, lacking a mathematical framework for mechanistically modeling how each layer facilitates transitions between these evolving states. This interpretability gap and past successes of interdisciplinary outlooks inspire us to turn to physics in search of a descriptive mathematical framework for Transformers. We observe that language models are intrinsically probabilistic, an attribute that is echoed in the core postulates of quantum mechanics. This parallel inspires us to translate insights from this discipline to that of natural language processing. Towards this objective, we propose QLENS, a novel attempt to develop a physics-based perspective on the Transformer generation process. Under QLENS, a Transformer is studied by converting its latent activations into a state vector in a Hilbert space derived from the model's output units. This state subsequently evolves through hidden layers - reformulated as unitary operators and analogously defined Hamiltonians - during inference. The model's final probability distribution is obtained by applying the Born rule to the end state using a specific measurement operator. To demonstrate QLENS's potential, we conduct a proof-of-concept by probing a toy Transformer to investigate the influence of individual layers in a model's prediction trajectory. We present our work as a foundation for cross-domain insights to be leveraged towards a broader understanding of Transformers.
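As a hedged sketch of the pipeline the abstract describes, the toy example below prepares a normalized state vector from a latent activation, evolves it with a unitary, and reads out a probability distribution via the Born rule. The dimensions, the unembedding matrix `W_U`, and the random unitary are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, vocab = 8, 5
h = rng.normal(size=d_model)             # latent activation from some layer
W_U = rng.normal(size=(vocab, d_model))  # assumed unembedding map (hypothetical)

# State preparation: project into the output space and normalize to unit norm,
# giving a state vector in a Hilbert space over the output vocabulary.
psi = W_U @ h
psi = psi / np.linalg.norm(psi)

# A "layer" as a unitary operator: build one via QR decomposition of a
# random matrix (Q is orthogonal, hence unitary over the reals).
Q, _ = np.linalg.qr(rng.normal(size=(vocab, vocab)))
psi_next = Q @ psi                       # unitary evolution preserves norm

# Born rule with the computational-basis measurement: p_i = |<i|psi>|^2.
probs = np.abs(psi_next) ** 2

assert np.isclose(probs.sum(), 1.0)      # unitarity keeps probabilities normalized
```

Because the evolution is norm-preserving, the Born-rule readout is a valid probability distribution at every layer, which is what lets the abstract's "prediction trajectory" be probed layer by layer.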
Related papers
- Selective Induction Heads: How Transformers Select Causal Structures In Context [50.09964990342878]
We introduce a novel framework that showcases transformers' ability to handle causal structures. Our framework varies the causal structure through interleaved Markov chains with different lags while keeping the transition probabilities fixed. This setting unveils the formation of Selective Induction Heads, a new circuit that endows transformers with the ability to select the correct causal structure in-context.
arXiv Detail & Related papers (2025-09-09T23:13:41Z)
- A Free Probabilistic Framework for Analyzing the Transformer-based Language Models [19.78896931593813]
We present a formal operator-theoretic framework for analyzing Transformer-based language models using free probability theory. This work offers a principled, though theoretical, perspective on structural dynamics in large language models.
arXiv Detail & Related papers (2025-06-19T19:13:02Z)
- Characterizing the Expressivity of Transformer Language Models [56.598551673153366]
We provide an exact characterization of fixed-precision transformers with strict future masking and soft attention. We show that these models are precisely as expressive as a specific fragment of linear temporal logic. We further relate this logic to established classes in formal language theory, automata theory, and algebra.
arXiv Detail & Related papers (2025-05-29T16:30:30Z)
- Moving Beyond Next-Token Prediction: Transformers are Context-Sensitive Language Generators [0.40792653193642503]
Large Language Models (LLMs) powered by Transformers have demonstrated human-like intelligence capabilities. This paper presents a novel framework for interpreting LLMs as probabilistic left context-sensitive language (CSL) generators.
arXiv Detail & Related papers (2025-04-15T04:06:27Z)
- Physics-informed Transformers for Electronic Quantum States [0.0]
We propose a modified variational Monte-Carlo framework for neural quantum-state representations. A Transformer is used to parametrize and autoregressively sample the corrections to the reference state. This work paves the way for more efficient and interpretable neural quantum-state representations.
arXiv Detail & Related papers (2024-12-16T19:00:00Z)
- Interpreting Affine Recurrence Learning in GPT-style Transformers [54.01174470722201]
In-context learning allows GPT-style transformers to generalize during inference without modifying their weights.
This paper focuses specifically on their ability to learn and predict affine recurrences as an ICL task.
We analyze the model's internal operations using both empirical and theoretical approaches.
arXiv Detail & Related papers (2024-10-22T21:30:01Z)
- Universal Approximation of Mean-Field Models via Transformers [3.8916312075738273]
This paper investigates the use of transformers to approximate the mean-field dynamics of interacting particle systems exhibiting collective behavior. Such systems are fundamental in modeling phenomena across physics, biology, and engineering, including opinion formation, biological networks, and swarm robotics.
arXiv Detail & Related papers (2024-10-06T19:47:24Z)
- Clustering in pure-attention hardmax transformers and its role in sentiment analysis [0.0]
We rigorously characterize the behavior of transformers with hardmax self-attention and normalization sublayers as the number of layers tends to infinity.
We show that the transformer inputs asymptotically converge to a clustered equilibrium determined by special points called leaders.
We then leverage this theoretical understanding to solve sentiment analysis problems from language processing using a fully interpretable transformer model.
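The clustering dynamic summarized above can be caricatured in a few lines: under hardmax attention each token moves toward the single token it attends to most strongly, and iterating this update collapses the tokens into a few groups. The update rule below is a simplified illustration, not the paper's actual model (which includes normalization sublayers).

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 2))           # six token embeddings in 2-D

for _ in range(50):                   # each iteration plays the role of a layer
    scores = x @ x.T                  # dot-product attention scores
    np.fill_diagonal(scores, -np.inf) # exclude self-attention for clarity
    target = x[scores.argmax(axis=1)] # hardmax: attend to one best token only
    x = 0.5 * x + 0.5 * target        # move halfway toward the attended token

# Count distinct cluster representatives after many layers; tokens that
# merge stay merged, so typically only a few "leaders" remain.
n_clusters = len({tuple(np.round(p, 6)) for p in x})
print(n_clusters)
```

Since every update is a convex combination, the points never leave the convex hull of the initial embeddings, which is the intuition behind convergence to a clustered equilibrium.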
arXiv Detail & Related papers (2024-06-26T16:13:35Z)
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- Probabilistic Transformer: A Probabilistic Dependency Model for Contextual Word Representation [52.270712965271656]
We propose a new model of contextual word representation, not from a neural perspective, but from a purely syntactic and probabilistic perspective.
We find that the graph of our model resembles transformers, with correspondences between dependencies and self-attention.
Experiments show that our model performs competitively with transformers on small to medium-sized datasets.
arXiv Detail & Related papers (2023-11-26T06:56:02Z)
- Transformers from an Optimization Perspective [24.78739299952529]
We study the problem of finding an energy function underlying the Transformer model.
By finding such a function, we can reinterpret Transformers as the unfolding of an interpretable optimization process.
This work contributes to our intuition and understanding of Transformers, while potentially laying the groundwork for new model designs.
arXiv Detail & Related papers (2022-05-27T10:45:15Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
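The "layers as unfolded optimization" idea in the summary above can be sketched minimally: treat each layer as one gradient step on an energy function, so a deep stack of identical layers is gradient descent unrolled. The quadratic energy E(x) = 0.5 x^T A x - b^T x below is an illustrative choice, not the energy actually derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
M = rng.normal(size=(d, d))
A = M @ M.T + np.eye(d)               # symmetric positive definite, so E is convex
b = rng.normal(size=d)

def layer(x, lr=0.05):
    """One 'layer' = one gradient step on E(x) = 0.5 x^T A x - b^T x."""
    grad = A @ x - b                  # dE/dx
    return x - lr * grad

x = np.zeros(d)
for _ in range(200):                  # a deep stack of identical layers
    x = layer(x)

x_star = np.linalg.solve(A, b)        # the energy minimizer, A x* = b
assert np.allclose(x, x_star, atol=1e-2)
```

Reading the forward pass this way makes each layer interpretable: its output is guaranteed not to increase the energy, and the network's depth controls how close the final state is to the minimizer.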
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the listed information and is not responsible for any consequences of its use.