Verb Conjugation in Transformers Is Determined by Linear Encodings of
Subject Number
- URL: http://arxiv.org/abs/2310.15151v1
- Date: Mon, 23 Oct 2023 17:53:47 GMT
- Title: Verb Conjugation in Transformers Is Determined by Linear Encodings of
Subject Number
- Authors: Sophie Hao, Tal Linzen
- Abstract summary: We show that BERT's ability to conjugate verbs relies on a linear encoding of subject number.
This encoding is found in the subject position at the first layer and the verb position at the last layer, but distributed across positions at middle layers.
- Score: 24.248659219487976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep architectures such as Transformers are sometimes criticized for having
uninterpretable "black-box" representations. We use causal intervention
analysis to show that, in fact, some linguistic features are represented in a
linear, interpretable format. Specifically, we show that BERT's ability to
conjugate verbs relies on a linear encoding of subject number that can be
manipulated with predictable effects on conjugation accuracy. This encoding is
found in the subject position at the first layer and the verb position at the
last layer, but distributed across positions at middle layers, particularly
when there are multiple cues to subject number.
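To make the abstract's finding concrete, below is a minimal, hypothetical sketch (Python, using the Hugging Face transformers library), not the authors' actual experimental code, of the kind of linear-probe intervention described above: a linear classifier is fit to predict subject number from a BERT hidden state, and that hidden state is then pushed along the probe's weight direction to see how the masked-verb prediction shifts. The model name, layer index, subject token position, toy training sentences, and alpha scale are all illustrative assumptions.

```python
# A minimal sketch (not the authors' exact setup) of a linear-encoding
# intervention: fit a linear probe for subject number on a BERT hidden state,
# then shift that state along the probe's weight direction and observe how
# the masked-verb prediction changes. Layer, position, training sentences,
# and alpha are illustrative assumptions.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

LAYER = 6     # assumed middle layer to probe (hidden_states[6] = output of encoder layer 6)
SUBJ_POS = 2  # assumed token position of the subject noun ("[CLS] the dog ...")

# Toy probe training data: label 0 = singular subject, 1 = plural subject.
# A real experiment would use many sentences with varied lexical content.
train_sents = [("The dog near the trees [MASK] loudly.", 0),
               ("The dogs near the tree [MASK] loudly.", 1)]

def hidden_at(sentence, position, layer):
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    return out.hidden_states[layer][0, position].numpy()

X = np.stack([hidden_at(s, SUBJ_POS, LAYER) for s, _ in train_sents])
y = [label for _, label in train_sents]
probe = LogisticRegression(max_iter=1000).fit(X, y)
direction = torch.tensor(probe.coef_[0], dtype=torch.float32)
direction = direction / direction.norm()  # unit vector pointing toward the "plural" class

def top_verb(sentence, alpha=0.0):
    """Predict the masked verb; if alpha != 0, push the subject's hidden state
    at LAYER along the probe direction before the remaining layers run."""
    enc = tokenizer(sentence, return_tensors="pt")
    mask_idx = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    handle = None
    if alpha != 0.0:
        def hook(module, inputs, output):
            hs = output[0].clone()
            hs[0, SUBJ_POS] += alpha * direction
            return (hs,) + output[1:]
        # encoder.layer[LAYER - 1] produces hidden_states[LAYER]
        handle = model.bert.encoder.layer[LAYER - 1].register_forward_hook(hook)
    with torch.no_grad():
        logits = model(**enc).logits
    if handle is not None:
        handle.remove()
    return tokenizer.decode([logits[0, mask_idx].argmax().item()])

print(top_verb("The cat near the houses [MASK] quietly."))             # baseline prediction
print(top_verb("The cat near the houses [MASK] quietly.", alpha=8.0))  # state pushed toward "plural"
```

In the paper, interventions of this kind change conjugation accuracy in predictable ways; the sketch above only illustrates the mechanics of reading off and manipulating a candidate linear encoding.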
Related papers
- Theoretical Analysis of Hierarchical Language Recognition and Generation by Transformers without Positional Encoding [32.01426831450348]
We show that causal masking and a starting token enable Transformers to compute positional information and depth within hierarchical structures.
We demonstrate that Transformers without positional encoding can generate hierarchical languages.
arXiv Detail & Related papers (2024-10-16T09:56:01Z)
- Transformers need glasses! Information over-squashing in language tasks [18.81066657470662]
We study how information propagates in decoder-only Transformers.
We show that certain sequences of inputs to the Transformer can yield arbitrarily close representations in the final token.
We also show that decoder-only Transformer language models can lose sensitivity to specific tokens in the input.
arXiv Detail & Related papers (2024-06-06T17:14:44Z)
- Disentangling continuous and discrete linguistic signals in transformer-based sentence embeddings [1.8927791081850118]
We explore whether we can compress transformer-based sentence embeddings into a representation that separates different linguistic signals.
We show that by compressing an input sequence that shares a targeted phenomenon into the latent layer of a variational autoencoder-like system, the targeted linguistic information becomes more explicit.
arXiv Detail & Related papers (2023-12-18T15:16:54Z)
- Word Order Matters when you Increase Masking [70.29624135819884]
We study the effect of removing position encodings on the pre-training objective itself, to test whether models can reconstruct position information from co-occurrences alone.
We find that the need for position information increases with the amount of masking, and that masked language models without position encodings are unable to reconstruct this information from co-occurrences alone.
arXiv Detail & Related papers (2022-11-08T18:14:04Z)
- Multilingual Extraction and Categorization of Lexical Collocations with Graph-aware Transformers [86.64972552583941]
We put forward a sequence tagging BERT-based model enhanced with a graph-aware transformer architecture, which we evaluate on the task of collocation recognition in context.
Our results suggest that explicitly encoding syntactic dependencies in the model architecture is helpful, and provide insights on differences in collocation typification in English, Spanish and French.
arXiv Detail & Related papers (2022-05-23T16:47:37Z)
- LAVT: Language-Aware Vision Transformer for Referring Image Segmentation [80.54244087314025]
We show that better cross-modal alignments can be achieved through the early fusion of linguistic and visual features in a vision Transformer encoder network.
Our method surpasses the previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref by large margins.
arXiv Detail & Related papers (2021-12-04T04:53:35Z)
- The Impact of Positional Encodings on Multilingual Compression [3.454503173118508]
Several modifications to the sinusoidal positional encodings used in the original transformer architecture have been proposed.
We first show that, surprisingly, while these modifications tend to improve monolingual language models, none of them results in better multilingual language models.
arXiv Detail & Related papers (2021-09-11T23:22:50Z)
- Disentangling Representations of Text by Masking Transformers [27.6903196190087]
We learn binary masks over transformer weights or hidden units to uncover subsets of features that correlate with a specific factor of variation.
We evaluate this method with respect to its ability to disentangle representations of sentiment from genre in movie reviews, "toxicity" from dialect in Tweets, and syntax from semantics.
arXiv Detail & Related papers (2021-04-14T22:45:34Z)
- Cross-Thought for Sentence Encoder Pre-training [89.32270059777025]
Cross-Thought is a novel approach to pre-training a sequence encoder.
We train a Transformer-based sequence encoder over a large set of short sequences.
Experiments on question answering and textual entailment tasks demonstrate that our pre-trained encoder can outperform state-of-the-art encoders.
arXiv Detail & Related papers (2020-10-07T21:02:41Z)
- Rethinking Positional Encoding in Language Pre-training [111.2320727291926]
We show that in absolute positional encoding, the addition operation applied on positional embeddings and word embeddings brings mixed correlations.
We propose a new positional encoding method called Transformer with Untied Positional Encoding (TUPE).
arXiv Detail & Related papers (2020-06-28T13:11:02Z)
- Neural Syntactic Preordering for Controlled Paraphrase Generation [57.5316011554622]
Our work uses syntactic transformations to softly "reorder" the source sentence and guide our neural paraphrasing model.
First, given an input sentence, we derive a set of feasible syntactic rearrangements using an encoder-decoder model.
Next, we use each proposed rearrangement to produce a sequence of position embeddings, which encourages our final encoder-decoder paraphrase model to attend to the source words in a particular order.
arXiv Detail & Related papers (2020-05-05T09:02:25Z)