Do LSTMs See Gender? Probing the Ability of LSTMs to Learn Abstract
Syntactic Rules
- URL: http://arxiv.org/abs/2211.00153v1
- Date: Mon, 31 Oct 2022 21:37:12 GMT
- Title: Do LSTMs See Gender? Probing the Ability of LSTMs to Learn Abstract
Syntactic Rules
- Authors: Priyanka Sukumaran, Conor Houghton, Nina Kazanina
- Abstract summary: LSTMs trained on next-word prediction can accurately perform linguistic tasks that require tracking long-distance syntactic dependencies.
Here, we test gender agreement in French which requires tracking both hierarchical syntactic structures and the inherent gender of lexical units.
Our model is able to reliably predict long-distance gender agreement in two subject-predicate contexts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: LSTMs trained on next-word prediction can accurately perform linguistic tasks
that require tracking long-distance syntactic dependencies. Notably, model
accuracy approaches human performance on number agreement tasks (Gulordava et
al., 2018). However, we do not have a mechanistic understanding of how LSTMs
perform such linguistic tasks. Do LSTMs learn abstract grammatical rules, or do
they rely on simple heuristics? Here, we test gender agreement in French which
requires tracking both hierarchical syntactic structures and the inherent
gender of lexical units. Our model is able to reliably predict long-distance
gender agreement in two subject-predicate contexts: noun-adjective and
noun-passive-verb agreement. The model showed more inaccuracies on plural noun
phrases with gender attractors compared to singular cases, suggesting a
reliance on clues from gendered articles for agreement. Overall, our study
highlights key ways in which LSTMs deviate from human behaviour and questions
whether LSTMs genuinely learn abstract syntactic rules and categories. We
propose using gender agreement as a useful probe to investigate the underlying
mechanisms, internal representations, and linguistic capabilities of LSTM
language models.
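
The evaluation described in the abstract is essentially a minimal-pair probe: the LSTM language model receives a French prefix ending just before the agreeing predicate, and its next-word probabilities for the gender-congruent and gender-incongruent forms are compared. Below is a minimal sketch of that comparison in PyTorch; the toy model, vocabulary, and test item are hypothetical stand-ins for the paper's LSTM, which is trained on next-word prediction over a large French corpus.

```python
# Sketch of a minimal-pair gender-agreement probe for an LSTM language model.
# The model, vocabulary, and item below are hypothetical stand-ins; the study's
# model is an LSTM trained on next-word prediction over French text.
import torch
import torch.nn as nn

class LSTMLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def next_word_logits(self, token_ids):
        # token_ids: (1, prefix_len) -> logits over the word following the prefix
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.out(hidden[:, -1, :])

# Toy vocabulary; a real probe would use the trained LM's French vocabulary.
vocab = {w: i for i, w in enumerate(
    ["<unk>", "la", "chaise", "près", "du", "bureau", "est", "verte", "vert"])}

def encode(words):
    return torch.tensor([[vocab.get(w, 0) for w in words]])

def prefers_correct_form(model, prefix, correct, wrong):
    """True if the LM assigns higher probability to the gender-congruent form."""
    logits = model.next_word_logits(encode(prefix))
    return (logits[0, vocab[correct]] > logits[0, vocab[wrong]]).item()

model = LSTMLM(len(vocab))
# Feminine head noun "chaise" with a masculine attractor "bureau": a model that
# tracks the head noun's gender should prefer "verte" over "vert" after "est".
prefix = ["la", "chaise", "près", "du", "bureau", "est"]
print(prefers_correct_form(model, prefix, "verte", "vert"))  # untrained: chance level
```

Agreement accuracy is then the proportion of such minimal pairs on which the congruent form wins; contrasting singular versus plural subjects and attractor versus no-attractor conditions yields the error patterns discussed above.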
Related papers
- Large Language Models as Neurolinguistic Subjects: Identifying Internal Representations for Form and Meaning [49.60849499134362]
This study investigates the linguistic understanding of Large Language Models (LLMs) regarding signifier (form) and signified (meaning).
Traditional psycholinguistic evaluations often reflect statistical biases that may misrepresent LLMs' true linguistic capabilities.
We introduce a neurolinguistic approach, utilizing a novel method that combines minimal pair and diagnostic probing to analyze activation patterns across model layers.
arXiv Detail & Related papers (2024-11-12T04:16:44Z) - What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages [51.0349882045866]
This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender.
We prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender.
We find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability (a minimal sketch of such a probe appears after this list).
arXiv Detail & Related papers (2024-07-12T22:10:16Z) - From 'Showgirls' to 'Performers': Fine-tuning with Gender-inclusive Language for Bias Reduction in LLMs [1.1049608786515839]
We adapt linguistic structures within Large Language Models to promote gender-inclusivity.
The focus of our work is gender-exclusive affixes in English, such as in 'show-girl' or 'man-cave'.
arXiv Detail & Related papers (2024-07-05T11:31:30Z) - Inclusivity in Large Language Models: Personality Traits and Gender Bias in Scientific Abstracts [49.97673761305336]
We evaluate three large language models (LLMs) for their alignment with human narrative styles and potential gender biases.
Our findings indicate that, while these models generally produce text closely resembling human-authored content, variations in stylistic features suggest significant gender biases.
arXiv Detail & Related papers (2024-06-27T19:26:11Z) - Investigating grammatical abstraction in language models using few-shot learning of novel noun gender [0.0]
We conduct a noun learning experiment to assess whether an LSTM and a decoder-only transformer can achieve human-like abstraction of grammatical gender in French.
We find that both language models effectively generalise novel noun gender from one to two learning examples and apply the learnt gender across agreement contexts.
While the generalisation behaviour of models suggests that they represent grammatical gender as an abstract category, like humans, further work is needed to explore the details.
arXiv Detail & Related papers (2024-03-15T14:25:59Z) - Evaluating Gender Bias in Large Language Models via Chain-of-Thought
Prompting [87.30837365008931]
Large language models (LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate incremental predictions even on unscalable tasks.
This study examines the impact of LLMs' step-by-step predictions on gender bias in unscalable tasks.
arXiv Detail & Related papers (2024-01-28T06:50:10Z) - Using Artificial French Data to Understand the Emergence of Gender Bias
in Transformer Language Models [5.22145960878624]
This work takes an initial step towards exploring the less researched topic of how neural models discover linguistic properties of words, such as gender, as well as the rules governing their usage.
We propose to use an artificial corpus generated by a PCFG based on French to precisely control the gender distribution in the training data and determine under which conditions a model correctly captures gender information or, on the contrary, appears gender-biased.
arXiv Detail & Related papers (2023-10-24T14:08:37Z) - The Better Your Syntax, the Better Your Semantics? Probing Pretrained
Language Models for the English Comparative Correlative [7.03497683558609]
Construction Grammar (CxG) is a paradigm from cognitive linguistics emphasising the connection between syntax and semantics.
We present an investigation of the capability of pretrained language models (PLMs) to classify and understand one of the most commonly studied constructions, the English comparative correlative (CC).
Our results show that all three investigated PLMs are able to recognise the structure of the CC but fail to use its meaning.
arXiv Detail & Related papers (2022-10-24T13:01:24Z) - Analyzing Gender Representation in Multilingual Models [59.21915055702203]
We focus on the representation of gender distinctions as a practical case study.
We examine the extent to which the gender concept is encoded in shared subspaces across different languages.
arXiv Detail & Related papers (2022-04-20T00:13:01Z) - Towards Language Modelling in the Speech Domain Using Sub-word
Linguistic Units [56.52704348773307]
We propose a novel LSTM-based generative speech LM built on linguistic units including syllables and phonemes.
With a limited dataset, orders of magnitude smaller than that required by contemporary generative models, our model closely approximates babbling speech.
We show the effect of training with auxiliary text LMs, multitask learning objectives, and auxiliary articulatory features.
arXiv Detail & Related papers (2021-10-31T22:48:30Z) - LSTMs Compose (and Learn) Bottom-Up [18.34617849764921]
Recent work in NLP shows that LSTM language models capture hierarchical structure in language data.
In contrast to existing work, we consider the learning process that leads to their compositional behavior.
We present a related measure of Decompositional Interdependence between word meanings in an LSTM, based on their gate interactions.
arXiv Detail & Related papers (2020-10-06T13:00:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy or quality of this information and is not responsible for any consequences arising from its use.