The Sensitivity of Language Models and Humans to Winograd Schema
Perturbations
- URL: http://arxiv.org/abs/2005.01348v2
- Date: Thu, 7 May 2020 06:48:57 GMT
- Title: The Sensitivity of Language Models and Humans to Winograd Schema
Perturbations
- Authors: Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov,
Desmond Elliott, Anders Søgaard
- Abstract summary: We show that large-scale pretrained language models are sensitive to linguistic perturbations that minimally affect human understanding.
Our results highlight interesting differences between humans and language models.
- Score: 36.47219885590433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale pretrained language models are the major driving force behind
recent improvements in performance on the Winograd Schema Challenge, a widely
employed test of common sense reasoning ability. We show, however, with a new
diagnostic dataset, that these models are sensitive to linguistic perturbations
of the Winograd examples that minimally affect human understanding. Our results
highlight interesting differences between humans and language models: language
models are more sensitive to number or gender alternations and synonym
replacements than humans, and humans are more stable and consistent in their
predictions, maintain a much higher absolute performance, and perform better on
non-associative instances than associative ones. Overall, humans are correct
more often than out-of-the-box models, and the models are sometimes right for
the wrong reasons. Finally, we show that fine-tuning on a large, task-specific
dataset can offer a solution to these issues.
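The abstract does not spell out the evaluation procedure, but a common way to test a pretrained language model on Winograd-style examples is to substitute each candidate referent into the pronoun slot and let the model choose the higher-probability sentence; sensitivity can then be measured by whether the choice flips on a minimally perturbed version. The sketch below illustrates this, assuming the HuggingFace transformers library and GPT-2; the example schema and its number alternation are illustrative and not drawn from the paper's diagnostic dataset.

```python
# Minimal sketch: resolve a Winograd schema by LM scoring of candidate
# substitutions, then repeat on an illustrative number-alternation
# perturbation to see whether the prediction changes.
# Assumes HuggingFace `transformers` and GPT-2; sentences are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability of a sentence under the LM (higher = more likely)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL per predicted token
    return -loss.item() * (ids.shape[1] - 1)

def resolve(template: str, candidates: list[str]) -> str:
    """Pick the candidate the LM prefers when substituted for the pronoun."""
    scores = [sentence_logprob(template.format(c)) for c in candidates]
    return max(zip(scores, candidates))[1]

# Original schema and an illustrative number alternation of the same schema.
original = ("The trophy doesn't fit in the suitcase because {} is too big.",
            ["the trophy", "the suitcase"])
perturbed = ("The trophies don't fit in the suitcases because {} are too big.",
             ["the trophies", "the suitcases"])

for name, (template, candidates) in [("original", original), ("perturbed", perturbed)]:
    print(f"{name}: {resolve(template, candidates)}")
```

A stable model would pick the same (correct) referent in both conditions; a prediction flip on the perturbed version is the kind of sensitivity the paper reports.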
Related papers
- DevBench: A multimodal developmental benchmark for language learning [0.34129029452670606]
We introduce DevBench, a benchmark for evaluating vision-language models on language tasks with accompanying human behavioral data.
We show that DevBench provides a benchmark for comparing models to human language development.
These comparisons highlight ways in which model and human language learning processes diverge.
arXiv Detail & Related papers (2024-06-14T17:49:41Z)
- Turning large language models into cognitive models [0.0]
We show that large language models can be turned into cognitive models.
These models offer accurate representations of human behavior, even outperforming traditional cognitive models in two decision-making domains.
Taken together, these results suggest that large, pre-trained models can be adapted to become generalist cognitive models.
arXiv Detail & Related papers (2023-06-06T18:00:01Z)
- Towards preserving word order importance through Forced Invalidation [80.33036864442182]
We show that pre-trained language models are insensitive to word order.
We propose Forced Invalidation to help preserve the importance of word order.
Our experiments demonstrate that Forced Invalidation significantly improves the sensitivity of the models to word order.
arXiv Detail & Related papers (2023-04-11T13:42:10Z)
- Chain of Hindsight Aligns Language Models with Feedback [62.68665658130472]
We propose a novel technique, Chain of Hindsight, that is easy to optimize and can learn from any form of feedback, regardless of its polarity.
We convert all types of feedback into sequences of sentences, which are then used to fine-tune the model (an illustrative conversion is sketched after this list).
By doing so, the model is trained to generate outputs based on feedback, while learning to identify and correct negative attributes or errors.
arXiv Detail & Related papers (2023-02-06T10:28:16Z)
- Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers [0.6091702876917281]
We focus on 'few'-type quantifiers, as in 'few children like toys', which might pose a particular challenge for language models.
We present 960 English sentence stimuli from two human neurolinguistic experiments to 22 autoregressive transformer models of differing sizes.
arXiv Detail & Related papers (2022-12-16T20:01:22Z)
- A fine-grained comparison of pragmatic language understanding in humans and language models [2.231167375820083]
We compare language models and humans on seven pragmatic phenomena.
We find that the largest models achieve high accuracy and match human error patterns.
We find preliminary evidence that models and humans are sensitive to similar linguistic cues.
arXiv Detail & Related papers (2022-12-13T18:34:59Z)
- Scaling Language Models: Methods, Analysis & Insights from Training Gopher [83.98181046650664]
We present an analysis of Transformer-based language model performance across a wide range of model scales.
Gains from scale are largest in areas such as reading comprehension, fact-checking, and the identification of toxic language.
We discuss the application of language models to AI safety and the mitigation of downstream harms.
arXiv Detail & Related papers (2021-12-08T19:41:47Z)
- A Targeted Assessment of Incremental Processing in Neural Language Models and Humans [2.7624021966289605]
We present a scaled-up comparison of incremental processing in humans and neural language models.
Data comes from a novel online experimental paradigm called the Interpolated Maze task.
We find that both humans and language models show increased processing difficulty in ungrammatical sentence regions.
arXiv Detail & Related papers (2021-06-06T20:04:39Z)
- Comparison of Interactive Knowledge Base Spelling Correction Models for Low-Resource Languages [81.90356787324481]
Spelling normalization for low-resource languages is a challenging task because the patterns are hard to predict.
This work presents a comparison of a neural model and character language models with varying amounts of target language data.
Our usage scenario is interactive correction with nearly zero amounts of training examples, improving models as more data is collected.
arXiv Detail & Related papers (2020-10-20T17:31:07Z)
- Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans [75.15855405318855]
We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing.
Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of specialized units that successfully handled local and long-distance syntactic agreement.
We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns.
arXiv Detail & Related papers (2020-06-19T12:00:05Z)
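The Chain of Hindsight entry above describes converting feedback into sequences of sentences that are then used for fine-tuning. The sketch below is a minimal illustration of that conversion for a single preference pair, assuming pairwise "better/worse" feedback; the prefix wording is invented here and is not the paper's exact template.

```python
# Minimal sketch of the feedback-to-sequence idea: express the feedback in
# natural language inside one training sequence. Prefixes are illustrative,
# not the paper's exact templates.
def to_hindsight_sequence(prompt: str, better: str, worse: str) -> str:
    """Turn one preference pair into a single fine-tuning sequence."""
    return f"{prompt} A bad answer: {worse} A good answer: {better}"

example = {
    "prompt": "Explain photosynthesis to a child.",
    "better": "Plants use sunlight to turn air and water into their food.",
    "worse": "Photosynthesis is the photon-driven reduction of CO2 via chlorophyll.",
}

# Sequences like this are used for ordinary language-model fine-tuning;
# at inference time the model is conditioned on the positive prefix
# ("A good answer:") to steer generation toward preferred outputs.
print(to_hindsight_sequence(example["prompt"], example["better"], example["worse"]))
```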