Machine Reading, Fast and Slow: When Do Models "Understand" Language?
- URL: http://arxiv.org/abs/2209.07430v1
- Date: Thu, 15 Sep 2022 16:25:44 GMT
- Title: Machine Reading, Fast and Slow: When Do Models "Understand" Language?
- Authors: Sagnik Ray Choudhury, Anna Rogers, Isabelle Augenstein
- Abstract summary: We investigate the behavior of reading comprehension models with respect to two linguistic 'skills': coreference resolution and comparison.
We find that for comparison (but not coreference) the systems based on larger encoders are more likely to rely on the 'right' information.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Two of the most fundamental challenges in Natural Language Understanding
(NLU) at present are: (a) how to establish whether deep learning-based models
score highly on NLU benchmarks for the 'right' reasons; and (b) to understand
what those reasons would even be. We investigate the behavior of reading
comprehension models with respect to two linguistic 'skills': coreference
resolution and comparison. We propose a definition for the reasoning steps
expected from a system that would be 'reading slowly', and compare that with
the behavior of five models of the BERT family of various sizes, observed
through saliency scores and counterfactual explanations. We find that for
comparison (but not coreference) the systems based on larger encoders are more
likely to rely on the 'right' information, but even they struggle with
generalization, suggesting that they still learn specific lexical patterns
rather than the general principles of comparison.
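The abstract's method of observing model behavior through saliency scores can be illustrated with a toy gradient×input attribution. This is a minimal sketch under stated assumptions, not the paper's actual setup: it uses a hypothetical linear bag-of-embeddings scorer rather than a BERT-family model, and all names and values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary and model parameters (not from the paper).
vocab = ["the", "cat", "is", "older", "than", "dog"]
emb_dim = 4
embeddings = {w: rng.normal(size=emb_dim) for w in vocab}
weights = rng.normal(size=emb_dim)  # scoring head of the toy model

def score(tokens):
    """Model score: mean token embedding dotted with the head weights."""
    x = np.mean([embeddings[t] for t in tokens], axis=0)
    return float(weights @ x)

def saliency(tokens):
    """Gradient x input attribution. For this linear model the gradient of
    the score w.r.t. each token embedding is weights / n, so the per-token
    attribution is |embeddings[t] @ weights| / n."""
    n = len(tokens)
    return {t: abs(float(embeddings[t] @ weights)) / n for t in tokens}

tokens = ["the", "cat", "is", "older", "than", "dog"]
attr = saliency(tokens)
# Tokens with the largest attribution are the ones the toy model relies on
# most; a model reading a comparison 'slowly' would be expected to put high
# mass on the comparative cue ("older") and the compared entities.
top_token = max(attr, key=attr.get)
```

In the paper's setting the same idea is applied to real encoders, where the question is whether the high-saliency tokens coincide with the information a careful reader would actually need.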
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating forecast NLEs (natural language explanations) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z)
- From Complex to Simple: Unraveling the Cognitive Tree for Reasoning with Small Language Models [25.628569338856934]
Based on the dual process theory in cognitive science, we are the first to unravel the cognitive reasoning abilities of language models.
arXiv Detail & Related papers (2023-11-12T06:56:21Z)
- STREET: A Multi-Task Structured Reasoning and Explanation Benchmark [56.555662318619135]
We introduce a unified multi-task and multi-domain natural language reasoning and explanation benchmark.
We expect models to not only answer questions, but also produce step-by-step structured explanations describing how premises in the question are used to produce intermediate conclusions that can prove the correctness of a certain answer.
arXiv Detail & Related papers (2023-02-13T22:34:02Z)
- APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning [73.3035118224719]
We propose APOLLO, an adaptively pretrained language model that has improved logical reasoning abilities.
APOLLO performs comparably on ReClor and outperforms baselines on LogiQA.
arXiv Detail & Related papers (2022-12-19T07:40:02Z)
- ALERT: Adapting Language Models to Reasoning Tasks [43.8679673685468]
ALERT is a benchmark and suite of analyses for assessing language models' reasoning ability.
ALERT provides a test bed to assess any language model on fine-grained reasoning skills.
We find that language models learn more reasoning skills during the fine-tuning stage than during pretraining.
arXiv Detail & Related papers (2022-12-16T05:15:41Z)
- The Goldilocks of Pragmatic Understanding: Fine-Tuning Strategy Matters for Implicature Resolution by LLMs [26.118193748582197]
We evaluate four categories of widely used state-of-the-art models.
We find that, despite only evaluating on utterances that require a binary inference, models in three of these categories perform close to random.
These results suggest that certain fine-tuning strategies are far better at inducing pragmatic understanding in models.
arXiv Detail & Related papers (2022-10-26T19:04:23Z)
- Structured, flexible, and robust: benchmarking and improving large language models towards more human-like behavior in out-of-distribution reasoning tasks [39.39138995087475]
We ask how much of human-like thinking can be captured by learning statistical patterns in language alone.
Our benchmark contains two problem-solving domains (planning and explanation generation) and is designed to require generalization.
We find that humans are far more robust than LLMs on this benchmark.
arXiv Detail & Related papers (2022-05-11T18:14:33Z)
- Interpreting Language Models with Contrastive Explanations [99.7035899290924]
Language models must consider various features to predict a token, such as its part of speech, number, tense, or semantics.
Existing explanation methods conflate evidence for all these features into a single explanation, which is less interpretable for human understanding.
We show that contrastive explanations are quantifiably better than non-contrastive explanations in verifying major grammatical phenomena.
arXiv Detail & Related papers (2022-02-21T18:32:24Z)
- Unnatural Language Inference [48.45003475966808]
We find that state-of-the-art NLI models, such as RoBERTa and BART, are invariant to, and sometimes even perform better on, examples with randomly reordered words.
Our findings call into question the idea that our natural language understanding models, and the tasks used for measuring their progress, genuinely require a human-like understanding of syntax.
arXiv Detail & Related papers (2020-12-30T20:40:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.