ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning
- URL: http://arxiv.org/abs/2002.04326v3
- Date: Sat, 22 Aug 2020 07:14:30 GMT
- Title: ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning
- Authors: Weihao Yu, Zihang Jiang, Yanfei Dong, Jiashi Feng
- Abstract summary: We introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor), extracted from standardized graduate admission examinations.
In this paper, we propose to identify biased data points and separate them into an EASY set, with the rest forming a HARD set.
Empirical results show that state-of-the-art models have an outstanding ability to capture the biases contained in the dataset, achieving high accuracy on the EASY set.
However, they struggle on the HARD set, with performance near that of random guessing, indicating that more research is needed to truly enhance the logical reasoning ability of current models.
- Score: 85.33459673197149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent powerful pre-trained language models have achieved remarkable
performance on most of the popular datasets for reading comprehension. It is
time to introduce more challenging datasets to push the development of this
field towards more comprehensive reasoning of text. In this paper, we introduce
a new Reading Comprehension dataset requiring logical reasoning (ReClor)
extracted from standardized graduate admission examinations. As earlier studies
suggest, human-annotated datasets usually contain biases, which are often
exploited by models to achieve high accuracy without truly understanding the
text. In order to comprehensively evaluate the logical reasoning ability of
models on ReClor, we propose to identify biased data points and separate them
into an EASY set, with the rest forming a HARD set. Empirical results show that
state-of-the-art models have an outstanding ability to capture biases contained
in the dataset, achieving high accuracy on the EASY set. However, they struggle
on the HARD set, with performance near that of random guessing, indicating that
more research is needed to truly enhance the logical reasoning ability of
current models.
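The EASY/HARD split described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual pipeline: it assumes that a data point counts as "biased" when a baseline model that sees only the answer options (no context or question) still answers it correctly; the `examples` records and field names are made up for the example.

```python
# Hypothetical sketch of partitioning a dataset into EASY (bias-exploitable)
# and HARD examples, given the predictions of a context-free baseline model.

def split_easy_hard(examples, option_only_predictions):
    """Put examples the context-free baseline answered correctly into EASY,
    the rest into HARD."""
    easy, hard = [], []
    for ex, pred in zip(examples, option_only_predictions):
        (easy if pred == ex["label"] else hard).append(ex)
    return easy, hard

# Toy usage: three multiple-choice items with gold answer indices.
examples = [
    {"id": "q1", "label": 2},
    {"id": "q2", "label": 0},
    {"id": "q3", "label": 1},
]
baseline_preds = [2, 3, 1]  # guesses from an option-only model (assumed)
easy, hard = split_easy_hard(examples, baseline_preds)
```

Under this assumption, q1 and q3 land in the EASY set because the baseline got them right without reading the passage, while q2 falls into the HARD set.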
Related papers
- Numerical Literals in Link Prediction: A Critical Examination of Models and Datasets [2.5999037208435705]
Link Prediction models that incorporate numerical literals have shown minor improvements on existing benchmark datasets.
It is unclear whether a model is actually better at using numerical literals, or simply better at exploiting the graph structure.
We propose a methodology to evaluate LP models that incorporate numerical literals.
arXiv Detail & Related papers (2024-07-25T17:55:33Z)
- Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data [89.2410799619405]
We introduce the Quantitative Reasoning with Data benchmark to evaluate Large Language Models' capability in statistical and causal reasoning with real-world data.
The benchmark comprises a dataset of 411 questions accompanied by data sheets from textbooks, online learning materials, and academic papers.
To compare models' quantitative reasoning abilities on data and text, we enrich the benchmark with an auxiliary set of 290 text-only questions, namely QRText.
arXiv Detail & Related papers (2024-02-27T16:15:03Z)
- Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases [5.997909991352044]
Recent work sought to develop robust, unbiased models by filtering biased examples from training sets.
We argue that such filtering can obscure the true capabilities of models to overcome biases.
We introduce an evaluation framework defined by a bias-amplified training set and an anti-biased test set.
arXiv Detail & Related papers (2023-05-30T10:10:42Z)
- Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have never been considered.
Our method significantly outperforms previous methods on the new dataset, while also surpassing them on the existing dataset.
arXiv Detail & Related papers (2023-04-14T11:35:35Z)
- A Large Scale Search Dataset for Unbiased Learning to Rank [51.97967284268577]
We introduce the Baidu-ULTR dataset for unbiased learning to rank.
It contains 1.2 billion randomly sampled search sessions and 7,008 expert-annotated queries.
It provides: (1) the original semantic features and a pre-trained language model for easy usage; (2) sufficient display information such as position, displayed height, and displayed abstract; and (3) rich user feedback on search result pages (SERPs), such as dwell time.
arXiv Detail & Related papers (2022-07-07T02:37:25Z)
- Interpretable Research Replication Prediction via Variational Contextual Consistency Sentence Masking [14.50690911709558]
Research Replication Prediction (RRP) is the task of predicting whether a published research result can be replicated or not.
In this work, we propose the Variational Contextual Consistency Sentence Masking (VCCSM) method to automatically extract key sentences.
Experiments on RRP and on the European Convention of Human Rights (ECHR) datasets demonstrate that VCCSM improves model interpretability for long document classification tasks.
arXiv Detail & Related papers (2022-03-28T03:27:13Z)
- Annotating and Modeling Fine-grained Factuality in Summarization [36.88018450067003]
A major barrier to the practical use of summarization models is their propensity to output summaries that are not faithful to the input and that contain factual errors.
We explore both synthetic and human-labeled data sources for training models to identify factual errors in summarization.
We show that our best factuality detection model enables training of more factual XSum summarization models by allowing us to identify non-factual tokens in the training data.
arXiv Detail & Related papers (2021-04-09T11:20:44Z)
- Improving Commonsense Causal Reasoning by Adversarial Training and Data Augmentation [14.92157586545743]
This paper presents a number of techniques for making models more robust in the domain of causal reasoning.
We show a statistically significant improvement in performance on both datasets, even with only a small number of additionally generated data points.
arXiv Detail & Related papers (2021-01-13T09:55:29Z)
- Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures [62.562760228942054]
Existing approaches to improve robustness against dataset biases mostly focus on changing the training objective.
We propose to augment the input sentences in the training data with their corresponding predicate-argument structures.
We show that without targeting a specific bias, our sentence augmentation improves the robustness of transformer models against multiple biases.
arXiv Detail & Related papers (2020-10-23T16:22:05Z)
- Towards Robustifying NLI Models Against Lexical Dataset Biases [94.79704960296108]
This paper explores both data-level and model-level debiasing methods to robustify models against lexical dataset biases.
First, we debias the dataset through data augmentation and enhancement, but show that the model bias cannot be fully removed via this method.
The second approach employs a bag-of-words sub-model to capture the features that are likely to reflect the bias, preventing the original model from learning these biased features.
arXiv Detail & Related papers (2020-05-10T17:56:10Z)
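The sub-model debiasing idea in the last entry is often realized as a product of experts: the main model's and the bias-only model's log-probabilities are combined during training, so examples that the bias model already explains contribute smaller gradients. The sketch below is an assumption-laden illustration of that general pattern, not the specific method of the paper; the logit values are invented for the example.

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over a list of scores."""
    m = max(logits)
    z = math.log(sum(math.exp(x - m) for x in logits)) + m
    return [x - z for x in logits]

def product_of_experts(main_logits, bias_logits):
    """Combine main-model and bias-model scores in log space;
    the combined log-probabilities would feed the training loss."""
    return [a + b for a, b in zip(log_softmax(main_logits),
                                  log_softmax(bias_logits))]

# Toy usage: the bias-only model is already confident in class 0,
# so the combined loss on this example is small and the main model
# learns less from its biased features.
combined = product_of_experts([2.0, 0.5, 0.1], [3.0, 0.0, 0.0])
```

At inference time, only the main model is kept, so its predictions no longer rely on the shortcut features the bias model absorbed during training.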
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.