Semantics Altering Modifications for Evaluating Comprehension in Machine
Reading
- URL: http://arxiv.org/abs/2012.04056v1
- Date: Mon, 7 Dec 2020 21:00:42 GMT
- Title: Semantics Altering Modifications for Evaluating Comprehension in Machine
Reading
- Authors: Viktor Schlegel, Goran Nenadic, Riza Batista-Navarro
- Abstract summary: We investigate whether machine reading comprehension models are able to correctly process Semantics Altering Modifications.
We present a method to automatically generate and align challenge sets featuring original and altered examples.
We apply the methodology in order to evaluate MRC models with regard to their capability to correctly process SAM-enriched data.
- Score: 1.1355639618103164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advances in NLP have yielded impressive results for the task of machine
reading comprehension (MRC), with approaches having been reported to achieve
performance comparable to that of humans. In this paper, we investigate whether
state-of-the-art MRC models are able to correctly process Semantics Altering
Modifications (SAM): linguistically-motivated phenomena that alter the
semantics of a sentence while preserving most of its lexical surface form. We
present a method to automatically generate and align challenge sets featuring
original and altered examples. We further propose a novel evaluation
methodology to correctly assess the capability of MRC systems to process these
examples independent of the data they were optimised on, by discounting for
effects introduced by domain shift. In a large-scale empirical study, we apply
the methodology in order to evaluate extractive MRC models with regard to their
capability to correctly process SAM-enriched data. We comprehensively cover 12
different state-of-the-art neural architecture configurations and four training
datasets and find that -- despite their well-known remarkable performance --
optimised models consistently struggle to correctly process semantically
altered data.
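
To make the paired evaluation concrete, the following is a minimal sketch under assumed simplifications: each original example is aligned with a SAM-altered counterpart, and a model's SAM capability is measured only on pairs whose original it already answers correctly, so that failures caused purely by domain shift are not counted against it. The `MRCExample` fields, the `apply_sam` toy modification, and the `sam_score` metric are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' implementation): pair each
# original MRC example with a SAM-altered counterpart, then score the model
# exclusively on pairs whose original instance it already answers correctly,
# so that errors caused by domain shift alone are not attributed to SAM.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MRCExample:
    passage: str
    question: str
    answer: str       # gold answer for the original passage
    sam_answer: str   # gold answer after the semantics-altering modification

def apply_sam(passage: str, target: str) -> str:
    """Toy modification: insert 'almost' before the target phrase, keeping
    most of the lexical surface form while changing the semantics."""
    return passage.replace(target, f"almost {target}", 1)

def sam_score(model: Callable[[str, str], str], pairs: List[MRCExample]) -> float:
    """Fraction of SAM-altered examples answered correctly, computed only over
    pairs whose unmodified original the model gets right (a crude way of
    discounting domain-shift effects)."""
    kept, correct = 0, 0
    for ex in pairs:
        if model(ex.passage, ex.question).strip() == ex.answer:
            kept += 1
            altered = apply_sam(ex.passage, ex.answer)
            if model(altered, ex.question).strip() == ex.sam_answer:
                correct += 1
    return correct / kept if kept else 0.0
```

In the actual methodology, modifications would be drawn from linguistically motivated categories and the altered gold answers aligned automatically; the sketch only illustrates the aligned, conditioned scoring idea.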
Related papers
- Value Alignment from Unstructured Text [32.9140028463247]
We introduce a systematic end-to-end methodology for aligning large language models (LLMs) to the implicit and explicit values represented in unstructured text data.
Our proposed approach leverages scalable synthetic data generation techniques to effectively align the model to the values present in the unstructured data.
Our approach credibly aligns LLMs to the values embedded within documents and shows improved performance over other approaches.
arXiv Detail & Related papers (2024-08-19T20:22:08Z) - Enhancing Retrieval-Augmented LMs with a Two-stage Consistency Learning Compressor [4.35807211471107]
This work proposes a novel two-stage consistency learning approach for retrieved information compression in retrieval-augmented language models.
The proposed method is empirically validated across multiple datasets, demonstrating notable enhancements in precision and efficiency for question-answering tasks.
arXiv Detail & Related papers (2024-06-04T12:43:23Z) - Counterfactual Fairness through Transforming Data Orthogonal to Bias [7.109458605736819]
We propose a novel data pre-processing algorithm, Orthogonal to Bias (OB).
OB is designed to eliminate the influence of a group of continuous sensitive variables, thus promoting counterfactual fairness in machine learning applications.
OB is model-agnostic, making it applicable to a wide range of machine learning models and tasks.
arXiv Detail & Related papers (2024-03-26T16:40:08Z) - The Common Stability Mechanism behind most Self-Supervised Learning
Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the Imagenet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Discover, Explanation, Improvement: An Automatic Slice Detection
Framework for Natural Language Processing [72.14557106085284]
Slice detection models (SDMs) automatically identify underperforming groups of datapoints.
This paper proposes a benchmark named "Discover, Explain, Improve (DEIM)" for NLP classification tasks.
Our evaluation shows that Edisa can accurately select error-prone datapoints with informative semantic features.
arXiv Detail & Related papers (2022-11-08T19:00:00Z) - Improving Meta-learning for Low-resource Text Classification and
Generation via Memory Imitation [87.98063273826702]
We propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation.
A theoretical analysis is provided to prove the effectiveness of our method.
arXiv Detail & Related papers (2022-03-22T12:41:55Z) - Learning Neural Models for Natural Language Processing in the Face of
Distributional Shift [10.990447273771592]
The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications.
It builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time.
This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information.
It is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime.
arXiv Detail & Related papers (2021-09-03T14:29:20Z) - Ensemble Learning-Based Approach for Improving Generalization Capability
of Machine Reading Comprehension Systems [0.7614628596146599]
Machine Reading Comprehension (MRC) is an active field in natural language processing, with many successful models developed in recent years.
Despite their high in-distribution accuracy, these models suffer from two issues: high training cost and low out-of-distribution accuracy.
In this paper, we investigate the effect of an ensemble learning approach on improving the generalization of MRC systems without retraining a big model (a minimal voting sketch appears after this list).
arXiv Detail & Related papers (2021-07-01T11:11:17Z) - Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
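
As referenced in the ensemble-learning MRC entry above, here is a minimal voting sketch, assuming the simplest setup in which several already-trained extractive MRC models are combined by majority vote over their predicted answer strings, so no large model is retrained. The `Predictor` alias and the tie-breaking rule are assumptions for illustration, not the cited paper's combination method.

```python
# Illustrative sketch of the ensemble idea: combine already-trained extractive
# MRC models by majority vote over their predicted answer strings, with no
# retraining; this is not the cited paper's exact combination method.
from collections import Counter
from typing import Callable, List

Predictor = Callable[[str, str], str]  # (passage, question) -> answer span

def ensemble_answer(models: List[Predictor], passage: str, question: str) -> str:
    """Return the most common prediction; fall back to the first model on ties."""
    votes = Counter(m(passage, question).strip() for m in models)
    answer, count = votes.most_common(1)[0]
    return answer if count > 1 else models[0](passage, question).strip()
```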
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.