Did the Models Understand Documents? Benchmarking Models for Language
Understanding in Document-Level Relation Extraction
- URL: http://arxiv.org/abs/2306.11386v1
- Date: Tue, 20 Jun 2023 08:52:05 GMT
- Title: Did the Models Understand Documents? Benchmarking Models for Language
Understanding in Document-Level Relation Extraction
- Authors: Haotian Chen, Bingsheng Chen, Xiangdong Zhou
- Abstract summary: Document-level relation extraction (DocRE) has recently attracted increasing research interest.
While models achieve consistent performance gains in DocRE, their underlying decision rules remain understudied: do they make the right predictions according to rationales?
In this paper, we take a first step toward answering this question and introduce a new perspective for comprehensively evaluating a model.
- Score: 2.4665182280122577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Document-level relation extraction (DocRE) has recently attracted
increasing research interest. While models achieve consistent performance
gains in DocRE, their underlying decision rules remain understudied: do they
make the right predictions according to rationales? In this paper, we take a
first step toward answering this question and introduce a new perspective for
comprehensively evaluating a model. Specifically, we first annotate the
rationales that humans rely on in DocRE. We then investigate representative
state-of-the-art (SOTA) DocRE models and find that, in contrast to humans,
they follow different decision rules. Through our proposed RE-specific
attacks, we further demonstrate that this significant discrepancy between the
decision rules of models and humans severely damages model robustness and
renders the models inapplicable to real-world RE scenarios. We then introduce
mean average precision (MAP) to evaluate the understanding and reasoning
capabilities of models. Based on extensive experimental results, we appeal to
future work to evaluate both the performance and the understanding ability of
models when developing their applications. We make our annotations and code
publicly available.
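The paper's exact MAP formulation is not reproduced in this listing, but the underlying idea of scoring a model's token-level attributions against human-annotated rationales can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function names, the binary rationale masks, and the per-token importance scores are hypothetical and are not the authors' released code.

```python
import numpy as np

def average_precision(importance_scores, rationale_mask):
    """Average precision of a model's token-importance ranking against a
    binary human rationale mask (1 = token annotated as a rationale)."""
    order = np.argsort(-np.asarray(importance_scores))  # rank tokens by importance, descending
    relevant = np.asarray(rationale_mask)[order]
    if relevant.sum() == 0:
        return 0.0
    hits = np.cumsum(relevant)                           # rationale tokens retrieved up to each rank
    precisions = hits / np.arange(1, len(relevant) + 1)  # precision@k at every rank
    return float((precisions * relevant).sum() / relevant.sum())

def mean_average_precision(examples):
    """MAP over (importance_scores, rationale_mask) pairs, one pair per relation instance."""
    return float(np.mean([average_precision(s, m) for s, m in examples]))

# Toy usage with hypothetical attribution scores: a higher MAP means the model
# ranks the human-annotated rationale tokens near the top of its attributions.
examples = [
    (np.array([0.9, 0.1, 0.7, 0.05]), np.array([1, 0, 1, 0])),
    (np.array([0.2, 0.8, 0.1, 0.3]),  np.array([0, 0, 1, 0])),
]
print(mean_average_precision(examples))  # 0.625 for this toy pair
```

Under this reading, a model can reach a high relation-classification score while scoring a low MAP, which is the performance-versus-understanding gap the abstract argues future work should measure.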
Related papers
- RewardBench: Evaluating Reward Models for Language Modeling [100.28366840977966]
We present RewardBench, a benchmark dataset and code-base for the evaluation of reward models.
The dataset is a collection of prompt-chosen-rejected trios spanning chat, reasoning, and safety.
On the RewardBench leaderboard, we evaluate reward models trained with a variety of methods.
(arXiv, 2024-03-20)
- Explaining Pre-Trained Language Models with Attribution Scores: An Analysis in Low-Resource Settings [32.03184402316848]
We analyze attribution scores extracted from prompt-based models with respect to plausibility and faithfulness.
We find that the prompting paradigm yields more plausible explanations than fine-tuning the models in low-resource settings.
(arXiv, 2024-03-08)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we found no evidence of significant improvement on the tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential for misunderstanding.
(arXiv, 2023-12-10)
- Modeling Legal Reasoning: LM Annotation at the Edge of Human Agreement [3.537369004801589]
We study the classification of legal reasoning according to jurisprudential philosophy.
We use a novel dataset of historical United States Supreme Court opinions annotated by a team of domain experts.
We find that generative models perform poorly when given the same instructions as the human annotators.
(arXiv, 2023-10-27)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study in which participants interact with deception detection models trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training cause a larger reduction in model confidence in the testing phase than the no-explanation control.
(arXiv, 2021-12-17)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space constrained by a diversity-enforcing loss.
Our model improves the success rate of producing high-quality, valuable explanations compared to previous state-of-the-art methods.
(arXiv, 2021-03-18)
- To what extent do human explanations of model behavior align with actual model behavior? [91.67905128825402]
We investigated the extent to which human-generated explanations of models' inference decisions align with how models actually make those decisions.
We defined two alignment metrics that quantify how well natural-language human explanations align with model sensitivity to input words.
We find that a model's alignment with human explanations is not predicted by the model's accuracy on NLI.
(arXiv, 2020-12-24)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottlenecks of existing neural NER models.
As a by-product of this paper, we have open-sourced a project with a comprehensive summary of recent NER papers.
(arXiv, 2020-01-12)