What Causes the Failure of Explicit to Implicit Discourse Relation Recognition?
- URL: http://arxiv.org/abs/2404.00999v1
- Date: Mon, 1 Apr 2024 09:08:53 GMT
- Title: What Causes the Failure of Explicit to Implicit Discourse Relation Recognition?
- Authors: Wei Liu, Stephen Wan, Michael Strube
- Abstract summary: We show that one cause for such failure is a label shift after connectives are eliminated.
We find that the discourse relations expressed by some explicit instances will change when connectives disappear.
We investigate two strategies to mitigate the label shift: filtering out noisy data and joint learning with connectives.
- Score: 14.021169977926265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider an unanswered question in the discourse processing community: why do relation classifiers trained on explicit examples (with connectives removed) perform poorly in real implicit scenarios? Prior work claimed this is due to linguistic dissimilarity between explicit and implicit examples but provided no empirical evidence. In this study, we show that one cause for such failure is a label shift after connectives are eliminated. Specifically, we find that the discourse relations expressed by some explicit instances change when connectives disappear. Unlike previous work, which manually analyzed a few examples, we present corpus-level empirical evidence to prove the existence of such a shift. Then, we analyze why label shift occurs by considering factors such as the syntactic role played by connectives, the ambiguity of connectives, and more. Finally, we investigate two strategies to mitigate the label shift: filtering out noisy data and joint learning with connectives. Experiments on PDTB 2.0, PDTB 3.0, and the GUM dataset demonstrate that classifiers trained with our strategies outperform strong baselines.
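The abstract names its two mitigation strategies without implementation details. As a rough, hypothetical illustration of the second strategy, the sketch below pairs a shared encoder with a relation head and an auxiliary connective head; the class names, encoder interface, and 0.5 loss weight are assumptions, not details from the paper.

```python
# Hypothetical sketch of joint learning with connectives: one shared encoder,
# two classification heads. The auxiliary connective head is meant to keep the
# relation label tied to the connective the explicit instance originally had.
import torch.nn as nn

class JointRelationConnectiveModel(nn.Module):
    def __init__(self, encoder, hidden_size, num_relations, num_connectives):
        super().__init__()
        self.encoder = encoder  # any sentence-pair encoder returning a pooled vector
        self.relation_head = nn.Linear(hidden_size, num_relations)
        self.connective_head = nn.Linear(hidden_size, num_connectives)

    def forward(self, arg_pair_inputs):
        pooled = self.encoder(arg_pair_inputs)  # (batch, hidden_size)
        return self.relation_head(pooled), self.connective_head(pooled)

def joint_loss(rel_logits, conn_logits, rel_labels, conn_labels, alpha=0.5):
    # Relation loss plus a weighted auxiliary connective-prediction loss.
    ce = nn.CrossEntropyLoss()
    return ce(rel_logits, rel_labels) + alpha * ce(conn_logits, conn_labels)
```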
Related papers
- Wide Two-Layer Networks can Learn from Adversarial Perturbations [27.368408524000778]
We theoretically explain the counterintuitive success of perturbation learning.
We prove that adversarial perturbations contain sufficient class-specific features for networks to generalize from them.
arXiv Detail & Related papers (2024-10-31T06:55:57Z)
- Uncovering Autoregressive LLM Knowledge of Thematic Fit in Event Representation [0.09558392439655014]
We assess whether pre-trained autoregressive LLMs possess consistent, expressible knowledge about thematic fit.
We evaluate both closed and open state-of-the-art LLMs on several psycholinguistic datasets.
Our results show that chain-of-thought reasoning is more effective on datasets with self-explanatory semantic role labels.
arXiv Detail & Related papers (2024-10-19T18:25:30Z)
- Multi-Label Classification for Implicit Discourse Relation Recognition [10.280148603465697]
We explore various multi-label classification frameworks to handle implicit discourse relation recognition.
We show that multi-label classification methods do not degrade performance on single-label prediction (a minimal sketch of the multi-label setup follows this list).
arXiv Detail & Related papers (2024-06-06T19:37:25Z)
- Understanding and Mitigating Spurious Correlations in Text Classification with Neighborhood Analysis [69.07674653828565]
Machine learning models have a tendency to leverage spurious correlations that exist in the training set but may not hold true in general circumstances.
In this paper, we examine the implications of spurious correlations through a novel perspective called neighborhood analysis.
We propose a family of regularization methods, NFL (doN't Forget your Language) to mitigate spurious correlations in text classification.
arXiv Detail & Related papers (2023-05-23T03:55:50Z)
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm that further explores the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests [87.60900567941428]
A 'spurious correlation' is the dependence of a model on some aspect of the input data that an analyst thinks shouldn't matter.
In machine learning, these have a know-it-when-you-see-it character.
We study stress testing using the tools of causal inference.
arXiv Detail & Related papers (2021-05-31T14:39:38Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an Entity-Guided Attention (EGA) mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Geometry matters: Exploring language examples at the decision boundary [2.7249290070320034]
BERT, CNN, and fastText models are susceptible to word substitutions in high-difficulty examples.
On YelpReviewPolarity we observe a correlation coefficient of -0.4 between resilience to perturbations and the difficulty score.
Our approach is simple, architecture-agnostic, and can be used to study the fragilities of text classification models.
arXiv Detail & Related papers (2020-10-14T16:26:13Z)
- The Extraordinary Failure of Complement Coercion Crowdsourcing [50.599433903377374]
Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years.
We aim to collect annotated data for complement coercion by reducing it to either of two known tasks: Explicit Completion and Natural Language Inference.
In both cases, crowdsourcing resulted in low agreement scores, even though we followed the same methodologies as in previous work.
arXiv Detail & Related papers (2020-10-12T19:04:04Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrastive examples, which provide a signal indicative of the underlying causal structure of the task (a sketch of one such auxiliary objective follows this list).
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
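As referenced in the multi-label entry above, here is a minimal sketch of what a multi-label setup for implicit discourse relation recognition could look like: per-sense sigmoids with binary cross-entropy instead of a single softmax. The sense count and the 0.5 threshold are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

num_senses = 4                                          # e.g. top-level PDTB senses
logits = torch.randn(8, num_senses)                     # classifier outputs for a batch of 8
targets = torch.randint(0, 2, (8, num_senses)).float()  # multi-hot gold labels

# One independent sigmoid per sense rather than a softmax over senses.
loss = nn.BCEWithLogitsLoss()(logits, targets)

# At inference, every sense above the threshold is predicted, so an
# instance may carry more than one relation.
predictions = (torch.sigmoid(logits) > 0.5).long()
```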
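As referenced in the counterfactual-examples entry above, one common formulation of gradient supervision aligns the input-gradient of the task loss with the direction from an example to its minimally-different counterpart. The sketch below assumes continuous embedding inputs; all names are hypothetical and the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def gradient_supervision_loss(model, x, x_cf, y):
    # Standard task loss on the original (embedded) example.
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)

    # Input-gradient of the task loss, kept in the graph so the
    # auxiliary term below is itself differentiable.
    (grad,) = torch.autograd.grad(task_loss, x, create_graph=True)

    # Direction of the minimal edit that flips the label.
    direction = x_cf - x

    # Penalize misalignment between the gradient and that direction.
    cos = F.cosine_similarity(grad.flatten(1), direction.flatten(1), dim=1)
    return task_loss + (1.0 - cos).mean()
```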