Multi-Label Classification for Implicit Discourse Relation Recognition
- URL: http://arxiv.org/abs/2406.04461v1
- Date: Thu, 6 Jun 2024 19:37:25 GMT
- Title: Multi-Label Classification for Implicit Discourse Relation Recognition
- Authors: Wanqiu Long, N. Siddharth, Bonnie Webber
- Abstract summary: We explore various multi-label classification frameworks to handle implicit discourse relation recognition.
We show that multi-label classification methods do not degrade single-label prediction performance.
- Score: 10.280148603465697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discourse relations play a pivotal role in establishing coherence within textual content, uniting sentences and clauses into a cohesive narrative. The Penn Discourse Treebank (PDTB) stands as one of the most extensively utilized datasets in this domain. In PDTB-3, annotators can assign multiple labels to an example when they believe that multiple relations are present. Prior research in discourse relation recognition has treated these instances as separate examples during training, and only one of them needs to have its label predicted correctly for the instance to be judged as correct. However, this approach is inadequate, as it fails to account for the interdependence of labels in real-world contexts and to distinguish between cases where only one sense relation holds and cases where multiple relations hold simultaneously. In our work, we address this challenge by exploring various multi-label classification frameworks to handle implicit discourse relation recognition. We show that multi-label classification methods do not degrade single-label prediction performance. Additionally, we provide a comprehensive analysis of the results and data. Our work contributes to advancing the understanding and application of discourse relations and provides a foundation for future study.
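The contrast the abstract draws can be sketched in code: a single-label setup forces exactly one sense per instance (softmax/argmax), while a multi-label setup lets an instance receive several senses at once (per-label sigmoid plus a threshold). The sense names, logits, and 0.5 threshold below are illustrative assumptions for the sketch, not the paper's actual inventory or configuration.

```python
import numpy as np

# Hypothetical subset of discourse senses, for illustration only
SENSES = ["Cause", "Concession", "Conjunction", "Contrast", "Instantiation"]

def multi_label_predict(logits, threshold=0.5):
    """Per-label sigmoid + threshold: an instance may receive several senses.

    Contrast with the single-label setup, where a softmax/argmax forces
    exactly one sense even when annotators assigned more than one.
    """
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    labels = [s for s, p in zip(SENSES, probs) if p >= threshold]
    # Fall back to the top-scoring sense so every instance gets >= 1 label
    return labels or [SENSES[int(np.argmax(probs))]]

# An instance where two relations hold simultaneously:
# sigmoid(2.1) ~ 0.89 and sigmoid(1.3) ~ 0.79 both clear the threshold.
print(multi_label_predict([2.1, -1.5, 1.3, -2.0, -0.7]))
# -> ['Cause', 'Conjunction']
```

Such models are typically trained with a binary cross-entropy loss over the label vector rather than the categorical cross-entropy used for single-label prediction, which is what allows the threshold step above to return more than one sense.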
Related papers
- What Causes the Failure of Explicit to Implicit Discourse Relation Recognition? [14.021169977926265]
We show that one cause for such failure is a label shift after connectives are eliminated.
We find that the discourse relations expressed by some explicit instances will change when connectives disappear.
We investigate two strategies to mitigate the label shift: filtering out noisy data and joint learning with connectives.
arXiv Detail & Related papers (2024-04-01T09:08:53Z)
- Automatic Alignment of Discourse Relations of Different Discourse Annotation Frameworks [5.439020425819001]
We introduce a fully automatic approach to learn label embeddings during a classification task.
These embeddings are then utilized to map discourse relations from different frameworks.
arXiv Detail & Related papers (2024-03-29T14:18:26Z)
- Multiple Relations Classification using Imbalanced Predictions Adaptation [0.0]
The relation classification task assigns the proper semantic relation to a pair of subject and object entities.
Current relation classification models employ additional procedures to identify multiple relations in a single sentence.
We propose a multiple relations classification model that tackles these issues through a customized output architecture and by exploiting additional input features.
arXiv Detail & Related papers (2023-09-24T18:36:22Z)
- More than Classification: A Unified Framework for Event Temporal Relation Extraction [61.44799147458621]
Event temporal relation extraction (ETRE) is usually formulated as a multi-label classification task.
We observe that all relations can be interpreted using the start and end time points of events.
We propose a unified event temporal relation extraction framework, which transforms temporal relations into logical expressions of time points.
arXiv Detail & Related papers (2023-05-28T02:09:08Z)
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm to further explore the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- None Class Ranking Loss for Document-Level Relation Extraction [22.173080823450498]
Document-level relation extraction (RE) aims at extracting relations among entities expressed across multiple sentences.
In a typical document, most entity pairs do not express any pre-defined relation and are labeled as "none" or "no relation".
arXiv Detail & Related papers (2022-05-01T14:24:37Z)
- R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic Matching [58.72111690643359]
We propose a Relation of Relation Learning Network (R2-Net) for sentence semantic matching.
We first employ BERT to encode the input sentences from a global perspective.
Then a CNN-based encoder is designed to capture keywords and phrase information from a local perspective.
To fully leverage labels for better relation information extraction, we introduce a self-supervised relation of relation classification task.
arXiv Detail & Related papers (2020-12-16T13:11:30Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an Entity-Guided Attention (EGA) mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Few-shot Learning for Multi-label Intent Detection [59.66787898744991]
State-of-the-art work estimates label-instance relevance scores and uses a threshold to select multiple associated intent labels.
Experiments on two datasets show that the proposed model significantly outperforms strong baselines in both one-shot and five-shot settings.
arXiv Detail & Related papers (2020-10-11T14:42:18Z)
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention.
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
arXiv Detail & Related papers (2020-10-06T05:16:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.