Annotation-Inspired Implicit Discourse Relation Classification with
Auxiliary Discourse Connective Generation
- URL: http://arxiv.org/abs/2306.06480v1
- Date: Sat, 10 Jun 2023 16:38:46 GMT
- Title: Annotation-Inspired Implicit Discourse Relation Classification with
Auxiliary Discourse Connective Generation
- Authors: Wei Liu and Michael Strube
- Abstract summary: Implicit discourse relation classification is a challenging task due to the absence of discourse connectives.
We design an end-to-end neural model to explicitly generate discourse connectives for the task, inspired by the annotation process of PDTB.
Specifically, our model jointly learns to generate discourse connectives between arguments and predict discourse relations based on the arguments and the generated connectives.
- Score: 14.792252724959383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit discourse relation classification is a challenging task due to the
absence of discourse connectives. To overcome this issue, we design an
end-to-end neural model to explicitly generate discourse connectives for the
task, inspired by the annotation process of PDTB. Specifically, our model
jointly learns to generate discourse connectives between arguments and predict
discourse relations based on the arguments and the generated connectives. To
prevent our relation classifier from being misled by poor connectives generated
in the early stage of training, while also alleviating the discrepancy between
training and inference, we apply Scheduled Sampling to the joint learning. We
evaluate our method on three benchmarks, PDTB 2.0, PDTB 3.0, and PCC. Results
show that our joint model significantly outperforms various baselines on all three
datasets, demonstrating its superiority for the task.
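
As a concrete illustration of the joint setup, here is a minimal PyTorch sketch. It assumes a single shared encoder, a classification-style connective "generator" over a fixed connective vocabulary, and hypothetical dimensions; the paper's actual architecture (including how connectives are generated) may differ.

```python
# Minimal sketch of joint connective generation + relation classification
# with Scheduled Sampling. All names and sizes are illustrative assumptions.
import random
import torch
import torch.nn as nn

class JointConnectiveRelationModel(nn.Module):
    def __init__(self, emb_dim=300, hidden=768, n_connectives=100, n_relations=4):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden // 2, batch_first=True,
                               bidirectional=True)
        self.connective_head = nn.Linear(hidden, n_connectives)    # "generates" a connective
        self.connective_emb = nn.Embedding(n_connectives, hidden)  # re-embeds it
        self.relation_head = nn.Linear(hidden * 2, n_relations)    # arguments + connective

    def forward(self, args_emb, gold_connective=None, teacher_forcing_p=1.0):
        # Encode the two arguments (here packed into one sequence of word vectors).
        enc, _ = self.encoder(args_emb)
        arg_repr = enc.mean(dim=1)
        conn_logits = self.connective_head(arg_repr)

        # Scheduled Sampling: early in training, mostly feed the gold connective
        # to the relation classifier; later, mostly feed the model's own
        # prediction, shrinking the train/inference discrepancy.
        if gold_connective is not None and random.random() < teacher_forcing_p:
            conn_ids = gold_connective
        else:
            conn_ids = conn_logits.argmax(dim=-1)
        rel_input = torch.cat([arg_repr, self.connective_emb(conn_ids)], dim=-1)
        return conn_logits, self.relation_head(rel_input)
```

During training, one would decay teacher_forcing_p from 1.0 toward 0.0 (the Scheduled Sampling schedule) and optimize the sum of the connective and relation cross-entropy losses.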
Related papers
- Prompt-based Logical Semantics Enhancement for Implicit Discourse
Relation Recognition [4.7938839332508945]
We propose a Prompt-based Logical Semantics Enhancement (PLSE) method for Implicit Discourse Relation Recognition (IDRR)
Our method seamlessly injects knowledge relevant to discourse relations into pre-trained language models through prompt-based connective prediction.
Experimental results on PDTB 2.0 and CoNLL16 datasets demonstrate that our method achieves outstanding and consistent performance against the current state-of-the-art models.
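
The connective-prediction objective can be pictured with a vanilla masked LM: mask a connective slot between the two arguments and let the model fill it. A rough sketch follows; the prompt wording and model choice are assumptions, not PLSE's actual templates or training objective.

```python
# Sketch: masked-LM connective prediction between two discourse arguments.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base")

arg1 = "The economy slowed sharply."
arg2 = "Retailers cut prices across the board."
# Insert a masked connective slot between the two arguments.
text = f"{arg1} {tok.mask_token} {arg2}"
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    logits = mlm(**inputs).logits
mask_pos = (inputs["input_ids"] == tok.mask_token_id).nonzero()[0, 1]
top = logits[0, mask_pos].topk(5).indices
print(tok.convert_ids_to_tokens(top.tolist()))  # connective-like fillers, e.g. "So", "Then"
```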
arXiv Detail & Related papers (2023-11-01T08:38:08Z)
- Pre-training Multi-party Dialogue Models with Latent Discourse Inference [85.9683181507206]
We pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying.
To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model.
arXiv Detail & Related papers (2023-05-24T14:06:27Z)
- Pre-trained Sentence Embeddings for Implicit Discourse Relation Classification [26.973476248983477]
Implicit discourse relations bind smaller linguistic units into coherent texts.
We explore the utility of pre-trained sentence embeddings as base representations in a neural network for implicit discourse relation sense classification.
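
A minimal sketch of this setup, assuming a SentenceTransformer encoder and an illustrative pair-combination scheme (the paper's encoder and classifier head may differ):

```python
# Frozen sentence embeddings as base representations for a relation classifier.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings
arg1, arg2 = "It was raining hard.", "The match was called off."
e1, e2 = (torch.tensor(encoder.encode([a]))[0] for a in (arg1, arg2))

# Combine the two argument embeddings; concatenation plus difference and
# product is a common scheme for sentence-pair tasks.
features = torch.cat([e1, e2, e1 - e2, e1 * e2])           # 4 * 384 = 1536
classifier = nn.Sequential(nn.Linear(1536, 256), nn.ReLU(), nn.Linear(256, 4))
logits = classifier(features)                               # e.g. 4 top-level senses
```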
arXiv Detail & Related papers (2022-10-20T04:17:03Z)
- Prompt-based Connective Prediction Method for Fine-grained Implicit Discourse Relation Recognition [34.02125358302028]
We propose a novel Prompt-based Connective Prediction (PCP) method for IDRR.
Our method instructs large-scale pre-trained models to use knowledge relevant to discourse relations.
Experimental results show that our method surpasses the current state-of-the-art model.
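
In prompt-based connective prediction, the step from predicted connectives back to relation senses is typically handled by a verbalizer. A toy sketch follows; the connective-to-sense table is illustrative, not PCP's actual verbalizer.

```python
# Aggregate masked-LM scores of candidate connectives into sense scores.
CONNECTIVE_TO_SENSE = {
    "because": "Contingency.Cause",
    "so": "Contingency.Cause",
    "but": "Comparison.Contrast",
    "however": "Comparison.Contrast",
    "then": "Temporal.Asynchronous",
    "also": "Expansion.Conjunction",
}

def relation_from_connective_scores(token_scores: dict) -> str:
    """Sum each candidate connective's score into its mapped sense."""
    sense_scores = {}
    for conn, score in token_scores.items():
        sense = CONNECTIVE_TO_SENSE.get(conn)
        if sense is not None:
            sense_scores[sense] = sense_scores.get(sense, 0.0) + score
    return max(sense_scores, key=sense_scores.get)

print(relation_from_connective_scores({"but": 0.4, "because": 0.3, "so": 0.2}))
# -> Contingency.Cause (0.3 + 0.2 outweighs 0.4)
```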
arXiv Detail & Related papers (2022-10-13T13:47:13Z)
- Distant finetuning with discourse relations for stance classification [55.131676584455306]
We propose a new method to extract data with silver labels from raw text to finetune a model for stance classification.
We also propose a 3-stage training framework in which the noise level of the finetuning data decreases from stage to stage.
Our approach ranks 1st among 26 competing teams in the stance classification track of the NLPCC 2021 shared task "Argumentative Text Understanding for AI Debater."
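
The staged schedule can be pictured as a simple noisiest-to-cleanest loop; the stage names, data descriptions, and learning rates below are hypothetical, not the paper's values.

```python
# Hypothetical sketch of a 3-stage noisy-to-clean finetuning curriculum.
STAGES = [
    ("silver",   "connective-heuristic labels from raw text", 3e-5),  # noisiest
    ("filtered", "silver data after confidence filtering",    2e-5),
    ("gold",     "human-annotated stance data",               1e-5),  # cleanest
]

def run_curriculum(train_stage):
    """Finetune the same model through successively cleaner stages."""
    for name, data_desc, lr in STAGES:
        print(f"stage={name}: finetuning on {data_desc} (lr={lr})")
        train_stage(name, lr)  # caller supplies the actual training step

run_curriculum(lambda name, lr: None)  # placeholder trainer
```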
arXiv Detail & Related papers (2022-04-27T04:24:35Z)
- Self-supervised Contrastive Cross-Modality Representation Learning for Spoken Question Answering [29.545937716796082]
Spoken question answering (SQA) requires fine-grained understanding of both spoken documents and questions.
We propose novel training schemes for spoken question answering with a self-supervised training stage and a contrastive representation learning stage.
Our model achieves state-of-the-art results on three SQA benchmarks.
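
For the contrastive stage, an in-batch InfoNCE-style loss over paired representations is the standard pattern; the sketch below is generic, not the paper's exact objective.

```python
# Generic InfoNCE loss aligning paired speech/text representations.
import torch
import torch.nn.functional as F

def info_nce(speech_repr, text_repr, temperature=0.07):
    """In-batch contrastive loss: the i-th speech item pairs with the i-th text."""
    s = F.normalize(speech_repr, dim=-1)
    t = F.normalize(text_repr, dim=-1)
    logits = s @ t.T / temperature      # (B, B) similarity matrix
    targets = torch.arange(s.size(0))   # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))
```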
arXiv Detail & Related papers (2021-09-08T01:13:14Z)
- Paired Examples as Indirect Supervision in Latent Decision Models [109.76417071249945]
We introduce a way to leverage paired examples that provide stronger cues for learning latent decisions.
We apply our method to improve compositional question answering using neural module networks on the DROP dataset.
arXiv Detail & Related papers (2021-04-05T03:58:30Z)
- Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as the classifier probe on top of a fixed pre-trained language model with annotated labels in a supervised way.
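
The probing recipe itself is simple: freeze the language model and train only a small classifier on top. A sketch, with the probe task and label count as placeholders:

```python
# Linear probe on top of a frozen pre-trained LM.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")
for p in lm.parameters():          # freeze the LM; only the probe is trained
    p.requires_grad = False

probe = nn.Linear(lm.config.hidden_size, 10)  # e.g. 10 dialogue-act labels

inputs = tok("book a table for two tonight", return_tensors="pt")
with torch.no_grad():
    cls = lm(**inputs).last_hidden_state[:, 0]  # [CLS] representation
logits = probe(cls)  # only probe parameters receive gradients during training
```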
arXiv Detail & Related papers (2020-10-26T21:34:39Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an Entity-Guided Attention (EGA) mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Leveraging Semantic Parsing for Relation Linking over Knowledge Bases [80.99588366232075]
We present SLING, a relation linking framework which leverages semantic parsing using AMR and distant supervision.
SLING integrates multiple relation linking approaches that capture complementary signals such as linguistic cues, rich semantic representation, and information from the knowledge base.
Experiments on relation linking using three KBQA datasets (QALD-7, QALD-9, and LC-QuAD 1.0) demonstrate that the proposed approach achieves state-of-the-art performance on all benchmarks.
arXiv Detail & Related papers (2020-09-16T14:56:11Z)