Prompt-based Connective Prediction Method for Fine-grained Implicit
Discourse Relation Recognition
- URL: http://arxiv.org/abs/2210.07032v2
- Date: Sun, 16 Oct 2022 05:33:52 GMT
- Title: Prompt-based Connective Prediction Method for Fine-grained Implicit
Discourse Relation Recognition
- Authors: Hao Zhou, Man Lan, Yuanbin Wu, Yuefeng Chen and Meirong Ma
- Abstract summary: We propose a novel Prompt-based Connective Prediction (PCP) method for IDRR.
Our method instructs large-scale pre-trained models to use knowledge relevant to discourse relations.
Experimental results show that our method surpasses the current state-of-the-art model.
- Score: 34.02125358302028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the absence of connectives, implicit discourse relation
recognition (IDRR) remains a challenging and crucial task in discourse
analysis. Most current work adopts multi-task learning to aid IDRR through
explicit discourse relation recognition (EDRR) or utilizes dependencies
between discourse relation labels to constrain model predictions. However,
these methods still perform poorly on fine-grained IDRR and completely
misclassify most of the few-shot discourse relation classes. To address these
problems, we propose a novel Prompt-based Connective Prediction (PCP) method
for IDRR. Our method instructs large-scale pre-trained models to use knowledge
relevant to discourse relations and exploits the strong correlation between
connectives and discourse relations to help the model recognize implicit
discourse relations. Experimental results show that our method surpasses the
current state-of-the-art model and achieves significant improvements on the
fine-grained few-shot discourse relation classes. Moreover, our approach can
be transferred to EDRR and obtains acceptable results. Our code is released
at https://github.com/zh-i9/PCP-for-IDRR.
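The abstract describes PCP only at a high level. As a rough illustration, here
is a minimal sketch of prompt-based connective prediction with a masked
language model via HuggingFace transformers; the prompt template, candidate
connectives, and connective-to-relation mapping are illustrative assumptions,
not the paper's actual configuration.

```python
# Minimal sketch of prompt-based connective prediction (PCP-style).
# NOTE: the template, connective set, and connective -> relation mapping
# below are illustrative assumptions, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical verbalizer: candidate connectives mapped to PDTB-style senses.
CONNECTIVE_TO_RELATION = {
    "because": "Contingency.Cause",
    "however": "Comparison.Contrast",
    "then": "Temporal.Asynchronous",
    "specifically": "Expansion.Restatement",
}

def predict_relation(arg1: str, arg2: str) -> str:
    # Place a mask between the two arguments and score only the
    # candidate connectives at the masked position.
    prompt = f"{arg1} {tokenizer.mask_token} {arg2}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos.item()]
    best = max(
        CONNECTIVE_TO_RELATION,
        key=lambda c: logits[tokenizer.convert_tokens_to_ids(c)].item(),
    )
    return CONNECTIVE_TO_RELATION[best]

print(predict_relation("The company cut its forecast.",
                       "its shares fell sharply."))
```

In the paper itself the pre-trained model is fine-tuned on such prompts; the
sketch above only shows the zero-shot scoring idea behind mapping a predicted
connective to a discourse relation.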
Related papers
- Prompt-based Logical Semantics Enhancement for Implicit Discourse
Relation Recognition [4.7938839332508945]
We propose a Prompt-based Logical Semantics Enhancement (PLSE) method for Implicit Discourse Relation Recognition (IDRR).
Our method seamlessly injects knowledge relevant to discourse relation into pre-trained language models through prompt-based connective prediction.
Experimental results on PDTB 2.0 and CoNLL16 datasets demonstrate that our method achieves outstanding and consistent performance compared with current state-of-the-art models.
arXiv Detail & Related papers (2023-11-01T08:38:08Z)
- Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose a Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z)
- Annotation-Inspired Implicit Discourse Relation Classification with Auxiliary Discourse Connective Generation [14.792252724959383]
Implicit discourse relation classification is a challenging task due to the absence of discourse connectives.
We design an end-to-end neural model to explicitly generate discourse connectives for the task, inspired by the annotation process of PDTB.
Specifically, our model jointly learns to generate discourse connectives between arguments and predict discourse relations based on the arguments and the generated connectives.
arXiv Detail & Related papers (2023-06-10T16:38:46Z)
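The joint objective described in the entry above (connective generation plus
relation classification) can be written as a weighted sum of two losses. Below
is a minimal, self-contained PyTorch sketch; the shared encoder, head shapes,
and the weighting alpha are hypothetical stand-ins, not the paper's actual
architecture.

```python
# Schematic sketch of jointly learning connective generation and relation
# classification. All shapes and the loss weight `alpha` are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointConnectiveModel(nn.Module):
    def __init__(self, in_dim, hidden, n_connectives, n_relations):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.gen_head = nn.Linear(hidden, n_connectives)  # connective generator
        self.cls_head = nn.Linear(hidden, n_relations)    # relation classifier

    def forward(self, x):
        # x: pooled features of an (arg1, arg2) pair, shape (B, in_dim)
        h = self.encoder(x)
        return self.gen_head(h), self.cls_head(h)

def joint_loss(gen_logits, cls_logits, conn_ids, rel_ids, alpha=0.5):
    # Weighted multi-task objective over the two subtasks.
    return (alpha * F.cross_entropy(gen_logits, conn_ids)
            + (1 - alpha) * F.cross_entropy(cls_logits, rel_ids))

model = JointConnectiveModel(in_dim=768, hidden=256,
                             n_connectives=100, n_relations=11)
x = torch.randn(4, 768)  # dummy argument-pair features
gen_logits, cls_logits = model(x)
loss = joint_loss(gen_logits, cls_logits,
                  torch.randint(0, 100, (4,)), torch.randint(0, 11, (4,)))
loss.backward()
```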
- Pre-training Multi-party Dialogue Models with Latent Discourse Inference [85.9683181507206]
We pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying.
To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model.
arXiv Detail & Related papers (2023-05-24T14:06:27Z)
- CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument Extraction [22.746071199667146]
Implicit event argument extraction (EAE) aims to identify arguments that may be scattered across the document.
We propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE by four learning stages.
In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models.
arXiv Detail & Related papers (2022-05-01T16:03:54Z)
- Conversational speech recognition leveraging effective fusion methods for cross-utterance language modeling [12.153618111267514]
We put forward several conversation-history fusion methods for language modeling in automatic speech recognition.
A novel audio-fusion mechanism is introduced, which fuses and utilizes the acoustic embeddings of the current utterance and the semantic content of its corresponding conversation history.
To flesh out our ideas, we frame the ASR N-best hypothesis rescoring task as a prediction problem, leveraging BERT, an iconic pre-trained LM.
arXiv Detail & Related papers (2021-11-05T09:07:23Z)
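For context on the rescoring framing above: one common way to score N-best
hypotheses with a masked LM is pseudo-log-likelihood scoring, sketched below.
This shows only that baseline idea; the paper's conversation-history and
audio-fusion mechanisms are omitted.

```python
# Minimal sketch of masked-LM (BERT) rescoring for an ASR N-best list via
# pseudo-log-likelihood; the paper's history/audio fusion is omitted here.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def pseudo_log_likelihood(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):      # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id     # mask one token at a time
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Pick the most fluent hypothesis from a (hypothetical) N-best list.
nbest = ["i scream for ice cream", "eye scream for ice cream"]
print(max(nbest, key=pseudo_log_likelihood))
```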
- One-shot Learning for Temporal Knowledge Graphs [49.41854171118697]
We propose a one-shot learning framework for link prediction in temporal knowledge graphs.
Our proposed method employs a self-attention mechanism to effectively encode temporal interactions between entities.
Our experiments show that the proposed algorithm outperforms state-of-the-art baselines on two well-studied benchmarks.
arXiv Detail & Related papers (2020-10-23T03:24:44Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an Entity-Guided Attention (EGA) mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Joint Contextual Modeling for ASR Correction and Language Understanding [60.230013453699975]
We propose multi-task neural approaches to perform contextual language correction on ASR outputs jointly with language understanding (LU).
We show that the error rates of off-the-shelf ASR and downstream LU systems can be reduced significantly, by 14% relative, with joint models trained on small amounts of in-domain data.
arXiv Detail & Related papers (2020-01-28T22:09:25Z)