Let's be explicit about that: Distant supervision for implicit discourse relation classification via connective prediction
- URL: http://arxiv.org/abs/2106.03192v1
- Date: Sun, 6 Jun 2021 17:57:32 GMT
- Title: Let's be explicit about that: Distant supervision for implicit discourse relation classification via connective prediction
- Authors: Murathan Kurfalı and Robert Östling
- Abstract summary: In implicit discourse relation classification, we want to predict the relation between adjacent sentences in the absence of any overt discourse connectives.
We sidestep the lack of data through explicitation of implicit relations to reduce the task to two sub-problems: language modeling and explicit discourse relation classification.
Our experimental results show that this method can even marginally outperform the state-of-the-art, in spite of being much simpler than alternative models of comparable performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In implicit discourse relation classification, we want to predict the
relation between adjacent sentences in the absence of any overt discourse
connectives. This is challenging even for humans, leading to a shortage of
annotated data, a fact that makes the task even more difficult for supervised
machine learning approaches. In the current study, we perform implicit
discourse relation classification without relying on any labeled implicit
relation. We sidestep the lack of data through explicitation of implicit
relations to reduce the task to two sub-problems: language modeling and
explicit discourse relation classification, a much easier problem. Our
experimental results show that this method can even marginally outperform the
state-of-the-art, in spite of being much simpler than alternative models of
comparable performance. Moreover, we show that the achieved performance is
robust across domains as suggested by the zero-shot experiments on a completely
different domain. This indicates that recent advances in language modeling have
made language models sufficiently good at capturing inter-sentence relations
without the help of explicit discourse markers.
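To make the explicitation idea concrete, here is a minimal sketch of the two-step pipeline described in the abstract, assuming a HuggingFace fill-mask pipeline and a toy connective-to-sense lookup table; the paper trains a proper explicit relation classifier rather than using a hand-written mapping, so everything below is illustrative.

```python
# Minimal sketch: (1) use a masked LM to predict a plausible connective
# between two adjacent sentences, (2) map the predicted connective to a
# discourse relation sense. The connective-to-sense table is a toy
# illustration, not the paper's actual classifier.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Toy mapping from connectives to coarse, PDTB-style relation senses.
CONNECTIVE_TO_SENSE = {
    "because": "Contingency",
    "so": "Contingency",
    "but": "Comparison",
    "however": "Comparison",
    "and": "Expansion",
    "then": "Temporal",
}

def classify_implicit_relation(arg1: str, arg2: str) -> str:
    """Predict a connective for the sentence pair, then look up its sense."""
    prompt = f"{arg1} [MASK] {arg2}"
    for candidate in fill_mask(prompt, top_k=20):
        connective = candidate["token_str"].strip().lower()
        if connective in CONNECTIVE_TO_SENSE:
            return CONNECTIVE_TO_SENSE[connective]
    return "Expansion"  # fall back to the majority class

print(classify_implicit_relation(
    "The company missed its earnings target.",
    "Its stock fell sharply the next morning.",
))
```

The key design point is that no labeled implicit relation is ever consumed: the LM supplies the connective, and only explicit-relation knowledge (here, the lookup table) turns it into a sense label.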
Related papers
- DenoSent: A Denoising Objective for Self-Supervised Sentence Representation Learning
We propose a novel denoising objective that takes a complementary, intra-sentence perspective.
By introducing both discrete and continuous noise, we generate noisy sentences and then train our model to restore them to their original form.
Our empirical evaluations demonstrate that this approach delivers competitive results on both semantic textual similarity (STS) and a wide range of transfer tasks.
arXiv Detail & Related papers (2024-01-24T17:48:45Z)
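A toy sketch of the DenoSent-style noising step, assuming simple token-level discrete noise (deletion and adjacent swaps) and Gaussian continuous noise on an embedding; the paper's actual corruption and reconstruction objective are more elaborate.

```python
# Toy illustration of denoising-objective data preparation: discrete noise
# corrupts the token sequence, continuous noise perturbs an embedding. A
# model (not shown) would be trained to restore the original sentence.
import random
import numpy as np

def discrete_noise(tokens, p_drop=0.1, p_swap=0.1):
    """Randomly delete tokens and swap adjacent tokens."""
    out = [t for t in tokens if random.random() > p_drop]
    for i in range(len(out) - 1):
        if random.random() < p_swap:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

def continuous_noise(embedding, sigma=0.05):
    """Add Gaussian noise to a sentence embedding."""
    return embedding + np.random.normal(0.0, sigma, size=embedding.shape)

tokens = "the quick brown fox jumps over the lazy dog".split()
print(discrete_noise(tokens))
print(continuous_noise(np.zeros(8))[:4])
```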
- Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).
Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings.
Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way.
arXiv Detail & Related papers (2023-12-18T09:58:19Z)
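For reference, a sketch of the indirect, entity-embedding strategy this entry describes: wrap the head and tail entities in marker tokens, take the encoder states at the marker positions as entity embeddings, and classify the relation from their concatenation. The model name, marker tokens, and 5-way label set are illustrative assumptions, and the classifier is untrained here, so the output is arbitrary.

```python
# Entity-marker encoding for relation classification (sketch).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(["[E1]", "[/E1]", "[E2]", "[/E2]"])
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.resize_token_embeddings(len(tokenizer))

num_relations = 5  # hypothetical label set size
classifier = torch.nn.Linear(2 * encoder.config.hidden_size, num_relations)

text = "[E1] Marie Curie [/E1] was born in [E2] Warsaw [/E2] ."
inputs = tokenizer(text, return_tensors="pt")
hidden = encoder(**inputs).last_hidden_state[0]

# Entity embeddings = encoder states at the opening marker positions.
e1_pos = (inputs["input_ids"][0] == tokenizer.convert_tokens_to_ids("[E1]")).nonzero()[0, 0]
e2_pos = (inputs["input_ids"][0] == tokenizer.convert_tokens_to_ids("[E2]")).nonzero()[0, 0]
logits = classifier(torch.cat([hidden[e1_pos], hidden[e2_pos]]))
print(logits.argmax().item())  # untrained, so the prediction is arbitrary
```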
- Relational Sentence Embedding for Flexible Semantic Matching
We present Relational Sentence Embedding (RSE), a new paradigm for further exploring the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- Pre-trained Sentence Embeddings for Implicit Discourse Relation Classification
Implicit discourse relations bind smaller linguistic units into coherent texts.
We explore the utility of pre-trained sentence embeddings as base representations in a neural network for implicit discourse relation sense classification.
arXiv Detail & Related papers (2022-10-20T04:17:03Z)
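A minimal sketch of the idea in the entry above: treat frozen pre-trained sentence embeddings as base representations, encode the two arguments, combine them, and train a small MLP on top. The encoder name, the pair-combination scheme, and the 4-way PDTB-level-1 label set are assumptions for illustration; the MLP is untrained here.

```python
# Sentence-embedding features for implicit discourse relation classification.
import torch
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
SENSES = ["Temporal", "Contingency", "Comparison", "Expansion"]

mlp = torch.nn.Sequential(
    torch.nn.Linear(4 * 384, 256),  # 384 = MiniLM embedding size
    torch.nn.ReLU(),
    torch.nn.Linear(256, len(SENSES)),
)

def featurize(arg1: str, arg2: str) -> torch.Tensor:
    """Concatenate u, v, |u-v|, u*v -- a common pair-combination scheme."""
    u, v = (torch.tensor(e) for e in encoder.encode([arg1, arg2]))
    return torch.cat([u, v, (u - v).abs(), u * v])

logits = mlp(featurize("It was raining hard.", "The match was cancelled."))
print(SENSES[logits.argmax().item()])  # untrained, so the label is arbitrary
```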
- HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which derives hierarchical signals from the relational feature space using cross-hierarchy attention.
Experimental results on two public datasets demonstrate the effectiveness and robustness of HiURE on unsupervised relation extraction compared with state-of-the-art models.
arXiv Detail & Related papers (2022-05-04T17:56:48Z)
- Discourse Relation Embeddings: Representing the Relations between Discourse Segments in Social Media
We propose representing discourse relations as points in a high-dimensional continuous space.
Unlike words, discourse relations often have no surface form.
We present a novel method for automatically creating discourse relation embeddings.
arXiv Detail & Related papers (2021-05-04T05:58:27Z)
- Prototypical Representation Learning for Relation Extraction
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
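A toy illustration of prototype-based relation classification, the geometry behind the entry above: a prototype is the mean embedding of a relation's labeled instances, and a new instance is assigned to the nearest prototype. The paper learns prototypes jointly with the encoder; this static version and its relation labels are purely illustrative.

```python
# Nearest-prototype relation classification (sketch).
import numpy as np

def build_prototypes(embeddings, labels):
    """Mean embedding per relation label."""
    return {r: np.mean([e for e, l in zip(embeddings, labels) if l == r], axis=0)
            for r in set(labels)}

def classify(embedding, prototypes):
    """Assign to the nearest prototype by Euclidean distance."""
    return min(prototypes, key=lambda r: np.linalg.norm(embedding - prototypes[r]))

rng = np.random.default_rng(0)
embs = [rng.normal(loc, 0.1, size=4) for loc in (0.0, 0.0, 1.0, 1.0)]
protos = build_prototypes(embs, ["born_in", "born_in", "works_for", "works_for"])
print(classify(rng.normal(1.0, 0.1, size=4), protos))  # -> "works_for"
```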
- Logic-guided Semantic Representation Learning for Zero-Shot Relation Classification
We propose a novel logic-guided semantic representation learning model for zero-shot relation classification.
Our approach builds connections between seen and unseen relations via implicit and explicit semantic representations with knowledge graph embeddings and logic rules.
arXiv Detail & Related papers (2020-10-30T04:30:09Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an Entity-Guided Attention (EGA) mechanism is introduced to guide the attention to filter out information that causes confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Pairwise Supervision Can Provably Elicit a Decision Boundary
Similarity learning is the problem of eliciting useful representations by predicting the relationship between a pair of patterns.
We show that similarity learning is capable of solving binary classification by directly eliciting a decision boundary.
arXiv Detail & Related papers (2020-06-11T05:35:16Z)
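A toy illustration of the claim in the entry above: a model trained only on pairwise same-class/different-class supervision induces a binary decision boundary. Here the pairwise predictor is a simple logistic model on |x - x'|, and a single labeled anchor converts pairwise judgments into class labels; the paper gives formal guarantees, while this sketch only conveys the mechanism.

```python
# From pairwise supervision to a binary classifier (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Pairwise training data: feature |xi - xj|, target "same class?".
idx = rng.integers(0, 100, size=(500, 2))
pair_feats = np.abs(X[idx[:, 0]] - X[idx[:, 1]])
pair_targets = (y[idx[:, 0]] == y[idx[:, 1]]).astype(int)

pairwise = LogisticRegression().fit(pair_feats, pair_targets)

def classify(x, anchor, anchor_label):
    """Label x by whether the pairwise model says it matches the anchor."""
    same = pairwise.predict(np.abs(x - anchor).reshape(1, -1))[0]
    return anchor_label if same else 1 - anchor_label

anchor, anchor_label = X[0], y[0]
test = rng.normal(2, 1, 2)
print(classify(test, anchor, anchor_label))  # likely prints 1
```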