Discourse Relation Embeddings: Representing the Relations between
Discourse Segments in Social Media
- URL: http://arxiv.org/abs/2105.01306v1
- Date: Tue, 4 May 2021 05:58:27 GMT
- Authors: Youngseo Son, H Andrew Schwartz
- Abstract summary: We propose representing discourse relations as points in high dimensional continuous space.
Unlike words, discourse relations often have no surface form.
We present a novel method for automatically creating discourse relation embeddings.
- Score: 8.51950029432202
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Discourse relations are typically modeled as a discrete class that
characterizes the relation between segments of text (e.g. causal explanations,
expansions). However, such predefined discrete classes limit the universe of
potential relationships and their nuanced differences. Analogous to contextual
word embeddings, we propose representing discourse relations as points in high
dimensional continuous space. However, unlike words, discourse relations often
have no surface form (relations are between two segments, often with no word or
phrase in that gap) which presents a challenge for existing embedding
techniques. We present a novel method for automatically creating discourse
relation embeddings (DiscRE), addressing the embedding challenge through a
weakly supervised, multitask approach to learn diverse and nuanced relations
between discourse segments in social media. Results show DiscRE can: (1) obtain
the best performance on the Twitter discourse relation classification task (macro
F1=0.76), (2) improve the state of the art in social media causality prediction
(from F1=.79 to .81), (3) perform beyond modern sentence and contextual word
embeddings at traditional discourse relation classification, and (4) capture
novel nuanced relations (e.g. relations semantically at the intersection of
causal explanations and counterfactuals).
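The paper's actual DiscRE architecture is not reproduced here, but the core idea — a shared representation of a segment pair, trained through a weakly supervised auxiliary task such as predicting the (held-out) connective between the segments — can be illustrated with a minimal NumPy sketch. All names, dimensions, and the mean-pooling "encoder" below are hypothetical stand-ins, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy word vectors standing in for a pretrained encoder.
EMB_DIM, HID_DIM, N_CONNECTIVES = 16, 8, 4
vocab = {w: rng.normal(size=EMB_DIM) for w in
         "i was late because the bus broke down".split()}

def encode(segment):
    """Mean-pool word vectors as a stand-in segment encoder."""
    vecs = [vocab[w] for w in segment.split() if w in vocab]
    return np.mean(vecs, axis=0)

# Shared projection: its output plays the role of the relation embedding.
W_shared = rng.normal(size=(2 * EMB_DIM, HID_DIM))
# Weak-supervision head: predict the held-out connective between segments.
W_conn = rng.normal(size=(HID_DIM, N_CONNECTIVES))

def disc_re(seg1, seg2):
    """A continuous embedding of the relation between two segments."""
    pair = np.concatenate([encode(seg1), encode(seg2)])
    return np.tanh(pair @ W_shared)

def predict_connective(seg1, seg2):
    """Auxiliary task providing the weak training signal."""
    logits = disc_re(seg1, seg2) @ W_conn
    return int(np.argmax(logits))

rel = disc_re("i was late", "the bus broke down")
print(rel.shape)  # (8,)
```

The key point the sketch captures is that no surface form is needed: the relation vector is read off an intermediate layer computed from the two segments themselves, and the connective-prediction head is only there to supply supervision during training.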
Related papers
- Automatic Alignment of Discourse Relations of Different Discourse Annotation Frameworks [5.439020425819001]
We introduce a fully automatic approach to learn label embeddings during a classification task.
These embeddings are then utilized to map discourse relations from different frameworks.
arXiv Detail & Related papers (2024-03-29T14:18:26Z)
- Discourse Relations Classification and Cross-Framework Discourse Relation Classification Through the Lens of Cognitive Dimensions: An Empirical Investigation [5.439020425819001]
We show that discourse relations can be effectively captured by simple, cognitively inspired dimensions proposed by Sanders et al. (2018).
Our experiments on cross-framework discourse relation classification (PDTB and RST) demonstrate that knowledge of discourse relations can be transferred from one framework to another by means of these dimensions.
arXiv Detail & Related papers (2023-11-01T11:38:19Z)
- Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose a Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z)
- ChatGPT Evaluation on Sentence Level Relations: A Focus on Temporal, Causal, and Discourse Relations [52.26802326949116]
We quantitatively evaluate the performance of ChatGPT, an interactive large language model, on inter-sentential relations.
ChatGPT exhibits exceptional proficiency in detecting and reasoning about causal relations.
It is capable of identifying the majority of discourse relations signaled by explicit discourse connectives, but implicit discourse relations remain a formidable challenge.
arXiv Detail & Related papers (2023-04-28T13:14:36Z)
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm that further explores the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- Let's be explicit about that: Distant supervision for implicit discourse relation classification via connective prediction [0.0]
In implicit discourse relation classification, we want to predict the relation between adjacent sentences in the absence of any overt discourse connectives.
We sidestep the lack of data through explicitation of implicit relations to reduce the task to two sub-problems: language modeling and explicit discourse relation classification.
Our experimental results show that this method can even marginally outperform the state-of-the-art, in spite of being much simpler than alternative models of comparable performance.
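The two-step reduction described in this abstract can be illustrated with a toy sketch: score candidate connectives for the gap between the two sentences (the language-modeling step), then map the winning connective to a discourse relation (the explicit-classification step). The scores, connective inventory, and lookup table below are hypothetical; a real system would obtain the scores from a language model and the second step from a trained classifier:

```python
# Hypothetical LM plausibility scores for connectives inserted in the gap.
CONNECTIVE_SCORES = {
    "because": 0.61, "but": 0.12, "then": 0.27,
}
# Explicit-classification step, simplified here to a lookup table
# (relation labels follow PDTB-style sense names).
CONNECTIVE_TO_RELATION = {
    "because": "Contingency.Cause",
    "but": "Comparison.Contrast",
    "then": "Temporal.Succession",
}

def classify_implicit(scores):
    best = max(scores, key=scores.get)   # step 1: language modeling
    return CONNECTIVE_TO_RELATION[best]  # step 2: explicit relation classification

print(classify_implicit(CONNECTIVE_SCORES))  # Contingency.Cause
```

The appeal of this design is that both sub-problems have abundant training data (raw text for the LM, explicitly marked relations for the classifier), sidestepping the scarcity of annotated implicit relations.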
arXiv Detail & Related papers (2021-06-06T17:57:32Z)
- Logic-guided Semantic Representation Learning for Zero-Shot Relation Classification [31.887770824130957]
We propose a novel logic-guided semantic representation learning model for zero-shot relation classification.
Our approach builds connections between seen and unseen relations via implicit and explicit semantic representations with knowledge graph embeddings and logic rules.
arXiv Detail & Related papers (2020-10-30T04:30:09Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an Entity-Guided Attention (EGA) mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Understanding Spatial Relations through Multiple Modalities [78.07328342973611]
Spatial relations between objects can be either explicit -- expressed as spatial prepositions -- or implicit -- expressed by spatial verbs such as moving, walking, and shifting.
We introduce the task of inferring implicit and explicit spatial relations between two entities in an image.
We design a model that uses both textual and visual information to predict the spatial relations, making use of both positional and size information of objects and image embeddings.
arXiv Detail & Related papers (2020-07-19T01:35:08Z)
- Multiplex Word Embeddings for Selectional Preference Acquisition [70.33531759861111]
We propose a multiplex word embedding model, which can be easily extended according to various relations among words.
Our model can effectively distinguish words with respect to different relations without introducing unnecessary sparseness.
arXiv Detail & Related papers (2020-01-09T04:47:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.