GraphNLI: A Graph-based Natural Language Inference Model for Polarity
Prediction in Online Debates
- URL: http://arxiv.org/abs/2202.08175v1
- Date: Wed, 16 Feb 2022 16:26:21 GMT
- Title: GraphNLI: A Graph-based Natural Language Inference Model for Polarity
Prediction in Online Debates
- Authors: Vibhor Agarwal, Sagar Joglekar, Anthony P. Young, Nishanth Sastry
- Abstract summary: We propose GraphNLI, a novel graph-based deep learning architecture that uses graph walk techniques to capture the wider context of a discussion thread.
We then use these embeddings to predict the polarity relation between a reply and the post it is replying to.
Our model outperforms relevant baselines, including S-BERT, with an overall accuracy of 83%.
- Score: 3.8345539498627437
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Online forums that allow participatory engagement between users have been
transformative for public discussion of important issues. However, debates on
such forums can sometimes escalate into full-blown exchanges of hate or
misinformation. An important tool in understanding and tackling such problems
is to be able to infer the argumentative relation of whether a reply is
supporting or attacking the post it is replying to. This so-called polarity
prediction task is difficult because replies may be based on external context
beyond a post and the reply whose polarity is being predicted. We propose
GraphNLI, a novel graph-based deep learning architecture that uses graph walk
techniques to capture the wider context of a discussion thread in a principled
fashion. Specifically, we propose methods to perform root-seeking graph walks
that start from a post and capture its surrounding context to generate
additional embeddings for the post. We then use these embeddings to predict the
polarity relation between a reply and the post it is replying to. We evaluate
the performance of our models on a curated debate dataset from Kialo, an online
debating platform. Our model outperforms relevant baselines, including S-BERT,
with an overall accuracy of 83%.
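The root-seeking walk can be pictured with a short sketch: starting at a reply, walk parent-by-parent toward the root, embed the posts encountered along the way, and combine those embeddings into a context vector for the polarity classifier. The snippet below is a minimal illustration only; the discount factor `gamma`, the helper names, and the choice of a generic Sentence-BERT encoder are assumptions, not the authors' exact implementation.

```python
# Illustrative sketch of a root-seeking graph walk over a discussion tree.
# The toy thread, discount factor `gamma`, and helper names are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

def root_seeking_walk(parent_of, start, max_hops=4):
    """Walk from `start` toward the root, returning the visited node ids."""
    path, node = [start], start
    while node in parent_of and len(path) <= max_hops:
        node = parent_of[node]
        path.append(node)
    return path

def context_embedding(texts, parent_of, start, gamma=0.75):
    """Weighted average of embeddings along the walk; nearer posts weigh more."""
    path = root_seeking_walk(parent_of, start)
    embs = encoder.encode([texts[n] for n in path])
    weights = np.array([gamma ** i for i in range(len(path))])
    weights /= weights.sum()
    return (weights[:, None] * embs).sum(axis=0)

# Toy thread: reply r2 -> post r1 -> root r0
texts = {"r0": "Social media should be regulated.",
         "r1": "Regulation would stifle free speech.",
         "r2": "Free speech already has legal limits."}
parent_of = {"r2": "r1", "r1": "r0"}

reply_vec = context_embedding(texts, parent_of, "r2")
parent_vec = context_embedding(texts, parent_of, "r1")
features = np.concatenate([reply_vec, parent_vec])  # fed to a polarity classifier
```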
Related papers
- PageRank Bandits for Link Prediction [72.61386754332776]
Link prediction is a critical problem in graph learning with broad applications such as recommender systems and knowledge graph completion.
This paper reformulates link prediction as a sequential decision-making process in which links are predicted one interaction at a time.
We propose a novel fusion algorithm, PRB (PageRank Bandits), which is the first to combine contextual bandits with PageRank for collaborative exploitation and exploration.
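The abstract only states that contextual bandits are fused with PageRank for joint exploitation and exploration; one assumed way such a fusion could score candidate links is sketched below (the UCB-style bonus, the mixing weight `alpha`, and the toy feedback signal are illustrative, not the paper's PRB algorithm).

```python
# Hypothetical fusion of a PageRank prior with a UCB-style bandit bonus for
# scoring candidate links; the weighting scheme is an assumption.
import math
import networkx as nx

G = nx.karate_club_graph()
pr = nx.pagerank(G)                      # structural prior over nodes
pulls = {n: 1 for n in G.nodes}          # how often each candidate was tried
rewards = {n: 0.0 for n in G.nodes}      # observed link-formation feedback

def score(node, t, alpha=0.5):
    exploit = rewards[node] / pulls[node]
    explore = math.sqrt(2 * math.log(t + 1) / pulls[node])
    return alpha * pr[node] + (1 - alpha) * (exploit + explore)

# Sequential link prediction: at each step pick the best-scoring candidate.
for t in range(1, 6):
    best = max(G.nodes, key=lambda n: score(n, t))
    reward = 1.0 if G.degree[best] > 10 else 0.0   # stand-in feedback signal
    pulls[best] += 1
    rewards[best] += reward
```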
arXiv Detail & Related papers (2024-11-03T02:39:28Z)
- Integrating Large Language Models with Graph-based Reasoning for Conversational Question Answering [58.17090503446995]
We focus on a conversational question answering task which combines the challenges of understanding questions in context and reasoning over evidence gathered from heterogeneous sources like text, knowledge graphs, tables, and infoboxes.
Our method utilizes a graph structured representation to aggregate information about a question and its context.
arXiv Detail & Related papers (2024-06-14T13:28:03Z)
- LineConGraphs: Line Conversation Graphs for Effective Emotion Recognition using Graph Neural Networks [10.446376560905863]
We propose novel line conversation graph convolutional network (LineConGCN) and graph attention (LineConGAT) models for Emotion Recognition in Conversations (ERC) analysis.
These models are speaker-independent and built using a graph construction strategy for conversations -- line conversation graphs (LineConGraphs).
We evaluate the performance of our proposed models on two benchmark datasets, IEMOCAP and MELD, and show that our LineConGAT model outperforms state-of-the-art methods with F1-scores of 64.58% and 76.50%, respectively.
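A line conversation graph can be pictured as a chain over the utterances of a conversation, independent of who the speakers are. The toy construction below is an assumed simplification; the window size and node attributes are illustrative rather than the paper's exact recipe.

```python
# Toy construction of a speaker-independent line conversation graph:
# each utterance is a node and consecutive utterances are linked in order.
import networkx as nx

utterances = [
    "I can't believe this happened.",
    "Calm down, it's not that bad.",
    "Easy for you to say!",
    "Okay, let's talk it through.",
]

def build_line_conversation_graph(utts, window=1):
    g = nx.Graph()
    for i, text in enumerate(utts):
        g.add_node(i, text=text)
    for i in range(len(utts)):
        for j in range(i + 1, min(i + 1 + window, len(utts))):
            g.add_edge(i, j)          # link utterances in conversation order
    return g

g = build_line_conversation_graph(utterances)
print(sorted(g.edges))                # [(0, 1), (1, 2), (2, 3)]
```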
arXiv Detail & Related papers (2023-12-04T19:36:58Z)
- GASCOM: Graph-based Attentive Semantic Context Modeling for Online Conversation Understanding [4.9711707739781215]
We propose a Graph-based Attentive Semantic COntext Modeling (GASCOM) framework for online conversation understanding.
Specifically, we design two novel algorithms that utilise both the graph structure of the online conversation as well as the semantic information from individual posts.
Our proposed framework significantly outperforms state-of-the-art methods on both tasks.
arXiv Detail & Related papers (2023-10-21T14:45:26Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
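As a rough, assumed illustration of graph-based propagation over a belief-centered graph, the sketch below mixes each user's belief vector with its neighbours' at every step; the mixing rule and the random belief vectors are hypothetical, not the SocialSense implementation.

```python
# Schematic belief propagation over a social graph: each user starts with a
# belief vector and iteratively mixes in its neighbours' beliefs.
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(20, 0.2, seed=0)
beliefs = {n: np.random.rand(3) for n in G.nodes}   # e.g. a stance distribution

def propagate(G, beliefs, steps=3, mix=0.5):
    for _ in range(steps):
        updated = {}
        for n in G.nodes:
            nbrs = list(G.neighbors(n))
            if not nbrs:
                updated[n] = beliefs[n]
                continue
            nbr_mean = np.mean([beliefs[m] for m in nbrs], axis=0)
            updated[n] = (1 - mix) * beliefs[n] + mix * nbr_mean
        beliefs = updated
    return beliefs

beliefs = propagate(G, beliefs)   # smoothed beliefs, usable for forecasting
```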
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- Predicting Hateful Discussions on Reddit using Graph Transformer Networks and Communal Context [9.4337569682766]
We propose a system to predict harmful discussions on social media platforms.
Our solution uses contextual deep language models and integrates state-of-the-art Graph Transformer Networks.
We evaluate our approach on 333,487 Reddit discussions from various communities.
arXiv Detail & Related papers (2023-01-10T23:47:13Z)
- A Graph-Based Context-Aware Model to Understand Online Conversations [3.8345539498627437]
In online conversations, comments and replies may be based on external context beyond the immediately relevant information.
We propose GraphNLI, a novel graph-based deep learning architecture that uses graph walks to incorporate the wider context of a conversation.
We evaluate GraphNLI on two such tasks - polarity prediction and misogynistic hate speech detection.
arXiv Detail & Related papers (2022-11-16T20:51:45Z)
- Deconfounded Training for Graph Neural Networks [98.06386851685645]
We present a new paradigm of deconfounded training (DTP) that better mitigates the confounding effect and latches onto the critical information.
Specifically, we adopt the attention modules to disentangle the critical subgraph and trivial subgraph.
It allows GNNs to capture a more reliable subgraph whose relation with the label is robust across different distributions.
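One way to picture the attention-based disentanglement into a critical and a trivial subgraph is the toy module below; the sigmoid gating and layer sizes are assumptions rather than the paper's DTP implementation.

```python
# Sketch of attention-based disentanglement into "critical" and "trivial"
# subgraph representations; the gating scheme is an assumption.
import torch
import torch.nn as nn

class SubgraphDisentangler(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.att = nn.Linear(dim, 1)   # per-node attention score

    def forward(self, node_feats):
        # node_feats: (num_nodes, dim)
        a = torch.sigmoid(self.att(node_feats))       # soft node mask in [0, 1]
        critical = (a * node_feats).mean(dim=0)       # attended (critical) readout
        trivial = ((1 - a) * node_feats).mean(dim=0)  # complement (trivial) readout
        return critical, trivial, a

x = torch.randn(8, 16)                 # toy graph with 8 nodes, 16-d features
critical, trivial, mask = SubgraphDisentangler(16)(x)
```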
arXiv Detail & Related papers (2021-12-30T15:22:35Z)
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
arXiv Detail & Related papers (2021-04-15T17:51:36Z)
- A Graph Reasoning Network for Multi-turn Response Selection via Customized Pre-training [11.532734330690584]
We propose a graph-reasoning network (GRN) to address the problem.
GRN first conducts pre-training based on ALBERT.
We then fine-tune the model on an integrated network with sequence reasoning and graph reasoning structures.
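A minimal sketch of fusing a pooled sequence representation (such as one produced by ALBERT) with a graph-reasoning readout for response scoring is given below; the fusion head and dimensions are assumptions, not the GRN architecture.

```python
# Assumed sketch: concatenate a pooled sequence encoding with a graph readout
# and score the candidate response with a linear head.
import torch
import torch.nn as nn

class SequenceGraphScorer(nn.Module):
    def __init__(self, seq_dim=768, graph_dim=128):
        super().__init__()
        self.graph_mlp = nn.Sequential(nn.Linear(graph_dim, graph_dim), nn.ReLU())
        self.head = nn.Linear(seq_dim + graph_dim, 1)

    def forward(self, seq_vec, node_feats):
        # seq_vec: pooled context-response encoding; node_feats: (nodes, graph_dim)
        graph_vec = self.graph_mlp(node_feats).mean(dim=0)   # graph readout
        return self.head(torch.cat([seq_vec, graph_vec], dim=-1))

score = SequenceGraphScorer()(torch.randn(768), torch.randn(5, 128))
```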
arXiv Detail & Related papers (2020-12-21T03:38:29Z)
- Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.