Modeling Multi-level Context for Informational Bias Detection by
Contrastive Learning and Sentential Graph Network
- URL: http://arxiv.org/abs/2201.10376v1
- Date: Tue, 25 Jan 2022 15:07:09 GMT
- Title: Modeling Multi-level Context for Informational Bias Detection by
Contrastive Learning and Sentential Graph Network
- Authors: Shijia Guo, Kenny Q. Zhu
- Abstract summary: Informational bias can only be detected together with its context.
In this paper, we integrate three levels of context to detect sentence-level informational bias in English news articles.
Our model, MultiCTX, uses contrastive learning and sentence graphs together with a Graph Attention Network (GAT) to encode these three degrees of context.
- Score: 13.905580921329717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Informational bias is widely present in news articles. It refers to
providing one-sided, selective, or suggestive information about specific aspects
of a certain entity in order to guide a particular interpretation, thereby
biasing the reader's opinion. Sentence-level informational bias detection is
challenging because such bias can only be revealed in context, for example by
collecting information from multiple sources or by analyzing the entire article
against its background. In this paper, we integrate three levels of context to
detect sentence-level informational bias in English news articles: adjacent
sentences, the whole article, and articles from other news outlets describing
the same event. Our model, MultiCTX (Multi-level ConTeXt), uses contrastive
learning and sentence graphs together with a Graph Attention Network (GAT) to
encode these three degrees of context at different stages, by tactically
composing contrastive triplets and constructing sentence graphs within events.
Our experiments show that contrastive learning together with sentence graphs
effectively incorporates context at varying levels and significantly outperforms
the current SOTA model at sentence-level informational bias detection.
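The abstract only sketches the pipeline in prose; the following minimal Python sketch illustrates the two ingredients it names, contrastive triplets and a GAT over a within-event sentence graph. It is not the authors' released code: the embedding dimension, the triplet composition, the torch_geometric usage, and all class names are illustrative assumptions.

# Hypothetical sketch, not the authors' implementation: (1) a contrastive
# triplet loss over context-enriched sentence views, and (2) a GAT over a
# sentence graph built from articles covering the same event.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

# --- Stage 1: contrastive learning over composed triplets -------------------
# anchor  : embedding of the target sentence
# positive: a context-enriched view of that sentence (e.g. with its adjacent
#           sentences, its article, or other articles on the same event)
# negative: a sentence drawn from an unrelated event
triplet_loss = nn.TripletMarginLoss(margin=1.0)
anchor, positive, negative = (torch.randn(8, 768) for _ in range(3))
contrastive_loss = triplet_loss(anchor, positive, negative)

# --- Stage 2: sentential graph + Graph Attention Network --------------------
class SententialGAT(nn.Module):
    """Classifies each sentence node of an event graph as biased / unbiased."""
    def __init__(self, in_dim=768, hid_dim=128, heads=4, num_classes=2):
        super().__init__()
        self.gat1 = GATConv(in_dim, hid_dim, heads=heads, dropout=0.2)
        self.gat2 = GATConv(hid_dim * heads, hid_dim, heads=1, dropout=0.2)
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, x, edge_index):
        # x:          [num_sentences, in_dim] contrastively trained embeddings
        # edge_index: [2, num_edges] edges between sentences of the same event
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        return self.classifier(h)          # per-sentence bias logits

# Toy event: 4 sentences from articles on the same event, fully connected.
x = torch.randn(4, 768)
src = [i for i in range(4) for j in range(4) if i != j]
dst = [j for i in range(4) for j in range(4) if i != j]
edge_index = torch.tensor([src, dst], dtype=torch.long)
logits = SententialGAT()(x, edge_index)    # shape: [4, 2]

How the sentence graph is actually wired (which sentences share edges, and how the contrastive and GAT stages are trained) is described in the paper itself; the fully connected toy event above is only a placeholder.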
Related papers
- Bridging Local Details and Global Context in Text-Attributed Graphs [62.522550655068336]
GraphBridge is a framework that bridges local and global perspectives by leveraging contextual textual information.
Our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues.
arXiv Detail & Related papers (2024-06-18T13:35:25Z) - Understanding Position Bias Effects on Fairness in Social Multi-Document Summarization [1.9950682531209158]
We investigate the effect of group ordering in input documents when summarizing tweets from three linguistic communities.
Our results suggest that position bias manifests differently in social multi-document summarization.
arXiv Detail & Related papers (2024-05-03T00:19:31Z) - Open-Domain Event Graph Induction for Mitigating Framing Bias [89.46744219887005]
We argue that studying and identifying framing bias is a crucial step towards trustworthy event understanding.
We propose a novel task, neutral event graph induction, to address this problem.
Our task aims to induce such structural knowledge with minimal framing bias in an open domain.
arXiv Detail & Related papers (2023-05-22T08:57:42Z) - Interpretable Detection of Out-of-Context Misinformation with Neural-Symbolic-Enhanced Large Multimodal Model [16.348950072491697]
Misinformation creators now increasingly tend to use out-of-context multimedia content to deceive the public and fake news detection systems.
This new type of misinformation increases the difficulty of not only detection but also clarification, because every individual modality is close enough to true information.
In this paper, we explore how to achieve interpretable cross-modal de-contextualization detection that simultaneously identifies mismatched pairs and cross-modal contradictions.
arXiv Detail & Related papers (2023-04-15T21:11:55Z) - Unsupervised Extractive Summarization with Heterogeneous Graph
Embeddings for Chinese Document [5.9630342951482085]
We propose an unsupervised extractive summarization method with heterogeneous graph embeddings (HGEs) for Chinese documents.
Experimental results demonstrate that our method consistently outperforms the strong baseline on three summarization datasets.
arXiv Detail & Related papers (2022-11-09T06:07:31Z) - Contextual information integration for stance detection via
cross-attention [59.662413798388485]
Stance detection deals with identifying an author's stance towards a target.
Most existing stance detection models are limited because they do not consider relevant contextual information.
We propose an approach to integrate contextual information as text.
arXiv Detail & Related papers (2022-11-03T15:04:29Z) - Enhanced Knowledge Selection for Grounded Dialogues via Document
Semantic Graphs [123.50636090341236]
We propose to automatically convert background knowledge documents into document semantic graphs.
Our document semantic graphs preserve sentence-level information through the use of sentence nodes and provide concept connections between sentences.
Our experiments show that our semantic graph-based knowledge selection improves over sentence selection baselines for both the knowledge selection task and the end-to-end response generation task on HollE.
arXiv Detail & Related papers (2022-06-15T04:51:32Z) - KCD: Knowledge Walks and Textual Cues Enhanced Political Perspective
Detection in News Media [28.813287482918344]
We propose KCD, a political perspective detection approach to enable multi-hop knowledge reasoning.
Specifically, we generate random walks on external knowledge graphs and infuse them with news text representations.
We then construct a heterogeneous information network to jointly model news content as well as semantic, syntactic and entity cues in news articles.
arXiv Detail & Related papers (2022-04-08T13:06:09Z) - Context in Informational Bias Detection [4.386026071380442]
We explore four kinds of context for informational bias in English news articles.
We find that integrating event context improves classification performance over a very strong baseline.
We find that the best-performing context-inclusive model outperforms the baseline on longer sentences.
arXiv Detail & Related papers (2020-12-03T15:50:20Z) - Abstractive Summarization of Spoken and Written Instructions with BERT [66.14755043607776]
We present the first application of the BERTSum model to conversational language.
We generate abstractive summaries of narrated instructional videos across a wide variety of topics.
We envision this being integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.
arXiv Detail & Related papers (2020-08-21T20:59:34Z) - Structure-Augmented Text Representation Learning for Efficient Knowledge
Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, which calls for automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to graph triples' texts and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.