TEGRA: Text Encoding With Graph and Retrieval Augmentation for Misinformation Detection
- URL: http://arxiv.org/abs/2602.11106v2
- Date: Thu, 12 Feb 2026 08:23:24 GMT
- Title: TEGRA: Text Encoding With Graph and Retrieval Augmentation for Misinformation Detection
- Authors: Géraud Faye, Wassila Ouerdane, Guillaume Gadek, Sylvain Gatepaille, Céline Hudelot,
- Abstract summary: Our approach processes documents by extracting structured information in the form of a graph and encoding both the text and the graph for classification purposes. We demonstrate that this hybrid representation enhances misinformation detection performance compared to using language models alone. Furthermore, we introduce TEGRA, an extension of our framework that incorporates domain-specific knowledge, further enhancing classification accuracy in most cases.
- Score: 5.259180998056251
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Misinformation detection is a critical task that can benefit significantly from the integration of external knowledge, much like manual fact-checking. In this work, we propose a novel method for representing textual documents that facilitates the incorporation of information from a knowledge base. Our approach, Text Encoding with Graph (TEG), processes documents by extracting structured information in the form of a graph and encoding both the text and the graph for classification purposes. Through extensive experiments, we demonstrate that this hybrid representation enhances misinformation detection performance compared to using language models alone. Furthermore, we introduce TEGRA, an extension of our framework that integrates domain-specific knowledge, further enhancing classification accuracy in most cases.
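The abstract describes a two-part representation: a graph extracted from the document plus an encoding of the raw text, concatenated for classification. A minimal toy sketch of that idea is below; the triple extraction, vocabulary, and degree-based graph encoding are illustrative assumptions, not the authors' actual TEG/TEGRA implementation.

```python
# Toy sketch of the hybrid text + graph encoding idea (illustrative only;
# not the TEG/TEGRA implementation from the paper).
from collections import Counter

def extract_graph(text):
    # Placeholder "structured information extraction": treat consecutive
    # word pairs as edges of a graph.
    words = text.lower().split()
    return [(words[i], words[i + 1]) for i in range(len(words) - 1)]

def encode_text(text, vocab):
    # Bag-of-words text encoding over a fixed vocabulary.
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def encode_graph(edges, vocab):
    # Degree-based node encoding: how often each vocabulary word
    # appears as an endpoint of an extracted edge.
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return [degree[w] for w in vocab]

def hybrid_representation(text, vocab):
    # Concatenate the text encoding and the graph encoding; a classifier
    # would be trained on this joint vector.
    return encode_text(text, vocab) + encode_graph(extract_graph(text), vocab)

vocab = ["vaccines", "cause", "autism", "studies", "show"]
vec = hybrid_representation("Studies show vaccines cause autism", vocab)
# First half: text encoding; second half: graph node degrees.
```

In the paper, the text encoder is a language model and the graph is encoded with a dedicated graph encoder; this sketch only illustrates how the two views are combined into one feature vector.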
Related papers
- DGP: A Dual-Granularity Prompting Framework for Fraud Detection with Graph-Enhanced LLMs [55.13817504780764]
Real-world fraud detection applications benefit from graph learning techniques that jointly exploit node features, often rich in textual data, and graph structural information. Graph-Enhanced LLMs emerge as a promising graph learning approach that converts graph information into prompts. We propose Dual Granularity Prompting (DGP), which mitigates information overload by preserving fine-grained textual details for the target node.
arXiv Detail & Related papers (2025-07-29T10:10:47Z) - SE-GCL: An Event-Based Simple and Effective Graph Contrastive Learning for Text Representation [23.60337935010744]
We present an event-based, simple, and effective graph contrastive learning (SE-GCL) for text representation. Precisely, we extract event blocks from text and construct internal relation graphs to represent inter-semantic interconnections. In particular, we introduce the concept of an event skeleton for core representation semantics and simplify the typically complex data augmentation techniques.
arXiv Detail & Related papers (2024-12-16T10:53:24Z) - Bridging Local Details and Global Context in Text-Attributed Graphs [62.522550655068336]
GraphBridge is a framework that bridges local and global perspectives by leveraging contextual textual information.
Our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues.
arXiv Detail & Related papers (2024-06-18T13:35:25Z) - Towards Unified Multi-granularity Text Detection with Interactive Attention [56.79437272168507]
"Detect Any Text" is an advanced paradigm that unifies scene text detection, layout analysis, and document page detection into a cohesive, end-to-end model.
A pivotal innovation in DAT is the across-granularity interactive attention module, which significantly enhances the representation learning of text instances.
Tests demonstrate that DAT achieves state-of-the-art performances across a variety of text-related benchmarks.
arXiv Detail & Related papers (2024-05-30T07:25:23Z) - Rule-Guided Joint Embedding Learning over Knowledge Graphs [2.797512394739081]
We propose a novel model that integrates both contextual and textual signals into entity and relation embeddings. To better utilize context, we introduce two metrics: confidence, computed via a rule-based method, and relatedness, derived from textual representations.
arXiv Detail & Related papers (2023-12-01T19:58:31Z) - TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z) - Multi-Document Scientific Summarization from a Knowledge Graph-Centric
View [9.579482432715261]
We present KGSum, an MDSS model centred on knowledge graphs during both the encoding and decoding process.
Specifically, in the encoding process, two graph-based modules are proposed to incorporate knowledge graph information into paper encoding.
In the decoding process, we propose a two-stage decoder by first generating knowledge graph information of summary in the form of descriptive sentences, followed by generating the final summary.
arXiv Detail & Related papers (2022-09-09T14:20:59Z) - ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge could be more than enough, since the output description may only cover the most significant knowledge.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.