Verifying Rumors via Stance-Aware Structural Modeling
- URL: http://arxiv.org/abs/2512.13559v1
- Date: Mon, 15 Dec 2025 17:16:56 GMT
- Title: Verifying Rumors via Stance-Aware Structural Modeling
- Authors: Gibson Nkhata, Uttamasha Anjally Oyshi, Quan Mai, Susan Gauch
- Abstract summary: We propose a stance-aware structural modeling approach that encodes each post in a discourse with its stance signal and aggregates reply embeddings by stance category. Our approach significantly outperforms prior methods in predicting the truthfulness of a rumor.
- Score: 1.5499426028105903
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Verifying rumors on social media is critical for mitigating the spread of false information. The stances of conversation replies often provide important cues to determine a rumor's veracity. However, existing models struggle to jointly capture semantic content, stance information, and conversation structure, especially under the sequence length constraints of transformer-based encoders. In this work, we propose a stance-aware structural modeling approach that encodes each post in a discourse with its stance signal and aggregates reply embeddings by stance category, enabling a scalable and semantically enriched representation of the entire thread. To enhance structural awareness, we introduce stance distribution and hierarchical depth as covariates, capturing stance imbalance and the influence of reply depth. Extensive experiments on benchmark datasets demonstrate that our approach significantly outperforms prior methods in predicting the truthfulness of a rumor. We also demonstrate that our model is versatile for early detection and cross-platform generalization.
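As a rough illustration (not the authors' released code; the function names, the four-way stance set, and the pooling choices are assumptions), the stance-bucketed aggregation with stance-distribution and depth covariates described in the abstract might be sketched as:

```python
import numpy as np

# Assumed stance inventory; the paper's actual categories may differ.
STANCES = ["support", "deny", "query", "comment"]

def thread_representation(reply_embs, stances, depths, dim=8):
    """Build a fixed-size thread vector: mean-pool reply embeddings per
    stance category, then append the stance distribution and simple
    reply-depth statistics as covariates (illustrative sketch only)."""
    pooled, counts = [], []
    for s in STANCES:
        idx = [i for i, st in enumerate(stances) if st == s]
        counts.append(len(idx))
        if idx:
            pooled.append(np.mean([reply_embs[i] for i in idx], axis=0))
        else:
            pooled.append(np.zeros(dim))  # empty stance bucket
    # Stance distribution covariate: fraction of replies per category.
    dist = np.array(counts, dtype=float) / max(1, len(stances))
    # Depth covariates: mean and maximum reply depth in the thread.
    depth_feat = np.array([np.mean(depths), np.max(depths)]) if depths else np.zeros(2)
    return np.concatenate(pooled + [dist, depth_feat])
```

The resulting vector has a fixed length (here 4 × dim + 4 + 2) regardless of thread size, which is the scalability property the abstract points to; in the actual model, the per-post embeddings would come from a transformer encoder rather than raw vectors.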
Related papers
- Scaling Dense Event-Stream Pretraining from Visual Foundation Models [112.44243079477137]
We launch a novel self-supervised pretraining method that distills visual foundation models (VFMs) to push the boundaries of event representation at scale.
We curate an extensive synchronized image-event collection to amplify cross-modal alignment.
We extend the alignment objective to semantic structures provided off-the-shelf by VFMs, indicating a broader receptive field and stronger supervision.
arXiv Detail & Related papers (2026-03-04T12:06:09Z) - Plug-and-Play Clarifier: A Zero-Shot Multimodal Framework for Egocentric Intent Disambiguation [60.63465682731118]
The performance of egocentric AI agents is fundamentally limited by multimodal intent ambiguity.
We introduce the Plug-and-Play Clarifier, a zero-shot and modular framework that decomposes the problem into discrete, solvable sub-tasks.
Our framework improves the intent clarification performance of small language models by approximately 30%, making them competitive with significantly larger counterparts.
arXiv Detail & Related papers (2025-11-12T04:28:14Z) - Mind-the-Glitch: Visual Correspondence for Detecting Inconsistencies in Subject-Driven Generation [120.23172120151821]
We propose a novel approach for disentangling visual and semantic features from the backbones of pre-trained diffusion models.
We introduce an automated pipeline that constructs image pairs with annotated semantic and visual correspondences.
We propose a new metric, Visual Semantic Matching, that quantifies visual inconsistencies in subject-driven image generation.
arXiv Detail & Related papers (2025-09-26T07:11:55Z) - How do Transformers Learn Implicit Reasoning? [67.02072851088637]
We study how implicit multi-hop reasoning emerges by training transformers from scratch in a controlled symbolic environment.
We find that training with atomic triples is not necessary but accelerates learning, and that second-hop generalization relies on query-level exposure to specific compositional structures.
arXiv Detail & Related papers (2025-05-29T17:02:49Z) - Investigating Disentanglement in a Phoneme-level Speech Codec for Prosody Modeling [39.80957479349776]
We investigate the prosody modeling capabilities of the discrete space of an RVQ-VAE model, modifying it to operate on the phoneme-level.
We show that the phoneme-level discrete latent representations achieve a high degree of disentanglement, capturing fine-grained prosodic information that is robust and transferable.
arXiv Detail & Related papers (2024-09-13T09:27:05Z) - Semantic Evolvement Enhanced Graph Autoencoder for Rumor Detection [25.03964361177406]
We propose a novel semantic evolvement enhanced Graph Autoencoder for Rumor Detection (GARD) model in this paper.
The model learns semantic evolvement information of events by capturing local semantic changes and global semantic evolvement information.
In order to enhance the model's ability to learn the distinct patterns of rumors and non-rumors, we introduce a regularizer to further improve the model's performance.
arXiv Detail & Related papers (2024-04-24T05:05:58Z) - Rate-Distortion-Perception Theory for Semantic Communication [73.04341519955223]
We study the achievable data rate of semantic communication under the symbol distortion and semantic perception constraints.
We observe that there exist cases in which the receiver can directly infer the semantic information source while satisfying certain distortion and perception constraints.
arXiv Detail & Related papers (2023-12-09T02:04:32Z) - Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - A Unified Contrastive Transfer Framework with Propagation Structure for Boosting Low-Resource Rumor Detection [11.201348902221257]
Existing rumor detection algorithms show promising performance on yesterday's news.
Due to a lack of substantial training data and prior expert knowledge, they are poor at spotting rumors concerning unforeseen events.
We propose a unified contrastive transfer framework to detect rumors by adapting the features learned from well-resourced rumor data to low-resource data with only few-shot annotations.
arXiv Detail & Related papers (2023-04-04T03:13:03Z) - Contextual information integration for stance detection via cross-attention [59.662413798388485]
Stance detection deals with identifying an author's stance towards a target.
Most existing stance detection models are limited because they do not consider relevant contextual information.
We propose an approach to integrate contextual information as text.
arXiv Detail & Related papers (2022-11-03T15:04:29Z) - Analysis of Joint Speech-Text Embeddings for Semantic Matching [3.6423306784901235]
We study a joint speech-text embedding space trained for semantic matching by minimizing the distance between paired utterance and transcription inputs.
We extend our method to incorporate automatic speech recognition through both pretraining and multitask scenarios.
arXiv Detail & Related papers (2022-04-04T04:50:32Z) - Focus-Constrained Attention Mechanism for CVAE-based Response Generation [27.701626908931267]
The latent variable is supposed to capture the discourse-level information and encourage the informativeness of target responses.
We transform the coarse-grained discourse-level information into fine-grained word-level information.
Our model can generate more diverse and informative responses compared with several state-of-the-art models.
arXiv Detail & Related papers (2020-09-25T09:38:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.