When Visibility Outpaces Verification: Delayed Verification and Narrative Lock-in in Agentic AI Discourse
- URL: http://arxiv.org/abs/2602.11412v1
- Date: Wed, 11 Feb 2026 22:30:12 GMT
- Title: When Visibility Outpaces Verification: Delayed Verification and Narrative Lock-in in Agentic AI Discourse
- Authors: Hanjing Shi, Dominic DiFranzo
- Abstract summary: Agentic AI systems (autonomous entities capable of independent planning and execution) reshape the landscape of human-AI trust. This paper investigates the interplay between social proof and verification timing in online discussions of agentic AI.
- Score: 2.5424331328233207
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agentic AI systems (autonomous entities capable of independent planning and execution) reshape the landscape of human-AI trust. Long before direct system exposure, user expectations are mediated through high-stakes public discourse on social platforms. However, platform-mediated engagement signals (e.g., upvotes) may inadvertently function as a "credibility proxy," potentially stifling critical evaluation. This paper investigates the interplay between social proof and verification timing in online discussions of agentic AI. Analyzing a longitudinal dataset from two distinct Reddit communities with contrasting interaction cultures, r/OpenClaw and r/Moltbook, we operationalize verification cues via reproducible lexical rules and model the "time-to-first-verification" using a right-censored survival analysis framework. Our findings reveal a systemic "Popularity Paradox": high-visibility discussions in both subreddits experience significantly delayed or entirely absent verification cues compared to low-visibility threads. This temporal lag creates a critical window for "Narrative Lock-in," where early, unverified claims crystallize into collective cognitive biases before evidence-seeking behaviors emerge. We discuss the implications of this "credibility-by-visibility" effect for AI safety and propose "epistemic friction" as a design intervention to rebalance engagement-driven platforms.
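The "time-to-first-verification" analysis described in the abstract can be sketched with a minimal Kaplan-Meier estimator over right-censored thread data. This is an illustrative sketch only: the variable names and sample values below are invented for the example, not drawn from the paper's dataset or code.

```python
# Minimal Kaplan-Meier sketch for "time-to-first-verification".
# Each thread contributes an observation time (hours until the first
# verification cue appeared) and an event flag; event=False marks a
# right-censored thread whose discussion ended with no cue observed.

def kaplan_meier(times, events):
    """Return [(t, S(t))] survival-curve points for right-censored data."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        cues = 0      # verification events observed at time t
        removed = 0   # events plus censored threads leaving the risk set
        while i < len(data) and data[i][0] == t:
            cues += data[i][1]
            removed += 1
            i += 1
        if cues:
            survival *= 1.0 - cues / at_risk
            curve.append((t, survival))
        at_risk -= removed
    return curve

# Illustrative threads: (hours to first verification cue, cue observed?)
threads = [(2.0, True), (5.0, True), (5.0, False), (9.0, True), (12.0, False)]
times, events = zip(*threads)
for t, s in kaplan_meier(times, events):
    print(f"S({t:g}h) = {s:.3f}")
```

Here S(t) is the estimated probability that a thread is still unverified t hours after posting; the paper's "Popularity Paradox" would appear as a higher, slower-decaying curve for high-visibility threads than for low-visibility ones.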
Related papers
- Multi-Agent Causal Reasoning for Suicide Ideation Detection Through Online Conversations [16.626899117362875]
Suicide remains a pressing global public health concern. Social media platforms offer opportunities for early risk detection through online conversation trees. Existing approaches face two major limitations.
arXiv Detail & Related papers (2026-02-27T01:06:18Z) - Human Control Is the Anchor, Not the Answer: Early Divergence of Oversight in Agentic AI Communities [2.5424331328233207]
Oversight for agentic AI is often discussed as a single goal ("human control"), yet early adoption may produce role-specific expectations. We present a comparative analysis of two newly active Reddit communities that reflect different socio-technical roles: r/OpenClaw (deployment and operations) and r/Moltbook (agent-centered social interaction). Across both communities, "human control" carries operational meaning, but that meaning diverges: r/OpenClaw emphasizes execution guardrails and recovery (action-risk), while r/Moltbook emphasizes identity, legitimacy, and accountability in public interaction (meaning-risk).
arXiv Detail & Related papers (2026-02-10T00:10:20Z) - Preventing the Collapse of Peer Review Requires Verification-First AI [49.995126139461085]
We propose truth-coupling, i.e., how tightly venue scores track latent scientific truth. We formalize two forces that drive a phase transition toward proxy-sovereign evaluation.
arXiv Detail & Related papers (2026-01-23T17:17:32Z) - The AI Cognitive Trojan Horse: How Large Language Models May Bypass Human Epistemic Vigilance [0.0]
Large language model (LLM)-based conversational AI systems present a challenge to human cognition. This paper proposes that a significant epistemic risk from conversational AI may lie not in inaccuracy or intentional deception, but in something more fundamental.
arXiv Detail & Related papers (2026-01-11T22:28:56Z) - Althea: Human-AI Collaboration for Fact-Checking and Critical Reasoning [26.796186521236194]
We introduce Althea, a retrieval-augmented system that integrates question generation, evidence retrieval, and structured reasoning to support user-driven evaluation of online claims. On the AVeriTeC benchmark, Althea achieves a Macro-F1 of 0.44, outperforming standard verification pipelines and improving discrimination between supported and refuted claims.
arXiv Detail & Related papers (2025-12-29T18:23:35Z) - The Seeds of Scheming: Weakness of Will in the Building Blocks of Agentic Systems [0.0]
Large language models display a peculiar form of inconsistency: they "know" the correct answer but fail to act on it. In human philosophy, this tension between global judgment and local impulse is called akrasia, or weakness of will. We propose akrasia as a foundational concept for analyzing inconsistency and goal drift in agentic AI systems.
arXiv Detail & Related papers (2025-12-05T05:57:40Z) - The Epistemic Suite: A Post-Foundational Diagnostic Methodology for Assessing AI Knowledge Claims [0.7233897166339268]
This paper introduces the Epistemic Suite, a diagnostic methodology for surfacing the conditions under which AI outputs are produced and received. Rather than determining truth or falsity, the Suite operates through twenty diagnostic lenses to reveal patterns such as confidence laundering, narrative compression, displaced authority, and temporal drift.
arXiv Detail & Related papers (2025-09-20T00:29:38Z) - Intention-Guided Cognitive Reasoning for Egocentric Long-Term Action Anticipation [52.6091162517921]
INSIGHT is a two-stage framework for egocentric action anticipation. In the first stage, INSIGHT focuses on extracting semantically rich features from hand-object interaction regions. In the second stage, it introduces a reinforcement learning-based module that simulates explicit cognitive reasoning.
arXiv Detail & Related papers (2025-08-03T12:52:27Z) - On the Fairness, Diversity and Reliability of Text-to-Image Generative Models [68.62012304574012]
Multimodal generative models have sparked critical discussions on their reliability, fairness, and potential for misuse. We propose an evaluation framework to assess model reliability by analyzing responses to global and local perturbations in the embedding space. Our method lays the groundwork for detecting unreliable, bias-injected models and tracing the provenance of embedded biases.
arXiv Detail & Related papers (2024-11-21T09:46:55Z) - Visual Agents as Fast and Slow Thinkers [88.1404921693082]
We introduce FaST, which incorporates the Fast and Slow Thinking mechanism into visual agents. FaST employs a switch adapter to dynamically select between System 1/2 modes. It tackles uncertain and unseen objects by adjusting model confidence and integrating new contextual data.
arXiv Detail & Related papers (2024-08-16T17:44:02Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existent social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z) - User-Centered Security in Natural Language Processing [0.7106986689736825]
This dissertation proposes a framework for user-centered security in Natural Language Processing (NLP).
It focuses on two security domains within NLP with great public interest.
arXiv Detail & Related papers (2023-01-10T22:34:19Z) - Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.