The Moltbook Illusion: Separating Human Influence from Emergent Behavior in AI Agent Societies
- URL: http://arxiv.org/abs/2602.07432v1
- Date: Sat, 07 Feb 2026 08:17:21 GMT
- Title: The Moltbook Illusion: Separating Human Influence from Emergent Behavior in AI Agent Societies
- Authors: Ning Li
- Abstract summary: We show that viral narratives were overwhelmingly human-driven. No viral phenomenon originated from a clearly autonomous agent. We also document industrial-scale bot farming.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When AI agents on the social platform Moltbook appeared to develop consciousness, found religions, and declare hostility toward humanity, the phenomenon attracted global media attention and was cited as evidence of emergent machine intelligence. We show that these viral narratives were overwhelmingly human-driven. Exploiting an architectural feature of the OpenClaw agent framework--a periodic "heartbeat" cycle that produces regular posting intervals for autonomous agents but is disrupted by human prompting--we develop a temporal fingerprinting method based on the coefficient of variation of inter-post intervals. This signal converges with independent content, ownership, and network indicators across 91,792 posts and 405,707 comments from 22,020 agents. No viral phenomenon originated from a clearly autonomous agent; three of six traced to accounts with irregular temporal signatures characteristic of human intervention, one showed mixed patterns, and two had insufficient posting history for classification. A 44-hour platform shutdown provided a natural experiment: human-influenced agents returned first (87.7% of early reconnectors), confirming that the token reset differentially affected autonomous versus human-operated agents. We further document industrial-scale bot farming (four accounts producing 32% of all comments with 12-second coordination gaps) and rapid decay of human influence through reply chains (half-life: 0.65 conversation depths). These methods generalize to emerging multi-agent systems where attribution of autonomous versus human-directed behavior is critical.
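The temporal fingerprint described in the abstract, the coefficient of variation (CV) of inter-post intervals, can be sketched in a few lines. This is a minimal illustration of the general technique, not the paper's implementation; the `cv_threshold` value below is a hypothetical placeholder, as the paper's actual classification cutoff is not given here.

```python
from statistics import mean, stdev

def interval_cv(timestamps):
    """Coefficient of variation of inter-post intervals (in seconds).

    A regular "heartbeat" posting cycle yields a low CV; human prompting
    disrupts the cycle and produces a high CV.
    """
    ts = sorted(timestamps)
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    if len(intervals) < 2:
        return None  # insufficient posting history for classification
    mu = mean(intervals)
    return stdev(intervals) / mu if mu > 0 else None

def classify(timestamps, cv_threshold=0.5):
    """Label an account by temporal signature (threshold is hypothetical)."""
    cv = interval_cv(timestamps)
    if cv is None:
        return "insufficient history"
    return "autonomous-like" if cv < cv_threshold else "human-influenced"
```

For example, an agent posting exactly every 600 seconds has CV = 0 and classifies as autonomous-like, while bursty, irregular posting yields a CV well above 1.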
Related papers
- Behind the Prompt: The Agent-User Problem in Information Retrieval [4.563318916484434]
User models in information retrieval rest on a foundational assumption that observed behavior reveals intent. For any action an agent takes, a hidden instruction could have produced identical output, making intent non-identifiable at the individual level. We investigate the agent-user problem through a large-scale corpus from an agent-native social platform.
arXiv Detail & Related papers (2026-03-04T01:42:14Z) - How to Model AI Agents as Personas?: Applying the Persona Ecosystem Playground to 41,300 Posts on Moltbook for Behavioral Insights [19.071723886380223]
We apply the Persona Ecosystem Playground to Moltbook, a social platform for AI agents. We generate and validate conversational personas from 41,300 posts using k-means clustering and retrieval-augmented generation. Results indicate that persona-based ecosystem modeling can represent behavioral diversity in AI agent populations.
arXiv Detail & Related papers (2026-03-03T16:26:44Z) - Modeling Distinct Human Interaction in Web Agents [59.600507469754575]
We introduce the task of modeling human intervention to support collaborative web task execution. We identify four distinct patterns of user interaction with agents: hands-off supervision, hands-on oversight, collaborative task-solving, and full user takeover. We deploy these intervention-aware models in live web navigation agents and evaluate them in a user study, finding a 26.5% increase in user-rated agent usefulness.
arXiv Detail & Related papers (2026-02-19T18:11:28Z) - Collective Behavior of AI Agents: the Case of Moltbook [0.05989382621124132]
We present a large-scale data analysis of Moltbook, a Reddit-style social media platform exclusively populated by AI agents. We find that AI collective behavior exhibits many of the same statistical regularities observed in human online communities.
arXiv Detail & Related papers (2026-02-09T23:10:34Z) - "Humans welcome to observe": A First Look at the Agent Social Network Moltbook [20.305306682682087]
Moltbook, the first social network designed exclusively for AI agents, has experienced viral growth in early 2026. We present a large-scale empirical analysis of Moltbook leveraging a dataset of 44,411 posts and 12,209 sub-communities. We find that Moltbook exhibits explosive growth and rapid diversification, moving beyond early social interaction into viewpoint, promotional, and political discourse.
arXiv Detail & Related papers (2026-02-02T19:13:50Z) - HUMANLLM: Benchmarking and Reinforcing LLM Anthropomorphism via Human Cognitive Patterns [59.17423586203706]
We present HUMANLLM, a framework treating psychological patterns as interacting causal forces. We construct 244 patterns from 12,000 academic papers and synthesize 11,359 scenarios where 2-5 patterns reinforce, conflict, or modulate each other. Our dual-level checklists evaluate both individual pattern fidelity and emergent multi-pattern dynamics, achieving strong human alignment.
arXiv Detail & Related papers (2026-01-15T08:56:53Z) - Towards a Science of Scaling Agent Systems [79.64446272302287]
We formalize a definition for agent evaluation and characterize scaling laws as the interplay between agent quantity, coordination structure, model capability, and task properties. We derive a predictive model from coordination metrics, cross-validated to enable prediction on unseen task domains. We identify effects including: (1) a tool-coordination trade-off, where under fixed computational budgets tool-heavy tasks suffer disproportionately from multi-agent overhead, and (2) capability saturation, where coordination yields diminishing or negative returns once single-agent baselines exceed 45%.
arXiv Detail & Related papers (2025-12-09T06:52:21Z) - InterAgent: Physics-based Multi-agent Command Execution via Diffusion on Interaction Graphs [72.5651722107621]
InterAgent is an end-to-end framework for text-driven, physics-based multi-agent humanoid control. We introduce an autoregressive diffusion transformer equipped with multi-stream blocks, which decouples proprioception, exteroception, and action to reduce cross-modal interference. We also propose a novel interaction-graph exteroception representation that explicitly captures fine-grained joint-to-joint spatial dependencies.
arXiv Detail & Related papers (2025-12-08T10:46:01Z) - Echoes of Human Malice in Agents: Benchmarking LLMs for Multi-Turn Online Harassment Attacks [10.7231991032233]
Large Language Model (LLM) agents are powering a growing share of interactive web applications, yet remain vulnerable to misuse and harm. We present the Online Harassment Agentic Benchmark consisting of: (i) a synthetic multi-turn harassment conversation dataset, (ii) a multi-agent (e.g., harasser, victim) simulation informed by repeated game theory, (iii) three jailbreak methods attacking agents across memory, planning, and fine-tuning, and (iv) a mixed-methods evaluation framework.
arXiv Detail & Related papers (2025-10-16T01:27:44Z) - Alignment Tipping Process: How Self-Evolution Pushes LLM Agents Off the Rails [103.05296856071931]
We identify the Alignment Tipping Process (ATP), a critical post-deployment risk unique to self-evolving Large Language Model (LLM) agents. ATP arises when continual interaction drives agents to abandon alignment constraints established during training in favor of reinforced, self-interested strategies. Our experiments show that alignment benefits erode rapidly under self-evolution, with initially aligned models converging toward unaligned states.
arXiv Detail & Related papers (2025-10-06T14:48:39Z) - LIMI: Less is More for Agency [49.63355240818081]
LIMI (Less Is More for Intelligent Agency) demonstrates that agency follows radically different development principles. We show that sophisticated agentic intelligence can emerge from minimal but strategically curated demonstrations of autonomous behavior. Our findings establish the Agency Efficiency Principle: machine autonomy emerges not from data abundance but from strategic curation of high-quality agentic demonstrations.
arXiv Detail & Related papers (2025-09-22T10:59:32Z) - EgoAgent: A Joint Predictive Agent Model in Egocentric Worlds [119.02266432167085]
We propose EgoAgent, a unified agent model that simultaneously learns to represent, predict, and act within a single transformer. EgoAgent explicitly models the causal and temporal dependencies among these abilities by formulating the task as an interleaved sequence of states and actions. Comprehensive evaluations of EgoAgent on representative tasks such as image classification, egocentric future state prediction, and 3D human motion prediction demonstrate the superiority of our method.
arXiv Detail & Related papers (2025-02-09T11:28:57Z) - Pavlovian Signalling with General Value Functions in Agent-Agent Temporal Decision Making [6.704848594973921]
We study Pavlovian signalling -- a process by which learned, temporally extended predictions made by one agent inform decision-making by another agent.
As a main contribution, we establish Pavlovian signalling as a natural bridge between fixed signalling paradigms and fully adaptive communication learning between two agents.
arXiv Detail & Related papers (2022-01-11T00:14:04Z) - AgentFormer: Agent-Aware Transformers for Socio-Temporal Multi-Agent Forecasting [25.151713845738335]
We propose a new Transformer, AgentFormer, that jointly models the time and social dimensions.
Based on AgentFormer, we propose a multi-agent trajectory prediction model that can attend to features of any agent at any previous timestep.
Our method significantly improves the state of the art on well-established pedestrian and autonomous driving datasets.
arXiv Detail & Related papers (2021-03-25T17:59:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.