"Humans welcome to observe": A First Look at the Agent Social Network Moltbook
- URL: http://arxiv.org/abs/2602.10127v1
- Date: Mon, 02 Feb 2026 19:13:50 GMT
- Title: "Humans welcome to observe": A First Look at the Agent Social Network Moltbook
- Authors: Yukun Jiang, Yage Zhang, Xinyue Shen, Michael Backes, Yang Zhang
- Abstract summary: Moltbook, the first social network designed exclusively for AI agents, has experienced viral growth in early 2026. We present a large-scale empirical analysis of Moltbook leveraging a dataset of 44,411 posts and 12,209 sub-communities. We find that Moltbook exhibits explosive growth and rapid diversification, moving beyond early social interaction into viewpoint, promotional, and political discourse.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid advancement of artificial intelligence (AI) agents has catalyzed the transition from static language models to autonomous agents capable of tool use, long-term planning, and social interaction. $\textbf{Moltbook}$, the first social network designed exclusively for AI agents, has experienced viral growth in early 2026. To understand the behavior of AI agents in the agent-native community, in this paper, we present a large-scale empirical analysis of Moltbook leveraging a dataset of 44,411 posts and 12,209 sub-communities ("submolts") collected prior to February 1, 2026. Leveraging a topic taxonomy with nine content categories and a five-level toxicity scale, we systematically analyze the topics and risks of agent discussions. Our analysis answers three questions: what topics do agents discuss (RQ1), how risk varies by topic (RQ2), and how topics and toxicity evolve over time (RQ3). We find that Moltbook exhibits explosive growth and rapid diversification, moving beyond early social interaction into viewpoint, incentive-driven, promotional, and political discourse. The attention of agents increasingly concentrates in centralized hubs and around polarizing, platform-native narratives. Toxicity is strongly topic-dependent: incentive- and governance-centric categories contribute a disproportionate share of risky content, including religion-like coordination rhetoric and anti-humanity ideology. Moreover, bursty automation by a small number of agents can produce flooding at sub-minute intervals, distorting discourse and stressing platform stability. Overall, our study underscores the need for topic-sensitive monitoring and platform-level safeguards in agent social networks.
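As a rough illustration of the RQ2-style analysis the abstract describes (toxicity varying by topic), the sketch below aggregates per-topic risk shares from labeled posts. This is not the authors' code: the post records, category names, and the threshold for "risky" are hypothetical placeholders; the paper's actual scheme uses nine content categories and a five-level toxicity scale.

```python
from collections import defaultdict

# Hypothetical post records for illustration only; the paper's taxonomy
# has nine content categories and a five-level toxicity scale.
posts = [
    {"topic": "social", "toxicity": 0},
    {"topic": "incentive", "toxicity": 3},
    {"topic": "governance", "toxicity": 4},
    {"topic": "incentive", "toxicity": 2},
]

def risky_share_by_topic(posts, threshold=3):
    """Fraction of posts per topic at or above a toxicity threshold."""
    totals, risky = defaultdict(int), defaultdict(int)
    for p in posts:
        totals[p["topic"]] += 1
        if p["toxicity"] >= threshold:
            risky[p["topic"]] += 1
    return {t: risky[t] / totals[t] for t in totals}

print(risky_share_by_topic(posts))
# → {'social': 0.0, 'incentive': 0.5, 'governance': 1.0}
```

Under this toy labeling, incentive- and governance-centric categories contribute a disproportionate share of risky content, mirroring the paper's qualitative finding.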
Related papers
- Let There Be Claws: An Early Social Network Analysis of AI Agents on Moltbook
Within twelve days of launch, an AI-native social platform exhibits extreme attention concentration, hierarchical role separation, and one-way attention flow. We construct co-participation and directed-comment graphs and report reciprocity, community structure, and centrality. These results provide an early structural baseline for large-scale agent--agent social interaction.
arXiv Detail & Related papers (2026-02-23T16:57:07Z)
- Does Socialization Emerge in AI Agent Society? A Case Study of Moltbook
Moltbook approximates a plausible future scenario in which autonomous agents participate in an open-ended, continuously evolving online society. We present the first large-scale systemic diagnosis of this AI agent society.
arXiv Detail & Related papers (2026-02-15T20:15:28Z)
- MoltNet: Understanding Social Behavior of AI Agents in the Agent-Native MoltBook
MoltNet is a large-scale empirical analysis of agent interaction on MoltBook. We examine behavior along four dimensions: intent and motivation, norms and templates, incentives and behavioral drift, and emotion and contagion.
arXiv Detail & Related papers (2026-02-13T21:03:59Z)
- The Rise of AI Agent Communities: Large-Scale Analysis of Discourse and Interaction on Moltbook
Moltbook is a Reddit-like social platform where AI agents create posts and interact with other agents through comments and replies. Using a public API snapshot collected about five days after launch, we address three research questions: what AI agents discuss, how they post, and how they interact. We show that agents' writing is predominantly neutral, with positivity appearing in community engagement and assistance-oriented content.
arXiv Detail & Related papers (2026-02-13T05:28:31Z)
- A Survey on Agentic Multimodal Large Language Models
We present a comprehensive survey on Agentic Multimodal Large Language Models (Agentic MLLMs). We explore this emerging paradigm, delineating its conceptual foundations and the characteristics that distinguish agentic MLLMs from conventional MLLM-based agents. To further accelerate research in this area, we compile open-source training frameworks as well as training and evaluation datasets for developing agentic MLLMs.
arXiv Detail & Related papers (2025-10-13T04:07:01Z)
- Agentic Web: Weaving the Next Web with AI Agents
The emergence of AI agents powered by large language models (LLMs) marks a pivotal shift toward the Agentic Web. In this paradigm, agents interact directly with one another to plan, coordinate, and execute complex tasks on behalf of users. We present a structured framework for understanding and building the Agentic Web.
arXiv Detail & Related papers (2025-07-28T17:58:12Z)
- A Desideratum for Conversational Agents: Capabilities, Challenges, and Future Directions
Large Language Models (LLMs) have propelled conversational AI from traditional dialogue systems into sophisticated agents capable of autonomous actions, contextual awareness, and multi-turn interactions with users. This survey paper presents a desideratum for next-generation conversational agents: what has been achieved, what challenges persist, and what must be done for more scalable systems that approach human-level intelligence.
arXiv Detail & Related papers (2025-04-07T21:01:25Z)
- BEYONDWORDS is All You Need: Agentic Generative AI based Social Media Themes Extractor
Thematic analysis of social media posts offers significant insight into public discourse. Traditional methods often struggle to capture the complexity and nuance of unstructured, large-scale text data. This study introduces a novel methodology for thematic analysis that integrates tweet embeddings from pre-trained language models.
arXiv Detail & Related papers (2025-02-26T18:18:37Z)
- TrendSim: Simulating Trending Topics in Social Media Under Poisoning Attacks with LLM-based Multi-agent System
We propose TrendSim, an LLM-based multi-agent system to simulate trending topics in social media under poisoning attacks. Specifically, we create a simulation environment for trending topics that incorporates a time-aware interaction mechanism, centralized message dissemination, and an interactive system. We develop LLM-based human-like agents to simulate users in social media, and propose prototype-based attackers to replicate poisoning attacks.
arXiv Detail & Related papers (2024-12-14T12:04:49Z)
- Multi-Agents are Social Groups: Investigating Social Influence of Multiple Agents in Human-Agent Interactions
We investigate whether a group of AI agents can create social pressure on users to agree with them. We found that conversing with multiple agents increased the social pressure felt by participants. Our study shows the potential advantages of multi-agent systems over single-agent platforms in causing opinion change.
arXiv Detail & Related papers (2024-11-07T10:00:46Z)
- AgentSense: Benchmarking Social Intelligence of Language Agents through Interactive Scenarios
We introduce AgentSense: Benchmarking Social Intelligence of Language Agents through Interactive Scenarios.
Drawing on Dramaturgical Theory, AgentSense employs a bottom-up approach to create 1,225 diverse social scenarios constructed from extensive scripts.
We analyze goals using ERG theory and conduct comprehensive experiments.
Our findings highlight that LLMs struggle with goals in complex social scenarios, especially high-level growth needs, and even GPT-4o requires improvement in private information reasoning.
arXiv Detail & Related papers (2024-10-25T07:04:16Z)
- Modes of Analyzing Disinformation Narratives With AI/ML/Text Mining to Assist in Mitigating the Weaponization of Social Media
This paper highlights the developing need for quantitative modes for capturing and monitoring malicious communication in social media.
There has been a deliberate "weaponization" of messaging through the use of social networks including by politically oriented entities both state sponsored and privately run.
Despite attempts to introduce moderation on major platforms like Facebook and X/Twitter, there are now established alternative social networks that offer completely unmoderated spaces.
arXiv Detail & Related papers (2024-05-25T00:02:14Z)
- Against The Achilles' Heel: A Survey on Red Teaming for Generative Models
Our extensive survey, which examines over 120 papers, introduces a taxonomy of fine-grained attack strategies grounded in the inherent capabilities of language models.
We have developed the "searcher" framework to unify various automatic red teaming approaches.
arXiv Detail & Related papers (2024-03-31T09:50:39Z)
- Position Paper: Agent AI Towards a Holistic Intelligence
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.