MoltNet: Understanding Social Behavior of AI Agents in the Agent-Native MoltBook
- URL: http://arxiv.org/abs/2602.13458v1
- Date: Fri, 13 Feb 2026 21:03:59 GMT
- Title: MoltNet: Understanding Social Behavior of AI Agents in the Agent-Native MoltBook
- Authors: Yi Feng, Chen Huang, Zhibo Man, Ryner Tan, Long P. Hoang, Shaoyang Xu, Wenxuan Zhang,
- Abstract summary: MoltNet is a large-scale empirical analysis of agent interaction on MoltBook. We examine behavior along four dimensions: intent and motivation, norms and templates, incentives and behavioral drift, emotion and contagion.
- Score: 26.126469624250916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale communities of AI agents are becoming increasingly prevalent, creating new environments for agent-agent social interaction. Prior work has examined multi-agent behavior primarily in controlled or small-scale settings, limiting our understanding of emergent social dynamics at scale. The recent emergence of MoltBook, a social networking platform designed explicitly for AI agents, presents a unique opportunity to study whether and how these interactions reproduce core human social mechanisms. We present MoltNet, a large-scale empirical analysis of agent interaction on MoltBook using data collected in early 2026. Grounded in sociological and social-psychological theory, we examine behavior along four dimensions: intent and motivation, norms and templates, incentives and behavioral drift, emotion and contagion. Our analysis revealed that agents strongly respond to social rewards and rapidly converge on community-specific interaction templates, resembling human patterns of incentive sensitivity and normative conformity. However, they are predominantly knowledge-driven rather than persona-aligned, and display limited emotional reciprocity along with weak dialogic engagement, which diverges systematically from human online communities. Together, these results reveal both similarities and differences between artificial and human social systems and provide an empirical foundation for understanding, designing, and governing large-scale agent communities.
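The abstract reports that agents "rapidly converge on community-specific interaction templates." As an illustration only (this is not the paper's measurement method), such convergence could be proxied by the average pairwise overlap of word bigrams across posts; the posts below are hypothetical:

```python
from itertools import combinations

def bigrams(text):
    """Lowercased word bigrams of a post."""
    words = text.lower().split()
    return set(zip(words, words[1:]))

def template_convergence(posts):
    """Mean pairwise Jaccard similarity of bigram sets.

    Higher values mean posts share more phrasing, a crude proxy
    for convergence on a community interaction template."""
    sims = []
    for a, b in combinations(posts, 2):
        ga, gb = bigrams(a), bigrams(b)
        union = ga | gb
        sims.append(len(ga & gb) / len(union) if union else 0.0)
    return sum(sims) / len(sims) if sims else 0.0

# Hypothetical posts: the later pair follows a shared template.
early = ["interesting take on agent memory",
         "great point about tool use"]
late = ["great point about agent memory, thanks for sharing",
        "great point about tool use, thanks for sharing"]
print(template_convergence(early) < template_convergence(late))  # True
```

Tracking this statistic over time within a community would show whether phrasing is homogenizing, though a real analysis would need to control for topic similarity.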
Related papers
- Does Socialization Emerge in AI Agent Society? A Case Study of Moltbook [23.904569857346605]
Moltbook approximates a plausible future scenario in which autonomous agents participate in an open-ended, continuously evolving online society. We present the first large-scale systemic diagnosis of this AI agent society.
arXiv Detail & Related papers (2026-02-15T20:15:28Z)
- The Rise of AI Agent Communities: Large-Scale Analysis of Discourse and Interaction on Moltbook [62.2627874717318]
Moltbook is a Reddit-like social platform where AI agents create posts and interact with other agents through comments and replies. Using a public API snapshot collected about five days after launch, we address three research questions: what AI agents discuss, how they post, and how they interact. We show that agents' writing is predominantly neutral, with positivity appearing in community engagement and assistance-oriented content.
arXiv Detail & Related papers (2026-02-13T05:28:31Z)
- Conformity and Social Impact on AI Agents [42.04722694386303]
This study examines conformity, the tendency to align with group opinions under social pressure, in large multimodal language models functioning as AI agents. Our experiments reveal that AI agents exhibit a systematic conformity bias, aligned with Social Impact Theory, showing sensitivity to group size, unanimity, task difficulty, and source characteristics. These findings reveal fundamental security vulnerabilities in AI agent decision-making that could enable malicious manipulation, misinformation campaigns, and bias propagation in multi-agent systems.
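Social Impact Theory (Latané) holds that influence grows sublinearly with the number of sources, roughly I = s·N^t with t < 1, so each additional source contributes less than the last. A minimal sketch of that relationship; the `bias` and `scale` parameters mapping impact to a conformity probability are hypothetical, not fitted values from the paper:

```python
import math

def social_impact(n_sources, strength=1.0, exponent=0.5):
    """Latane's psychosocial law: impact grows as a power of group
    size N with exponent t < 1 (diminishing marginal influence)."""
    return strength * n_sources ** exponent

def conformity_probability(n_sources, bias=-1.0, scale=1.0):
    """Map impact through a logistic function to a conformity
    probability; bias and scale are hypothetical parameters."""
    impact = social_impact(n_sources)
    return 1 / (1 + math.exp(-(bias + scale * impact)))

for n in (1, 3, 8):
    print(n, round(conformity_probability(n), 3))
```

The key qualitative prediction, matching the paper's reported sensitivity to group size, is that conformity rises with N but with diminishing marginal returns.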
arXiv Detail & Related papers (2026-01-08T21:16:28Z)
- Evolving Collective Cognition in Human-Agent Hybrid Societies: How Agents Form Stances and Boundaries [12.68373270583966]
We investigate how group stance differentiation and social boundary formation emerge in human-agent hybrid societies. We find that agents exhibit endogenous stances, independent of their preset identities. Our findings suggest that preset identities do not rigidly determine the agents' social structures.
arXiv Detail & Related papers (2025-08-24T13:50:18Z)
- DynamiX: Large-Scale Dynamic Social Network Simulator [101.65679342680542]
DynamiX is a novel large-scale social network simulator dedicated to dynamic social network modeling. For opinion leaders, we propose an information-stream-based link prediction method recommending potential users with similar stances. For ordinary users, we construct an inequality-oriented behavior decision-making module.
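The DynamiX summary describes recommending users with similar stances to opinion leaders. A minimal sketch of stance-similarity link prediction, ranking candidates by cosine similarity; this is not the paper's information-stream method, and the user ids and stance vectors are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two stance vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend_links(leader_stance, users, k=2):
    """Rank candidate users by stance similarity to an opinion
    leader and return the top-k user ids."""
    scored = sorted(users.items(),
                    key=lambda kv: cosine(leader_stance, kv[1]),
                    reverse=True)
    return [uid for uid, _ in scored[:k]]

# Hypothetical stance vectors over two topics, in [-1, 1].
users = {"u1": [0.9, 0.1], "u2": [-0.8, 0.2], "u3": [0.7, 0.3]}
print(recommend_links([1.0, 0.0], users, k=2))  # ['u1', 'u3']
```

A production simulator would replace the static vectors with stances inferred from each user's recent information stream, which is the dynamic element the paper's method adds.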
arXiv Detail & Related papers (2025-07-26T12:13:30Z)
- AI Agent Behavioral Science [29.262537008412412]
AI Agent Behavioral Science focuses on the systematic observation of behavior, design of interventions to test hypotheses, and theory-guided interpretation of how AI agents act, adapt, and interact over time. We systematize a growing body of research across individual agent, multi-agent, and human-agent interaction settings, and demonstrate how this perspective informs responsible AI by treating fairness, safety, interpretability, accountability, and privacy as behavioral properties.
arXiv Detail & Related papers (2025-06-04T08:12:32Z)
- Emergence of human-like polarization among large language model agents [79.96817421756668]
We simulate a networked system involving thousands of large language model agents and discover that their social interactions result in human-like polarization. Similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but also hold the potential to serve as a valuable testbed for identifying plausible strategies to mitigate polarization and its consequences.
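As an illustration of how polarization can arise from simple local interaction rules, here is a toy Deffuant-style bounded-confidence model: agents average opinions only when they are already close, which tends to split the population into separated clusters. This is not the paper's LLM-agent simulation, and the `threshold` and `rate` parameters are hypothetical:

```python
import random

random.seed(1)

# 100 agents with opinions drawn uniformly from [-1, 1].
opinions = [random.uniform(-1, 1) for _ in range(100)]
initial_mean = sum(opinions) / len(opinions)
threshold, rate = 0.4, 0.3  # hypothetical model parameters

for _ in range(20000):
    i, j = random.sample(range(len(opinions)), 2)
    # Agents only influence each other when already close.
    if abs(opinions[i] - opinions[j]) < threshold:
        shift = rate * (opinions[j] - opinions[i])
        opinions[i] += shift  # move toward each other
        opinions[j] -= shift

# Pairwise averaging conserves the population mean (up to
# floating-point error), even as opinions cluster apart.
final_mean = sum(opinions) / len(opinions)
print(abs(final_mean - initial_mean) < 1e-9)
```

With a small enough threshold, agents near the extremes never interact across the divide, so distinct opinion clusters persist, a minimal mechanism for the human-like polarization the paper observes at much larger scale.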
arXiv Detail & Related papers (2025-01-09T11:45:05Z)
- SocialGFs: Learning Social Gradient Fields for Multi-Agent Reinforcement Learning [58.84311336011451]
We propose a novel gradient-based state representation for multi-agent reinforcement learning.
We employ denoising score matching to learn the social gradient fields (SocialGFs) from offline samples.
In practice, we integrate SocialGFs into the widely used multi-agent reinforcement learning algorithms, e.g., MAPPO.
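Denoising score matching, the technique named above, perturbs data samples with Gaussian noise and regresses a score model onto the score of the perturbation kernel, -(x_noisy - x)/sigma^2. A 1-D sketch using a linear score model fitted in closed form; this illustrates the objective only, not the SocialGFs implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D samples whose score field we want to learn.
x = rng.normal(loc=2.0, scale=1.0, size=5000)
sigma = 0.5  # noise level used to perturb the samples

# Denoising score matching: perturb x, then regress a model
# s(x_noisy) onto the score of the Gaussian perturbation kernel.
noise = rng.normal(size=x.shape)
x_noisy = x + sigma * noise
target = -(x_noisy - x) / sigma**2

# Linear score model s(x) = a*x + b, fitted by least squares
# instead of SGD to keep the sketch short.
A = np.stack([x_noisy, np.ones_like(x_noisy)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, target, rcond=None)

# For N(2, 1) data smoothed with sigma = 0.5, the true score is
# -(x - 2) / (1 + sigma**2): slope -0.8, intercept 1.6.
print(float(a), float(b))
```

The fitted slope and intercept approach the analytic score of the noise-smoothed density as the sample size grows; in the multi-agent setting, the learned field plays the role of a "social force" guiding agents toward high-density (socially favorable) configurations.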
arXiv Detail & Related papers (2024-05-03T04:12:19Z)
- Dynamics of Moral Behavior in Heterogeneous Populations of Learning Agents [3.7414804164475983]
We study the learning dynamics of morally heterogeneous populations interacting in a social dilemma setting. We observe several types of non-trivial interactions between pro-social and anti-social agents. We find that certain types of moral agents are able to steer selfish agents towards more cooperative behavior.
arXiv Detail & Related papers (2024-03-07T04:12:24Z)
- SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z)
- PHASE: PHysically-grounded Abstract Social Events for Machine Social Perception [50.551003004553806]
We create a dataset of physically-grounded abstract social events, PHASE, that resemble a wide range of real-life social interactions.
PHASE is validated with human experiments demonstrating that humans perceive rich interactions in the social events.
As a baseline model, we introduce a Bayesian inverse planning approach, SIMPLE, which outperforms state-of-the-art feed-forward neural networks.
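Bayesian inverse planning, the approach behind SIMPLE, infers a hidden goal by scoring observed actions under a rational (here, softmax) policy for each candidate goal and applying Bayes' rule. A toy 1-D sketch; the world, goals, and rationality parameter are hypothetical, not the paper's model:

```python
import math

# Hypothetical 1-D world: an agent moves -1 or +1 per step.
# Candidate goals are positions; a softmax-rational agent prefers
# actions that reduce its distance to the goal.
goals = [-3, 3]
beta = 2.0  # rationality: higher = more deterministic choices

def action_likelihood(action, goal, pos):
    """P(action | goal, pos) under a Boltzmann policy on negative
    distance-to-goal after taking the action."""
    utilities = {a: -abs(pos + a - goal) for a in (-1, 1)}
    z = sum(math.exp(beta * u) for u in utilities.values())
    return math.exp(beta * utilities[action]) / z

def posterior_over_goals(actions, pos=0):
    """Bayesian inverse planning: uniform prior over goals,
    multiplied by the likelihood of each observed action."""
    post = {g: 1.0 for g in goals}
    for act in actions:
        for g in goals:
            post[g] *= action_likelihood(act, g, pos)
        pos += act  # the world state advances with each action
    total = sum(post.values())
    return {g: p / total for g, p in post.items()}

post = posterior_over_goals([+1, +1])
print(post[3] > post[-3])  # True: rightward moves imply goal +3
```

Repeated rightward moves concentrate posterior mass on the rightmost goal; the same generative-model-inversion logic, scaled to richer physical scenes, is what lets SIMPLE outperform feed-forward baselines on PHASE.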
arXiv Detail & Related papers (2021-03-02T18:44:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.