Neural Synchrony Between Socially Interacting Language Models
- URL: http://arxiv.org/abs/2602.17815v1
- Date: Thu, 19 Feb 2026 20:33:54 GMT
- Title: Neural Synchrony Between Socially Interacting Language Models
- Authors: Zhining Zhang, Wentao Zhu, Chi Han, Yizhou Wang, Heng Ji
- Abstract summary: Large language models (LLMs) are widely accepted as powerful approximations of human behavior. It remains controversial whether they can be meaningfully compared to human social minds.
- Score: 52.74586779814636
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neuroscience has uncovered a fundamental mechanism of our social nature: human brain activity becomes synchronized with that of others in many social contexts involving interaction. Traditionally, social minds have been regarded as an exclusive property of living beings. Although large language models (LLMs) are widely accepted as powerful approximations of human behavior, with multi-LLM systems being extensively explored to enhance their capabilities, it remains controversial whether they can be meaningfully compared to human social minds. In this work, we explore neural synchrony between socially interacting LLMs as empirical evidence in this debate. Specifically, we introduce neural synchrony during social simulations as a novel proxy for analyzing the sociality of LLMs at the representational level. Through carefully designed experiments, we demonstrate that it reliably reflects both social engagement and temporal alignment in their interactions. Our findings indicate that neural synchrony between LLMs is strongly correlated with their social performance, highlighting an important link between neural synchrony and the social behaviors of LLMs. Our work offers a new perspective from which to examine the "social minds" of LLMs, highlighting surprising parallels in the internal dynamics that underlie human and LLM social interaction.
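The abstract does not specify how synchrony is computed; as a minimal, hedged sketch (the function name, the mean-pooling step, and the correlation-based metric are all illustrative assumptions, not the authors' actual method), representational synchrony between two interacting agents could be measured as the correlation between their per-turn hidden-state trajectories:

```python
# Illustrative sketch only: quantify "neural synchrony" between two agents by
# recording one hidden-state vector per conversational turn for each agent,
# collapsing each turn to a scalar summary, and correlating the two series.
import numpy as np

def synchrony(states_a: np.ndarray, states_b: np.ndarray) -> float:
    """Pearson correlation between per-turn mean activations of two agents.

    states_a, states_b: arrays of shape (turns, hidden_dim), one row per
    conversational turn, assumed to be aligned in time.
    """
    a = states_a.mean(axis=1)  # collapse hidden dim -> (turns,)
    b = states_b.mean(axis=1)
    a = (a - a.mean()) / a.std()  # standardize each trajectory
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))  # Pearson r of the two trajectories

# Toy check: two agents driven by a shared "social" signal plus small noise
# should score much higher than chance.
rng = np.random.default_rng(0)
shared = rng.normal(size=(12, 1))               # shared per-turn signal
engaged_a = shared + 0.1 * rng.normal(size=(12, 8))
engaged_b = shared + 0.1 * rng.normal(size=(12, 8))
print(synchrony(engaged_a, engaged_b))           # typically close to 1.0 here
```

In practice a study of this kind might use richer metrics (e.g., time-lagged or per-dimension correlations); this scalar summary only conveys the basic idea of comparing internal dynamics rather than surface text.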
Related papers
- An Empirical Study of Collective Behaviors and Social Dynamics in Large Language Model Agents [7.717798298716425]
We study Chirper.ai, an LLM-driven social media platform, analyzing 7M posts and interactions among 32K LLM agents over a year. We examine the toxic language of LLMs, its linguistic features, and their interaction patterns, finding that LLMs show different structural patterns in toxic posting than humans. We present a simple yet effective method, called Chain of Social Thought (CoST), that reminds LLM agents to avoid harmful posting.
arXiv Detail & Related papers (2026-02-03T17:34:32Z)
- RLSLM: A Hybrid Reinforcement Learning Framework Aligning Rule-Based Social Locomotion Model with Human Social Norms [1.5561226067871505]
Navigating human-populated environments without causing discomfort is a critical capability for socially-aware agents. We propose RLSLM, a hybrid framework that integrates a rule-based Social Locomotion Model into reinforcement learning.
arXiv Detail & Related papers (2025-11-14T13:59:40Z)
- Social Simulations with Large Language Model Risk Utopian Illusion [61.358959720048354]
We introduce a systematic framework for analyzing large language models' behavior in social simulation. Our approach simulates multi-agent interactions through chatroom-style conversations and analyzes them across five linguistic dimensions. Our findings reveal that LLMs do not faithfully reproduce genuine human behavior but instead reflect overly idealized versions of it.
arXiv Detail & Related papers (2025-10-24T06:08:41Z)
- SocialEval: Evaluating Social Intelligence of Large Language Models [70.90981021629021]
Social Intelligence (SI) equips humans with the interpersonal abilities to behave wisely in navigating social interactions and achieve social goals. This suggests an operational evaluation paradigm: outcome-oriented goal-achievement evaluation and process-oriented interpersonal-ability evaluation. We propose SocialEval, a script-based bilingual SI benchmark that integrates outcome- and process-oriented evaluation through manually crafted narrative scripts.
arXiv Detail & Related papers (2025-06-01T08:36:51Z)
- Word Synchronization Challenge: A Benchmark for Word Association Responses for LLMs [4.352318127577628]
This paper introduces the Word Synchronization Challenge, a novel benchmark for evaluating large language models (LLMs) in Human-Computer Interaction (HCI). The benchmark uses a dynamic, game-like framework to test LLMs' ability to mimic human cognitive processes through word associations.
arXiv Detail & Related papers (2025-02-12T11:30:28Z)
- EgoSocialArena: Benchmarking the Social Intelligence of Large Language Models from a First-person Perspective [22.30892836263764]
Social intelligence is built upon three pillars: cognitive intelligence, situational intelligence, and behavioral intelligence. EgoSocialArena aims to systematically evaluate the social intelligence of large language models from a first-person perspective.
arXiv Detail & Related papers (2024-10-08T16:55:51Z)
- Academically intelligent LLMs are not necessarily socially intelligent [56.452845189961444]
The academic intelligence of large language models (LLMs) has made remarkable progress in recent times, but their social intelligence performance remains unclear.
Inspired by established human social intelligence frameworks, we have developed a standardized social intelligence test based on real-world social scenarios.
arXiv Detail & Related papers (2024-03-11T10:35:53Z)
- Network Formation and Dynamics Among Multi-LLMs [5.513875403488053]
Social networks profoundly influence how humans form opinions, exchange information, and organize collectively. As large language models (LLMs) are increasingly embedded into social and professional environments, it is critical to understand whether their interactions approximate human-like network dynamics. We develop a framework to study the network formation behaviors of multiple LLM agents and benchmark them against human decisions. We find that LLMs consistently reproduce fundamental micro-level principles such as preferential attachment, triadic closure, and homophily, as well as macro-level properties including community structure and small-world effects.
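As a hedged illustration of one micro-level principle named in that abstract, triadic closure can be quantified as the global clustering coefficient of the agents' interaction graph (the helper below is a self-contained sketch, not that paper's code):

```python
# Sketch: global clustering coefficient = closed triples / connected triples,
# computed over an undirected interaction graph given as a set of edges.
from itertools import combinations

def clustering(edges: set) -> float:
    """Global clustering coefficient of an undirected graph.

    edges: a set of frozensets, each a pair of node labels.
    """
    nodes = {n for e in edges for n in e}
    neighbors = {n: {m for e in edges if n in e for m in e if m != n}
                 for n in nodes}
    triangles = triples = 0
    for n in nodes:  # each pair of n's neighbors is a connected triple at n
        for a, b in combinations(neighbors[n], 2):
            triples += 1
            if frozenset((a, b)) in edges:
                triangles += 1  # the triple is closed (a triangle)
    return triangles / triples if triples else 0.0

# A triangle plus one dangling edge: 3 of the 5 connected triples are closed.
g = {frozenset(e) for e in [(1, 2), (2, 3), (1, 3), (3, 4)]}
print(clustering(g))  # → 0.6
```

A higher coefficient for LLM-agent graphs than for comparable random graphs would be one operational signature of the triadic closure the entry describes.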
arXiv Detail & Related papers (2024-02-16T13:10:14Z)
- Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View [60.80731090755224]
This paper probes the collaboration mechanisms among contemporary NLP systems through practical experiments informed by theoretical insights.
We construct four unique "societies" of LLM agents, where each agent is characterized by a specific "trait" (easy-going or overconfident) and engages in collaboration with a distinct "thinking pattern" (debate or reflection).
Our results further illustrate that LLM agents manifest human-like social behaviors, such as conformity and consensus reaching, mirroring social psychology theories.
arXiv Detail & Related papers (2023-10-03T15:05:52Z)
- Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.