SocialMaze: A Benchmark for Evaluating Social Reasoning in Large Language Models
- URL: http://arxiv.org/abs/2505.23713v1
- Date: Thu, 29 May 2025 17:47:36 GMT
- Title: SocialMaze: A Benchmark for Evaluating Social Reasoning in Large Language Models
- Authors: Zixiang Xu, Yanbo Wang, Yue Huang, Jiayi Ye, Haomin Zhuang, Zirui Song, Lang Gao, Chenxi Wang, Zhaorun Chen, Yujun Zhou, Sixian Li, Wang Pan, Yue Zhao, Jieyu Zhao, Xiangliang Zhang, Xiuying Chen
- Abstract summary: We introduce SocialMaze, a new benchmark specifically designed to evaluate social reasoning. SocialMaze systematically incorporates three core challenges: deep reasoning, dynamic interaction, and information uncertainty. It provides six diverse tasks across three key settings: social reasoning games, daily-life interactions, and digital community platforms.
- Score: 41.68365456601248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) are increasingly applied to socially grounded tasks, such as online community moderation, media content analysis, and social reasoning games. Success in these contexts depends on a model's social reasoning ability - the capacity to interpret social contexts, infer others' mental states, and assess the truthfulness of presented information. However, there is currently no systematic evaluation framework that comprehensively assesses the social reasoning capabilities of LLMs. Existing efforts often oversimplify real-world scenarios and consist of tasks that are too basic to challenge advanced models. To address this gap, we introduce SocialMaze, a new benchmark specifically designed to evaluate social reasoning. SocialMaze systematically incorporates three core challenges: deep reasoning, dynamic interaction, and information uncertainty. It provides six diverse tasks across three key settings: social reasoning games, daily-life interactions, and digital community platforms. Both automated and human validation are used to ensure data quality. Our evaluation reveals several key insights: models vary substantially in their ability to handle dynamic interactions and integrate temporally evolving information; models with strong chain-of-thought reasoning perform better on tasks requiring deeper inference beyond surface-level cues; and model reasoning degrades significantly under uncertainty. Furthermore, we show that targeted fine-tuning on curated reasoning examples can greatly improve model performance in complex social scenarios. The dataset is publicly available at: https://huggingface.co/datasets/MBZUAI/SocialMaze
Related papers
- R^3-VQA: "Read the Room" by Video Social Reasoning [26.694917467429207]
"Read the room" is a significant social reasoning capability in human daily life. We contribute a valuable, high-quality, and comprehensive video dataset named R3-VQA.
arXiv Detail & Related papers (2025-05-07T05:55:45Z) - Social Genome: Grounded Social Reasoning Abilities of Multimodal Models [61.88413918026431]
Social Genome is the first benchmark for fine-grained, grounded social reasoning abilities of multimodal models. It contains 272 videos of interactions and 1,486 human-annotated reasoning traces related to inferences about these interactions. Social Genome is also the first modeling challenge to study external knowledge in social reasoning.
arXiv Detail & Related papers (2025-02-21T00:05:40Z) - SoMeLVLM: A Large Vision Language Model for Social Media Processing [78.47310657638567]
We introduce a Large Vision Language Model for Social Media Processing (SoMeLVLM).
SoMeLVLM is a cognitive framework equipped with five key capabilities including knowledge & comprehension, application, analysis, evaluation, and creation.
Our experiments demonstrate that SoMeLVLM achieves state-of-the-art performance in multiple social media tasks.
arXiv Detail & Related papers (2024-02-20T14:02:45Z) - Social-LLM: Modeling User Behavior at Scale using Language Models and Social Network Data [13.660150473547766]
We introduce a novel approach tailored for modeling social network data in user detection tasks.
Our method integrates localized social network interactions with the capabilities of large language models.
We conduct a thorough evaluation of our method across seven real-world social network datasets.
arXiv Detail & Related papers (2023-12-31T05:13:13Z) - DeSIQ: Towards an Unbiased, Challenging Benchmark for Social Intelligence Understanding [60.84356161106069]
We study the soundness of Social-IQ, a dataset of multiple-choice questions on videos of complex social interactions.
Our analysis reveals that Social-IQ contains substantial biases, which can be exploited by a moderately strong language model.
We introduce DeSIQ, a new challenging dataset, constructed by applying simple perturbations to Social-IQ.
arXiv Detail & Related papers (2023-10-24T06:21:34Z) - SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z) - Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z) - Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z) - Social Processes: Self-Supervised Forecasting of Nonverbal Cues in Social Conversations [22.302509912465077]
We take the first step in the direction of a bottom-up self-supervised approach in the domain of social human interactions.
We formulate the task of Social Cue Forecasting to leverage the larger amount of unlabeled low-level behavior cues.
We propose the Social Process (SP) models: socially aware sequence-to-sequence (Seq2Seq) models within the Neural Process (NP) family.
arXiv Detail & Related papers (2021-07-28T18:01:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.