Academically intelligent LLMs are not necessarily socially intelligent
- URL: http://arxiv.org/abs/2403.06591v1
- Date: Mon, 11 Mar 2024 10:35:53 GMT
- Title: Academically intelligent LLMs are not necessarily socially intelligent
- Authors: Ruoxi Xu, Hongyu Lin, Xianpei Han, Le Sun, Yingfei Sun
- Abstract summary: The academic intelligence of large language models (LLMs) has made remarkable progress in recent times, but their social intelligence performance remains unclear.
Inspired by established human social intelligence frameworks, we have developed a standardized social intelligence test based on real-world social scenarios.
- Score: 56.452845189961444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The academic intelligence of large language models (LLMs) has made remarkable
progress in recent times, but their social intelligence performance remains
unclear. Inspired by established human social intelligence frameworks,
particularly Daniel Goleman's social intelligence theory, we have developed a
standardized social intelligence test based on real-world social scenarios to
comprehensively assess the social intelligence of LLMs, termed as the
Situational Evaluation of Social Intelligence (SESI). We conducted an extensive
evaluation with 13 recent, popular, and state-of-the-art LLM agents on SESI. The
results indicate that the social intelligence of LLMs still has significant room
for improvement, with superficial friendliness as a primary cause of errors.
Moreover, there exists a relatively low correlation between the social
intelligence and academic intelligence exhibited by LLMs, suggesting that
social intelligence is distinct from academic intelligence for LLMs.
Additionally, while it is observed that LLMs can't "understand" what social
intelligence is, their social intelligence, similar to that of humans, is
influenced by social factors.
Related papers
- Entering Real Social World! Benchmarking the Theory of Mind and Socialization Capabilities of LLMs from a First-person Perspective [22.30892836263764]
In the era of artificial intelligence (AI), especially with the development of large language models (LLMs), we raise an intriguing question.
How do LLMs perform in terms of ToM and socialization capabilities?
We introduce EgoSocialArena, a novel framework designed to evaluate and investigate the ToM and socialization capabilities of LLMs from a first-person perspective.
arXiv Detail & Related papers (2024-10-08T16:55:51Z) - InterIntent: Investigating Social Intelligence of LLMs via Intention Understanding in an Interactive Game Context [27.740204336800687]
Large language models (LLMs) have demonstrated the potential to mimic human social intelligence.
We develop a novel framework, InterIntent, to assess LLMs' social intelligence by mapping their ability to understand and manage intentions in a game setting.
arXiv Detail & Related papers (2024-06-18T02:02:15Z) - Exploring Prosocial Irrationality for LLM Agents: A Social Cognition View [21.341128731357415]
Large language models (LLMs) have been shown to face hallucination issues because the data they are trained on often contains human bias.
We propose CogMir, an open-ended Multi-LLM Agents framework that utilizes hallucination properties to assess and enhance LLM Agents' social intelligence.
arXiv Detail & Related papers (2024-05-23T16:13:33Z) - Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions [67.60397632819202]
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal.
We identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI.
arXiv Detail & Related papers (2024-04-17T02:57:42Z) - SOTOPIA-$π$: Interactive Learning of Socially Intelligent Language Agents [73.35393511272791]
We propose an interactive learning method, SOTOPIA-$π$, to improve the social intelligence of language agents.
This method leverages behavior cloning and self-reinforcement training on social interaction data filtered according to large language model (LLM) ratings.
arXiv Detail & Related papers (2024-03-13T17:17:48Z) - DeSIQ: Towards an Unbiased, Challenging Benchmark for Social Intelligence Understanding [60.84356161106069]
We study the soundness of Social-IQ, a dataset of multiple-choice questions on videos of complex social interactions.
Our analysis reveals that Social-IQ contains substantial biases, which can be exploited by a moderately strong language model.
We introduce DeSIQ, a new challenging dataset, constructed by applying simple perturbations to Social-IQ.
arXiv Detail & Related papers (2023-10-24T06:21:34Z) - SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z) - Influence of External Information on Large Language Models Mirrors Social Cognitive Patterns [51.622612759892775]
Social cognitive theory explains how people learn and acquire knowledge through observing others.
Recent years have witnessed the rapid development of large language models (LLMs).
LLMs, as AI agents, can observe external information, which shapes their cognition and behaviors.
arXiv Detail & Related papers (2023-05-08T16:10:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.