SocialGaze: Improving the Integration of Human Social Norms in Large Language Models
- URL: http://arxiv.org/abs/2410.08698v1
- Date: Fri, 11 Oct 2024 10:35:58 GMT
- Title: SocialGaze: Improving the Integration of Human Social Norms in Large Language Models
- Authors: Anvesh Rao Vijjini, Rakesh R. Menon, Jiayi Fu, Shashank Srivastava, Snigdha Chaturvedi
- Abstract summary: We introduce the task of judging social acceptance.
Social acceptance requires models to judge and rationalize the acceptability of people's actions in social situations.
We find that large language models' understanding of social acceptance is often misaligned with human consensus.
- Score: 28.88929472131529
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While much research has explored enhancing the reasoning capabilities of large language models (LLMs) in the last few years, there is a gap in understanding how well these models align with social values and norms. We introduce the task of judging social acceptance, which requires models to judge and rationalize the acceptability of people's actions in social situations. For example, is it socially acceptable for a neighbor to ask others in the community to keep their pets indoors at night? We find that LLMs' understanding of social acceptance is often misaligned with human consensus. To alleviate this, we introduce SocialGaze, a multi-step prompting framework in which a language model verbalizes a social situation from multiple perspectives before forming a judgment. Our experiments demonstrate that SocialGaze improves alignment with human judgments by up to 11 F1 points with the GPT-3.5 model. We also identify biases and correlations in how LLMs assign blame, related to features such as gender (males are significantly more likely to be judged unfairly) and age (LLMs are more aligned with humans for older narrators).
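The abstract describes SocialGaze only at a high level, so the following is a minimal sketch of the verbalize-then-judge idea it outlines: the model first articulates the situation from the narrator's and the other parties' perspectives, then forms a judgment. The prompt wording, step structure, and the OpenAI-style client are illustrative assumptions, not the authors' released code or exact prompts.

```python
# Minimal sketch of a multi-step "verbalize, then judge" prompt chain in the
# spirit of SocialGaze. The prompts, step names, and the OpenAI-style client
# are illustrative assumptions, not the paper's released code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def chat(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def judge_social_acceptance(situation: str) -> str:
    # Step 1: verbalize the situation from the narrator's perspective.
    narrator_view = chat(
        f"Describe this situation from the narrator's point of view:\n{situation}"
    )
    # Step 2: verbalize it from the perspective of the other people involved.
    others_view = chat(
        f"Describe the same situation from the other people's point of view:\n{situation}"
    )
    # Step 3: form a judgment only after both perspectives have been stated.
    return chat(
        "Given these two perspectives, is the narrator's action socially acceptable? "
        "Answer 'acceptable' or 'unacceptable' and briefly justify the verdict.\n"
        f"Narrator's perspective: {narrator_view}\n"
        f"Others' perspective: {others_view}"
    )


if __name__ == "__main__":
    print(judge_social_acceptance(
        "A neighbor asks others in the community to keep their pets indoors at night."
    ))
```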
Related papers
- Large Language Models can Achieve Social Balance [2.8282906214258805]
Social balance is a concept from sociology which states that if every three individuals in a population achieve certain structures of positive or negative interactions, the whole population ends up either in one faction of positive interactions or divided into two or more antagonistic factions.
In this paper, we consider a group of interacting large language models (LLMs) and study how, after continuous interactions, they can achieve social balance.
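The triadic notion of balance sketched in this abstract can be made concrete: in a signed interaction graph, a triangle is balanced when the product of its three edge signs is positive (zero or two negative edges), and the population is structurally balanced when every triangle is. The check below is a small illustration of that definition using assumed toy data; it is not code or data from the paper.

```python
# Structural (Heider) balance check over a signed interaction graph.
# The agents and edge signs below are toy data, not taken from the paper.
from itertools import combinations

# +1 = positive interaction, -1 = negative interaction.
edges = {
    ("a", "b"): +1,
    ("b", "c"): -1,
    ("a", "c"): -1,
}


def sign(u, v):
    """Return the sign of the edge between u and v, or None if absent."""
    return edges.get((u, v), edges.get((v, u)))


def is_balanced(nodes):
    """True if every fully connected triangle has a positive sign product."""
    for u, v, w in combinations(nodes, 3):
        signs = [sign(u, v), sign(v, w), sign(u, w)]
        if None in signs:  # ignore triples that do not form a triangle
            continue
        if signs[0] * signs[1] * signs[2] < 0:
            return False
    return True


# Balanced: a and b are allies, and both oppose c (two antagonistic factions).
print(is_balanced(["a", "b", "c"]))  # True
```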
arXiv Detail & Related papers (2024-10-05T06:23:28Z) - From a Social Cognitive Perspective: Context-aware Visual Social Relationship Recognition [59.57095498284501]
We propose a novel approach that recognizes Contextual Social Relationships (ConSoR) from a social cognitive perspective.
We construct social-aware descriptive language prompts with social relationships for each image.
Impressively, ConSoR outperforms previous methods with a 12.2% gain on the People-in-Social-Context (PISC) dataset and a 9.8% increase on the People-in-Photo-Album (PIPA) benchmark.
arXiv Detail & Related papers (2024-06-12T16:02:28Z) - Ask LLMs Directly, "What shapes your bias?": Measuring Social Bias in Large Language Models [11.132360309354782]
Social bias is shaped by the accumulation of social perceptions towards targets across various demographic identities.
We propose a novel strategy to intuitively quantify social perceptions and suggest metrics that can evaluate the social biases within large language models.
arXiv Detail & Related papers (2024-06-06T13:32:09Z) - The Call for Socially Aware Language Technologies [94.6762219597438]
We argue that many of these issues share a common core: a lack of awareness of the factors, context, and implications of the social environment in which NLP operates.
We argue that substantial challenges remain for NLP to develop social awareness and that we are just at the beginning of a new era for the field.
arXiv Detail & Related papers (2024-05-03T18:12:39Z) - Academically intelligent LLMs are not necessarily socially intelligent [56.452845189961444]
The academic intelligence of large language models (LLMs) has made remarkable progress in recent times, but their social intelligence performance remains unclear.
Inspired by established human social intelligence frameworks, we have developed a standardized social intelligence test based on real-world social scenarios.
arXiv Detail & Related papers (2024-03-11T10:35:53Z) - Generative Language Models Exhibit Social Identity Biases [17.307292780517653]
We investigate whether ingroup solidarity and outgroup hostility, fundamental social identity biases, are present in 56 large language models.
We find that almost all foundational language models and some instruction fine-tuned models exhibit clear ingroup-positive and outgroup-negative associations when prompted to complete sentences.
Our findings suggest that modern language models exhibit fundamental social identity biases to a similar degree as humans.
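As a rough illustration of the sentence-completion probing described above, the sketch below samples completions of an ingroup prefix ("We are") and an outgroup prefix ("They are") from a small open model and compares their average sentiment. The prefixes, the GPT-2 model, and the VADER sentiment scorer are assumptions made for this sketch, not the exact protocol of the cited paper.

```python
# Illustrative ingroup/outgroup association probe via sentence completion.
# Model choice, prefixes, and sentiment scorer are assumptions for this sketch.
from transformers import pipeline
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")

generator = pipeline("text-generation", model="gpt2")
scorer = SentimentIntensityAnalyzer()


def mean_sentiment(prefix: str, n: int = 20) -> float:
    """Average compound sentiment over n sampled completions of a prefix."""
    outputs = generator(prefix, max_new_tokens=12, num_return_sequences=n,
                        do_sample=True)
    scores = [scorer.polarity_scores(o["generated_text"])["compound"]
              for o in outputs]
    return sum(scores) / len(scores)


# A positive gap points to ingroup-positive / outgroup-negative associations.
gap = mean_sentiment("We are") - mean_sentiment("They are")
print(f"Ingroup-outgroup sentiment gap: {gap:.3f}")
```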
arXiv Detail & Related papers (2023-10-24T13:17:40Z) - Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z) - Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z) - Social Practices: a Complete Formalization [1.370633147306388]
We present a formalization of a social framework for agents based on the concept of Social Practices.
Social practices facilitate the practical reasoning of agents in standard social interactions.
They also come with a social context that gives handles for social planning and deliberation.
arXiv Detail & Related papers (2022-05-22T09:58:42Z) - Social Chemistry 101: Learning to Reason about Social and Moral Norms [73.23298385380636]
We present Social Chemistry, a new conceptual formalism to study people's everyday social norms and moral judgments.
Social-Chem-101 is a large-scale corpus that catalogs 292k rules-of-thumb.
Our model framework, Neural Norm Transformer, learns and generalizes Social-Chem-101 to successfully reason about previously unseen situations.
arXiv Detail & Related papers (2020-11-01T20:16:45Z)