Plurals: A System for Guiding LLMs Via Simulated Social Ensembles
- URL: http://arxiv.org/abs/2409.17213v5
- Date: Tue, 19 Nov 2024 15:37:57 GMT
- Title: Plurals: A System for Guiding LLMs Via Simulated Social Ensembles
- Authors: Joshua Ashkinaze, Emily Fry, Narendra Edara, Eric Gilbert, Ceren Budak
- Abstract summary: We introduce Plurals, a system and Python library for pluralistic AI deliberation.
Plurals consists of Agents which deliberate within customizable Structures, with Moderators overseeing deliberation.
Six case studies demonstrate fidelity to theoretical constructs and efficacy.
- Abstract: Recent debates raised concerns that language models may favor certain viewpoints. But what if the solution is not to aim for a 'view from nowhere' but rather to leverage different viewpoints? We introduce Plurals, a system and Python library for pluralistic AI deliberation. Plurals consists of Agents (LLMs, optionally with personas) which deliberate within customizable Structures, with Moderators overseeing deliberation. Plurals is a generator of simulated social ensembles. Plurals integrates with government datasets to create nationally representative personas, includes deliberation templates inspired by deliberative democracy, and allows users to customize both information-sharing structures and deliberation behavior within Structures. Six case studies demonstrate fidelity to theoretical constructs and efficacy. Three randomized experiments show simulated focus groups produced output resonant with an online sample of the relevant audiences (chosen over zero-shot generation in 75% of trials). Plurals is both a paradigm and a concrete system for pluralistic AI. The Plurals library is available at https://github.com/josh-ashkinaze/plurals and will be continually updated.
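The abstract describes an architecture of Agents (LLMs with optional personas) deliberating inside information-sharing Structures, with a Moderator synthesizing the result. The following is a minimal illustrative sketch of that pattern, not the Plurals library's actual API: the class names (`Agent`, `Chain`, `Moderator`), method names, and prompt formats here are hypothetical, and the LLM call is stubbed with a deterministic function so the sketch runs without a model.

```python
# Toy sketch of the Agents / Structures / Moderators pattern from the
# abstract. All names are hypothetical stand-ins, NOT the Plurals API;
# stub_llm() replaces a real LLM call.
from dataclasses import dataclass
from typing import Callable, List

LLM = Callable[[str], str]  # stand-in for a real model call


@dataclass
class Agent:
    """An LLM with an optional persona."""
    persona: str
    llm: LLM

    def respond(self, task: str, prior: List[str]) -> str:
        # Each agent sees the task plus whatever prior views the
        # Structure chooses to share with it.
        prompt = (f"Persona: {self.persona}\n"
                  f"Prior views:\n" + "\n".join(prior) +
                  f"\nTask: {task}")
        return self.llm(prompt)


@dataclass
class Chain:
    """A Structure in which agents deliberate sequentially,
    each seeing all earlier responses."""
    agents: List[Agent]

    def run(self, task: str) -> List[str]:
        responses: List[str] = []
        for agent in self.agents:
            responses.append(agent.respond(task, responses))
        return responses


@dataclass
class Moderator:
    """Oversees deliberation and synthesizes the final output."""
    llm: LLM

    def summarize(self, responses: List[str]) -> str:
        return self.llm("Synthesize:\n" + "\n".join(responses))


def stub_llm(prompt: str) -> str:
    # Deterministic stand-in so the sketch runs without an API key:
    # echoes the first line of the prompt it received.
    return f"[reply to: {prompt.splitlines()[0]}]"


agents = [Agent("rural voter", stub_llm), Agent("urban renter", stub_llm)]
responses = Chain(agents).run("Should transit be subsidized?")
summary = Moderator(stub_llm).summarize(responses)
print(len(responses))  # 2
```

Swapping `Chain` for other sharing patterns (everyone-sees-everyone, debate pairs, etc.) is what the abstract means by customizable information-sharing Structures; the actual library's interfaces are documented at the GitHub link above.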
Related papers
- Modular Pluralism: Pluralistic Alignment via Multi-LLM Collaboration [84.47037877922293]
Large language models (LLMs) struggle to model diverse preferences across cultures, demographics, and communities.
We propose Modular Pluralism, a modular framework based on multi-LLM collaboration for pluralistic alignment.
We evaluate Modular Pluralism with six tasks and four datasets featuring questions/instructions with value-laden and perspective-informed responses.
arXiv Detail & Related papers (2024-06-22T22:07:40Z)
- A Roadmap to Pluralistic Alignment [49.29107308098236]
We propose a roadmap to pluralistic alignment, specifically using language models as a test bed.
We identify and formalize three possible ways to define and operationalize pluralism in AI systems.
We argue that current alignment techniques may be fundamentally limited for pluralistic AI.
arXiv Detail & Related papers (2024-02-07T18:21:17Z)
- SADAS: A Dialogue Assistant System Towards Remediating Norm Violations in Bilingual Socio-Cultural Conversations [56.31816995795216]
Socially-Aware Dialogue Assistant System (SADAS) is designed to ensure that conversations unfold with respect and understanding.
Our system's novel architecture includes: (1) identifying the categories of norms present in the dialogue, (2) detecting potential norm violations, (3) evaluating the severity of these violations, and (4) implementing targeted remedies to rectify the breaches.
arXiv Detail & Related papers (2024-01-29T08:54:21Z)
- Generative Social Choice [30.23505343152816]
We introduce generative social choice, a framework that combines the rigor of social choice theory with the capability of large language models to generate text and extrapolate preferences.
We apply this framework to the problem of generating a slate of statements that is representative of opinions expressed as free-form text.
We find that 93 out of 100 participants feel "mostly" or "perfectly" represented by the slate of five statements we extracted.
arXiv Detail & Related papers (2023-09-03T23:47:21Z)
- Vector Representations of Idioms in Conversational Systems [1.6507910904669727]
We utilize the Potential Idiomatic Expression (PIE)-English idioms corpus for the two tasks that we investigate.
We achieve state-of-the-art (SoTA) result of 98% macro F1 score on the classification task by using the SoTA T5 model.
The results show that the model trained on the idiom corpus generates more fitting responses to prompts containing idioms 71.9% of the time.
arXiv Detail & Related papers (2022-05-07T14:50:05Z)
- Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language [49.82293730925404]
Large foundation models can exhibit unique capabilities depending on the domain of data they are trained on.
We show that this model diversity is symbiotic, and can be leveraged to build AI systems with structured Socratic dialogue.
arXiv Detail & Related papers (2022-04-01T17:43:13Z)
- Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition [80.446770909975]
Linguistic knowledge is of great benefit to scene text recognition.
How to effectively model linguistic rules in end-to-end deep networks remains a research challenge.
We propose an autonomous, bidirectional and iterative ABINet for scene text recognition.
arXiv Detail & Related papers (2021-03-11T06:47:45Z)
- Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision [110.66085917826648]
We develop a technique that extrapolates multimodal alignments to language-only data by contextually mapping language tokens to their related images.
"vokenization" is trained on relatively small image captioning datasets and we then apply it to generate vokens for large language corpora.
Trained with these contextually generated vokens, our visually-supervised language models show consistent improvements over self-supervised alternatives on multiple pure-language tasks.
arXiv Detail & Related papers (2020-10-14T02:11:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.