Plurals: A System for Guiding LLMs Via Simulated Social Ensembles
- URL: http://arxiv.org/abs/2409.17213v4
- Date: Fri, 1 Nov 2024 02:08:03 GMT
- Title: Plurals: A System for Guiding LLMs Via Simulated Social Ensembles
- Authors: Joshua Ashkinaze, Emily Fry, Narendra Edara, Eric Gilbert, Ceren Budak
- Abstract summary: We introduce Plurals, a system and Python library for pluralistic AI deliberation.
Plurals consists of Agents which deliberate within customizable Structures, with Moderators overseeing deliberation.
Six case studies demonstrate fidelity to theoretical constructs and efficacy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent debates raised concerns that language models may favor certain viewpoints. But what if the solution is not to aim for a 'view from nowhere' but rather to leverage different viewpoints? We introduce Plurals, a system and Python library for pluralistic AI deliberation. Plurals consists of Agents (LLMs, optionally with personas) which deliberate within customizable Structures, with Moderators overseeing deliberation. Plurals is a generator of simulated social ensembles. Plurals integrates with government datasets to create nationally representative personas, includes deliberation templates inspired by democratic deliberation theory, and allows users to customize both information-sharing structures and deliberation behavior within Structures. Six case studies demonstrate fidelity to theoretical constructs and efficacy. Three randomized experiments show simulated focus groups produced output resonant with an online sample of the relevant audiences (chosen over zero-shot generation in 75% of trials). Plurals is both a paradigm and a concrete system for pluralistic AI. The Plurals library is available at https://github.com/josh-ashkinaze/plurals and will be continually updated.
Related papers
- Modular Pluralism: Pluralistic Alignment via Multi-LLM Collaboration [84.47037877922293]
Large language models (LLMs) struggle to model diverse preferences across cultures, demographics, and communities.
We propose Modular Pluralism, a modular framework based on multi-LLM collaboration for pluralistic alignment.
We evaluate Modular Pluralism with six tasks and four datasets featuring questions/instructions with value-laden and perspective-informed responses.
arXiv Detail & Related papers (2024-06-22T22:07:40Z) - A Roadmap to Pluralistic Alignment [49.29107308098236]
We propose a roadmap to pluralistic alignment, specifically using language models as a test bed.
We identify and formalize three possible ways to define and operationalize pluralism in AI systems.
We argue that current alignment techniques may be fundamentally limited for pluralistic AI.
arXiv Detail & Related papers (2024-02-07T18:21:17Z) - SADAS: A Dialogue Assistant System Towards Remediating Norm Violations in Bilingual Socio-Cultural Conversations [56.31816995795216]
Socially-Aware Dialogue Assistant System (SADAS) is designed to ensure that conversations unfold with respect and understanding.
Our system's novel architecture includes: (1) identifying the categories of norms present in the dialogue, (2) detecting potential norm violations, (3) evaluating the severity of these violations, and (4) implementing targeted remedies to rectify the breaches.
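The four-stage architecture described above can be sketched as a simple pipeline. The stage functions below are hypothetical stand-ins using toy heuristics; the actual SADAS system would implement each stage with trained models.

```python
def identify_norm_categories(utterance: str) -> list[str]:
    # Stage 1 (stand-in): classify which social norms the utterance touches.
    return ["greeting"] if "hello" in utterance.lower() else ["request"]

def detect_violation(utterance: str, categories: list[str]) -> bool:
    # Stage 2 (stand-in): flag curt requests as potential norm violations.
    return "request" in categories and "please" not in utterance.lower()

def rate_severity(utterance: str) -> str:
    # Stage 3 (stand-in): score how serious the violation is.
    return "low" if len(utterance.split()) < 6 else "moderate"

def remediate(utterance: str) -> str:
    # Stage 4 (stand-in): suggest a politer rephrasing.
    return utterance.rstrip(".!") + ", please."

def norm_pipeline(utterance: str) -> str:
    categories = identify_norm_categories(utterance)
    if detect_violation(utterance, categories):
        severity = rate_severity(utterance)
        return f"({severity} violation) suggested: {remediate(utterance)}"
    return "no violation detected"

print(norm_pipeline("Send me the report now"))
```

The point of the sketch is the staged control flow: categorization gates detection, and severity informs the targeted remedy.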
arXiv Detail & Related papers (2024-01-29T08:54:21Z) - Making sense of spoken plurals [1.80476943513092]
This study focuses on the semantics of noun singulars and their plural inflectional variants in English.
One model (FRACSS) proposes that all singular-plural pairs should be taken into account when predicting plural semantics from singular semantics.
The other model (CCA) argues that conceptualization for plurality depends primarily on the semantic class of the base word.
arXiv Detail & Related papers (2022-07-05T10:44:26Z) - Structured, flexible, and robust: benchmarking and improving large language models towards more human-like behavior in out-of-distribution reasoning tasks [39.39138995087475]
We ask how much of human-like thinking can be captured by learning statistical patterns in language alone.
Our benchmark contains two problem-solving domains (planning and explanation generation) and is designed to require generalization.
We find that humans are far more robust than LLMs on this benchmark.
arXiv Detail & Related papers (2022-05-11T18:14:33Z) - Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language [49.82293730925404]
Large foundation models can exhibit unique capabilities depending on the domain of data they are trained on.
We show that this model diversity is symbiotic, and can be leveraged to build AI systems with structured Socratic dialogue.
arXiv Detail & Related papers (2022-04-01T17:43:13Z) - Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition [80.446770909975]
Linguistic knowledge is of great benefit to scene text recognition.
How to effectively model linguistic rules in end-to-end deep networks remains a research challenge.
We propose an autonomous, bidirectional and iterative ABINet for scene text recognition.
arXiv Detail & Related papers (2021-03-11T06:47:45Z) - Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision [110.66085917826648]
We develop a technique that extrapolates multimodal alignments to language-only data by contextually mapping language tokens to their related images.
"vokenization" is trained on relatively small image captioning datasets and we then apply it to generate vokens for large language corpora.
Trained with these contextually generated vokens, our visually-supervised language models show consistent improvements over self-supervised alternatives on multiple pure-language tasks.
arXiv Detail & Related papers (2020-10-14T02:11:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.