An AI-Powered Framework for Analyzing Collective Idea Evolution in Deliberative Assemblies
- URL: http://arxiv.org/abs/2509.12577v1
- Date: Tue, 16 Sep 2025 02:08:11 GMT
- Title: An AI-Powered Framework for Analyzing Collective Idea Evolution in Deliberative Assemblies
- Authors: Elinor Poole-Dayan, Deb Roy, Jad Kabbara
- Abstract summary: We develop methodologies for empirically analyzing transcripts from a tech-enhanced in-person deliberative assembly. We empirically reconstruct each delegate's evolving perspective throughout the assembly. Our methods contribute novel empirical insights into deliberative processes and demonstrate how LLMs can surface high-resolution dynamics.
- Score: 21.30511809806526
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In an era of increasing societal fragmentation, political polarization, and erosion of public trust in institutions, representative deliberative assemblies are emerging as a promising democratic forum for developing effective policy outcomes on complex global issues. Despite theoretical attention, there remains limited empirical work that systematically traces how specific ideas evolve, are prioritized, or are discarded during deliberation to form policy recommendations. Addressing these gaps, this work poses two central questions: (1) How might we trace the evolution and distillation of ideas into concrete recommendations within deliberative assemblies? (2) How does the deliberative process shape delegate perspectives and influence voting dynamics over the course of the assembly? To address these questions, we develop LLM-based methodologies for empirically analyzing transcripts from a tech-enhanced in-person deliberative assembly. The framework identifies and visualizes the space of expressed suggestions. We also empirically reconstruct each delegate's evolving perspective throughout the assembly. Our methods contribute novel empirical insights into deliberative processes and demonstrate how LLMs can surface high-resolution dynamics otherwise invisible in traditional assembly outputs.
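The abstract's "identifies and visualizes the space of expressed suggestions" step can be illustrated with a minimal sketch. The paper does not specify its models, so the bag-of-words similarity and greedy threshold clustering below are illustrative assumptions standing in for LLM-based embedding and grouping:

```python
from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    """Bag-of-words vector for a suggestion (toy stand-in for an LLM embedding)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_suggestions(suggestions: list[str], threshold: float = 0.35) -> list[int]:
    """Greedily assign each suggestion to the first cluster whose seed
    suggestion is similar enough; otherwise open a new cluster."""
    seeds: list[Counter] = []
    labels: list[int] = []
    for s in suggestions:
        v = vectorize(s)
        for i, seed in enumerate(seeds):
            if cosine(v, seed) >= threshold:
                labels.append(i)
                break
        else:
            seeds.append(v)
            labels.append(len(seeds) - 1)
    return labels

suggestions = [
    "fund community broadband access",
    "expand community broadband funding",
    "require algorithmic transparency reports",
]
print(cluster_suggestions(suggestions))  # → [0, 0, 1]
```

The suggestion texts and the similarity threshold are hypothetical; the point is only the shape of the pipeline: embed each expressed suggestion, then group near-duplicates so the idea space can be visualized and traced over the assembly.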
Related papers
- AgentCDM: Enhancing Multi-Agent Collaborative Decision-Making via ACH-Inspired Structured Reasoning [8.566904810788213]
AgentCDM is a structured framework for enhancing collaborative decision-making in multi-agent systems. It mitigates cognitive biases and shifts decision-making from passive answer selection to active hypothesis evaluation and construction. Experiments on multiple benchmark datasets demonstrate that AgentCDM achieves state-of-the-art performance.
arXiv Detail & Related papers (2025-08-16T09:46:04Z) - Alignment and Safety in Large Language Models: Safety Mechanisms, Training Paradigms, and Emerging Challenges [47.14342587731284]
This survey provides a comprehensive overview of alignment techniques, training protocols, and empirical findings in large language model (LLM) alignment. We analyze the development of alignment methods across diverse paradigms, characterizing the fundamental trade-offs between core alignment objectives. We discuss state-of-the-art techniques, including Direct Preference Optimization (DPO), Constitutional AI, brain-inspired methods, and alignment uncertainty quantification (AUQ).
arXiv Detail & Related papers (2025-07-25T20:52:58Z) - Feature-Based vs. GAN-Based Learning from Demonstrations: When and Why [50.191655141020505]
This survey provides a comparative analysis of feature-based and GAN-based approaches to learning from demonstrations. We argue that the dichotomy between feature-based and GAN-based methods is increasingly nuanced.
arXiv Detail & Related papers (2025-07-08T11:45:51Z) - A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems [93.8285345915925]
Reasoning is a fundamental cognitive process that enables logical inference, problem-solving, and decision-making. With the rapid advancement of large language models (LLMs), reasoning has emerged as a key capability that distinguishes advanced AI systems. We categorize existing methods along two dimensions: (1) Regimes, which define the stage at which reasoning is achieved; and (2) Architectures, which determine the components involved in the reasoning process.
arXiv Detail & Related papers (2025-04-12T01:27:49Z) - AI-Enhanced Deliberative Democracy and the Future of the Collective Will [1.3812010983144802]
We explore AI-based democratic innovations as discovery tools for reasonable representations of a collective will, sense-making, and agreement-seeking. At the same time, we caution against dangerously misguided uses, such as enabling binding decisions, fostering gradual disempowerment, or post-rationalizing political outcomes.
arXiv Detail & Related papers (2025-03-06T00:06:22Z) - Bridging Voting and Deliberation with Algorithms: Field Insights from vTaiwan and Kultur Komitee [1.2277343096128712]
Democratic processes increasingly aim to integrate large-scale voting with face-to-face deliberation. This work introduces new methods that use algorithms and computational tools to bridge online voting with face-to-face deliberation.
arXiv Detail & Related papers (2025-02-07T15:45:13Z) - Aligning AI with Public Values: Deliberation and Decision-Making for Governing Multimodal LLMs in Political Video Analysis [48.14390493099495]
How AI models should handle political topics has been discussed, but it remains challenging and requires better governance. This paper examines the governance of large language models through individual and collective deliberation, focusing on politically sensitive videos.
arXiv Detail & Related papers (2024-09-15T03:17:38Z) - AI-Augmented Brainwriting: Investigating the use of LLMs in group ideation [11.503226612030316]
Generative AI technologies such as large language models (LLMs) have significant implications for creative work.
This paper explores two aspects of integrating LLMs into the creative process - the divergence stage of idea generation, and the convergence stage of evaluation and selection of ideas.
We devised a collaborative group-AI Brainwriting ideation framework, which incorporated an LLM as an enhancement into the group ideation process.
arXiv Detail & Related papers (2024-02-22T21:34:52Z) - The Empty Signifier Problem: Towards Clearer Paradigms for Operationalising "Alignment" in Large Language Models [18.16062736448993]
We address the concept of "alignment" in large language models (LLMs) through the lens of post-structuralist socio-political theory.
We propose a framework that demarcates: 1) which dimensions of model behaviour are considered important, then 2) how meanings and definitions are ascribed to these dimensions.
We aim to foster a culture of transparency and critical evaluation, aiding the community in navigating the complexities of aligning LLMs with human populations.
arXiv Detail & Related papers (2023-10-03T22:02:17Z) - An attention model for the formation of collectives in real-world domains [78.1526027174326]
We consider the problem of forming collectives of agents for real-world applications aligned with Sustainable Development Goals.
We propose a general approach for the formation of collectives based on a novel combination of an attention model and an integer linear program.
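The abstract does not give the integer linear program's formulation, so the sketch below substitutes an exhaustive search for the ILP and invents pairwise synergy scores (which, in the paper's setup, would come from the attention model). All names and values here are illustrative assumptions:

```python
from itertools import combinations

# Hypothetical pairwise synergy scores (stand-ins for attention-model outputs).
synergy = {
    frozenset({"a", "b"}): 0.9,
    frozenset({"a", "c"}): 0.2,
    frozenset({"b", "c"}): 0.4,
    frozenset({"a", "d"}): 0.1,
    frozenset({"b", "d"}): 0.3,
    frozenset({"c", "d"}): 0.8,
}

def team_value(team) -> float:
    """Total pairwise synergy within a candidate collective."""
    return sum(synergy.get(frozenset(p), 0.0) for p in combinations(team, 2))

def best_collective(agents, size):
    """Exhaustive stand-in for the integer program: pick the team of the
    given size that maximizes total synergy."""
    return max(combinations(agents, size), key=team_value)

print(best_collective(["a", "b", "c", "d"], 2))  # → ('a', 'b')
```

Brute force only works for tiny agent pools; the point of the ILP in the paper is precisely to make this selection tractable at real-world scale.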
arXiv Detail & Related papers (2022-04-30T09:15:36Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
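The "agents update perceived effects" step in the abstract above can be sketched as a simple online update replayed over an observed trajectory. The exponential-smoothing form and learning rate below are illustrative assumptions, not the paper's actual estimator:

```python
def replay_belief_updates(outcomes, lr=0.2, prior=0.5):
    """Retrospectively reconstruct an agent's evolving belief about an
    effect (e.g. acceptance success) from the binary outcomes it observed."""
    belief = prior
    trajectory = [belief]
    for y in outcomes:                 # y in {0, 1}: outcome at each step
        belief += lr * (y - belief)    # move the belief toward the observation
        trajectory.append(belief)
    return trajectory

traj = replay_belief_updates([1, 1, 0])
print([round(b, 3) for b in traj])  # → [0.5, 0.6, 0.68, 0.544]
```

Inverse online learning, as described in the abstract, would go the other way: given only the decisions, infer the update process (here, `lr` and `prior`) that best explains them and how it drifts over time.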
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.