AI-Generated Compromises for Coalition Formation
- URL: http://arxiv.org/abs/2506.06837v3
- Date: Sat, 09 Aug 2025 18:00:01 GMT
- Title: AI-Generated Compromises for Coalition Formation
- Authors: Eyal Briman, Ehud Shapiro, Nimrod Talmon
- Abstract summary: Finding compromises between agent proposals is fundamental to AI subfields such as argumentation, mediation, and negotiation. We formalize a model that incorporates agent bounded rationality and uncertainty, and develop AI methods to generate compromise proposals. Our approach uses natural language processing techniques and large language models to induce a semantic metric space over text.
- Score: 11.443736581068599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The challenge of finding compromises between agent proposals is fundamental to AI subfields such as argumentation, mediation, and negotiation. Building on this tradition, Elkind et al. (2021) introduced a process for coalition formation that seeks majority-supported proposals preferable to the status quo, using a metric space where each agent has an ideal point. A crucial step in this process involves identifying compromise proposals around which agent coalitions can unite. How to effectively find such compromise proposals remains an open question. We address this gap by formalizing a model that incorporates agent bounded rationality and uncertainty, and by developing AI methods to generate compromise proposals. We focus on the domain of collaborative document writing, such as the democratic drafting of a community constitution. Our approach uses natural language processing techniques and large language models to induce a semantic metric space over text. Based on this space, we design algorithms to suggest compromise points likely to receive broad support. To evaluate our methods, we simulate coalition formation processes and show that AI can facilitate large-scale democratic text editing, a domain where traditional tools are limited.
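The abstract's pipeline (embed proposals into a metric space, then pick compromise points likely to win broad support) can be illustrated with a minimal sketch. This is not the authors' implementation: bag-of-words count vectors stand in for the LLM-induced embeddings, cosine distance stands in for the learned semantic metric, and the compromise rule is a simple 1-center heuristic (minimize the maximum distance to any agent's ideal point); all names are illustrative.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words token counts.
    (The paper induces its metric space with NLP/LLM embeddings;
    this stands in for them.)"""
    return Counter(text.lower().split())

def cosine_distance(a, b):
    """Distance in the induced metric space: 1 - cosine similarity."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

def suggest_compromise(ideal_points, candidates):
    """1-center heuristic: pick the candidate text minimizing the
    maximum distance to any agent's ideal point, so no coalition
    member is left far from the proposal."""
    ideals = [embed(t) for t in ideal_points]
    return min(candidates,
               key=lambda c: max(cosine_distance(embed(c), i)
                                 for i in ideals))

agents = ["taxes should fund public schools",
          "taxes should fund public parks"]
candidates = ["taxes should fund public schools and parks",
              "taxes should be abolished"]
print(suggest_compromise(agents, candidates))
# -> "taxes should fund public schools and parks"
```

Under this heuristic the merged proposal wins because it sits close to both agents' ideal points, while the outlier candidate is far from each; the paper's algorithms play the same role over a far richer semantic space.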
Related papers
- AI-Generated Compromises for Coalition Formation: Modeling, Simulation, and a Textual Case Study [7.629693370283204]
How to effectively find compromise proposals around which agent coalitions can unite is an open question. We formalize a holistic model that encompasses agent bounded rationality and uncertainty. We apply NLP techniques and utilize large language models (LLMs) to create a semantic metric space for text, and develop algorithms to suggest suitable compromise points.
arXiv Detail & Related papers (2025-11-27T13:40:21Z) - Algorithms for Adversarially Robust Deep Learning [58.656107500646364]
We discuss recent progress toward designing algorithms that exhibit desirable robustness properties. We present new algorithms that achieve state-of-the-art generalization in medical imaging, molecular identification, and image classification. We propose new attacks and defenses, which represent the frontier of progress toward designing robust language-based agents.
arXiv Detail & Related papers (2025-09-23T14:48:58Z) - Towards a Common Framework for Autoformalization [2.487142846438629]
Autoformalization has emerged as a term referring to the automation of formalization. The goal of this paper is to review instances, explicit or implicit, of what can be considered autoformalization.
arXiv Detail & Related papers (2025-09-11T19:28:56Z) - Of the People, By the Algorithm: How AI Transforms Democratic Representation [0.0]
This paper examines how AI technologies are transforming democratic representation, focusing on citizen participation and algorithmic decision-making. Social media platforms' AI-driven algorithms currently mediate much political discourse. The emergence of Mass Online Deliberation platforms suggests possibilities for scaling up meaningful citizen participation. Algorithmic Decision-Making systems promise more efficient policy implementation but face limitations in handling complex political trade-offs.
arXiv Detail & Related papers (2025-08-26T13:54:17Z) - Infrastructuring Contestability: A Framework for Community-Defined AI Value Pluralism [0.0]
The proliferation of AI-driven systems presents a challenge to Human-Computer Interaction and Computer-Supported Cooperative Work. Current approaches to value alignment, which rely on centralized, top-down definitions, lack the mechanisms for meaningful contestability. This paper introduces Community-Defined AI Value Pluralism, a socio-technical framework that addresses this gap.
arXiv Detail & Related papers (2025-07-07T16:45:50Z) - Resource Rational Contractualism Should Guide AI Alignment [69.07915246220985]
Contractualist alignment proposes grounding decisions in agreements that diverse stakeholders would endorse. We propose Resource-Rational Contractualism (RRC): a framework where AI systems approximate the agreements rational parties would form. An RRC-aligned agent would not only operate efficiently, but also be equipped to dynamically adapt to and interpret the ever-changing human social world.
arXiv Detail & Related papers (2025-06-20T18:57:13Z) - Conversational Alignment with Artificial Intelligence in Context [0.0]
This article explores what it means for AI agents to be conversationally aligned to human communicative norms and practices. We suggest that current large language model (LLM) architectures, constraints, and affordances may impose fundamental limitations on achieving full conversational alignment.
arXiv Detail & Related papers (2025-05-28T22:14:34Z) - AgentDAO: Synthesis of Proposal Transactions Via Abstract DAO Semantics [5.72453247290246]
We propose a multi-agent system powered by Large Language Models and a Label-Centric Retrieval algorithm to generate governance proposals. The key optimization is a semantic-aware abstraction of user input that reliably secures proposal generation with low token demand. A preliminary evaluation on real-world applications shows the potential to generate complex types of proposals with existing foundation models.
arXiv Detail & Related papers (2025-03-13T06:52:18Z) - Political Neutrality in AI Is Impossible- But Here Is How to Approximate It [97.59456676216115]
We argue that true political neutrality is neither feasible nor universally desirable due to its subjective nature and the biases inherent in AI training data, algorithms, and user interactions. We use the term "approximation" of political neutrality to shift the focus from unattainable absolutes to achievable, practical proxies.
arXiv Detail & Related papers (2025-02-18T16:48:04Z) - The Right to AI [3.2132738637761027]
This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. We critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight.
arXiv Detail & Related papers (2025-01-29T04:32:41Z) - Participatory Approaches in AI Development and Governance: Case Studies [9.824305892501686]
This paper forms the second of a two-part series on the value of a participatory approach to AI development and deployment.
The first paper crafted a principled, as well as pragmatic, justification for deploying participatory methods in these two exercises.
This paper tests those preliminary conclusions in two sectors: the use of facial recognition technology in the upkeep of law and order, and the use of large language models in the healthcare sector.
arXiv Detail & Related papers (2024-06-03T10:10:23Z) - Modelling Political Coalition Negotiations Using LLM-based Agents [53.934372246390495]
We introduce coalition negotiations as a novel NLP task, and model it as a negotiation between large language model-based agents.
We introduce a multilingual dataset, POLCA, comprising manifestos of European political parties and coalition agreements over a number of elections in these countries.
We propose a hierarchical Markov decision process designed to simulate the process of coalition negotiation between political parties and predict the outcomes.
arXiv Detail & Related papers (2024-02-18T21:28:06Z) - Coherent Entity Disambiguation via Modeling Topic and Categorical
Dependency [87.16283281290053]
Previous entity disambiguation (ED) methods adopt a discriminative paradigm, where prediction is made based on matching scores between mention context and candidate entities.
We propose CoherentED, an ED system equipped with novel designs aimed at enhancing the coherence of entity predictions.
We achieve new state-of-the-art results on popular ED benchmarks, with an average improvement of 1.3 F1 points.
arXiv Detail & Related papers (2023-11-06T16:40:13Z) - A Taxonomy of Decentralized Identifier Methods for Practitioners [50.76687001060655]
A core part of the new identity management paradigm of Self-Sovereign Identity (SSI) is the W3C Decentralized Identifiers (DIDs) standard.
We propose a taxonomy of DID methods with the goal to empower practitioners to make informed decisions when selecting DID methods.
arXiv Detail & Related papers (2023-10-18T13:01:40Z) - Generative Social Choice [31.99162448662916]
We introduce generative social choice, a design methodology for open-ended democratic processes. We prove that the process provides representation guarantees when given access to oracle queries. We empirically validate that these queries can be approximately implemented using a large language model.
arXiv Detail & Related papers (2023-09-03T23:47:21Z) - Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement [50.62461749446111]
Self-Polish (SP) is a novel method that facilitates the model's reasoning by guiding it to progressively refine the given problems to be more comprehensible and solvable.
SP is orthogonal to other answer/reasoning-side prompting methods such as CoT, allowing for seamless integration with state-of-the-art techniques for further improvement.
arXiv Detail & Related papers (2023-05-23T19:58:30Z) - A Methodology for Creating AI FactSheets [67.65802440158753]
This paper describes a methodology for creating the form of AI documentation we call FactSheets.
Within each step of the methodology, we describe the issues to consider and the questions to explore.
This methodology will accelerate the broader adoption of transparent AI documentation.
arXiv Detail & Related papers (2020-06-24T15:08:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.