Negotiative Alignment: Embracing Disagreement to Achieve Fairer Outcomes -- Insights from Urban Studies
- URL: http://arxiv.org/abs/2503.12613v1
- Date: Sun, 16 Mar 2025 18:55:54 GMT
- Title: Negotiative Alignment: Embracing Disagreement to Achieve Fairer Outcomes -- Insights from Urban Studies
- Authors: Rashid Mushkani, Hugo Berard, Shin Koseki
- Abstract summary: We present findings from a community-centered study in Montreal involving 35 residents with diverse demographic and social identities. We propose negotiative alignment, an AI framework that treats disagreement as an essential input to be preserved, analyzed, and addressed.
- Score: 3.510270856154939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cities are not monolithic; they are arenas of negotiation among groups that hold varying needs, values, and experiences. Conventional methods of urban assessment -- from standardized surveys to AI-driven evaluations -- frequently rely on a single consensus metric (e.g., an average measure of inclusivity or safety). Although such aggregations simplify design decisions, they risk obscuring the distinct perspectives of marginalized populations. In this paper, we present findings from a community-centered study in Montreal involving 35 residents with diverse demographic and social identities, particularly wheelchair users, seniors, and LGBTQIA2+ individuals. Using rating and ranking tasks on 20 urban sites, we observe that disagreements are systematic rather than random, reflecting structural inequalities, differing cultural values, and personal experiences of safety and accessibility. Based on these empirical insights, we propose negotiative alignment, an AI framework that treats disagreement as an essential input to be preserved, analyzed, and addressed. Negotiative alignment builds on pluralistic models by dynamically updating stakeholder preferences through multi-agent negotiation mechanisms, ensuring no single perspective is marginalized. We outline how this framework can be integrated into urban analytics -- and other decision-making contexts -- to retain minority viewpoints, adapt to changing stakeholder concerns, and enhance fairness and accountability. The study demonstrates that preserving and engaging with disagreement, rather than striving for an artificial consensus, can produce more equitable and responsive AI-driven outcomes in urban design.
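To make the aggregation critique concrete, here is a minimal Python sketch of the core idea; the site names, groups, ratings, and gap threshold are hypothetical illustrations, not data or code from the paper. It keeps per-group scores and flags contested sites instead of collapsing everything into one average.

```python
# Illustrative sketch (not the authors' implementation): instead of one
# consensus score per site, keep per-group ratings and surface systematic
# disagreement so it can be negotiated rather than averaged away.
from statistics import mean

# Hypothetical ratings of urban sites by stakeholder group (1-5 scale).
ratings = {
    "park_a":  {"wheelchair_users": 2.1, "seniors": 3.8, "lgbtqia2+": 4.2},
    "plaza_b": {"wheelchair_users": 4.0, "seniors": 4.1, "lgbtqia2+": 2.3},
}

def consensus_view(site):
    """The aggregation the paper critiques: one average hides minorities."""
    return mean(ratings[site].values())

def negotiative_view(site, gap_threshold=1.0):
    """Preserve disagreement: report per-group scores and flag large gaps."""
    scores = ratings[site]
    gap = max(scores.values()) - min(scores.values())
    return {"per_group": scores, "contested": gap >= gap_threshold}

for site in ratings:
    print(site, round(consensus_view(site), 2), negotiative_view(site))
```

Both sites look similar under the consensus view, yet each is contested by a different group, which is exactly the signal negotiative alignment aims to preserve.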
Related papers
- Urban Safety Perception Through the Lens of Large Multimodal Models: A Persona-based Approach [4.315451628809687]
This study introduces Large Multimodal Models (LMMs), specifically LLaVA 1.6 7B, as a novel approach to assessing safety perceptions of urban spaces.
The model achieved an average F1-score of 59.21% in classifying urban scenarios as safe or unsafe.
Incorporating persona-based prompts revealed significant variations in safety perceptions across socio-demographic groups defined by age, gender, and nationality.
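A hedged sketch of how such persona-conditioned prompts might be assembled and scored; the prompt template, persona fields, and labels below are illustrative assumptions, not the study's actual setup.

```python
# Hypothetical sketch of persona-conditioned safety prompts for an LMM;
# the template and persona attributes are illustrative assumptions.
from sklearn.metrics import f1_score

def build_prompt(persona, image_description):
    return (
        f"You are a {persona['age']}-year-old {persona['gender']} "
        f"from {persona['nationality']}. Looking at this street scene: "
        f"{image_description}. Answer with one word: safe or unsafe."
    )

personas = [
    {"age": 25, "gender": "woman", "nationality": "Brazil"},
    {"age": 70, "gender": "man", "nationality": "Japan"},
]
print(build_prompt(personas[0], "a dim underpass with no pedestrians"))

# Scoring against hypothetical ground truth (1 = unsafe), the way an
# average F1-score over scenarios would be computed:
y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]   # model outputs parsed to binary labels
print(f1_score(y_true, y_pred))
```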
arXiv Detail & Related papers (2025-03-01T20:34:30Z)
- The Hidden Strength of Disagreement: Unraveling the Consensus-Diversity Tradeoff in Adaptive Multi-Agent Systems [1.137572571250676]
We argue that implicit consensus, where agents exchange information yet independently form decisions via in-context learning, can be more effective in dynamic environments.
We formalize a consensus-diversity tradeoff, showing conditions where implicit methods outperform explicit ones.
We highlight emergent coordination via in-context learning, underscoring the value of preserving diversity for resilient decision-making.
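A toy illustration of the contrast (not the paper's experimental setup): explicit consensus collapses every agent onto the majority answer, while implicit consensus lets each agent weigh shared information against its own view, preserving diversity.

```python
# Toy sketch of the consensus-diversity tradeoff: explicit consensus
# destroys diversity; implicit consensus only partially shifts agents
# toward the shared answer, keeping minority views in play.
import random
random.seed(0)

def explicit_consensus(opinions):
    majority = max(set(opinions), key=opinions.count)
    return [majority] * len(opinions)       # diversity is destroyed

def implicit_consensus(opinions, trust=0.5):
    shared = max(set(opinions), key=opinions.count)
    return [shared if random.random() < trust else own for own in opinions]

opinions = ["A", "B", "A", "C", "A"]
print(explicit_consensus(opinions))   # ['A', 'A', 'A', 'A', 'A']
print(implicit_consensus(opinions))   # e.g. ['A', 'B', 'A', 'A', 'A']
```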
arXiv Detail & Related papers (2025-02-23T13:12:53Z)
- Democratizing Reward Design for Personal and Representative Value-Alignment [10.1630183955549]
We introduce Interactive-Reflective Dialogue Alignment, a method that iteratively engages users in reflecting on and specifying their subjective value definitions.
This system learns individual value definitions through language-model-based preference elicitation and constructs personalized reward models.
Our findings demonstrate diverse definitions of value-aligned behaviour and show that our system can accurately capture each person's unique understanding.
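One plausible reading of preference-based personal reward modeling, sketched below with hypothetical behaviours and features; the paper's system is dialogue-based and language-model-driven, so this Bradley-Terry-style fit is only an analogy.

```python
# Minimal sketch (an assumption, not the paper's system): learn a
# per-user reward model from pairwise preferences over behaviours,
# Bradley-Terry style, with simple gradient ascent.
import numpy as np

# Each behaviour is a feature vector; the user prefers the first item
# of every pair. Features and pairs are hypothetical.
features = {"cautious": np.array([1.0, 0.0]), "fast": np.array([0.0, 1.0]),
            "balanced": np.array([0.5, 0.5])}
preferences = [("cautious", "fast"), ("balanced", "fast")]

w = np.zeros(2)                      # personalized reward weights
for _ in range(200):
    for winner, loser in preferences:
        diff = features[winner] - features[loser]
        p = 1.0 / (1.0 + np.exp(-w @ diff))   # P(winner preferred)
        w += 0.1 * (1.0 - p) * diff           # log-likelihood gradient step

reward = {name: float(w @ x) for name, x in features.items()}
print(reward)   # "cautious" scores highest for this user
```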
arXiv Detail & Related papers (2024-10-29T16:37:01Z)
- EARN Fairness: Explaining, Asking, Reviewing, and Negotiating Artificial Intelligence Fairness Metrics Among Stakeholders [5.216732191267959]
We propose a new framework, EARN Fairness, which facilitates collective metric decisions among stakeholders without requiring AI expertise.
The framework features an adaptable interactive system and a stakeholder-centered EARN Fairness process to Explain fairness metrics, Ask stakeholders' personal metric preferences, Review metrics collectively, and Negotiate a consensus on metric selection.
Our work shows that the EARN Fairness framework enables stakeholders to express personal preferences and reach consensus, providing practical guidance for implementing human-centered AI fairness in high-risk contexts.
arXiv Detail & Related papers (2024-07-16T07:20:30Z)
- Persona Inconstancy in Multi-Agent LLM Collaboration: Conformity, Confabulation, and Impersonation [16.82101507069166]
Multi-agent AI systems can be used for simulating collective decision-making in scientific and practical applications.
We examine AI agent ensembles engaged in cross-national collaboration and debate by analyzing their private responses and chat transcripts.
Our findings suggest that multi-agent discussions can support collective AI decisions that more often reflect diverse perspectives.
arXiv Detail & Related papers (2024-05-06T21:20:35Z)
- Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs [0.0]
Two prominent trade-offs in artificial intelligence are between predictive accuracy and fairness, and between predictive accuracy and interpretability.
The prevailing interpretation views these formal trade-offs as directly corresponding to tensions between underlying social values.
I introduce a sociotechnical approach to examining the value implications of trade-offs.
arXiv Detail & Related papers (2024-03-07T05:03:18Z)
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z)
- GRASP: A Disagreement Analysis Framework to Assess Group Associations in Perspectives [18.574420136899978]
We propose GRASP, a comprehensive disagreement analysis framework to measure group association in perspectives among different rater sub-groups.
Our framework reveals specific rater groups that have significantly different perspectives than others on certain tasks, and helps identify demographic axes that are crucial to consider in specific task contexts.
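The following sketch captures the intuition behind such an analysis, not GRASP's actual statistic: compare within-group agreement to cross-group agreement over hypothetical raters and labels.

```python
# Illustrative disagreement analysis in the spirit of GRASP (not the
# paper's exact statistic): compare within-group and cross-group rating
# agreement to find subgroups with distinctive perspectives.
import itertools, statistics

ratings = {            # rater -> (group, per-item labels); hypothetical
    "r1": ("group_x", [1, 1, 0, 1]), "r2": ("group_x", [1, 1, 0, 0]),
    "r3": ("group_y", [0, 0, 1, 1]), "r4": ("group_y", [0, 0, 1, 0]),
}

def agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def mean_agreement(pairs):
    return statistics.mean(agreement(ratings[a][1], ratings[b][1])
                           for a, b in pairs)

pairs = list(itertools.combinations(ratings, 2))
within = [p for p in pairs if ratings[p[0]][0] == ratings[p[1]][0]]
cross = [p for p in pairs if ratings[p[0]][0] != ratings[p[1]][0]]
print("within-group:", mean_agreement(within))  # 0.75: groups are coherent
print("cross-group:", mean_agreement(cross))    # 0.125: strong association
```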
arXiv Detail & Related papers (2023-11-09T00:12:21Z)
- Evaluating the Social Impact of Generative AI Systems in Systems and Society [43.32010533676472]
Generative AI systems across modalities, ranging from text (including code) to image, audio, and video, have broad social impacts.
There is no official standard for evaluating those impacts, nor for deciding which impacts should be evaluated.
We present a guide that moves toward a standard approach in evaluating a base generative AI system for any modality.
arXiv Detail & Related papers (2023-06-09T15:05:13Z)
- Heterogeneous Value Alignment Evaluation for Large Language Models [91.96728871418]
The emergence of Large Language Models (LLMs) has made it crucial to align their values with those of humans.
We propose a Heterogeneous Value Alignment Evaluation (HVAE) system to assess the success of aligning LLMs with heterogeneous values.
arXiv Detail & Related papers (2023-05-26T02:34:20Z)
- Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity changes the picture, opening up novel mechanisms available to institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should account for several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
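As a rough illustration of the multi-objective view (the objective names and scores below are assumed, not taken from the paper), one can filter candidate counterfactuals to the Pareto-optimal set over plausibility, change intensity, and adversarial power.

```python
# Sketch of the multi-objective idea (illustrative scoring, not the
# paper's generator): keep counterfactuals that are Pareto-optimal in
# plausibility (higher better), change intensity (lower better), and
# adversarial power (higher better).
candidates = [   # hypothetical candidate counterfactuals
    {"id": "cf1", "plaus": 0.9, "change": 0.2, "power": 0.6},
    {"id": "cf2", "plaus": 0.7, "change": 0.1, "power": 0.9},
    {"id": "cf3", "plaus": 0.6, "change": 0.5, "power": 0.5},  # dominated
]

def dominates(a, b):
    no_worse = (a["plaus"] >= b["plaus"] and a["change"] <= b["change"]
                and a["power"] >= b["power"])
    strictly_better = (a["plaus"] > b["plaus"] or a["change"] < b["change"]
                       or a["power"] > b["power"])
    return no_worse and strictly_better

pareto = [c for c in candidates
          if not any(dominates(o, c) for o in candidates if o is not c)]
print([c["id"] for c in pareto])   # ['cf1', 'cf2']
```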
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
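A minimal sketch of the quantification idea, assuming the standard Adjusted Classify & Count correction and hypothetical rates; the paper's own estimators may differ.

```python
# Sketch of quantification for fairness under unawareness (illustrative):
# estimate what fraction of, say, rejected applicants belong to a
# protected group using a noisy attribute classifier plus the Adjusted
# Classify & Count correction, without per-individual attribute labels.
def adjusted_classify_and_count(raw_rate, tpr, fpr):
    """Correct a raw prevalence estimate for classifier noise."""
    return (raw_rate - fpr) / (tpr - fpr)

# Hypothetical numbers: the attribute classifier flags 30% of rejected
# applicants as group A; its TPR/FPR come from held-out validation data.
print(adjusted_classify_and_count(0.30, tpr=0.8, fpr=0.1))  # ~0.286
```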
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Towards Quantifying the Distance between Opinions [66.29568619199074]
We find that measures based solely on text similarity or on overall sentiment often fail to effectively capture the distance between opinions.
We propose a new distance measure for capturing the similarity between opinions that leverages this nuanced observation.
In an unsupervised setting, our distance measure achieves significantly better Adjusted Rand Index scores (up to 56x) and Silhouette coefficients (up to 21x) compared to existing approaches.
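To show how such comparisons are typically computed, here is a small sketch using a made-up precomputed opinion-distance matrix with scikit-learn's Adjusted Rand Index and Silhouette implementations; the distance values are stand-ins, not the paper's measure.

```python
# Illustrative evaluation of an opinion-distance measure: cluster
# opinions with a precomputed distance matrix, then score the clustering
# with ARI (against known stances) and the Silhouette coefficient.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score, silhouette_score

# Hypothetical pairwise opinion distances (0 = identical stance).
D = np.array([[0.0, 0.1, 0.9, 0.8],
              [0.1, 0.0, 0.8, 0.9],
              [0.9, 0.8, 0.0, 0.2],
              [0.8, 0.9, 0.2, 0.0]])
true_stances = [0, 0, 1, 1]

labels = AgglomerativeClustering(
    n_clusters=2, metric="precomputed", linkage="average").fit_predict(D)
print(adjusted_rand_score(true_stances, labels))         # 1.0 here
print(silhouette_score(D, labels, metric="precomputed"))
```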
arXiv Detail & Related papers (2020-01-27T16:01:10Z)