The Cost of Arbitrariness for Individuals: Examining the Legal and Technical Challenges of Model Multiplicity
- URL: http://arxiv.org/abs/2407.13070v2
- Date: Fri, 13 Sep 2024 09:33:20 GMT
- Title: The Cost of Arbitrariness for Individuals: Examining the Legal and Technical Challenges of Model Multiplicity
- Authors: Prakhar Ganesh, Ihsan Ibrahim Daldaban, Ignacio Cofone, Golnoosh Farnadi
- Abstract summary: This paper explores various individual concerns stemming from multiplicity, including the effects of arbitrariness beyond final predictions.
It provides both an empirical examination of these concerns and a comprehensive analysis from the legal standpoint, addressing how these issues are perceived under anti-discrimination law in Canada.
We conclude by discussing the technical challenges of meeting legal requirements in the current landscape of model multiplicity, and the gap between current law and the implications of arbitrariness in model selection.
- Score: 4.514832807541816
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model multiplicity, the phenomenon where multiple models achieve similar performance despite different underlying learned functions, introduces arbitrariness in model selection. While this arbitrariness may seem inconsequential in expectation, its impact on individuals can be severe. This paper explores various individual concerns stemming from multiplicity, including the effects of arbitrariness beyond final predictions, disparate arbitrariness for individuals belonging to protected groups, and the challenges associated with the arbitrariness of a single algorithmic system creating a monopoly across various contexts. It provides both an empirical examination of these concerns and a comprehensive analysis from the legal standpoint, addressing how these issues are perceived under anti-discrimination law in Canada. We conclude by discussing the technical challenges of meeting legal requirements in the current landscape of model multiplicity, and the gap between current law and the implications of arbitrariness in model selection, highlighting relevant future research directions for both disciplines.
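The core phenomenon is straightforward to reproduce. The sketch below is not taken from the paper; the synthetic dataset, model classes, and seeds are illustrative assumptions. It trains a few models that reach similar accuracy and then counts the individuals whose prediction flips depending on which of those models happens to be selected.

```python
# Toy sketch of predictive multiplicity: several models with similar accuracy
# can still disagree on individual predictions. All choices below (dataset,
# model classes, seeds) are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A crude stand-in for a set of "good models": different model classes and seeds.
models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=1),
    GradientBoostingClassifier(random_state=2),
]

preds = []
for model in models:
    model.fit(X_tr, y_tr)
    p = model.predict(X_te)
    preds.append(p)
    print(f"{type(model).__name__}: accuracy = {accuracy_score(y_te, p):.3f}")

# Ambiguity: fraction of individuals whose outcome depends on which of the
# near-equivalent models was (arbitrarily) chosen.
preds = np.vstack(preds)
ambiguous = (preds.min(axis=0) != preds.max(axis=0)).mean()
print(f"Individuals with conflicting predictions: {ambiguous:.1%}")
```

Any disagreement reported by the last line is the arbitrariness at issue: nothing about the individuals changes, only the model-selection step.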
Related papers
- The Curious Case of Arbitrariness in Machine Learning [4.932130498861987]
Algorithmic modelling relies on limited information in data to extrapolate outcomes for unseen scenarios, often embedding an element of arbitrariness in its decisions.
A perspective on this arbitrariness that has recently gained interest is multiplicity: the study of arbitrariness across a set of "good models".
We systemize the literature on multiplicity by: (a) formalizing the terminology around model design choices and their contribution to arbitrariness, (b) expanding the definition of multiplicity to incorporate underrepresented forms beyond just predictions and explanations, and (c) distilling the benefits and potential risks of multiplicity.
arXiv Detail & Related papers (2025-01-24T22:45:09Z) - Perceptions of the Fairness Impacts of Multiplicity in Machine Learning [22.442918897954957]
Multiplicity - the existence of multiple good models - means that some predictions are essentially arbitrary.
We conduct a survey to see how multiplicity impacts lay stakeholders' perceptions of machine learning fairness.
Our results indicate that model developers should be intentional about dealing with multiplicity in order to maintain fairness.
arXiv Detail & Related papers (2024-09-18T21:57:51Z) - The Legal Duty to Search for Less Discriminatory Algorithms [4.625678906362822]
Model multiplicity and the availability of less discriminatory algorithms (LDAs) have significant ramifications for the legal response to discriminatory algorithms.
We argue that the law should place a duty of reasonable search for LDAs on entities that develop and deploy predictive models in covered civil rights domains.
arXiv Detail & Related papers (2024-06-10T21:56:38Z) - Multi-Defendant Legal Judgment Prediction via Hierarchical Reasoning [49.23103067844278]
We propose the task of multi-defendant LJP, which aims to automatically predict the judgment results for each defendant in multi-defendant cases.
Two challenges arise with the task of multi-defendant LJP: (1) indistinguishable judgment results among various defendants; and (2) the lack of a real-world dataset for training and evaluation.
arXiv Detail & Related papers (2023-12-10T04:46:30Z) - ACQUIRED: A Dataset for Answering Counterfactual Questions In Real-Life Videos [53.92440577914417]
ACQUIRED consists of 3.9K annotated videos, encompassing a wide range of event types and incorporating both first- and third-person viewpoints.
Each video is annotated with questions that span three distinct dimensions of reasoning, including physical, social, and temporal.
We benchmark several state-of-the-art language-only and multimodal models on our dataset, and the experimental results demonstrate a significant performance gap.
arXiv Detail & Related papers (2023-11-02T22:17:03Z) - Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints [76.84999501420938]
We introduce a conceptual and computational framework for assessing how the choice of target affects individuals' outcomes.
We show that the level of multiplicity that stems from target variable choice can be greater than that stemming from nearly-optimal models of a single target.
arXiv Detail & Related papers (2023-06-23T18:57:14Z) - Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity [3.5607241839298878]
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work assesses the extent to which legal fairness can be assured through fairness metrics and under fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective depending on the case at hand and the legal justification.
arXiv Detail & Related papers (2023-06-14T09:38:05Z) - Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061]
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well-performing models.
Our findings suggest that such unfairness can readily be found in real life and may be difficult to mitigate by technical means alone.
arXiv Detail & Related papers (2022-03-14T14:33:39Z) - Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities (a toy version of this computation is sketched after this list).
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
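Several entries above (the EU fairness-metrics comparison, the cross-model fairness study, and the set-of-good-models framework) revolve around the same computation: how much a group-level metric such as demographic parity can vary across near-optimal models. The toy sketch below illustrates that computation under stated assumptions; the synthetic data, the group attribute, the candidate models, and the 1% accuracy tolerance are all assumptions, not any of those papers' methods.

```python
# Toy sketch: models within a small accuracy tolerance of the best one
# ("the set of good models") can attain different demographic-parity gaps.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=15, random_state=0)
group = (X[:, 0] > 0).astype(int)  # stand-in for a protected-group attribute
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

def dp_gap(y_pred, g):
    """Demographic parity gap: |P(y_hat = 1 | g = 1) - P(y_hat = 1 | g = 0)|."""
    return abs(y_pred[g == 1].mean() - y_pred[g == 0].mean())

candidates = [
    LogisticRegression(C=0.01, max_iter=1000),
    LogisticRegression(C=1.0, max_iter=1000),
    RandomForestClassifier(n_estimators=100, random_state=1),
    RandomForestClassifier(n_estimators=300, random_state=2),
]

results = []
for model in candidates:
    model.fit(X_tr, y_tr)
    y_pred = model.predict(X_te)
    results.append((accuracy_score(y_te, y_pred), dp_gap(y_pred, g_te)))

best_acc = max(acc for acc, _ in results)
good_gaps = [gap for acc, gap in results if acc >= best_acc - 0.01]  # 1% tolerance
print(f"Demographic-parity gap across the 'good' set: "
      f"{min(good_gaps):.3f} to {max(good_gaps):.3f}")
```

If the printed range is non-trivial, a legally relevant disparity can hinge on a model-selection step that is otherwise performance-neutral, which is precisely the gap between technical practice and legal doctrine that the papers above probe.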
This list is automatically generated from the titles and abstracts of the papers on this site.