Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML
Systems
- URL: http://arxiv.org/abs/2007.02203v6
- Date: Sat, 2 Oct 2021 22:01:32 GMT
- Title: Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML
Systems
- Authors: A. Feder Cooper, Karen Levy, Christopher De Sa
- Abstract summary: Trade-offs between accuracy and efficiency pervade law, public health, and other non-computing domains.
We argue that, since examining these trade-offs has been useful for guiding governance in other domains, we need to similarly reckon with these trade-offs in governing computer systems.
- Score: 32.79201607581628
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trade-offs between accuracy and efficiency pervade law, public health, and
other non-computing domains, which have developed policies to guide how to
balance the two in conditions of uncertainty. While computer science also
commonly studies accuracy-efficiency trade-offs, their policy implications
remain poorly examined. Drawing on risk assessment practices in the US, we
argue that, since examining these trade-offs has been useful for guiding
governance in other domains, we need to similarly reckon with these trade-offs
in governing computer systems. We focus our analysis on distributed machine
learning systems. Understanding the policy implications in this area is
particularly urgent because such systems, which include autonomous vehicles,
tend to be high-stakes and safety-critical. We 1) describe how the trade-off
takes shape for these systems, 2) highlight gaps between existing US risk
assessment standards and what these systems require to be properly assessed,
and 3) make specific calls to action to facilitate accountability when
hypothetical risks concerning the accuracy-efficiency trade-off become realized
as accidents in the real world. We close by discussing how such accountability
mechanisms encourage more just, transparent governance aligned with public
values.
Related papers
- Disciplining deliberation: a sociotechnical perspective on machine learning trade-offs [0.0]
This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI).
I show how neglecting these considerations can distort our normative deliberations and result in costly, misaligned interventions and justifications.
I end by drawing out the normative opportunities and challenges that emerge from these considerations, and by highlighting the imperative of interdisciplinary collaboration in fostering responsible AI.
arXiv Detail & Related papers (2024-03-07T05:03:18Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment; an illustrative sketch of one possible proxy criticality metric follows this entry.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
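The summary above names proxy criticality metrics without defining them, so the following is only a hedged illustration: it treats the gap between a policy's best and average Q-values as a stand-in criticality signal and converts it into a safety margin against a user-chosen threshold. The function names, the Q-value-gap proxy, and the threshold are all assumptions for illustration, not the paper's own method.

```python
import numpy as np

def proxy_criticality(q_values):
    """Gap between the best and the average action value in a state.

    A large gap means the choice of action matters a lot (a critical state);
    a small gap means most actions lead to similar returns. This gap is an
    illustrative proxy, not necessarily the metric used in the paper.
    """
    q = np.asarray(q_values, dtype=float)
    return float(q.max() - q.mean())

def safety_margin(q_values, threshold):
    """Distance of a state's criticality below a chosen threshold.

    A positive margin suggests exploratory or random actions are relatively
    safe here; a zero margin flags the state for closer oversight.
    """
    return max(0.0, threshold - proxy_criticality(q_values))

# Hypothetical Q-values for a 4-action Atari-style policy.
print(safety_margin([1.00, 0.90, 0.95, 1.05], threshold=0.5))   # 0.425: low criticality
print(safety_margin([2.00, -1.00, 0.00, -0.50], threshold=0.5)) # 0.0: critical state
```

In a real deployment the threshold would be calibrated on held-out trajectories rather than chosen by hand.

- Fairness in Contextual Resource Allocation Systems: Metrics and Incompatibility Results [7.705334602362225]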
We study systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing.
These systems often support communities disproportionately affected by systemic racial, gender, or other injustices.
We propose a framework, inspired by fairness metrics in machine learning, for evaluating fairness in contextual resource allocation systems.
arXiv Detail & Related papers (2022-12-04T02:30:58Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize these values, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- Accountability in AI: From Principles to Industry-specific Accreditation [4.033641609534416]
Recent AI-related scandals have shed a spotlight on accountability in AI.
This paper draws on literature from public policy and governance to make two contributions.
arXiv Detail & Related papers (2021-10-08T16:37:11Z)
- Explanations of Machine Learning predictions: a mandatory step for its application to Operational Processes [61.20223338508952]
Credit Risk Modelling plays a paramount role in such operational processes.
Recent machine and deep learning techniques have been applied to the task.
We suggest using the LIME technique to tackle the explainability problem in this field; a minimal sketch follows this entry.
arXiv Detail & Related papers (2020-12-30T10:27:59Z)
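LIME (Local Interpretable Model-agnostic Explanations) is a widely used Python package, and the sketch below shows its standard `LimeTabularExplainer` workflow on a synthetic stand-in for a credit-risk model. The data, feature names, and classifier are hypothetical; only the LIME calls reflect the library's actual API.

```python
# pip install lime scikit-learn
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical credit-risk data: rows are applicants, columns are features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in default labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["income", "debt_ratio", "age", "credit_history"],  # illustrative
    class_names=["no_default", "default"],
    mode="classification",
)

# Explain one prediction: LIME perturbs the instance and fits a local
# linear surrogate, yielding per-feature weights for this one decision.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # (feature condition, weight) pairs
```

Each weight indicates how much a feature condition pushed this particular prediction toward or away from default, which is the kind of per-decision explanation the entry calls for.

- Empirical observation of negligible fairness-accuracy trade-offs in machine learning for public policy [13.037143215464132]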
We show that fairness-accuracy trade-offs in many applications are negligible in practice.
We find that, by explicitly focusing on achieving equity and using our proposed post-hoc disparity mitigation methods, fairness can be substantially improved without sacrificing accuracy; a sketch of one such post-hoc approach follows this entry.
arXiv Detail & Related papers (2020-12-05T08:10:47Z)
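The summary mentions post-hoc disparity mitigation without spelling out the method, so here is a minimal sketch of one common post-hoc approach: choosing per-group score cutoffs so that each group is selected at the same rate. The function, data, and target rate are assumptions for illustration and may differ from the paper's own techniques.

```python
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Pick a per-group score cutoff so each group is selected at the same rate.

    scores: model risk scores; groups: group label per individual;
    target_rate: fraction of each group to select. Illustrative only.
    """
    thresholds = {}
    for g in np.unique(groups):
        g_scores = scores[groups == g]
        # Quantile cutoff: the top `target_rate` fraction of each group passes.
        thresholds[g] = np.quantile(g_scores, 1.0 - target_rate)
    return thresholds

# Hypothetical usage with synthetic scores for two groups.
rng = np.random.default_rng(1)
scores = rng.uniform(size=1000)
groups = rng.choice(["a", "b"], size=1000)
cutoffs = group_thresholds(scores, groups, target_rate=0.2)
selected = scores >= np.array([cutoffs[g] for g in groups])
for g in ("a", "b"):
    print(g, selected[groups == g].mean())  # roughly equal selection rates
```

Equalizing selection rates is only one possible fairness target; matching within-group recall instead is a small change to the same quantile logic.

- Trustworthy AI [75.99046162669997]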
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in training data are among the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)