On the meaning of uncertainty for ethical AI: philosophy and practice
- URL: http://arxiv.org/abs/2309.05529v1
- Date: Mon, 11 Sep 2023 15:13:36 GMT
- Title: On the meaning of uncertainty for ethical AI: philosophy and practice
- Authors: Cassandra Bird, Daniel Williamson and Sabina Leonelli (University of
Exeter)
- Abstract summary: We argue that this is a significant way to bring ethical considerations into mathematical reasoning.
We demonstrate these ideas within the context of competing models used to advise the UK government on the spread of the Omicron variant of COVID-19 during December 2021.
- Score: 10.591284030838146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Whether and how data scientists, statisticians and modellers should be
accountable for the AI systems they develop remains a controversial and highly
debated topic, especially given the complexity of AI systems and the
difficulties in comparing and synthesising competing claims arising from their
deployment for data analysis. This paper proposes to address this issue by
decreasing the opacity and heightening the accountability of decision making
using AI systems, through the explicit acknowledgement of the statistical
foundations that underpin their development and the ways in which these dictate
how their results should be interpreted and acted upon by users. In turn, this
enhances (1) the responsiveness of the models to feedback, (2) the quality and
meaning of uncertainty on their outputs and (3) their transparency to
evaluation. To exemplify this approach, we extend Posterior Belief Assessment
to offer a route to belief ownership from complex and competing AI structures.
We argue that this is a significant way to bring ethical considerations into
mathematical reasoning, and to implement ethical AI in statistical practice. We
demonstrate these ideas within the context of competing models used to advise
the UK government on the spread of the Omicron variant of COVID-19 during
December 2021.
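The abstract's central object, a single "owned" belief distilled from complex and competing models, can be illustrated with a toy example. The sketch below is *not* the paper's Posterior Belief Assessment procedure; it shows only the simpler, standard idea of a linear opinion pool over two hypothetical posterior distributions, with model names, weights, and all numbers invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples of some epidemic growth rate from two
# competing models (stand-ins only; the values are illustrative, not real).
model_a = rng.normal(loc=0.20, scale=0.05, size=10_000)
model_b = rng.normal(loc=0.12, scale=0.08, size=10_000)

# Weights expressing how much belief the analyst places in each model.
# These are a subjective judgement by the analyst, not a model output --
# which is precisely where accountability for the final belief resides.
weights = np.array([0.6, 0.4])

# Linear opinion pool: resample from each model's posterior in proportion
# to the weights, yielding one pooled belief distribution.
n = 10_000
which = rng.choice(2, size=n, p=weights)
pooled = np.where(which == 0,
                  rng.choice(model_a, size=n),
                  rng.choice(model_b, size=n))

print(f"pooled mean = {pooled.mean():.3f}")
print(f"pooled 95% interval = "
      f"[{np.quantile(pooled, 0.025):.3f}, {np.quantile(pooled, 0.975):.3f}]")
```

The point of the sketch is that the pooled distribution's uncertainty reflects both within-model uncertainty and disagreement between models, and that the weights make the analyst's judgement explicit and open to evaluation.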
Related papers
- To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems [11.690126756498223]
The vision of optimal human-AI collaboration requires 'appropriate reliance' of humans on AI systems.
In practice, the performance disparity of machine learning models on out-of-distribution data makes dataset-specific performance feedback unreliable.
arXiv Detail & Related papers (2024-09-22T09:43:27Z)
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems [2.444630714797783]
We review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias.
We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making.
arXiv Detail & Related papers (2024-08-28T06:04:25Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- A Review of the Role of Causality in Developing Trustworthy AI Systems [16.267806768096026]
State-of-the-art AI models largely lack an understanding of the cause-effect relationship that governs human understanding of the real world.
Recently, causal modeling and inference methods have emerged as powerful tools to improve the trustworthiness aspects of AI models.
arXiv Detail & Related papers (2023-02-14T11:08:26Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- AI Assurance using Causal Inference: Application to Public Policy [0.0]
Most AI approaches can only be represented as "black boxes" and suffer from the lack of transparency.
It is crucial not only to develop effective and robust AI systems, but to make sure their internal processes are explainable and fair.
arXiv Detail & Related papers (2021-12-01T16:03:06Z)
- Descriptive AI Ethics: Collecting and Understanding the Public Opinion [10.26464021472619]
This work proposes a mixed AI ethics model that allows normative and descriptive research to complement each other.
We discuss its implications on bridging the gap between optimistic and pessimistic views towards AI systems' deployment.
arXiv Detail & Related papers (2021-01-15T03:46:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.