Enhanced well-being assessment as basis for the practical implementation
of ethical and rights-based normative principles for AI
- URL: http://arxiv.org/abs/2007.14826v2
- Date: Tue, 15 Sep 2020 15:15:16 GMT
- Title: Enhanced well-being assessment as basis for the practical implementation
of ethical and rights-based normative principles for AI
- Authors: Marek Havrda and Bogdana Rakova
- Abstract summary: We propose the practical application of an enhanced well-being impact assessment framework for Autonomous and Intelligent Systems.
This process could enable a human-centered algorithmically-supported approach to the understanding of the impacts of AI systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) has an increasing impact on all areas of
people's livelihoods. A detailed look at existing interdisciplinary and
transdisciplinary metrics frameworks could bring new insights and enable
practitioners to navigate the challenge of understanding and assessing the
impact of Autonomous and Intelligent Systems (A/IS). There has been emerging
consensus on fundamental ethical and rights-based AI principles proposed by
scholars, governments, civil rights organizations, and technology companies. In
order to move from principles to real-world implementation, we adopt a lens
motivated by regulatory impact assessments and the well-being movement in
public policy. Similar to public policy interventions, outcomes of AI systems
implementation may have far-reaching complex impacts. In public policy,
indicators are only part of a broader toolbox, as metrics inherently lead to
gaming and dissolution of incentives and objectives. Similarly, in the case of
A/IS, there is a need for a larger toolbox that allows for the iterative
assessment of identified impacts, inclusion of new impacts in the analysis, and
identification of emerging trade-offs. In this paper, we propose the practical
application of an enhanced well-being impact assessment framework for A/IS that
could be employed to address ethical and rights-based normative principles in
AI. This process could enable a human-centered algorithmically-supported
approach to the understanding of the impacts of AI systems. Finally, we propose
a new testing infrastructure that would allow governments, civil rights
organizations, and others to cooperate with A/IS developers toward
implementing enhanced well-being impact assessments.
Related papers
- Crossing the principle-practice gap in AI ethics with ethical problem-solving (arXiv, 2024-04-16)
  How to bridge the principle-practice gap separating ethical discourse from the technical side of AI development remains an open problem.
  EPS (ethical problem-solving) is a methodology promoting responsible, human-centric, and value-oriented AI development.
  We utilize EPS as a blueprint to propose the implementation of an Ethics as a Service platform.
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits (arXiv, 2024-03-21)
  Particip-AI is a framework for gathering current and future AI use cases, along with their harms and benefits, from the non-expert public.
  We gather responses from 295 demographically diverse participants.
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making (arXiv, 2024-01-13)
  "Responsible AI" emphasizes the critical importance of addressing biases within the development of a corporate culture.
  This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
  In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
- Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance (arXiv, 2022-06-23)
  This paper conducts a meta-analysis of 200 governance policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies, and civil society organizations worldwide.
  We identify at least 17 resonating principles prevalent in the policies and guidelines of our dataset, released as an open-source database and tool.
  We present the limitations of performing a global-scale analysis, paired with a critical analysis of our findings, and identify areas of consensus that should be incorporated into future regulatory efforts.
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation (arXiv, 2022-06-08)
  This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI and discusses how AI regulations address them.
  We first look at AI and fairness through the lenses of law, the (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
  We identify and propose the roles AI regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
- The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis (arXiv, 2022-05-12)
  Artificial Intelligence (AI) is transforming our daily lives, with applications in healthcare, space exploration, banking, and finance.
  This rapid progress in AI has brought increasing attention to the potential impacts of AI technologies on society.
  Several ethical principles have been released by governments and by national and international organisations.
  These principles outline high-level precepts to guide the ethical development, deployment, and governance of AI.
- Institutionalising Ethics in AI through Broader Impact Requirements (arXiv, 2021-05-30)
  We reflect on a novel governance initiative by one of the world's largest AI conferences.
  NeurIPS introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research.
  We investigate the risks, challenges, and potential benefits of such an initiative.
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) (arXiv, 2021-05-07)
  This paper proposes a comprehensive analysis of existing concepts of intelligence drawn from different disciplines.
  The aim is to identify shared notions or discrepancies to consider when qualifying AI systems.
- Principles to Practices for Responsible AI: Closing the Gap (arXiv, 2020-06-08)
  We argue that an impact assessment framework is a promising approach to closing the principles-to-practices gap.
  We review a case study of AI's use in forest ecosystem restoration, demonstrating how an impact assessment framework can translate into effective and responsible AI practices.
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (arXiv, 2020-04-15)
  AI developers need to make verifiable claims to which they can be held accountable.
  This report suggests steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
  We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
- On the Morality of Artificial Intelligence (arXiv, 2019-12-26)
  We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
  We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
This list is automatically generated from the titles and abstracts of the papers on this site.