Enhanced well-being assessment as basis for the practical implementation
of ethical and rights-based normative principles for AI
- URL: http://arxiv.org/abs/2007.14826v2
- Date: Tue, 15 Sep 2020 15:15:16 GMT
- Title: Enhanced well-being assessment as basis for the practical implementation
of ethical and rights-based normative principles for AI
- Authors: Marek Havrda and Bogdana Rakova
- Abstract summary: We propose the practical application of an enhanced well-being impact assessment framework for Autonomous and Intelligent Systems.
This process could enable a human-centered algorithmically-supported approach to the understanding of the impacts of AI systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) has an increasing impact on all areas of
people's livelihoods. A detailed look at existing interdisciplinary and
transdisciplinary metrics frameworks could bring new insights and enable
practitioners to navigate the challenge of understanding and assessing the
impact of Autonomous and Intelligent Systems (A/IS). There has been emerging
consensus on fundamental ethical and rights-based AI principles proposed by
scholars, governments, civil rights organizations, and technology companies. In
order to move from principles to real-world implementation, we adopt a lens
motivated by regulatory impact assessments and the well-being movement in
public policy. Similar to public policy interventions, outcomes of AI systems
implementation may have far-reaching complex impacts. In public policy,
indicators are only part of a broader toolbox, as metrics inherently lead to
gaming and dissolution of incentives and objectives. Similarly, in the case of
A/IS, there's a need for a larger toolbox that allows for the iterative
assessment of identified impacts, inclusion of new impacts in the analysis, and
identification of emerging trade-offs. In this paper, we propose the practical
application of an enhanced well-being impact assessment framework for A/IS that
could be employed to address ethical and rights-based normative principles in
AI. This process could enable a human-centered algorithmically-supported
approach to the understanding of the impacts of AI systems. Finally, we propose
a new testing infrastructure which would allow for governments, civil rights
organizations, and others, to engage in cooperating with A/IS developers
towards implementation of enhanced well-being impact assessments.
Related papers
- Where Assessment Validation and Responsible AI Meet [0.0876953078294908]
We propose a unified assessment framework that considers classical test validation theory and assessment-specific and domain-agnostic RAI principles and practice.
The framework addresses responsible AI use for assessment that supports validity arguments, alignment with AI ethics to maintain human values and oversight, and broader social responsibility associated with AI use.
arXiv Detail & Related papers (2024-11-04T20:20:29Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin the decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA)
The proposed methodology is tested in concrete case-studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z) - Crossing the principle-practice gap in AI ethics with ethical problem-solving [0.0]
How to bridge the principle-practice gap separating ethical discourse from the technical side of AI development remains an open problem.
Ethical Problem-Solving (EPS) is a methodology promoting responsible, human-centric, and value-oriented AI development.
We utilize EPS as a blueprint to propose the implementation of Ethics as a Service Platform.
arXiv Detail & Related papers (2024-04-16T14:35:13Z) - Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical need to address biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Worldwide AI Ethics: a review of 200 guidelines and recommendations for
AI governance [0.0]
This paper conducts a meta-analysis of 200 governance policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies, and civil society organizations worldwide.
We identify at least 17 resonating principles prevalent in the policies and guidelines of our dataset, released as an open-source database and tool.
We discuss the limitations of performing a global-scale analysis, pair them with a critical analysis of our findings, and present areas of consensus that should be incorporated into future regulatory efforts.
arXiv Detail & Related papers (2022-06-23T18:03:04Z) - Fairness in Agreement With European Values: An Interdisciplinary
Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Institutionalising Ethics in AI through Broader Impact Requirements [8.793651996676095]
We reflect on a novel governance initiative by one of the world's largest AI conferences.
NeurIPS introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research.
We investigate the risks, challenges and potential benefits of such an initiative.
arXiv Detail & Related papers (2021-05-30T12:36:43Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Principles to Practices for Responsible AI: Closing the Gap [0.1749935196721634]
We argue that an impact assessment framework is a promising approach to close the principles-to-practices gap.
We review a case study of AI's use in forest ecosystem restoration, demonstrating how an impact assessment framework can translate into effective and responsible AI practices.
arXiv Detail & Related papers (2020-06-08T16:04:44Z) - On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.