Progressing Towards Responsible AI
- URL: http://arxiv.org/abs/2008.07326v1
- Date: Tue, 11 Aug 2020 09:46:00 GMT
- Title: Progressing Towards Responsible AI
- Authors: Teresa Scantamburlo, Atia Cortés, Marie Schacht
- Abstract summary: The Observatory on Society and Artificial Intelligence (OSAI) grew out of the project AI4EU.
OSAI aims to stimulate reflection on a broad spectrum of issues related to AI (ethical, legal, social, economic and cultural).
- Score: 2.191505742658975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of Artificial Intelligence (AI) and, in particular, the Machine
Learning area, counts on a wide range of performance metrics and benchmark data
sets to assess the problem-solving effectiveness of its solutions. However, the
appearance of research centres, projects or institutions addressing AI
solutions from a multidisciplinary and multi-stakeholder perspective suggests a
new approach to assessment comprising ethical guidelines, reports or tools and
frameworks to help both academia and business to move towards a responsible
conceptualisation of AI. They all highlight the relevance of three key aspects:
(i) enhancing cooperation among the different stakeholders involved in the
design, deployment and use of AI; (ii) promoting multidisciplinary dialogue,
including different domains of expertise in this process; and (iii) fostering
public engagement to maximise a trusted relation with new technologies and
practitioners. In this paper, we introduce the Observatory on Society and
Artificial Intelligence (OSAI), an initiative that grew out of the project AI4EU,
aimed at stimulating reflection on a broad spectrum of issues related to AI (ethical,
legal, social, economic and cultural). In particular, we describe our work in
progress around OSAI and suggest how this and similar initiatives can promote a
wider appraisal of progress in AI. This will give us the opportunity to present
our vision and our modus operandi to enhance the implementation of these three
fundamental dimensions.
Related papers
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in such institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results have revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limitations in awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z)
- A Vision for Operationalising Diversity and Inclusion in AI [5.4897262701261225]
This study seeks to envision the operationalization of the ethical imperatives of diversity and inclusion (D&I) within AI ecosystems.
A significant challenge in AI development is the effective operationalization of D&I principles.
This paper proposes a vision of a framework for developing a tool utilizing persona-based simulation by Generative AI (GenAI).
arXiv Detail & Related papers (2023-12-11T02:44:39Z)
- Assessing AI Impact Assessments: A Classroom Study [14.768235460961876]
Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that provide structured processes to imagine the possible impacts of a proposed AI system, have become an increasingly popular proposal to govern AI systems.
Recent efforts from government and private-sector organizations have proposed many diverse instantiations of AIIAs, which take a variety of forms ranging from open-ended questionnaires to graded scorecards.
We conduct a classroom study at a large research-intensive university (R1) in an elective course focused on the societal and ethical implications of AI.
We find preliminary evidence that impact assessments can influence participants' perceptions of the potential
arXiv Detail & Related papers (2023-11-19T01:00:59Z)
- The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice [64.29355073494125]
This article aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation.
We articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners.
arXiv Detail & Related papers (2023-10-02T05:30:42Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Towards Implementing Responsible AI [22.514717870367623]
We propose four aspects of AI system design and development, adapting processes used in software engineering.
arXiv Detail & Related papers (2022-05-09T14:59:23Z)
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.