What do AI/ML practitioners think about AI/ML bias?
- URL: http://arxiv.org/abs/2407.08895v1
- Date: Thu, 11 Jul 2024 23:43:25 GMT
- Title: What do AI/ML practitioners think about AI/ML bias?
- Authors: Aastha Pant, Rashina Hoda, Burak Turhan, Chakkrit Tantithamthavorn
- Abstract summary: Our studies have revealed a discrepancy between practitioners' understanding of 'AI/ML bias' and the definitions of tech companies and researchers.
These efforts could yield a significant return on investment by aiding AI/ML practitioners in developing unbiased AI/ML systems.
- Score: 11.846525587357489
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI leaders and companies have much to offer to AI/ML practitioners to support them in addressing and mitigating biases in the AI/ML systems they develop. AI/ML practitioners need to receive the necessary resources and support from experts to develop unbiased AI/ML systems. However, our studies have revealed a discrepancy between practitioners' understanding of 'AI/ML bias' and the definitions of tech companies and researchers. This indicates a misalignment that needs addressing. Efforts should be made to match practitioners' understanding of AI/ML bias with the definitions developed by tech companies and researchers. These efforts could yield a significant return on investment by aiding AI/ML practitioners in developing unbiased AI/ML systems.
Related papers
- Navigating Fairness: Practitioners' Understanding, Challenges, and Strategies in AI/ML Development [11.846525587357489]
There is a lack of empirical studies focused on understanding the views and experiences of AI practitioners in developing fair AI/ML systems.
We conducted semi-structured interviews with 22 AI practitioners to investigate their understanding of what a 'fair AI/ML' is.
We developed a framework showcasing the relationship between AI practitioners' understanding of 'fair AI/ML' and their challenges in its development.
arXiv Detail & Related papers (2024-03-21T03:44:59Z)
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z)
- Ethics in the Age of AI: An Analysis of AI Practitioners' Awareness and Challenges [11.656193349991609]
We conducted a survey aimed at understanding AI practitioners' awareness of AI ethics and their challenges in incorporating ethics.
Based on 100 AI practitioners' responses, our findings indicate that the majority of AI practitioners had a reasonable familiarity with the concept of AI ethics.
Formal education/training was considered somewhat helpful in preparing practitioners to incorporate AI ethics.
arXiv Detail & Related papers (2023-07-14T02:50:46Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- On Two XAI Cultures: A Case Study of Non-technical Explanations in Deployed AI System [3.4918511133757977]
Little of XAI is comprehensible to non-AI experts, who are nonetheless the primary audience and major stakeholders of deployed AI systems in practice.
We advocate that it is critical to develop XAI methods for non-technical audiences.
We then present a real-life case study, where AI experts provided non-technical explanations of AI decisions to non-technical stakeholders.
arXiv Detail & Related papers (2021-12-02T07:02:27Z)
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers [0.0]
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI.
We conducted a survey of those who published in the top AI/ML conferences.
We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations.
arXiv Detail & Related papers (2021-05-05T15:23:12Z)
- Advancing the Research and Development of Assured Artificial Intelligence and Machine Learning Capabilities [2.688723831634804]
An adversarial AI (A2I) and adversarial ML (AML) attack seeks to deceive and manipulate AI/ML models.
It is imperative that AI/ML models can defend against these attacks.
The A2I Working Group (A2IWG) seeks to advance the research and development of assured AI/ML capabilities.
arXiv Detail & Related papers (2020-09-24T20:12:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.