Pseudo AI Bias
- URL: http://arxiv.org/abs/2210.08141v2
- Date: Sat, 31 Dec 2022 22:21:52 GMT
- Title: Pseudo AI Bias
- Authors: Xiaoming Zhai, Joseph Krajcik
- Abstract summary: Pseudo Artificial Intelligence bias (PAIB) is broadly disseminated in the literature.
PAIB due to misunderstandings, pseudo mechanical bias, and over-expectations of algorithmic predictions is socially harmful.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pseudo Artificial Intelligence bias (PAIB) is broadly disseminated in the
literature, which can result in unnecessary fear of AI in society, exacerbate the
enduring inequities and disparities in access to, and sharing of, the benefits of AI
applications, and waste the social capital invested in AI research. This study
systematically reviews publications in the literature to present three types of
PAIBs identified due to: a) misunderstandings, b) pseudo mechanical bias, and
c) over-expectations. We discuss the consequences of and solutions to PAIBs,
including certifying users of AI applications to mitigate AI fears, providing
customized user guidance for AI applications, and developing systematic
approaches to monitor bias. We conclude that PAIB due to misunderstandings,
pseudo mechanical bias, and over-expectations of algorithmic predictions is
socially harmful.
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Predicting the Impact of Generative AI Using an Agent-Based Model [0.0]
Generative artificial intelligence (AI) systems have transformed industries by autonomously generating content that mimics human creativity.
This paper employs agent-based modeling (ABM) to explore these implications.
The ABM integrates individual, business, and governmental agents to simulate dynamics such as education, skills acquisition, AI adoption, and regulatory responses.
arXiv Detail & Related papers (2024-08-30T13:13:56Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- AI Deception: A Survey of Examples, Risks, and Potential Solutions [20.84424818447696]
This paper argues that a range of current AI systems have learned how to deceive humans.
We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.
arXiv Detail & Related papers (2023-08-28T17:59:35Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Bias in Data-driven AI Systems -- An Introductory Survey [37.34717604783343]
This survey focuses on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful Machine Learning (ML) algorithms.
Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race, sex, etc.
arXiv Detail & Related papers (2020-01-14T09:39:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.