Fairness via AI: Bias Reduction in Medical Information
- URL: http://arxiv.org/abs/2109.02202v1
- Date: Mon, 6 Sep 2021 01:39:48 GMT
- Title: Fairness via AI: Bias Reduction in Medical Information
- Authors: Shiri Dori-Hacohen, Roberto Montenegro, Fabricio Murai, Scott A. Hale,
Keen Sung, Michela Blain, Jennifer Edwards-Johnson
- Abstract summary: We propose a novel framework of Fairness via AI, inspired by insights from medical education, sociology and antiracism.
We propose using AI to study, detect and mitigate biased, harmful, and/or false health information that disproportionately hurts minority groups in society.
- Score: 3.254836540242099
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Most Fairness in AI research focuses on exposing biases in AI systems. A
broader lens on fairness reveals that AI can serve a greater aspiration:
rooting out societal inequities from their source. Specifically, we focus on
inequities in health information, and aim to reduce bias in that domain using
AI. The AI algorithms under the hood of search engines and social media, many
of which are based on recommender systems, have an outsized impact on the
quality of medical and health information online. Therefore, embedding bias
detection and reduction into these recommender systems serving up medical and
health content online could have an outsized positive impact on patient
outcomes and wellbeing.
In this position paper, we offer the following contributions: (1) we propose
a novel framework of Fairness via AI, inspired by insights from medical
education, sociology and antiracism; (2) we define a new term, bisinformation,
which is related to, but distinct from, misinformation, and encourage
researchers to study it; (3) we propose using AI to study, detect and mitigate
biased, harmful, and/or false health information that disproportionately hurts
minority groups in society; and (4) we suggest several pillars and pose several
open problems in order to seed inquiry in this new space. While part (3) of
this work specifically focuses on the health domain, the fundamental computer
science advances and contributions stemming from research efforts in bias
reduction and Fairness via AI have broad implications in all areas of society.
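As a concrete (and purely illustrative) reading of the recommender-side intervention sketched in the abstract, the snippet below shows one way a bias or harm score from an upstream detector could be folded into the re-ranking of candidate health content. The paper is a position piece and prescribes no algorithm; the item structure, penalty form, and thresholds here are assumptions, not the authors' method.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class HealthItem:
    """A candidate piece of health content: the recommender's relevance
    estimate plus a bias/harm score in [0, 1] from a hypothetical
    upstream detector."""
    item_id: str
    relevance: float
    bias_score: float


def rerank_with_bias_penalty(items: List[HealthItem],
                             penalty_weight: float = 2.0,
                             block_threshold: float = 0.9) -> List[HealthItem]:
    """Demote content flagged as biased or harmful, and drop items whose
    bias score exceeds a hard threshold. Penalty form and parameter
    values are illustrative assumptions."""
    kept = [it for it in items if it.bias_score < block_threshold]
    return sorted(kept,
                  key=lambda it: it.relevance - penalty_weight * it.bias_score,
                  reverse=True)


if __name__ == "__main__":
    candidates = [
        HealthItem("a", relevance=0.92, bias_score=0.05),
        HealthItem("b", relevance=0.95, bias_score=0.60),  # relevant but likely biased
        HealthItem("c", relevance=0.70, bias_score=0.95),  # blocked outright
    ]
    for it in rerank_with_bias_penalty(candidates):
        print(it.item_id, round(it.relevance - 2.0 * it.bias_score, 2))
```

A score-level penalty like this keeps mildly suspect but relevant content visible while demoting it, whereas the hard threshold removes content the detector is most confident is harmful; where that line is drawn is a policy choice the sketch does not settle.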
Related papers
- Open Challenges on Fairness of Artificial Intelligence in Medical Imaging Applications [3.8236840661885485]
The chapter first discusses various sources of bias, including data collection, model training, and clinical deployment.
We then turn to discussing open challenges that we believe require attention from researchers and practitioners.
arXiv Detail & Related papers (2024-07-24T02:41:19Z)
- Towards Clinical AI Fairness: Filling Gaps in the Puzzle [15.543248260582217]
This review systematically pinpoints several deficiencies concerning both healthcare data and the provided AI fairness solutions.
We highlight the scarcity of research on AI fairness in many medical domains where AI technology is increasingly utilized.
To bridge these gaps, our review advances actionable strategies for both the healthcare and AI research communities.
arXiv Detail & Related papers (2024-05-28T07:42:55Z)
- A survey of recent methods for addressing AI fairness and bias in biomedicine [48.46929081146017]
Artificial intelligence systems may perpetuate social inequities or demonstrate biases, such as those based on race or gender.
We surveyed recent publications on different debiasing methods in the fields of biomedical natural language processing (NLP) and computer vision (CV).
We performed a literature search of PubMed, the ACM Digital Library, and IEEE Xplore for relevant articles published between January 2018 and December 2023, using multiple combinations of keywords.
We reviewed other potential methods from the general domain that could be applied to biomedicine to address bias and improve fairness.
arXiv Detail & Related papers (2024-02-13T06:38:46Z)
- Ensuring Trustworthy Medical Artificial Intelligence through Ethical and Philosophical Principles [4.705984758887425]
AI-based computer-assisted diagnosis and treatment tools can democratize healthcare by matching or surpassing the performance of clinical experts.
The democratization of such AI tools can reduce the cost of care, optimize resource allocation, and improve the quality of care.
However, integrating AI into healthcare raises several ethical and philosophical concerns, such as bias, transparency, autonomy, responsibility, and accountability.
arXiv Detail & Related papers (2023-04-23T04:14:18Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z)
- Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both the textual content and the contextual information of a post to assess the severity of the user's health state (a toy sketch of such view fusion appears after this list).
The diverse NLU views demonstrate the framework's effectiveness on both tasks, as well as on individual diseases, in assessing a user's health.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)
- The Threats of Artificial Intelligence Scale (TAI). Development, Measurement and Test Over Three Application Domains [0.0]
Several opinion polls frequently query the public's fear of autonomous robots and artificial intelligence (FARAI).
We propose a fine-grained scale to measure threat perceptions of AI that accounts for four functional classes of AI systems and is applicable to various domains of AI applications.
The data support the dimensional structure of the proposed Threats of AI (TAI) scale, as well as the internal consistency and factorial validity of the indicators.
arXiv Detail & Related papers (2020-06-12T14:15:02Z)
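For the multiview severity-assessment entry above, here is a toy sketch of view fusion, not the authors' model: a text view and a contextual view are encoded separately, concatenated, and passed through a linear scorer. The encoders, features, and weights below are invented stand-ins for illustration only.

```python
import numpy as np


def text_view(post: str, dim: int = 8) -> np.ndarray:
    """Toy stand-in for a learned text encoder: hash words into a small
    bag-of-words-style vector. A real system would use an NLU model."""
    vec = np.zeros(dim)
    for word in post.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec


def context_view(n_posts: int, hours_since_last: float) -> np.ndarray:
    """Toy contextual features describing the user's posting behaviour."""
    return np.array([n_posts, hours_since_last], dtype=float)


def severity_score(post: str, n_posts: int, hours_since_last: float,
                   weights: np.ndarray) -> float:
    """Fuse the two views by concatenation, apply a linear scorer, and
    squash with a sigmoid; in practice the weights would be learned."""
    fused = np.concatenate([text_view(post),
                            context_view(n_posts, hours_since_last)])
    return float(1.0 / (1.0 + np.exp(-fused @ weights)))


if __name__ == "__main__":
    # Random (untrained) weights, purely to show the data flow.
    w = np.random.default_rng(0).normal(size=10)
    print(severity_score("severe chest pain cannot breathe",
                         n_posts=12, hours_since_last=0.5, weights=w))
```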