Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework
- URL: http://arxiv.org/abs/2309.14530v1
- Date: Mon, 25 Sep 2023 21:09:21 GMT
- Title: Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework
- Authors: Apoorva Muley, Prathamesh Muzumdar, George Kurian, and Ganga Prasad Basyal
- Abstract summary: This study conducts a thorough examination of the research stream focusing on AI risks in healthcare, aiming to explore the distinct genres within this domain.
A selection criterion was employed to carefully analyze 39 articles and identify three primary genres of AI risks prevalent in healthcare: clinical data risks, technical risks, and socio-ethical risks.
- Score: 0.5130062125323206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study conducts a thorough examination of the research stream focusing on AI risks in healthcare, aiming to explore the distinct genres within this domain. A selection criterion was employed to carefully analyze 39 articles and identify three primary genres of AI risks prevalent in healthcare: clinical data risks, technical risks, and socio-ethical risks. The selection criteria were based on journal ranking and impact factor. The research seeks to provide a valuable resource for future healthcare researchers, furnishing them with a comprehensive understanding of the complex challenges posed by AI implementation in healthcare settings. By categorizing and elucidating these genres, the study aims to facilitate the development of empirical qualitative and quantitative research, fostering evidence-based approaches to address AI-related risks in healthcare effectively. This endeavor contributes to building a robust knowledge base that can inform the formulation of risk mitigation strategies, ensuring the safe and efficient integration of AI technologies in healthcare practices. Studying AI risks in healthcare is therefore essential to building better, more efficient AI systems and mitigating their risks.
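To make the selection-and-categorization step described in the abstract concrete, here is a minimal, hypothetical Python sketch of filtering candidate articles by journal ranking and impact factor and tagging each retained article with one of the three risk genres. All field names, thresholds, and keyword lists are assumptions for illustration; the paper does not specify them.

```python
from dataclasses import dataclass

# Hypothetical screening sketch: the thresholds and keyword lists below
# are illustrative assumptions, not details taken from the paper.
GENRE_KEYWORDS = {
    "clinical data risks": {"data quality", "privacy", "missing data"},
    "technical risks": {"robustness", "validation", "model drift"},
    "socio-ethical risks": {"bias", "fairness", "accountability"},
}

@dataclass
class Article:
    title: str
    journal_rank: str      # e.g., "Q1", "Q2"
    impact_factor: float
    keywords: set[str]

def select(articles, min_impact_factor=3.0, allowed_ranks=("Q1", "Q2")):
    """Keep articles that meet the (assumed) journal-quality criteria."""
    return [a for a in articles
            if a.journal_rank in allowed_ranks
            and a.impact_factor >= min_impact_factor]

def tag_genre(article):
    """Assign the genre whose keyword set overlaps the article most."""
    return max(GENRE_KEYWORDS,
               key=lambda g: len(GENRE_KEYWORDS[g] & article.keywords))

if __name__ == "__main__":
    corpus = [Article("Bias in triage models", "Q1", 5.2, {"bias", "fairness"})]
    for a in select(corpus):
        print(a.title, "->", tag_genre(a))
```

In the paper itself the genre assignment is a qualitative judgment by the authors; the keyword matching above merely stands in for that step.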
Related papers
- Safety challenges of AI in medicine [23.817939398729955]
This review examines potential risks in AI practices that may compromise safety in medicine.
It covers reduced performance across diverse populations, inconsistent operational stability, the need for high-quality data for effective model tuning, and the risk of data breaches during model development and deployment.
The second part of the article explores safety issues specific to large language models (LLMs) in medical contexts.
arXiv Detail & Related papers (2024-09-11T13:47:47Z)
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- AI-Driven Healthcare: A Survey on Ensuring Fairness and Mitigating Bias [2.398440840890111]
AI applications have significantly improved diagnostic accuracy, treatment personalization, and patient outcome predictions.
These advancements also introduce substantial ethical and fairness challenges, particularly algorithmic bias.
Such biases can lead to disparities in healthcare delivery, affecting diagnostic accuracy and treatment outcomes across different demographic groups.
arXiv Detail & Related papers (2024-07-29T02:39:17Z)
- Reporting Risks in AI-based Assistive Technology Research: A Systematic Review [2.928964540437144]
We conducted a systematic literature review of research into AI-based assistive technology for persons with visual impairments.
Our study shows that most proposed technologies with a testable prototype have not been evaluated in a human study with members of the sight-loss community.
arXiv Detail & Related papers (2024-07-01T05:22:44Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for such seismic changes has triggered a lively debate about the risks of the technology and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Emotional Intelligence Through Artificial Intelligence: NLP and Deep Learning in the Analysis of Healthcare Texts [1.9374282535132377]
This manuscript presents a methodical examination of the utilization of Artificial Intelligence in the assessment of emotions in texts related to healthcare.
We scrutinize numerous research studies that employ AI to augment sentiment analysis, categorize emotions, and forecast patient outcomes (a minimal sentiment-analysis sketch appears below).
Challenges persist, including ensuring the ethical application of AI, safeguarding patient confidentiality, and addressing potential biases in algorithmic procedures.
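As a rough illustration of the sentiment-analysis step such studies describe, the following Python sketch scores a few healthcare-related texts with an off-the-shelf classifier. The library, model, and example texts are assumptions for illustration; the surveyed papers do not prescribe them.

```python
# Illustrative only; requires: pip install transformers torch
from transformers import pipeline

# A general-purpose sentiment model (hypothetical setup): clinical use
# would need a domain-tuned model plus a bias and privacy review.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

patient_notes = [
    "The new medication has really improved my sleep and energy.",
    "I am anxious about the upcoming surgery and the recovery time.",
]

for note, result in zip(patient_notes, classifier(patient_notes)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8} ({result['score']:.2f}): {note}")
```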
arXiv Detail & Related papers (2024-03-14T15:58:13Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Quantitative AI Risk Assessments: Opportunities and Challenges [9.262092738841979]
AI-based systems are increasingly being leveraged to provide value to organizations, individuals, and society.
The risks of these systems have led to proposed regulations, litigation, and general societal concerns.
This paper explores the concept of a quantitative AI Risk Assessment (a minimal scoring sketch follows below).
arXiv Detail & Related papers (2022-09-13T21:47:25Z)
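To make the idea of a quantitative AI risk assessment concrete, here is a minimal Python sketch that rates a system on a few risk dimensions as likelihood times severity and aggregates them. The dimensions (borrowed loosely from the three genres in the main paper above), the 1-5 scales, and the worst-case aggregation rule are illustrative assumptions, not the methodology of any paper listed here.

```python
# Minimal illustrative sketch of a quantitative risk score. The risk
# dimensions, 1-5 scales, and max-aggregation rule are assumptions for
# illustration; they are not taken from the papers above.

# Each dimension: (likelihood, severity), both rated on a 1-5 scale.
ASSESSMENT = {
    "clinical data risk": (3, 4),   # e.g., biased or low-quality training data
    "technical risk": (2, 5),       # e.g., silent model failure in deployment
    "socio-ethical risk": (4, 3),   # e.g., unequal performance across groups
}

def dimension_score(likelihood: int, severity: int) -> int:
    """Classic risk-matrix score: likelihood x severity (max 25)."""
    return likelihood * severity

def overall_risk(assessment: dict[str, tuple[int, int]]) -> int:
    """Aggregate conservatively by taking the worst dimension."""
    return max(dimension_score(lik, sev) for lik, sev in assessment.values())

for name, (lik, sev) in ASSESSMENT.items():
    print(f"{name}: {dimension_score(lik, sev)}/25")
print(f"overall (worst-case): {overall_risk(ASSESSMENT)}/25")
```

Taking the maximum rather than the mean reflects a common conservative convention in risk matrices: one severe, likely failure mode dominates the assessment.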
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.