Explainable AI in Orthopedics: Challenges, Opportunities, and Prospects
- URL: http://arxiv.org/abs/2308.04696v1
- Date: Wed, 9 Aug 2023 04:15:10 GMT
- Title: Explainable AI in Orthopedics: Challenges, Opportunities, and Prospects
- Authors: Soheyla Amirian, Luke A. Carlson, Matthew F. Gong, Ines Lohse, Kurt R.
Weiss, Johannes F. Plate, and Ahmad P. Tafti
- Abstract summary: This work emphasizes the need for interdisciplinary collaborations between AI practitioners, orthopedic specialists, and regulatory entities to establish standards and guidelines for the adoption of XAI in orthopedics.
- Score: 0.5277024349608834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While artificial intelligence (AI) has found many successful applications in
various domains, its adoption in healthcare lags somewhat behind other
high-stakes settings. Several factors contribute to this slower uptake,
including regulatory frameworks, patient privacy concerns, and data
heterogeneity. However, one significant challenge that impedes the
implementation of AI in healthcare, particularly in orthopedics, is the lack of
explainability and interpretability around AI models. Addressing the challenge
of explainable AI (XAI) in orthopedics requires developing AI models and
algorithms that prioritize transparency and interpretability, allowing
clinicians, surgeons, and patients to understand the contributing factors
behind any AI-powered predictive or descriptive models. The current
contribution outlines several key challenges and opportunities that manifest in
XAI in orthopedic practice. This work emphasizes the need for interdisciplinary
collaborations between AI practitioners, orthopedic specialists, and regulatory
entities to establish standards and guidelines for the adoption of XAI in
orthopedics.
Related papers
- Artificial intelligence techniques in inherited retinal diseases: A review [19.107474958408847]
Inherited retinal diseases (IRDs) are a diverse group of genetic disorders that lead to progressive vision loss and are a major cause of blindness in working-age adults.
Recent advancements in artificial intelligence (AI) offer promising solutions to the challenges of diagnosing and managing these conditions.
This review consolidates existing studies, identifies gaps, and provides an overview of AI's potential in diagnosing and managing IRDs.
arXiv Detail & Related papers (2024-10-10T03:14:51Z) - AI-Driven Healthcare: A Survey on Ensuring Fairness and Mitigating Bias [2.398440840890111]
AI applications have significantly improved diagnostic accuracy, treatment personalization, and patient outcome predictions.
These advancements, however, also introduce substantial ethical and fairness challenges, most notably bias.
Such biases can lead to disparities in healthcare delivery, affecting diagnostic accuracy and treatment outcomes across different demographic groups.
arXiv Detail & Related papers (2024-07-29T02:39:17Z) - A Survey of Artificial Intelligence in Gait-Based Neurodegenerative Disease Diagnosis [51.07114445705692]
Neurodegenerative diseases (NDs) traditionally require extensive healthcare resources and human effort for medical diagnosis and monitoring.
As a crucial disease-related motor symptom, human gait can be exploited to characterize different NDs.
Current advances in artificial intelligence (AI) models enable automatic gait analysis for ND identification and classification.
arXiv Detail & Related papers (2024-05-21T06:44:40Z) - The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
When AI models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z) - A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose outputs and predictions can be interpreted by end users.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z) - Current State of Community-Driven Radiological AI Deployment in Medical
Imaging [1.474525456020066]
This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium.
We identify barriers between AI-model development in research labs and subsequent clinical deployment.
We discuss various AI integration points in a clinical Radiology workflow.
arXiv Detail & Related papers (2022-12-29T05:17:59Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and
Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help engage these stakeholders by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts of intelligence drawn from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.