Explainable Artificial Intelligence Methods in Combating Pandemics: A
Systematic Review
- URL: http://arxiv.org/abs/2112.12705v2
- Date: Sat, 25 Dec 2021 05:06:58 GMT
- Title: Explainable Artificial Intelligence Methods in Combating Pandemics: A
Systematic Review
- Authors: Felipe Giuste, Wenqi Shi, Yuanda Zhu, Tarun Naren, Monica Isgut, Ying
Sha, Li Tong, Mitali Gupte, and May D. Wang
- Abstract summary: The impact of artificial intelligence during the COVID-19 pandemic was greatly limited by lack of model transparency.
We find that successful use of XAI can improve model performance, instill trust in the end-user, and provide the value needed to affect user decision-making.
- Score: 7.140215556873923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the myriad peer-reviewed papers demonstrating novel Artificial
Intelligence (AI)-based solutions to COVID-19 challenges during the pandemic,
few have made significant clinical impact. The impact of artificial
intelligence during the COVID-19 pandemic was greatly limited by lack of model
transparency. This systematic review examines the use of Explainable Artificial
Intelligence (XAI) during the pandemic and how its use could overcome barriers
to real-world success. We find that successful use of XAI can improve model
performance, instill trust in the end-user, and provide the value needed to
affect user decision-making. We introduce the reader to common XAI techniques,
their utility, and specific examples of their application. Evaluation of XAI
results is also discussed as an important step to maximize the value of
AI-based clinical decision support systems. We illustrate the classical,
modern, and potential future trends of XAI to elucidate the evolution of novel
XAI techniques. Finally, we provide a checklist of suggestions during the
experimental design process supported by recent publications. Common challenges
during the implementation of AI solutions are also addressed with specific
examples of potential solutions. We hope this review may serve as a guide to
improve the clinical impact of future AI-based solutions.
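Below is a minimal, hedged sketch of one common post-hoc XAI technique of the kind this review surveys: model-agnostic permutation feature importance applied to a hypothetical COVID-19 outcome classifier. The feature names, synthetic data, and model choice are illustrative assumptions and are not taken from the paper.

```python
# Sketch only: permutation feature importance on a hypothetical tabular
# COVID-19 outcome classifier. All data, feature names, and model settings
# below are placeholders for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular clinical data (e.g., labs and vitals).
feature_names = ["age", "crp", "d_dimer", "spo2",
                 "lymphocyte_count", "respiratory_rate"]
X, y = make_classification(n_samples=1000, n_features=len(feature_names),
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Model-agnostic global explanation: how much does shuffling each feature
# degrade held-out performance?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean,
                    result.importances_std), key=lambda t: -t[1])
for name, mean, std in ranked:
    print(f"{name:20s} importance = {mean:.3f} +/- {std:.3f}")
```

In clinical settings, global importance scores like these are typically complemented by per-patient local explanations (for example, SHAP values for tabular models or saliency maps for imaging models), which the review identifies as important for end-user trust.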
Related papers
- Study on the Helpfulness of Explainable Artificial Intelligence [0.0]
Legal, business, and ethical requirements motivate using effective XAI.
We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task.
In other words, we address the helpfulness of XAI for human decision-making.
arXiv Detail & Related papers (2024-10-14T14:03:52Z) - How Human-Centered Explainable AI Interface Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z) - How much informative is your XAI? A decision-making assessment task to
objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z) - Explainable AI applications in the Medical Domain: a systematic review [1.4419517737536707]
The field of Medical AI faces various challenges in terms of building user trust, complying with regulations, and using data ethically.
This paper presents a literature review on the recent developments of XAI solutions for medical decision support, based on a representative sample of 198 articles published in recent years.
arXiv Detail & Related papers (2023-08-10T08:12:17Z) - Impact Of Explainable AI On Cognitive Load: Insights From An Empirical
Study [0.0]
This study measures cognitive load, task performance, and task time for implementation-independent XAI explanation types using a COVID-19 use case.
We found that these explanation types strongly influence end-users' cognitive load, task performance, and task time.
arXiv Detail & Related papers (2023-04-18T09:52:09Z) - A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose outputs and predictions can be interpreted by end users.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and
Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches aimed at improving user understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of
Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Unbox the Black-box for the Medical Explainable AI via Multi-modal and
Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond [3.4031539425106683]
Explainable Artificial Intelligence (XAI) is an emerging research topic of machine learning aimed at unboxing how AI systems' black-box choices are made.
Many machine learning algorithms cannot reveal how and why a decision has been made.
XAI becomes more and more crucial for deep learning powered applications, especially for medical and healthcare studies.
arXiv Detail & Related papers (2021-02-03T10:56:58Z)
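Several of the related papers above (e.g., the helpfulness study and the decision-making assessment task) evaluate XAI not by inspecting explanations directly but by measuring whether explanations help users perform a downstream proxy task. A minimal sketch of that style of evaluation follows; the participant outcomes, accuracy rates, and bootstrap analysis are hypothetical placeholders, not results from any cited paper.

```python
# Sketch only: proxy-task evaluation of explanation helpfulness.
# Compare users' decision accuracy with and without explanations;
# all outcomes below are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# 1 = user made the correct decision on a case, 0 = incorrect.
no_explanation   = rng.binomial(1, 0.70, size=120)   # placeholder baseline
with_explanation = rng.binomial(1, 0.82, size=120)   # placeholder treatment

def bootstrap_diff(a, b, n_boot=10_000):
    """Bootstrap 95% CI for the difference in proxy-task accuracy (b - a)."""
    diffs = [rng.choice(b, b.size).mean() - rng.choice(a, a.size).mean()
             for _ in range(n_boot)]
    return np.percentile(diffs, [2.5, 97.5])

print("accuracy without explanations:", no_explanation.mean())
print("accuracy with explanations:   ", with_explanation.mean())
print("95% bootstrap CI for improvement:",
      bootstrap_diff(no_explanation, with_explanation))
```

A positive confidence interval under this kind of analysis would indicate that the explanations measurably improved user decision-making, which is the notion of helpfulness these evaluation papers target.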