Explainable AI for Bioinformatics: Methods, Tools, and Applications
- URL: http://arxiv.org/abs/2212.13261v1
- Date: Sun, 25 Dec 2022 21:00:36 GMT
- Title: Explainable AI for Bioinformatics: Methods, Tools, and Applications
- Authors: Md. Rezaul Karim, Tanhim Islam, Oya Beyan, Christoph Lange, Michael
Cochez, Dietrich Rebholz-Schuhmann and Stefan Decker
- Abstract summary: Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models.
In this paper, we discuss the importance of explainability with a focus on bioinformatics.
- Score: 1.6855835471222005
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial intelligence (AI) systems based on deep neural networks (DNNs) and
machine learning (ML) algorithms are increasingly used to solve critical
problems in bioinformatics, biomedical informatics, and precision medicine.
However, complex DNN or ML models, which are unavoidably opaque and often perceived as
black-box methods, may not be able to explain why and how they make certain
decisions. Such black-box models are difficult to comprehend not only for
targeted users and decision-makers but also for AI developers. Moreover, in
sensitive areas like healthcare, explainability and accountability are not only
desirable properties of AI but also legal requirements -- especially when AI
may have significant impacts on human lives. Explainable artificial
intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of
black-box models and make it possible to interpret how AI systems make their
decisions with transparency. An interpretable ML model can explain how it makes
predictions and which factors affect the model's outcomes. The majority of
state-of-the-art interpretable ML methods have been developed in a
domain-agnostic way and originate from computer vision, automated reasoning, or
even statistics. Many of these methods cannot be directly applied to
bioinformatics problems without prior customization, extension, and domain
adaptation. In this paper, we discuss the importance of explainability with a
focus on bioinformatics. We analyse and provide a comprehensive overview of
model-specific and model-agnostic interpretable ML methods and tools. Via
several case studies covering bioimaging, cancer genomics, and biomedical text
mining, we show how bioinformatics research could benefit from XAI methods and
how they could help improve decision fairness.
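To make the model-agnostic idea concrete, below is a minimal, hypothetical sketch of one widely used method, permutation importance, applied to a synthetic stand-in for a gene-expression classification task. It assumes scikit-learn is available; the data, feature indices, and model choice are illustrative assumptions, not the paper's own experiments:

```python
# Hypothetical sketch: model-agnostic explanation via permutation importance.
# The dataset is a synthetic stand-in for, e.g., gene-expression features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out accuracy;
# a large drop means the model relies on that feature for its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Because the procedure only queries the trained model's predictions, it applies to any classifier, which is precisely what makes such methods model-agnostic.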
Related papers
- An Evaluation of Large Language Models in Bioinformatics Research [52.100233156012756]
We study the performance of large language models (LLMs) on a wide spectrum of crucial bioinformatics tasks.
These tasks include the identification of potential coding regions, extraction of named entities for genes and proteins, detection of antimicrobial and anti-cancer peptides, molecular optimization, and resolution of educational bioinformatics problems.
Our findings indicate that, given appropriate prompts, LLMs like GPT variants can successfully handle most of these tasks.
arXiv Detail & Related papers (2024-02-21T11:27:31Z)
- Interpretable Medical Imagery Diagnosis with Self-Attentive Transformers: A Review of Explainable AI for Health Care [2.7195102129095003]
Vision Transformers (ViT) have emerged as state-of-the-art computer vision models, benefiting from self-attention modules.
Deep-learning models are complex and often treated as "black boxes", which creates uncertainty about how they operate.
This review summarises recent ViT advancements and interpretative approaches to understanding the decision-making process of ViT (a minimal attention-rollout sketch follows this entry).
arXiv Detail & Related papers (2023-09-01T05:01:52Z)
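As a concrete illustration of the interpretative approaches such reviews cover, here is a minimal, hypothetical sketch of attention rollout (Abnar & Zuidema, 2020), with random tensors standing in for a trained ViT's head-averaged attention weights:

```python
# Hypothetical sketch: attention rollout for a ViT-style model.
# Random matrices stand in for a trained model's head-averaged attention.
import torch

num_layers, num_tokens = 4, 10   # e.g. 1 [CLS] token + 9 image patches
attentions = [torch.softmax(torch.randn(num_tokens, num_tokens), dim=-1)
              for _ in range(num_layers)]

rollout = torch.eye(num_tokens)
for attn in attentions:
    # Mix in the residual connection, renormalise, then compose layers.
    attn = 0.5 * attn + 0.5 * torch.eye(num_tokens)
    attn = attn / attn.sum(dim=-1, keepdim=True)
    rollout = attn @ rollout

# Row 0 estimates how much the [CLS] token ultimately attends to each patch.
cls_relevance = rollout[0, 1:]
print(cls_relevance)
```

Reshaped to the patch grid, `cls_relevance` yields the kind of heatmap over the input image that such interpretability reviews discuss.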
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose outputs and predictions end users can understand and interpret.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- Analysis of Explainable Artificial Intelligence Methods on Medical Image Classification [0.0]
The use of deep learning in computer vision tasks such as image classification has led to a rapid increase in the performance of such systems.
Medical image classification systems are being adopted due to their high accuracy and near parity with human physicians in many tasks.
Research techniques for gaining insight into these black-box models belong to the field of explainable artificial intelligence (XAI); a minimal gradient-saliency sketch follows this entry.
arXiv Detail & Related papers (2022-12-10T06:17:43Z)
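As one simple instance of the XAI techniques analysed in such studies, here is a minimal, hypothetical gradient-saliency sketch; the CNN and the input "scan" are toy placeholders, not a trained medical model:

```python
# Hypothetical sketch: vanilla gradient saliency for an image classifier.
# The network and input are toy placeholders, not a trained medical model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model.eval()

image = torch.randn(1, 1, 64, 64, requires_grad=True)  # stand-in scan
score = model(image)[0].max()   # logit of the predicted class
score.backward()                # d(score)/d(pixel) for every pixel

saliency = image.grad.abs().squeeze()  # high values = influential pixels
print(saliency.shape)                  # torch.Size([64, 64])
```

Overlaying `saliency` on the input image highlights the pixels that most influenced the prediction, which is the basic idea behind many of the compared methods.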
- OAK4XAI: Model towards Out-Of-Box eXplainable Artificial Intelligence for Digital Agriculture [4.286327408435937]
XAI tries to provide human-understandable explanations for the decisions of trained AI models.
We build an Agriculture Computing Ontology (AgriComO) to explain the knowledge mined in agriculture.
arXiv Detail & Related papers (2022-09-29T21:20:25Z)
- A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI [0.0]
XAI aims to produce a demonstrable basis for trust, which for human subjects is achieved through communicative means.
The idea of trusting a machine with decisions that bear on a human's livelihood poses an ethical conundrum.
XAI methods produce visualizations of the feature contributions to a given model's output at both the local and global level (a minimal local-attribution sketch follows this entry).
arXiv Detail & Related papers (2021-03-08T18:15:52Z)
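To ground the local/global distinction, here is a minimal, hypothetical occlusion-style local attribution: each feature of one record is replaced by its background mean and the resulting change in predicted probability is recorded. The data and model are synthetic stand-ins, not the paper's EHR setup:

```python
# Hypothetical sketch: local, model-agnostic attribution by occlusion.
# Synthetic data stands in for high-dimensional EHR features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

background = X.mean(axis=0)  # reference values used to "occlude" a feature
sample = X[0].copy()         # the single record to explain (local level)
base_prob = model.predict_proba(sample.reshape(1, -1))[0, 1]

for j in range(X.shape[1]):
    occluded = sample.copy()
    occluded[j] = background[j]   # neutralise feature j
    prob = model.predict_proba(occluded.reshape(1, -1))[0, 1]
    # Positive values mean feature j pushed this prediction upwards.
    print(f"feature_{j}: contribution ~ {base_prob - prob:+.3f}")
```

Averaging such per-record attributions across a dataset gives the global view of feature contributions that the summary refers to.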
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Machine Learning in Nano-Scale Biomedical Engineering [77.75587007080894]
We review the existing research regarding the use of machine learning in nano-scale biomedical engineering.
The main challenges that can be formulated as ML problems are classified into three main categories.
For each of the presented methodologies, special emphasis is given to its principles, applications, and limitations.
arXiv Detail & Related papers (2020-08-05T15:45:54Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.