Challenges facing the explainability of age prediction models: case study for two modalities
- URL: http://arxiv.org/abs/2303.06640v1
- Date: Sun, 12 Mar 2023 11:51:21 GMT
- Title: Challenges facing the explainability of age prediction models: case study for two modalities
- Authors: Mikolaj Spytek, Weronika Hryniewska-Guzik, Jaroslaw Zygierewicz, Jacek
Rogala, Przemyslaw Biecek
- Abstract summary: We investigate the use of Explainable Artificial Intelligence (XAI) for age prediction focusing on two specific modalities, EEG signal and lung X-rays.
We share predictive models for age to facilitate further research on new techniques to explain models for these modalities.
- Score: 4.000351859705655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prediction of age is a challenging task with various practical
applications in high-impact fields like the healthcare domain or criminology.
Despite the growing number of models and their increasing performance, we
still know little about how these models work. Numerous failures of AI systems
show that performance alone is insufficient; new methods are therefore needed
to explore and explain the reasons behind a model's predictions.
In this paper, we investigate the use of Explainable Artificial Intelligence
(XAI) for age prediction focusing on two specific modalities, EEG signal and
lung X-rays. We share predictive models for age to facilitate further research
on new techniques to explain models for these modalities.
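The abstract does not fix a particular explanation technique, so as a minimal, hypothetical sketch of the kind of model probing involved, the snippet below applies permutation feature importance to a toy age regressor. All data and the linear model are illustrative stand-ins, not the paper's shared EEG or X-ray models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an age-prediction task: age depends mostly on feature 0.
X = rng.normal(size=(500, 3))
y = 40 + 10 * X[:, 0] + 1 * X[:, 2] + rng.normal(scale=0.5, size=500)

# Fit a simple linear "age predictor" via least squares.
Xb = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def predict(X):
    return np.column_stack([X, np.ones(len(X))]) @ coef

def permutation_importance(predict, X, y, n_repeats=10, seed=1):
    """Mean increase in MSE when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    base = np.mean((predict(X) - y) ** 2)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target link
            scores[j] += np.mean((predict(Xp) - y) ** 2) - base
    return scores / n_repeats

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 should dominate
```

The same model-agnostic idea carries over to richer explainers (saliency maps for X-rays, channel importance for EEG), which is where the modality-specific challenges the paper studies arise.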
Related papers
- Learning-based Models for Vulnerability Detection: An Extensive Study [3.1317409221921144]
We extensively and comprehensively investigate two types of state-of-the-art learning-based approaches.
We experimentally demonstrate the superiority of sequence-based models and the limited capabilities of graph-based models.
arXiv Detail & Related papers (2024-08-14T13:01:30Z)
- Recent Advances in Predictive Modeling with Electronic Health Records [71.19967863320647]
Utilizing EHR data for predictive modeling presents several challenges due to its unique characteristics.
Deep learning has demonstrated its superiority in various applications, including healthcare.
arXiv Detail & Related papers (2023-12-11T22:35:21Z)
- Explain To Decide: A Human-Centric Review on the Role of Explainable Artificial Intelligence in AI-assisted Decision Making [1.0878040851638]
Machine learning models are error-prone and cannot be used autonomously.
Explainable Artificial Intelligence (XAI) aids end-user understanding of the model.
This paper surveyed the recent empirical studies on XAI's impact on human-AI decision-making.
arXiv Detail & Related papers (2023-10-04T01:36:30Z)
- MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data Augmentation [58.93221876843639]
This paper introduces a novel, end-to-end diffusion-based risk prediction model, named MedDiffusion.
It enhances risk prediction performance by creating synthetic patient data during training to enlarge the sample space.
It discerns hidden relationships between patient visits using a step-wise attention mechanism, enabling the model to automatically retain the most vital information for generating high-quality data.
arXiv Detail & Related papers (2023-07-14T04:50:04Z)
- Looking deeper into interpretable deep learning in neuroimaging: a comprehensive survey [20.373311465258393]
This paper comprehensively reviews interpretable deep learning models in the neuroimaging domain.
We discuss how multiple recent neuroimaging studies leveraged model interpretability to capture anatomical and functional brain alterations most relevant to model predictions.
arXiv Detail & Related papers (2023-03-02T17:27:45Z)
- Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of monitoring and updates of models.
arXiv Detail & Related papers (2022-07-04T10:09:48Z)
- Assessing the Performance of Automated Prediction and Ranking of Patient Age from Chest X-rays Against Clinicians [4.795478287106675]
Deep learning has been demonstrated to allow the accurate estimation of patient age from chest X-rays.
We present a novel comparative study of the performance of radiologists versus state-of-the-art deep learning models.
We train our models with a heterogeneous database of 1.8M chest X-rays with ground truth patient ages and investigate the limitations on model accuracy.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview over techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressiveness afforded by AI models.
arXiv Detail & Related papers (2021-09-10T20:00:47Z)
- Global and Local Interpretation of black-box Machine Learning models to determine prognostic factors from early COVID-19 data [0.0]
We analyze COVID-19 blood work data with some of the popular machine learning models.
We employ state-of-the-art post-hoc local interpretability techniques and symbolic metamodeling to draw interpretable conclusions.
In particular, symbolic metamodeling, one of the most recent such techniques, lets us recover closed-form mathematical expressions of the fitted COVID-19 models.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
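The diversity-enforcing loss mentioned above can be sketched as a penalty on the similarity among K candidate latent perturbations, so that minimizing it pushes counterfactual directions apart. This is an illustrative reading of the summary, not the authors' implementation:

```python
import numpy as np

def diversity_loss(perturbations):
    """Negative mean pairwise L2 distance among K latent perturbations.

    Minimizing this loss (added to the usual counterfactual objective)
    encourages the K perturbation directions to spread apart in latent
    space rather than collapse onto one trivial change.
    """
    K = len(perturbations)
    total, pairs = 0.0, 0
    for i in range(K):
        for j in range(i + 1, K):
            total += np.linalg.norm(perturbations[i] - perturbations[j])
            pairs += 1
    return -total / pairs

# Three toy 2-D latent perturbations pointing in different directions.
z = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
print(round(diversity_loss(z), 3))  # → -1.609
```

In practice such a term would be weighted against the prediction-flipping and proximity terms of the full counterfactual objective.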
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.