A Trustworthy Framework for Medical Image Analysis with Deep Learning
- URL: http://arxiv.org/abs/2212.02764v1
- Date: Tue, 6 Dec 2022 05:30:22 GMT
- Title: A Trustworthy Framework for Medical Image Analysis with Deep Learning
- Authors: Kai Ma, Siyuan He, Pengcheng Xi, Ashkan Ebadi, Stéphane Tremblay,
Alexander Wong
- Abstract summary: TRUDLMIA is a trustworthy deep learning framework for medical image analysis.
It is anticipated that the framework will support researchers and clinicians in advancing the use of deep learning for dealing with public health crises including COVID-19.
- Score: 71.48204494889505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer vision and machine learning are playing an increasingly important
role in computer-assisted diagnosis; however, the application of deep learning
to medical imaging has challenges in data availability and data imbalance, and
it is especially important that models for medical imaging are built to be
trustworthy. Therefore, we propose TRUDLMIA, a trustworthy deep learning
framework for medical image analysis, which adopts a modular design, leverages
self-supervised pre-training, and utilizes a novel surrogate loss function.
Experimental evaluations indicate that models generated from the framework are
both trustworthy and high-performing. It is anticipated that the framework will
support researchers and clinicians in advancing the use of deep learning for
dealing with public health crises including COVID-19.
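The abstract names self-supervised pre-training as one pillar of the framework but does not specify the objective. As a rough illustration only (a SimCLR-style contrastive loss is one common choice, not necessarily the one TRUDLMIA uses), the idea of pulling together two augmented views of the same image can be sketched as:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross entropy), the
    contrastive objective popularized by SimCLR.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Each embedding's positive is its counterpart view; all other 2N - 2
    embeddings in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarity
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # Row i's positive sits at index i + n (first half) or i - n (second half).
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos_idx] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Minimizing this loss during pre-training yields representations that can then be fine-tuned on the (often small, imbalanced) labeled medical dataset.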
Related papers
- Adversarial-Robust Transfer Learning for Medical Imaging via Domain
Assimilation [17.46080957271494]
The scarcity of publicly available medical images has led contemporary algorithms to depend on models pretrained on large sets of natural images.
A significant domain discrepancy exists between natural and medical images, which causes AI models to exhibit heightened vulnerability to adversarial attacks.
This paper proposes a domain assimilation approach that introduces texture and color adaptation into transfer learning, followed by a texture preservation component to suppress undesired distortion.
arXiv Detail & Related papers (2024-02-25T06:39:15Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- Performance Analysis of UNet and Variants for Medical Image Segmentation [1.5410557873153836]
This study aims to explore the application of deep learning models, particularly focusing on the UNet architecture and its variants, in medical image segmentation.
The findings reveal that the standard UNet, when extended with a deep network layer, is a proficient medical image segmentation model.
The Res-UNet and Attention Res-UNet architectures demonstrate smoother convergence and superior performance, particularly when handling fine image details.
arXiv Detail & Related papers (2023-09-22T17:20:40Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Towards Trustworthy Healthcare AI: Attention-Based Feature Learning for COVID-19 Screening With Chest Radiography [70.37371604119826]
Building AI models with trustworthiness is important especially in regulated areas such as healthcare.
Previous work uses convolutional neural networks as the backbone architecture, which have been shown to be prone to over-caution and overconfidence in decision making.
We propose a feature learning approach using Vision Transformers, which use an attention-based mechanism.
arXiv Detail & Related papers (2022-07-19T14:55:42Z)
- Performance or Trust? Why Not Both. Deep AUC Maximization with Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
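The exact surrogate loss is not given in this summary. As an illustrative stand-in (a generic pairwise squared-hinge surrogate for AUC, not necessarily the paper's formulation), directly optimizing the ranking of positive over negative scores can be sketched as:

```python
import numpy as np

def auc_surrogate_loss(scores, labels, margin=1.0):
    """Pairwise squared-hinge surrogate for AUC.

    AUC counts how often positives score above negatives; since that
    count is non-differentiable, each positive/negative score pair whose
    gap falls below `margin` is penalized quadratically instead.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # All positive-negative score differences via a broadcasted outer difference.
    diff = pos[:, None] - neg[None, :]
    # Squared hinge: zero once a pair is separated by at least `margin`.
    return np.mean(np.maximum(0.0, margin - diff) ** 2)
```

Because the loss pairs every positive with every negative, it is insensitive to class imbalance in a way that plain cross-entropy is not, which is why AUC surrogates suit screening tasks with rare positives.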
arXiv Detail & Related papers (2021-12-14T21:16:52Z)
- Self-supervised learning methods and applications in medical imaging analysis: A survey [0.0]
The article reviews state-of-the-art research directions in self-supervised learning for image data, with a focus on applications in medical imaging analysis.
The article covers 40 of the most recent studies in self-supervised learning for medical imaging analysis, aiming to shed light on recent innovations in the field.
arXiv Detail & Related papers (2021-09-17T17:01:42Z)
- Medical Imaging and Machine Learning [16.240472115235253]
The National Institutes of Health in 2018 identified key focus areas for the future of artificial intelligence in medical imaging.
Data availability, the need for novel computing architectures, and explainable AI algorithms remain relevant challenges.
In this paper, we explore challenges unique to high-dimensional clinical imaging data and highlight some of the technical and ethical considerations.
arXiv Detail & Related papers (2021-03-02T18:53:39Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
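PriMIA's secure aggregation is cryptographic and beyond a short sketch, but the step it protects, weighted averaging of client model updates (FedAvg), can be illustrated as follows; the function name and plain (unencrypted) averaging here are illustrative assumptions, not PriMIA's API:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: combine client model parameters, weighted by local dataset size.

    client_weights: list of parameter arrays, one per client.
    client_sizes:   number of local training samples per client.
    In a secure-aggregation setting, the server would see only the
    (masked) sum, never any individual client's update.
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

Weighting by dataset size keeps the global model unbiased when hospitals contribute very different amounts of data.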
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.