Evaluation of Various Open-Set Medical Imaging Tasks with Deep Neural
Networks
- URL: http://arxiv.org/abs/2110.10888v1
- Date: Thu, 21 Oct 2021 04:19:41 GMT
- Title: Evaluation of Various Open-Set Medical Imaging Tasks with Deep Neural
Networks
- Authors: Zongyuan Ge, Xin Wang
- Abstract summary: We conduct rigorous evaluations amongst state-of-the-art open-set methods, exploring different open-set scenarios.
We show the main difference between general domain-trained and medical domain-trained open-set models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The current generation of deep neural networks has achieved close-to-human
results on "closed-set" image recognition; that is, the classes being evaluated
overlap with the training classes. Many recent methods, termed "open-set"
recognition algorithms, attempt to address the unknown by rejecting unknown
classes while maintaining high recognition accuracy on known classes. However,
it is still unclear how open-set methods trained on a general domain such as
ImageNet would perform in a different but more specific domain, such as the
medical domain. Without principled and formal
evaluations to measure the effectiveness of those general open-set methods,
artificial intelligence (AI)-based medical diagnostics would experience
ineffective adoption and increased risks of bad decision making. In this paper,
we conduct rigorous evaluations amongst state-of-the-art open-set methods,
exploring different open-set scenarios from "similar-domain" to
"different-domain" scenarios and comparing them on various general and medical
domain datasets. We summarise the results and core ideas and explain how the
models react to various degrees of openness and different distributions of open
classes. We show the main difference between general domain-trained and medical
domain-trained open-set models with our quantitative and qualitative analysis
of the results. We also identify aspects of model robustness in real clinical
workflow usage according to confidence calibration and the inference
efficiency.
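The core open-set mechanic the abstract describes, rejecting inputs the classifier is not confident about while keeping high accuracy on known classes, can be illustrated with the standard maximum-softmax-probability (MSP) baseline. This is only a minimal sketch of that family of methods, not the specific algorithms the paper evaluates; the threshold value and the `unknown_label` convention below are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def open_set_predict(logits, threshold=0.5, unknown_label=-1):
    """MSP baseline: samples whose top softmax score falls below
    `threshold` are rejected as 'unknown'; the rest keep their
    argmax class."""
    probs = softmax(np.asarray(logits, dtype=float))
    conf = probs.max(axis=-1)
    preds = probs.argmax(axis=-1)
    preds = np.where(conf >= threshold, preds, unknown_label)
    return preds, conf

# One confidently known sample vs. one near-uniform (likely unknown) sample.
logits = np.array([[6.0, 1.0, 0.5],    # peaked scores -> keep class 0
                   [1.1, 1.0, 0.9]])   # flat scores -> reject as unknown
preds, conf = open_set_predict(logits, threshold=0.5)
```

Choosing the rejection threshold is exactly where confidence calibration matters: a miscalibrated network can assign high softmax scores to open-class inputs, which is one of the failure modes such evaluations probe.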
Related papers
- A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis [48.84443450990355]
Deep networks have achieved broad success in analyzing natural images, but when applied to medical scans, they often fail in unexpected situations.
We investigate this challenge and focus on model sensitivity to domain shifts, such as data sampled from different hospitals or data confounded by demographic variables such as sex and race, in the context of chest X-rays and skin lesion images.
Taking inspiration from medical training, we propose giving deep networks a prior grounded in explicit medical knowledge communicated in natural language.
arXiv Detail & Related papers (2024-05-23T17:55:02Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Taxonomy Adaptive Cross-Domain Adaptation in Medical Imaging via Optimization Trajectory Distillation [73.83178465971552]
The success of automated medical image analysis depends on large-scale and expert-annotated training sets.
Unsupervised domain adaptation (UDA) has been raised as a promising approach to alleviate the burden of labeled data collection.
We propose optimization trajectory distillation, a unified approach to address the two technical challenges from a new perspective.
arXiv Detail & Related papers (2023-07-27T08:58:05Z)
- EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models [0.3425341633647624]
This paper focuses on evaluating methods of attribution mapping to find whether robust neural networks are more explainable.
We propose a new explainability faithfulness metric (called EvalAttAI) that addresses the limitations of prior metrics.
arXiv Detail & Related papers (2023-03-15T18:33:22Z)
- Debiasing Deep Chest X-Ray Classifiers using Intra- and Post-processing Methods [9.152759278163954]
This work presents two novel intra-processing techniques based on fine-tuning and pruning an already-trained neural network.
To the best of our knowledge, this is one of the first efforts studying debiasing methods on chest radiographs.
arXiv Detail & Related papers (2022-07-26T10:18:59Z)
- Unsupervised Domain Adaptation Using Feature Disentanglement And GCNs For Medical Image Classification [5.6512908295414]
We propose an unsupervised domain adaptation approach that uses graph neural networks and disentangled semantic and domain-invariant structural features.
We test the proposed method for classification on two challenging medical image datasets with distribution shifts.
Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods.
arXiv Detail & Related papers (2022-06-27T09:02:16Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Mutual Information-based Disentangled Neural Networks for Classifying Unseen Categories in Different Domains: Application to Fetal Ultrasound Imaging [10.504733425082335]
Deep neural networks exhibit limited generalizability across images with different entangled domain features and categorical features.
We propose Mutual Information-based Disentangled Neural Networks (MIDNet), which extract generalizable categorical features to transfer knowledge to unseen categories in a target domain.
We extensively evaluate the proposed method on fetal ultrasound datasets for two different image classification tasks.
arXiv Detail & Related papers (2020-10-30T17:32:18Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.