Improving Endoscopic Decision Support Systems by Translating Between
Imaging Modalities
- URL: http://arxiv.org/abs/2004.12604v1
- Date: Mon, 27 Apr 2020 06:55:56 GMT
- Title: Improving Endoscopic Decision Support Systems by Translating Between
Imaging Modalities
- Authors: Georg Wimmer, Michael Gadermayr, Andreas Vécsei, Andreas Uhl
- Abstract summary: We investigate the applicability of image-to-image translation to endoscopic images showing different imaging modalities.
In a study on computer-aided celiac disease diagnosis, we explore whether image-to-image translation is capable of effectively performing the translation between the domains.
- Score: 4.760079434948197
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novel imaging technologies raise many questions concerning the adaptation of
computer-aided decision support systems. Classification models either need to
be adapted or even newly trained from scratch to exploit the full potential of
enhanced techniques. Both options typically require the acquisition of new
labeled training data. In this work we investigate the applicability of
image-to-image translation to endoscopic images showing different imaging
modalities, namely conventional white-light and narrow-band imaging. In a study
on computer-aided celiac disease diagnosis, we explore whether image-to-image
translation is capable of effectively performing the translation between the
domains. We investigate if models can be trained on virtual (or a mixture of
virtual and real) samples to improve overall accuracy in a setting with limited
labeled training data. Finally, we also ask whether a translation of testing
images to another domain is capable of improving accuracy by exploiting the
enhanced imaging characteristics.
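The abstract does not name a concrete translation architecture, so the following is only a minimal sketch of the training setup it describes: labeled white-light images are converted into "virtual" narrow-band samples by an assumed, already-trained unpaired translator (CycleGAN-style, here called g_wl2nbi) and mixed with the scarce real narrow-band data.

```python
# Minimal sketch (not the paper's code) of mixing real and virtual samples.
# Assumption: g_wl2nbi is a trained unpaired image-to-image translator that
# maps white-light (WL) tensors of shape (N, 3, H, W) to narrow-band (NBI).

import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def build_mixed_training_set(g_wl2nbi: torch.nn.Module,
                             wl_images: torch.Tensor, wl_labels: torch.Tensor,
                             nbi_images: torch.Tensor, nbi_labels: torch.Tensor):
    """Translate labeled WL images to virtual NBI and merge with real NBI data."""
    g_wl2nbi.eval()
    with torch.no_grad():
        virtual_nbi = g_wl2nbi(wl_images)            # WL -> virtual NBI
    real = TensorDataset(nbi_images, nbi_labels)      # limited real NBI data
    virtual = TensorDataset(virtual_nbi, wl_labels)   # labels carry over unchanged
    return ConcatDataset([real, virtual])

# A celiac-disease classifier can then be trained on the mixed set:
# loader = DataLoader(build_mixed_training_set(g, wl_x, wl_y, nbi_x, nbi_y),
#                     batch_size=32, shuffle=True)
```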
Related papers
- From Real Artifacts to Virtual Reference: A Robust Framework for Translating Endoscopic Images [27.230439605570812]
In endoscopic imaging, combining pre-operative data with intra-operative imaging is important for surgical planning and navigation.
Existing domain adaptation methods are hampered by distribution shift caused by in vivo artifacts.
This paper presents an artifact-resilient image translation method and an associated benchmark for this purpose.
arXiv Detail & Related papers (2024-10-15T02:41:52Z)
- Disease Classification and Impact of Pretrained Deep Convolution Neural Networks on Diverse Medical Imaging Datasets across Imaging Modalities [0.0]
This paper investigates the intricacies of using pretrained deep convolutional neural networks with transfer learning across diverse medical imaging datasets.
It shows that the use of pretrained models as fixed feature extractors yields poor performance irrespective of the datasets.
It is also found that deeper and more complex architectures did not necessarily result in the best performance.
arXiv Detail & Related papers (2024-08-30T04:51:19Z)
- Image Class Translation Distance: A Novel Interpretable Feature for Image Classification [0.0]
We propose a novel application of image translation networks for image classification.
We train a network to translate images between possible classes, and then quantify translation distance.
These translation distances can then be examined for clusters and trends, and can be fed directly to a simple classifier.
We demonstrate the approach on a toy 2-class scenario, apples versus oranges, and then apply it to two medical imaging tasks.
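As an illustration of the idea, the sketch below assumes one trained translator per target class and a pixel-wise L2 distance between the original and translated image; the distance metric and the downstream classifier are simplifications, not details taken from the paper.

```python
# Illustrative sketch of translation distance as an interpretable feature.

import torch

def translation_distances(image: torch.Tensor, translators: dict) -> torch.Tensor:
    """image: (C, H, W); translators: class name -> trained translator network.
    Returns one distance per class, in sorted class-name order."""
    dists = []
    with torch.no_grad():
        for cls, g in sorted(translators.items()):
            translated = g(image.unsqueeze(0)).squeeze(0)   # translate toward cls
            dists.append(torch.norm(translated - image).item())
    return torch.tensor(dists)

# The per-class distance vectors can be inspected for clusters and trends, or
# fed to a simple classifier (e.g. logistic regression) as the summary describes.
```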
arXiv Detail & Related papers (2024-08-16T18:48:28Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
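A minimal sketch of a single residual adapter of the kind described above follows: a small trainable bottleneck whose output is added back to frozen pre-trained visual features. The bottleneck width and placement are illustrative assumptions; the paper attaches such adapters at multiple levels of the CLIP visual encoder.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Small bottleneck MLP added residually onto frozen backbone features."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Only the adapter parameters are trained; the backbone stays frozen.
        return feats + self.up(self.act(self.down(feats)))
```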
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
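For the retrieval benchmark, a small sketch of text-to-image retrieval in a shared embedding space is shown below; the cosine-similarity ranking and precomputed encoder outputs are assumptions about the setup, not code from the paper.

```python
import torch
import torch.nn.functional as F

def retrieve_images(text_emb: torch.Tensor, image_embs: torch.Tensor, k: int = 5):
    """text_emb: (d,) query report embedding; image_embs: (n, d) X-ray embeddings.
    Returns the indices of the k most similar images."""
    sims = F.cosine_similarity(text_emb.unsqueeze(0), image_embs, dim=-1)
    return sims.topk(k).indices
```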
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Hospital-Agnostic Image Representation Learning in Digital Pathology [0.7412445894287709]
Whole Slide Images (WSIs) in digital pathology are used to diagnose cancer subtypes.
The difference in procedures to acquire WSIs at various trial sites gives rise to variability in the histopathology images.
A domain generalization technique is leveraged in this study to improve the generalization capability of a Deep Neural Network (DNN).
arXiv Detail & Related papers (2022-04-05T11:45:46Z)
- Colorectal Polyp Classification from White-light Colonoscopy Images via Domain Alignment [57.419727894848485]
A computer-aided diagnosis system is required to assist accurate diagnosis from colonoscopy images.
Most previous studies attempt to develop models for polyp differentiation using Narrow-Band Imaging (NBI) or other enhanced images.
We propose a novel framework based on a teacher-student architecture for accurate colorectal polyp classification.
arXiv Detail & Related papers (2021-08-05T09:31:46Z)
- Self-Supervised Domain Adaptation for Diabetic Retinopathy Grading using Vessel Image Reconstruction [61.58601145792065]
We learn invariant target-domain features by defining a novel self-supervised task based on retinal vessel image reconstructions.
We show that our approach outperforms existing domain adaptation strategies.
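A hedged sketch of the general recipe follows: supervised grading on labeled source images combined with a self-supervised reconstruction loss on unlabeled target-domain images. The specific reconstruction target (here a precomputed vessel image per target sample) and the loss weighting are assumptions, not details taken from the paper.

```python
import torch.nn as nn

def adaptation_loss(encoder: nn.Module, grader: nn.Module, vessel_decoder: nn.Module,
                    src_img, src_label, tgt_img, tgt_vessels, lam: float = 1.0):
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
    # Supervised diabetic-retinopathy grading on the labeled source domain.
    grading = ce(grader(encoder(src_img)), src_label)
    # Self-supervised vessel reconstruction on the unlabeled target domain,
    # encouraging the shared encoder to learn domain-invariant features.
    recon = mse(vessel_decoder(encoder(tgt_img)), tgt_vessels)
    return grading + lam * recon
```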
arXiv Detail & Related papers (2021-07-20T09:44:07Z)
- Continual Active Learning for Efficient Adaptation of Machine Learning Models to Changing Image Acquisition [3.205205037629335]
We propose a method for continual active learning on a data stream of medical images.
It recognizes shifts or additions of new imaging sources - domains - and adapts training accordingly.
Results demonstrate that the proposed method outperforms naive active learning while requiring less manual labelling.
arXiv Detail & Related papers (2021-06-07T05:39:06Z)
- Semantic segmentation of multispectral photoacoustic images using deep learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)