Universal Model for Multi-Domain Medical Image Retrieval
- URL: http://arxiv.org/abs/2007.08628v1
- Date: Tue, 14 Jul 2020 23:22:04 GMT
- Title: Universal Model for Multi-Domain Medical Image Retrieval
- Authors: Yang Feng, Yubao Liu, Jiebo Luo
- Abstract summary: Medical Image Retrieval (MIR) helps doctors quickly find similar patients' data.
MIR is becoming increasingly helpful due to the wide use of digital imaging modalities.
However, the popularity of various digital imaging modalities in hospitals also poses several challenges to MIR.
- Score: 88.67940265012638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical Image Retrieval (MIR) helps doctors quickly find similar patients'
data, which can considerably aid the diagnosis process. MIR is becoming
increasingly helpful due to the wide use of digital imaging modalities and the
growth of medical image repositories. However, the popularity of various
digital imaging modalities in hospitals also poses several challenges to MIR.
Usually, one image retrieval model is only trained to handle images from one
modality or one source. When there are needs to retrieve medical images from
several sources or domains, multiple retrieval models need to be maintained,
which is not cost-effective. In this paper, we study an important but unexplored
task: how to train one MIR model that is applicable to medical images from
multiple domains? Simply fusing the training data from multiple domains cannot
solve this problem because some domains over-fit sooner than others when trained
together using existing methods. Therefore, we propose to distill the knowledge
in multiple specialist MIR models into a single multi-domain MIR model via
universal embedding to solve this problem. Using skin disease, x-ray, and
retina image datasets, we validate that our proposed universal model can
effectively accomplish multi-domain MIR.
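The core idea of the abstract, distilling several frozen specialist embedding models into one universal embedding, can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: linear maps stand in for the specialist CNN embedders, each domain's "images" are random vectors in a domain-specific subspace, and the distillation is a closed-form least-squares fit to the teachers' outputs rather than the deep-network training with a distillation loss that the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: three frozen "specialist" embedders, one per
# domain (skin disease, x-ray, retina), each a linear map into a shared
# 8-dim embedding space. Real specialists would be trained CNNs.
d_in, d_emb, d_sub = 32, 8, 10
domains = ["skin", "xray", "retina"]
specialists = {name: rng.normal(size=(d_in, d_emb)) for name in domains}

# Give each domain its own orthogonal feature subspace, mimicking the
# fact that different imaging modalities occupy different regions of
# feature space.
basis, _ = np.linalg.qr(rng.normal(size=(d_in, d_in)))
data = {}
for i, name in enumerate(domains):
    block = basis[:, i * d_sub:(i + 1) * d_sub]          # domain subspace
    data[name] = rng.normal(size=(64, d_sub)) @ block.T  # 64 "images"

# Distillation step: fit ONE universal embedding W so that on each
# domain it reproduces that domain's specialist. Here this is a single
# least-squares solve over the pooled teacher outputs.
X_all = np.vstack([data[name] for name in domains])
T_all = np.vstack([data[name] @ specialists[name] for name in domains])
W, *_ = np.linalg.lstsq(X_all, T_all, rcond=None)

# One model now matches every specialist on its own domain, so a single
# retrieval index can serve all three modalities.
errors = {name: float(np.abs(data[name] @ W - data[name] @ specialists[name]).max())
          for name in domains}
print(errors)
```

Because the toy domains live in disjoint subspaces, an exact universal map exists and the per-domain error drops to numerical noise; with overlapping real-image domains, the paper's learned universal embedding trades off the specialists instead of matching them exactly.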
Related papers
- Multi-domain improves out-of-distribution and data-limited scenarios for medical image analysis [2.315156126698557]
We show that training models on multiple domains instead of a single one significantly alleviates the limitations observed in specialized models.
For organ recognition, the multi-domain model can improve accuracy by up to 8% compared to conventional specialized models.
arXiv Detail & Related papers (2023-10-10T16:07:23Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Fusion of medical imaging and electronic health records with attention and multi-head mechanisms [4.433829714749366]
We propose a multi-modal attention module which uses EHR data to guide the selection of important regions during image feature extraction.
We also propose to incorporate a multi-head mechanism into the gated multimodal unit (GMU) so that it can fuse image and EHR features in different subspaces in parallel.
Experiments on predicting Glasgow outcome scale (GOS) of intracerebral hemorrhage patients and classifying Alzheimer's Disease showed the proposed method can automatically focus on task-related areas.
arXiv Detail & Related papers (2021-12-22T07:39:26Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Studying Robustness of Semantic Segmentation under Domain Shift in cardiac MRI [0.8858288982748155]
We study challenges and opportunities of domain transfer across images from multiple clinical centres and scanner vendors.
In this work, we build upon a fixed U-Net architecture configured by the nnU-net framework to investigate various data augmentation techniques and batch normalization layers.
arXiv Detail & Related papers (2020-11-15T17:50:23Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
- Robust Image Reconstruction with Misaligned Structural Information [0.27074235008521236]
We propose a variational framework which jointly performs reconstruction and registration.
Our approach is the first to achieve this for different modalities and outperforms established approaches in terms of the accuracy of both reconstruction and registration.
arXiv Detail & Related papers (2020-04-01T17:21:25Z)
- Unifying Specialist Image Embedding into Universal Image Embedding [84.0039266370785]
It is desirable to have a universal deep embedding model applicable to various domains of images.
We propose to distill the knowledge in multiple specialists into a universal embedding to solve this problem.
arXiv Detail & Related papers (2020-03-08T02:51:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.