Is in-domain data beneficial in transfer learning for landmarks
detection in x-ray images?
- URL: http://arxiv.org/abs/2403.01470v1
- Date: Sun, 3 Mar 2024 10:35:00 GMT
- Title: Is in-domain data beneficial in transfer learning for landmarks
detection in x-ray images?
- Authors: Roberto Di Via, Matteo Santacesaria, Francesca Odone, Vito Paolo
Pastore
- Abstract summary: We study whether the usage of small-scale in-domain x-ray image datasets may provide any improvement for landmark detection over models pre-trained on large natural image datasets only.
Our results show that using in-domain source datasets brings marginal or no benefit with respect to an ImageNet out-of-domain pre-training.
Our findings can provide an indication for the development of robust landmark detection systems in medical images when no large annotated dataset is available.
- Score: 1.5348047288817481
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, deep learning has emerged as a promising technique for
medical image analysis. However, this application domain is likely to suffer
from a limited availability of large public datasets and annotations. A common
solution to these challenges in deep learning is the use of a transfer learning
framework, typically with a fine-tuning protocol, where a large-scale source
dataset is used to pre-train a model that is then fine-tuned on the target
dataset. In this paper, we present a systematic study analyzing whether the
usage of small-scale in-domain x-ray image datasets may provide any improvement
for landmark detection over models pre-trained on large natural image datasets
only. We focus on the multi-landmark localization task for three datasets,
including chest, head, and hand x-ray images. Our results show that using
in-domain source datasets brings marginal or no benefit with respect to an
ImageNet out-of-domain pre-training. Our findings can provide an indication for
the development of robust landmark detection systems in medical images when no
large annotated dataset is available.
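Below is a minimal PyTorch sketch of the pre-train / fine-tune protocol described in the abstract, not the authors' released code. A ResNet-18 backbone is initialized either from ImageNet weights (out-of-domain) or from a hypothetical in-domain x-ray checkpoint, then fine-tuned for multi-landmark detection via heatmap regression. The backbone choice, head design, checkpoint path, and hyperparameters are illustrative assumptions, and grayscale x-rays are assumed to be replicated to three channels before being fed to the ImageNet backbone.

```python
# Sketch of the two-stage transfer learning protocol: pre-trained backbone,
# then fine-tuning on the target landmark dataset (assumptions noted above).
from typing import Optional

import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights


def build_landmark_model(num_landmarks: int,
                         in_domain_ckpt: Optional[str] = None) -> nn.Module:
    """ResNet-18 feature extractor + small head predicting one heatmap per landmark."""
    backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)  # ImageNet pre-training
    if in_domain_ckpt is not None:
        # Optionally overwrite the ImageNet weights with an in-domain x-ray checkpoint
        # (a hypothetical file produced by a separate source-domain pre-training run).
        state = torch.load(in_domain_ckpt, map_location="cpu")
        backbone.load_state_dict(state, strict=False)
    features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool and fc
    head = nn.Sequential(
        nn.Conv2d(512, 256, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        nn.Conv2d(256, num_landmarks, kernel_size=1),  # one heatmap per landmark
    )
    return nn.Sequential(features, head)


def finetune(model: nn.Module, loader, epochs: int = 50, lr: float = 1e-4) -> None:
    """Fine-tune on the target dataset: MSE between predicted and Gaussian target heatmaps."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for images, target_heatmaps in loader:  # target heatmaps at 1/8 input resolution here
            optimizer.zero_grad()
            loss = criterion(model(images), target_heatmaps)
            loss.backward()
            optimizer.step()
```

Under this setup, the comparison studied in the paper reduces to running the same fine-tuning loop with or without an in-domain checkpoint passed to the model builder.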
Related papers
- Self-supervised pre-training with diffusion model for few-shot landmark detection in x-ray images [0.8793721044482612]
This study introduces a novel application of denoising diffusion probabilistic models (DDPMs) to the landmark detection task.
Our key innovation lies in leveraging DDPMs for self-supervised pre-training in landmark detection.
This method enables accurate landmark detection with minimal annotated training data.
arXiv Detail & Related papers (2024-07-25T15:32:59Z)
- MoCo-Transfer: Investigating out-of-distribution contrastive learning for limited-data domains [52.612507614610244]
We analyze the benefit of transferring self-supervised contrastive representations from Momentum Contrast (MoCo) pretraining to settings with limited data.
We find that, depending on the quantity of labeled and unlabeled data, contrastive pretraining on larger out-of-distribution datasets can perform nearly as well as or better than MoCo pretraining in-domain.
arXiv Detail & Related papers (2023-11-15T21:56:47Z)
- Exploring Self-Supervised Representation Learning For Low-Resource Medical Image Analysis [2.458658951393896]
We investigate the applicability of self-supervised learning algorithms on small-scale medical imaging datasets.
In-domain low-resource SSL pre-training can yield performance competitive with transfer learning from large-scale datasets.
arXiv Detail & Related papers (2023-03-03T22:26:17Z)
- RadTex: Learning Efficient Radiograph Representations from Text Reports [7.090896766922791]
We build a data-efficient learning framework that utilizes radiology reports to improve medical image classification performance with limited labeled data.
Our model achieves higher classification performance than ImageNet-supervised pretraining when labeled training data is limited.
arXiv Detail & Related papers (2022-08-05T15:06:26Z)
- Histopathology DatasetGAN: Synthesizing Large-Resolution Histopathology Datasets [0.0]
Histopathology datasetGAN (HDGAN) is a framework for image generation and segmentation that scales well to large-resolution histopathology images.
We make several adaptations from the original framework, including updating the generative backbone, selectively extracting latent features from the generator, and switching to memory-mapped arrays.
We evaluate HDGAN on a thrombotic microangiopathy high-resolution tile dataset, demonstrating strong performance on the high-resolution image-annotation generation task.
arXiv Detail & Related papers (2022-07-06T14:33:50Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Damage detection using in-domain and cross-domain transfer learning [4.111375269316102]
We propose a combination of in-domain and cross-domain transfer learning strategies for damage detection in bridges.
We show that the combination of cross-domain and in-domain transfer consistently shows superior performance even with tiny datasets.
arXiv Detail & Related papers (2021-02-07T17:36:27Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
- DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfactory segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on the same dataset it was trained on starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)