Bridging the gap between Natural and Medical Images through Deep
Colorization
- URL: http://arxiv.org/abs/2005.10589v2
- Date: Mon, 19 Oct 2020 21:47:58 GMT
- Title: Bridging the gap between Natural and Medical Images through Deep
Colorization
- Authors: Lia Morra, Luca Piano, Fabrizio Lamberti, Tatiana Tommasi
- Abstract summary: Transfer learning from natural image collections is a standard practice that attempts to tackle shape, texture and color discrepancies.
In this work, we propose to disentangle those challenges and design a dedicated network module that focuses on color adaptation.
We combine learning from scratch of the color module with transfer learning of different classification backbones, obtaining an end-to-end, easy-to-train architecture for diagnostic image recognition.
- Score: 15.585095421320922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has thrived by training on large-scale datasets. However, in
many applications, such as medical image diagnosis, getting massive amounts of
data is still prohibitive due to privacy, lack of acquisition homogeneity and
annotation cost. In this scenario, transfer learning from natural image
collections is a standard practice that attempts to tackle shape, texture and
color discrepancies all at once through pretrained model fine-tuning. In this
work, we propose to disentangle those challenges and design a dedicated network
module that focuses on color adaptation. We combine learning from scratch of
the color module with transfer learning of different classification backbones,
obtaining an end-to-end, easy-to-train architecture for diagnostic image
recognition on X-ray images. Extensive experiments show that our approach is
particularly efficient in cases of data scarcity and provides a new path for
further transferring the learned color information across multiple medical
datasets.
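
A minimal sketch of the kind of pipeline the abstract describes: a small color-adaptation module, trained from scratch, maps a single-channel X-ray to a three-channel input for an ImageNet-pretrained classification backbone, and the whole model is trained end-to-end on the diagnostic labels. The layer choices and the ResNet-50 backbone below are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch (assumption: the exact module design differs in the paper).
# A small colorization head, learned from scratch, maps a 1-channel X-ray to a
# 3-channel image that an ImageNet-pretrained backbone can consume; the whole
# pipeline is trained end-to-end on the diagnostic labels.
import torch
import torch.nn as nn
from torchvision import models


class ColorAdaptationModule(nn.Module):
    """Learned grayscale-to-RGB mapping (illustrative architecture)."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, kernel_size=1),
            nn.Sigmoid(),  # keep outputs in the backbone's expected [0, 1] range
        )

    def forward(self, x):  # x: (B, 1, H, W) grayscale X-ray
        return self.net(x)  # (B, 3, H, W) pseudo-color image


class ColorizedClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.color = ColorAdaptationModule()                       # from scratch
        self.backbone = models.resnet50(weights="IMAGENET1K_V1")   # transferred
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x):
        return self.backbone(self.color(x))


model = ColorizedClassifier(num_classes=2)
logits = model(torch.randn(4, 1, 224, 224))  # end-to-end trainable
```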
Related papers
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
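
A compact sketch of the conditioning idea summarized in the entry above: a frozen self-supervised encoder supplies an embedding that conditions a DDPM-style denoiser trained with the standard noise-prediction objective. Both networks below are toy placeholders, not the paper's models.

```python
# Sketch of diffusion training conditioned on an SSL embedding (DDPM-style
# noise prediction); networks are toy placeholders, not the paper's models.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)        # cumulative alpha_bar_t


class SSLEncoder(nn.Module):          # stand-in for a frozen SSL backbone
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, dim))

    def forward(self, x):
        return self.net(x)


class CondDenoiser(nn.Module):        # predicts the added noise epsilon
    def __init__(self, dim=128):
        super().__init__()
        self.cond = nn.Linear(dim + 1, 16)            # SSL embedding + timestep
        self.net = nn.Sequential(nn.Conv2d(3 + 16, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, x_t, t, emb):
        c = self.cond(torch.cat([emb, t.float().unsqueeze(1) / T], dim=1))
        c = c[:, :, None, None].expand(-1, -1, x_t.size(2), x_t.size(3))
        return self.net(torch.cat([x_t, c], dim=1))


encoder, denoiser = SSLEncoder(), CondDenoiser()
x0 = torch.rand(8, 3, 64, 64)                         # clean image patches
with torch.no_grad():
    emb = encoder(x0)                                 # SSL conditioning
t = torch.randint(0, T, (8,))
noise = torch.randn_like(x0)
a = alpha_bar[t].view(-1, 1, 1, 1)
x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise          # forward diffusion
loss = F.mse_loss(denoiser(x_t, t, emb), noise)       # noise-prediction loss
```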
- Connecting the Dots: Graph Neural Network Powered Ensemble and Classification of Medical Images [0.0]
Deep learning for medical imaging is limited due to the requirement for large amounts of training data.
We employ the Image Foresting Transform to optimally segment images into superpixels.
These superpixels are subsequently transformed into graph-structured data, enabling the proficient extraction of features and modeling of relationships.
arXiv Detail & Related papers (2023-11-13T13:20:54Z)
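
A rough sketch of the superpixel-to-graph pipeline described in the entry above: SLIC is used here as a convenient stand-in for the Image Foresting Transform, mean color serves as the node feature, and a single hand-rolled mean-aggregation layer stands in for a full GNN ensemble.

```python
# Sketch: image -> superpixels -> region adjacency graph -> one message-passing
# step. SLIC stands in for the Image Foresting Transform; node features and the
# GNN layer are deliberately minimal.
import numpy as np
import torch
import torch.nn as nn
from skimage.segmentation import slic

image = np.random.rand(128, 128, 3)                   # placeholder for a scan
labels = slic(image, n_segments=50, start_label=0)    # superpixel labels
n = labels.max() + 1

# Node features: mean color of each superpixel.
feats = np.stack([image[labels == i].mean(axis=0) for i in range(n)])

# Adjacency: superpixels whose pixels touch horizontally or vertically.
adj = np.zeros((n, n), dtype=np.float32)
for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
    edges = np.stack([a.ravel(), b.ravel()], axis=1)
    edges = edges[edges[:, 0] != edges[:, 1]]
    adj[edges[:, 0], edges[:, 1]] = 1
    adj[edges[:, 1], edges[:, 0]] = 1


class MeanAggregationLayer(nn.Module):
    """h_i' = ReLU(W [h_i ; mean of neighbouring h_j])"""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ h / deg                          # mean over neighbours
        return torch.relu(self.lin(torch.cat([h, neigh], dim=1)))


h = torch.tensor(feats, dtype=torch.float32)
A = torch.tensor(adj)
node_emb = MeanAggregationLayer(3, 32)(h, A)           # (n, 32) node embeddings
graph_emb = node_emb.mean(dim=0)                       # pooled for classification
```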
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
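
A toy sketch of the pre-training recipe described in the entry above: a 3D volume is corrupted by local patch masking plus a low-level perturbation (Gaussian noise is used here as one example), and a small autoencoder is trained to reconstruct the original volume. The architecture is a placeholder, not the paper's network.

```python
# Sketch of disruptive-autoencoder-style pre-training: corrupt a 3D volume with
# local patch masking plus a low-level perturbation (Gaussian noise here), then
# train a small 3D autoencoder to reconstruct the original. Toy architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def disrupt(vol, patch=8, mask_ratio=0.5, noise_std=0.1):
    """Zero out a random subset of non-overlapping patches and add noise."""
    b, c, d, h, w = vol.shape
    mask = (torch.rand(b, 1, d // patch, h // patch, w // patch) > mask_ratio)
    mask = mask.float().repeat_interleave(patch, 2) \
                       .repeat_interleave(patch, 3) \
                       .repeat_interleave(patch, 4)
    return vol * mask + noise_std * torch.randn_like(vol)


class TinyAutoencoder3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, 16, 4, 2, 1), nn.ReLU(),
                                 nn.Conv3d(16, 32, 4, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose3d(32, 16, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose3d(16, 1, 4, 2, 1))

    def forward(self, x):
        return self.dec(self.enc(x))


model = TinyAutoencoder3D()
volume = torch.rand(2, 1, 32, 64, 64)             # (batch, channel, D, H, W)
corrupted = disrupt(volume)
loss = F.mse_loss(model(corrupted), volume)       # reconstruct the original
```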
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an effective deep learning model requires large amounts of data with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style-generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
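
A generic sketch of style-oriented contrastive training for the mammography entry above: two differently "styled" views of each image are pulled together with an NT-Xent loss so the encoder learns style-invariant features. The augmentations and encoder are stand-ins, not the paper's specific scheme.

```python
# Sketch of a style-contrastive objective: two style-perturbed views of each
# mammogram are embedded and pulled together with NT-Xent. Augmentations and
# the encoder are illustrative stand-ins for the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms

style_augment = transforms.Compose([               # crude "vendor style" jitter
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
    transforms.GaussianBlur(kernel_size=5),
])


def nt_xent(z1, z2, temperature=0.1):
    """Standard NT-Xent over a batch of paired embeddings."""
    b = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)     # (2B, D)
    sim = z @ z.t() / temperature
    sim = sim.masked_fill(torch.eye(2 * b, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)


encoder = models.resnet18(weights=None)
encoder.fc = nn.Identity()                          # expose 512-d features

images = torch.rand(8, 3, 224, 224)                 # toy mammogram batch
v1, v2 = style_augment(images), style_augment(images)  # two style views
loss = nt_xent(encoder(v1), encoder(v2))
```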
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-source a strong MedISeg repository in which each component is plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- Metadata-enhanced contrastive learning from retinal optical coherence tomography images [7.932410831191909]
We extend conventional contrastive frameworks with a novel metadata-enhanced strategy.
Our approach employs widely available patient metadata to approximate the true set of inter-image contrastive relationships.
Our approach outperforms both standard contrastive methods and a retinal image foundation model in five out of six image-level downstream tasks.
arXiv Detail & Related papers (2022-08-04T08:53:15Z)
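
A sketch of the metadata-enhanced idea from the OCT entry above, under the assumption that positives are pairs sharing a metadata key (a hypothetical patient identifier here), scored with a supervised-contrastive-style loss; the paper's actual relationship definition may differ.

```python
# Sketch of metadata-defined contrastive positives: embeddings whose scans share
# a metadata key (a hypothetical patient ID here) are treated as positive pairs
# in a supervised-contrastive-style loss. Details are illustrative.
import torch
import torch.nn.functional as F


def metadata_contrastive_loss(z, meta_ids, temperature=0.1):
    """z: (N, D) embeddings; meta_ids: (N,) e.g. patient identifiers."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                         # (N, N)
    self_mask = torch.eye(len(z), dtype=torch.bool)
    pos_mask = (meta_ids[:, None] == meta_ids[None, :]) & ~self_mask

    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float("-inf")),
                                     dim=1, keepdim=True)
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    # Average log-probability of the metadata-matched positives per anchor.
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / n_pos
    return loss[pos_mask.any(dim=1)].mean()               # skip anchors w/o positives


embeddings = torch.randn(6, 128, requires_grad=True)      # from an OCT encoder
patient_ids = torch.tensor([0, 0, 1, 1, 2, 3])            # hypothetical metadata
loss = metadata_contrastive_loss(embeddings, patient_ids)
```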
- Positional Contrastive Learning for Volumetric Medical Image Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both semi-supervised setting and transfer learning setting.
arXiv Detail & Related papers (2021-06-16T22:15:28Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Federated Learning for Computational Pathology on Gigapixel Whole Slide Images [4.035591045544291]
We introduce privacy-preserving federated learning for gigapixel whole slide images in computational pathology.
We evaluate our approach on two different diagnostic problems using thousands of histology whole slide images with only slide-level labels.
arXiv Detail & Related papers (2020-09-21T21:56:08Z)
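
A minimal sketch of one federated-averaging round consistent with the summary above: each institution updates a local copy of the model on its own slide-level-labelled data and only the weights are aggregated, so no images leave the site. The multiple-instance-learning details are omitted and the model and data are toy placeholders.

```python
# Sketch of one FedAvg round for privacy-preserving training: each institution
# updates a local copy of the model on its own (slide-level-labelled) data and
# only the weights are aggregated; no images are shared. Data and model are toy.
import copy
import torch
import torch.nn as nn


def local_update(model, data, labels, epochs=1, lr=1e-3):
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(data), labels)
        loss.backward()
        opt.step()
    return model.state_dict()


def fed_avg(state_dicts):
    """Uniformly average parameters from all participating sites."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(0)
    return avg


global_model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

# Each "site" holds bag-level features extracted from its own slides (toy data).
sites = [(torch.randn(32, 256), torch.randint(0, 2, (32,))) for _ in range(3)]
local_states = [local_update(global_model, x, y) for x, y in sites]
global_model.load_state_dict(fed_avg(local_states))    # one communication round
```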
- Stain Style Transfer of Histopathology Images Via Structure-Preserved Generative Learning [31.254432319814864]
This study proposes two stain style transfer models, SSIM-GAN and DSCSI-GAN, based on generative adversarial networks.
By incorporating structure-preservation metrics and feedback from an auxiliary diagnosis network during learning, medically relevant information is preserved in the color-normalized images.
arXiv Detail & Related papers (2020-07-24T15:30:19Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to aid the cross-attention process and to overcome both class imbalance and the dominance of easy samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
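
A generic sketch related to the cross-attention entry above: features from two branches attend to each other, and a focal-style multi-label loss is used here as a stand-in for the paper's loss that goes beyond cross-entropy. Shapes, branches, and the loss form are illustrative assumptions.

```python
# Sketch: cross-attention between two CNN feature maps, followed by a
# focal-style multi-label loss used as a stand-in for the paper's
# imbalance-aware loss. Shapes and networks are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionHead(nn.Module):
    """Features from branch A attend to features from branch B."""

    def __init__(self, dim=256, heads=4, num_classes=14):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, feat_a, feat_b):
        # (B, C, H, W) -> (B, H*W, C) token sequences
        q = feat_a.flatten(2).transpose(1, 2)
        kv = feat_b.flatten(2).transpose(1, 2)
        attended, _ = self.attn(q, kv, kv)
        return self.classifier(attended.mean(dim=1))   # (B, num_classes) logits


def focal_bce(logits, targets, gamma=2.0):
    """Multi-label focal loss: down-weights easy, well-classified samples."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                              # prob. of the true label
    return ((1 - p_t) ** gamma * bce).mean()


feat_a = torch.randn(4, 256, 7, 7)                     # e.g. global-branch features
feat_b = torch.randn(4, 256, 7, 7)                     # e.g. second-branch features
logits = CrossAttentionHead()(feat_a, feat_b)
targets = torch.randint(0, 2, (4, 14)).float()         # multi-label disease targets
loss = focal_bce(logits, targets)
```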
This list is automatically generated from the titles and abstracts of the papers on this site.