A Novel Collaborative Self-Supervised Learning Method for Radiomic Data
- URL: http://arxiv.org/abs/2302.09807v1
- Date: Mon, 20 Feb 2023 07:15:28 GMT
- Title: A Novel Collaborative Self-Supervised Learning Method for Radiomic Data
- Authors: Zhiyuan Li, Hailong Li, Anca L. Ralescu, Jonathan R. Dillman, Nehal A.
Parikh, and Lili He
- Abstract summary: We present the first collaborative self-supervised learning method to address the challenge of insufficient labeled radiomic data.
Our method collaboratively learns the robust latent feature representations from radiomic data in a self-supervised manner to reduce human annotation efforts.
- Score: 3.5213632537596604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer-aided disease diagnosis from radiomic data is important in many
medical applications. However, developing such a technique relies on annotating
radiological images, which is a time-consuming, labor-intensive, and expensive
process. In this work, we present the first collaborative self-supervised
learning method to address the challenge of insufficient labeled radiomic data,
whose characteristics differ from those of text and image data. To achieve this,
we present two collaborative pretext tasks that explore the latent pathological
or biological relationships between regions of interest and the similarity and
dissimilarity information between subjects. Our method collaboratively learns
robust latent feature representations from radiomic data in a self-supervised
manner to reduce human annotation effort, which benefits disease diagnosis. We
compared our proposed method with other state-of-the-art self-supervised
learning methods in a simulation study and on two independent datasets.
Extensive experimental results demonstrate that our method outperforms other
self-supervised learning methods on both classification and regression tasks.
With further refinement, our method shows potential for automatic disease
diagnosis when large-scale unlabeled data are available.
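The abstract describes two collaborative pretext tasks: one that explores relationships between regions of interest (ROIs) and one that uses similarity and dissimilarity between subjects. The PyTorch sketch below is only an illustration of that general idea, not the authors' implementation; it pairs a hypothetical masked-ROI reconstruction task with a subject-level contrastive task, and the network sizes, masking ratio, noise scale, and loss weighting are all placeholder assumptions.

```python
# Illustrative sketch (not the authors' released code): a collaborative
# self-supervised objective on radiomic feature vectors that combines
# (1) reconstruction of masked ROI features, to capture relationships between
# ROIs, and (2) a contrastive loss that pulls two noisy views of the same
# subject together and pushes different subjects apart.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CollaborativeSSL(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 64):
        super().__init__()
        # Shared encoder mapping radiomic features to a latent representation.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder for the masked-ROI reconstruction pretext task.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, n_features),
        )
        # Projection head for the subject-level contrastive pretext task.
        self.projector = nn.Linear(latent_dim, latent_dim)

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return z, self.decoder(z), F.normalize(self.projector(z), dim=-1)


def collaborative_loss(model, x, mask_ratio=0.3, temperature=0.1, alpha=0.5):
    """Combine a masked-reconstruction loss with an NT-Xent-style contrastive loss."""
    # Pretext task 1: mask a random subset of radiomic features and reconstruct them.
    mask = (torch.rand_like(x) < mask_ratio).float()
    _, recon, _ = model(x * (1.0 - mask))
    recon_loss = ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)

    # Pretext task 2: two noisy "views" of each subject form positive pairs.
    v1 = x + 0.05 * torch.randn_like(x)
    v2 = x + 0.05 * torch.randn_like(x)
    _, _, p1 = model(v1)
    _, _, p2 = model(v2)
    logits = p1 @ p2.t() / temperature      # subject-to-subject similarities
    targets = torch.arange(x.size(0))       # matching subjects are positives
    contrastive_loss = F.cross_entropy(logits, targets)

    return alpha * recon_loss + (1.0 - alpha) * contrastive_loss


if __name__ == "__main__":
    torch.manual_seed(0)
    radiomic_batch = torch.randn(16, 100)   # 16 subjects x 100 radiomic features
    model = CollaborativeSSL(n_features=100)
    loss = collaborative_loss(model, radiomic_batch)
    loss.backward()                          # pretrain the encoder without labels
    print(f"collaborative pretext loss: {loss.item():.4f}")
```

In a typical self-supervised pipeline, the pretrained encoder would then be fine-tuned on the small labeled set for the downstream classification or regression task.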
Related papers
- Joint Self-Supervised and Supervised Contrastive Learning for Multimodal MRI Data: Towards Predicting Abnormal Neurodevelopment [5.771221868064265]
We present a novel joint self-supervised and supervised contrastive learning method to learn the robust latent feature representation from multimodal MRI data.
Our method has the capability to facilitate computer-aided diagnosis within clinical practice, harnessing the power of multimodal data.
arXiv Detail & Related papers (2023-12-22T21:05:51Z)
- Two-stage Joint Transductive and Inductive learning for Nuclei Segmentation [3.138395828947902]
We propose a novel approach to nuclei segmentation that leverages the available labelled and unlabelled data.
We evaluate our approach on MoNuSeg benchmark to demonstrate the efficacy and potential of our method.
arXiv Detail & Related papers (2023-11-15T08:37:11Z)
- UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for the universal brain MRI diagnosis, termed as UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using the multimodal imaging genetic data from Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Efficient Symptom Inquiring and Diagnosis via Adaptive Alignment of Reinforcement Learning and Classification [0.6415701940560564]
We propose a novel method for automatic medical diagnosis in which symptom inquiry and disease diagnosis are formulated as a reinforcement learning task and a classification task, respectively.
We create a new dataset extracted from the MedlinePlus knowledge base that contains more diseases and more complete symptom information.
Experimental evaluation results show that our method outperforms three recent state-of-the-art methods on different datasets.
arXiv Detail & Related papers (2021-12-01T11:25:42Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- MIMO: Mutual Integration of Patient Journey and Medical Ontology for Healthcare Representation Learning [49.57261599776167]
We propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO) for healthcare representation learning and predictive analytics.
arXiv Detail & Related papers (2021-07-20T07:04:52Z)
- Handling Data Heterogeneity with Generative Replay in Collaborative Learning for Medical Imaging [21.53220262343254]
We present a novel generative replay strategy to address the challenge of data heterogeneity in collaborative learning methods.
A primary model learns the desired task, and an auxiliary "generative replay model" either synthesizes images that closely resemble the input images or helps extract latent variables.
The generative replay strategy is flexible: it can either be incorporated into existing collaborative learning methods to improve their ability to handle data heterogeneity across institutions, or be used as a standalone collaborative learning framework (termed FedReplay) to reduce communication cost.
arXiv Detail & Related papers (2021-06-24T17:39:55Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, a simple yet effective meta-learning approach for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Self-supervised Feature Learning via Exploiting Multi-modal Data for Retinal Disease Diagnosis [28.428216831922228]
This paper presents a novel self-supervised feature learning method by effectively exploiting multi-modal data for retinal disease diagnosis.
Our objective learns both modality-invariant features and patient-similarity features.
We evaluate our method on two public benchmark datasets for retinal disease diagnosis.
arXiv Detail & Related papers (2020-07-21T19:49:45Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits unlabeled data by encouraging consistent predictions for a given input under perturbations; a minimal sketch of this consistency idea follows this list.
Our method outperforms many state-of-the-art semi-supervised learning methods in both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
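As referenced above, the following is a minimal, assumption-laden sketch of the consistency-regularization idea used by such semi-supervised methods: a supervised cross-entropy term on the labeled images plus a term that penalizes disagreement between predictions for two perturbed views of each unlabeled image. The toy classifier, noise scale, and loss weight are illustrative only and do not reflect the cited paper's architecture.

```python
# Minimal sketch (illustration only, not the paper's code) of consistency
# regularization: unlabeled images should yield similar predictions under
# small input perturbations, while labeled images use ordinary cross-entropy.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))   # toy classifier

labeled_x, labels = torch.randn(8, 32, 32), torch.randint(0, 10, (8,))
unlabeled_x = torch.randn(32, 32, 32)                          # no labels available

# Supervised term on the small labeled set.
sup_loss = F.cross_entropy(model(labeled_x), labels)

# Consistency term: predictions for two perturbed views of the same unlabeled
# image should agree (mean squared difference of softmax outputs).
view_a = unlabeled_x + 0.1 * torch.randn_like(unlabeled_x)
view_b = unlabeled_x + 0.1 * torch.randn_like(unlabeled_x)
cons_loss = F.mse_loss(F.softmax(model(view_a), dim=1),
                       F.softmax(model(view_b), dim=1))

loss = sup_loss + 1.0 * cons_loss   # consistency weight is a placeholder
loss.backward()
```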