Multi-view Contrastive Learning with Additive Margin for Adaptive
Nasopharyngeal Carcinoma Radiotherapy Prediction
- URL: http://arxiv.org/abs/2210.15201v1
- Date: Thu, 27 Oct 2022 06:21:24 GMT
- Title: Multi-view Contrastive Learning with Additive Margin for Adaptive
Nasopharyngeal Carcinoma Radiotherapy Prediction
- Authors: Jiabao Sheng, Yuanpeng Zhang, Jing Cai, Sai-Kit Lam, Zhe Li, Jiang
Zhang, Xinzhi Teng
- Abstract summary: We propose a supervised multi-view contrastive learning method with an additive margin.
For each patient, four medical images are considered to form multi-view positive pairs.
In addition, the embedding space is learned by means of contrastive learning.
- Score: 7.303184467211488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prediction of adaptive radiation therapy (ART) prior to radiation therapy
(RT) for nasopharyngeal carcinoma (NPC) patients is important to reduce
toxicity and prolong the survival of patients. Currently, due to the complex
tumor micro-environment, a single type of high-resolution image can provide
only limited information. Meanwhile, the traditional softmax-based loss is
insufficient for quantifying the discriminative power of a model. To overcome
these challenges, we propose a supervised multi-view contrastive learning
method with an additive margin (MMCon). For each patient, four medical images
are considered to form multi-view positive pairs, which can provide additional
information and enhance the representation of medical images. In addition, the
embedding space is learned by means of contrastive learning. NPC samples from
the same patient or with similar labels will remain close in the embedding
space, while NPC samples with different labels will be far apart. To improve
the discriminative ability of the loss function, we incorporate a margin into
the contrastive learning. Experimental results show that this new learning
objective can be used to find an embedding space that exhibits superior
discriminative ability for NPC images.
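The additive-margin idea can be illustrated with a short sketch. This is not the authors' implementation; the function name, temperature value, and the choice to subtract the margin from positive-pair similarities (in the style of AM-Softmax applied to a supervised contrastive loss) are illustrative assumptions:

```python
import numpy as np

def margin_supcon_loss(embeddings, labels, margin=0.2, temperature=0.1):
    """Supervised contrastive loss with an additive margin on positive pairs
    (illustrative sketch, not the MMCon reference implementation)."""
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature          # temperature-scaled similarities
    labels = np.asarray(labels)
    n = len(labels)
    losses = []
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        logits = sim[i].copy()
        # Additive margin: shrink positive similarities before the softmax,
        # so the embedding must separate classes by at least the margin.
        logits[positives] -= margin / temperature
        logits[i] = -np.inf               # exclude self-comparison
        m = logits.max()
        log_denom = m + np.log(np.exp(logits - m).sum())
        losses.append(np.mean([log_denom - logits[p] for p in positives]))
    return float(np.mean(losses))
```

In this sketch the four views of one patient share a label, so they form the positive set for each anchor; increasing the margin lowers positive logits relative to negatives, which raises the loss until the embedding compensates with larger inter-class separation.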
Related papers
- Boosting Medical Image-based Cancer Detection via Text-guided Supervision from Reports [68.39938936308023]
We propose a novel text-guided learning method to achieve highly accurate cancer detection results.
Our approach can leverage clinical knowledge by large-scale pre-trained VLM to enhance generalization ability.
arXiv Detail & Related papers (2024-05-23T07:03:38Z) - Towards Learning Contrast Kinetics with Multi-Condition Latent Diffusion Models [2.8981737432963506]
We propose a latent diffusion model capable of acquisition time-conditioned image synthesis of DCE-MRI temporal sequences.
Our results demonstrate our method's ability to generate realistic multi-sequence fat-saturated breast DCE-MRI.
arXiv Detail & Related papers (2024-03-20T18:01:57Z) - Rethinking Semi-Supervised Medical Image Segmentation: A
Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z) - Counterfactual Image Synthesis for Discovery of Personalized Predictive
Image Markers [0.293168019422713]
We show how a deep conditional generative model can be used to perturb local imaging features in baseline images that are pertinent to subject-specific future disease evolution.
Our model produces counterfactuals with changes in imaging features that reflect established clinical markers predictive of future MRI lesional activity at the population level.
arXiv Detail & Related papers (2022-08-03T18:58:45Z) - Brain Cancer Survival Prediction on Treatment-naïve MRI using Deep
Anchor Attention Learning with Vision Transformer [4.630654643366308]
Image-based brain cancer prediction models quantify the radiologic phenotype from magnetic resonance imaging (MRI).
Despite evidence of intra-tumor phenotypic heterogeneity, the spatial diversity between different slices within an MRI scan has been relatively unexplored using such methods.
We propose a deep anchor attention aggregation strategy with a Vision Transformer to predict survival risk for brain cancer patients.
arXiv Detail & Related papers (2022-02-03T21:33:08Z) - Predicting Patient Readmission Risk from Medical Text via Knowledge
Graph Enhanced Multiview Graph Convolution [67.72545656557858]
We propose a new method that uses medical text of Electronic Health Records for prediction.
We represent discharge summaries of patients with multiview graphs enhanced by an external knowledge graph.
Experimental results demonstrate the effectiveness of our method, yielding state-of-the-art performance.
arXiv Detail & Related papers (2021-12-19T01:45:57Z) - Variational Knowledge Distillation for Disease Classification in Chest
X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - Potential Features of ICU Admission in X-ray Images of COVID-19 Patients [8.83608410540057]
This paper presents an original methodology for extracting semantic features that correlate to severity from a data set with patient ICU admission labels.
The methodology employs a neural network trained to recognise lung pathologies to extract the semantic features.
The method is shown to be capable of selecting images for the learned features, which could convey information about their common locations in the lung.
arXiv Detail & Related papers (2020-09-26T13:48:39Z) - Robust Pancreatic Ductal Adenocarcinoma Segmentation with
Multi-Institutional Multi-Phase Partially-Annotated CT Scans [25.889684822655255]
Pancreatic ductal adenocarcinoma (PDAC) segmentation is one of the most challenging tumor segmentation tasks.
Based on a new self-learning framework, we propose to train the PDAC segmentation model using a much larger quantity of patients.
Experiment results show that our proposed method provides an absolute improvement of 6.3% Dice score over the strong baseline of nnUNet trained on annotated images.
arXiv Detail & Related papers (2020-08-24T18:50:30Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences.