Leveraging multi-view data without annotations for prostate MRI
segmentation: A contrastive approach
- URL: http://arxiv.org/abs/2308.06477v2
- Date: Fri, 15 Sep 2023 09:46:14 GMT
- Title: Leveraging multi-view data without annotations for prostate MRI
segmentation: A contrastive approach
- Authors: Tim Nikolass Lindeijer, Tord Martin Ytredal, Trygve Eftestøl,
Tobias Nordström, Fredrik Jäderling, Martin Eklund and Alvaro
Fernandez-Quilez
- Abstract summary: We propose a triplet encoder and single decoder network based on U-Net, tU-Net (triplet U-Net).
Our proposed architecture is able to exploit non-annotated sagittal and coronal views via contrastive learning to improve the segmentation from a volumetric perspective.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An accurate prostate delineation and volume characterization can support the
clinical assessment of prostate cancer. Many automatic prostate segmentation
tools consider exclusively the axial MRI direction, despite multi-view data
being available as per acquisition protocols. Further, when multi-view data is
exploited, manual annotations and availability of all views at test time are
commonly assumed. In this work, we explore a contrastive approach at training
time to leverage multi-view data without annotations and provide flexibility at
deployment time in the event of missing views. We
propose a triplet encoder and single decoder network based on U-Net, tU-Net
(triplet U-Net). Our proposed architecture is able to exploit non-annotated
sagittal and coronal views via contrastive learning to improve the segmentation
from a volumetric perspective. For that purpose, we introduce the concept of
inter-view similarity in the latent space. To guide the training, we combine a
dice score loss calculated with respect to the axial view and its manual
annotations together with a multi-view contrastive loss. tU-Net shows a
statistically significant improvement in Dice similarity coefficient (DSC) with
respect to the axial view alone (91.25±0.52% compared to 86.40±1.50%, P<.001).
Sensitivity analysis reveals the positive volumetric impact of the contrastive
loss when paired with tU-Net (2.85±1.34% compared to 3.81±1.88%, P<.001).
Further, our approach shows good external volumetric generalization on an
in-house dataset when tested with multi-view data (2.76±1.89% compared to
3.92±3.31%, P=.002), demonstrating the feasibility of exploiting non-annotated
multi-view data through contrastive learning whilst providing flexibility at
deployment in the event of missing views.
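The training objective described above combines a supervised Dice loss on the annotated axial view with an unsupervised contrastive term that pulls the latent representations of the three views together. A minimal numpy sketch of such a combined objective follows; the function names, the InfoNCE-style formulation, and the weighting factor `lam` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss on the annotated axial view: 1 - DSC.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def inter_view_contrastive(z_ax, z_sag, z_cor, temp=0.1):
    # Inter-view similarity in the latent space: latent vectors of the
    # same volume seen from different views are treated as positives
    # (InfoNCE-style; one row per volume in the batch).
    def nce(a, b):
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        logits = a @ b.T / temp                        # pairwise cosine similarities
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))             # matching views on the diagonal
    return (nce(z_ax, z_sag) + nce(z_ax, z_cor) + nce(z_sag, z_cor)) / 3.0

def tu_net_objective(pred_ax, mask_ax, z_ax, z_sag, z_cor, lam=0.5):
    # Combined objective: supervised Dice on the axial view plus the
    # unsupervised multi-view contrastive term (lam is a hypothetical weight).
    return dice_loss(pred_ax, mask_ax) + lam * inter_view_contrastive(z_ax, z_sag, z_cor)
```

Note that only the axial branch requires manual annotations; the sagittal and coronal latents enter solely through the contrastive term, which is what allows them to remain non-annotated.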
Related papers
- Affinity-Graph-Guided Contrastive Learning for Pretext-Free Medical Image Segmentation with Minimal Annotation [55.325956390997]
This paper proposes an affinity-graph-guided semi-supervised contrastive learning framework (Semi-AGCL) for medical image segmentation.
The framework first designs an average-patch-entropy-driven inter-patch sampling method, which can provide a robust initial feature space.
With merely 10% of the complete annotation set, our model approaches the accuracy of the fully annotated baseline, deviating from it by only 2.52%.
arXiv Detail & Related papers (2024-10-14T10:44:47Z)
- Symmetric Graph Contrastive Learning against Noisy Views for Recommendation [7.92181856602497]
We introduce symmetry theory into graph contrastive learning, based on which we propose a symmetric form and contrast loss resistant to noisy interference.
Our approach substantially increases recommendation accuracy, with relative improvements reaching as high as 12.25% over nine other competing models.
arXiv Detail & Related papers (2024-08-03T06:58:07Z)
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD)
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound [17.91546880972773]
We propose to improve visual representations of medical images via anatomy-aware contrastive learning (AWCL)
AWCL incorporates anatomy information to augment the positive/negative pair sampling in a contrastive learning manner.
Experiments on a large-scale fetal ultrasound dataset demonstrate that our approach is effective for learning representations that transfer well to three clinical downstream tasks.
arXiv Detail & Related papers (2022-08-22T22:49:26Z)
- Towards Explainable End-to-End Prostate Cancer Relapse Prediction from H&E Images Combining Self-Attention Multiple Instance Learning with a Recurrent Neural Network [0.0]
We propose an explainable cancer relapse prediction network (eCaReNet) and show that end-to-end learning without strong annotations offers state-of-the-art performance.
Our model is well-calibrated and outputs survival curves as well as a risk score and group per patient.
arXiv Detail & Related papers (2021-11-26T11:45:08Z)
- Latent Correlation-Based Multiview Learning and Self-Supervision: A Unifying Perspective [41.80156041871873]
This work puts forth a theory-backed framework for unsupervised multiview learning.
Our development starts with proposing a multiview model, where each view is a nonlinear mixture of shared and private components.
In addition, the private information in each view can be provably disentangled from the shared using proper regularization design.
arXiv Detail & Related papers (2021-06-14T00:12:36Z)
- Unsupervised Learning on Monocular Videos for 3D Human Pose Estimation [121.5383855764944]
We use contrastive self-supervised learning to extract rich latent vectors from single-view videos.
We show that applying CSS only to the time-variant features, while also reconstructing the input and encouraging a gradual transition between nearby and away features, yields a rich latent space.
Our approach outperforms other unsupervised single-view methods and matches the performance of multi-view techniques.
arXiv Detail & Related papers (2020-12-02T20:27:35Z)
- Anonymization of labeled TOF-MRA images for brain vessel segmentation using generative adversarial networks [0.9854633436173144]
Generative adversarial networks (GANs) have the potential to provide anonymous images while preserving predictive properties.
We trained 3 GANs on time-of-flight (TOF) magnetic resonance angiography (MRA) patches for image-label generation.
The generated image-labels from each GAN were used to train a U-net for segmentation and tested on real data.
arXiv Detail & Related papers (2020-09-09T11:30:58Z)
- User-Guided Domain Adaptation for Rapid Annotation from User Interactions: A Study on Pathological Liver Segmentation [49.96706092808873]
Mask-based annotation of medical images, especially for 3D data, is a bottleneck in developing reliable machine learning models.
We propose the user-guided domain adaptation (UGDA) framework, which uses prediction-based adversarial domain adaptation (PADA) to model the combined distribution of UIs and mask predictions.
We show UGDA can retain this state-of-the-art performance even when only seeing a fraction of available UIs.
arXiv Detail & Related papers (2020-09-05T04:24:58Z)
- Semi-Automatic Data Annotation guided by Feature Space Projection [117.9296191012968]
We present a semi-automatic data annotation approach based on suitable feature space projection and semi-supervised label estimation.
We validate our method on the popular MNIST dataset and on images of human intestinal parasites with and without fecal impurities.
Our results demonstrate the added-value of visual analytics tools that combine complementary abilities of humans and machines for more effective machine learning.
arXiv Detail & Related papers (2020-07-27T17:03:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.