Embedding Task Knowledge into 3D Neural Networks via Self-supervised
Learning
- URL: http://arxiv.org/abs/2006.05798v1
- Date: Wed, 10 Jun 2020 12:37:39 GMT
- Title: Embedding Task Knowledge into 3D Neural Networks via Self-supervised
Learning
- Authors: Jiuwen Zhu, Yuexiang Li, Yifan Hu, S. Kevin Zhou
- Abstract summary: Self-supervised learning (SSL) is a potential solution for deficient annotated data.
We propose a novel SSL approach for 3D medical image classification, namely Task-related Contrastive Prediction Coding (TCPC).
TCPC embeds task knowledge into the training of 3D neural networks.
- Score: 21.902313057142905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning relies heavily on the amount of annotated data. However,
annotating medical images is extremely laborious and expensive. To this end,
self-supervised learning (SSL), as a potential solution for deficient annotated
data, has attracted increasing attention from the community. However, SSL
approaches often design a proxy task that is not necessarily related to the
target task. In this paper, we propose a novel SSL approach for 3D medical image
classification, namely Task-related Contrastive Prediction Coding (TCPC), which
embeds task knowledge into the training of 3D neural networks. The proposed TCPC first
locates the initial candidate lesions via supervoxel estimation using simple
linear iterative clustering. Then, we extract features from the sub-volume
cropped around potential lesion areas, and construct a calibrated contrastive
predictive coding scheme for self-supervised learning. Extensive experiments
are conducted on public and private datasets. The experimental results
demonstrate the effectiveness of embedding lesion-related prior-knowledge into
neural networks for 3D medical image classification.
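The calibrated contrastive predictive coding scheme described above builds on an InfoNCE-style objective over features from the lesion-centered sub-volumes. As a minimal, hedged sketch (not the authors' implementation; the `info_nce` helper, the temperature value, and the toy features are illustrative assumptions), the core loss for a single query can be written in plain NumPy:

```python
import numpy as np

def info_nce(query, positive, negatives, temperature=0.1):
    """InfoNCE loss for one query against one positive and N negative keys.

    query, positive: (d,) feature vectors; negatives: (N, d).
    All vectors are L2-normalized so the logits are cosine similarities.
    """
    def l2norm(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    q, pos, neg = l2norm(query), l2norm(positive), l2norm(negatives)
    logits = np.concatenate(([q @ pos], neg @ q)) / temperature  # (1 + N,)
    logits -= logits.max()  # numerical stability
    # Cross-entropy with the positive key at index 0.
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
q = rng.normal(size=16)
loss_matched = info_nce(q, q, rng.normal(size=(8, 16)))                   # positive equals query
loss_random = info_nce(q, rng.normal(size=16), rng.normal(size=(8, 16)))  # positive unrelated
```

Minimizing this loss pulls the query toward its positive key (here, a feature from the same candidate lesion) and pushes it away from the negatives, which is the mechanism the paper calibrates with lesion-related priors.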
Related papers
- Enhancing Weakly Supervised 3D Medical Image Segmentation through Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Self-supervised learning via inter-modal reconstruction and feature projection networks for label-efficient 3D-to-2D segmentation [4.5206601127476445]
We propose a novel convolutional neural network (CNN) and self-supervised learning (SSL) method for label-efficient 3D-to-2D segmentation.
Results on different datasets demonstrate that the proposed CNN significantly improves the state of the art in scenarios with limited labeled data by up to 8% in Dice score.
arXiv Detail & Related papers (2023-07-06T14:16:25Z)
- Understanding and Improving the Role of Projection Head in Self-Supervised Learning [77.59320917894043]
Self-supervised learning (SSL) aims to produce useful feature representations without access to human-labeled data annotations.
Current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective.
This raises a fundamental question: Why is a learnable projection head required if we are to discard it after training?
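The pattern that question refers to can be sketched in a few lines of NumPy; the one-layer `backbone`, the two-layer head, and the weight shapes are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def backbone(x, W):
    """Stand-in encoder: one linear layer + ReLU."""
    return np.maximum(x @ W, 0.0)

def projection_head(h, W1, W2):
    """Two-layer MLP head, used only while optimizing the InfoNCE objective."""
    return np.maximum(h @ W1, 0.0) @ W2

W = rng.normal(scale=0.1, size=(32, 64))    # backbone weights (kept)
W1 = rng.normal(scale=0.1, size=(64, 64))   # head weights (discarded after SSL)
W2 = rng.normal(scale=0.1, size=(64, 32))

x = rng.normal(size=(4, 32))                # a toy batch of inputs
h = backbone(x, W)                          # representation used downstream
z = projection_head(h, W1, W2)              # embedding fed to the contrastive loss
```

After pretraining, downstream probes and fine-tuning consume `h`, not `z`; the head is thrown away, which is exactly the puzzle the paper studies.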
arXiv Detail & Related papers (2022-12-22T05:42:54Z)
- Slice-level Detection of Intracranial Hemorrhage on CT Using Deep Descriptors of Adjacent Slices [0.31317409221921133]
We propose a new strategy to train slice-level classifiers on CT scans based on the descriptors of the adjacent slices along the axis.
We obtain a single model in the top 4% best-performing solutions of the RSNA Intracranial Hemorrhage dataset challenge.
The proposed method is general and can be applied to other 3D medical diagnosis tasks such as MRI imaging.
arXiv Detail & Related papers (2022-08-05T23:20:37Z)
- Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound Reconstruction [61.62191904755521]
3D freehand US reconstruction is promising in addressing the problem by providing a broad scan range and freeform scanning.
Existing deep learning based methods only focus on the basic cases of skill sequences.
We propose a novel approach to sensorless freehand 3D US reconstruction considering the complex skill sequences.
arXiv Detail & Related papers (2021-07-31T16:06:50Z)
- Contrastive Learning with Continuous Proxy Meta-Data for 3D MRI Classification [1.714108629548376]
We propose to leverage continuous proxy metadata, in the contrastive learning framework, by introducing a new loss called y-Aware InfoNCE loss.
A 3D CNN model pre-trained on $10^4$ multi-site healthy brain MRI scans can extract relevant features for three classification tasks.
When fine-tuned, it also outperforms 3D CNN trained from scratch on these tasks, as well as state-of-the-art self-supervised methods.
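The idea behind the y-Aware InfoNCE loss can be sketched as follows; this is a hedged NumPy approximation, not the authors' implementation, and the RBF kernel bandwidth `sigma` and per-anchor weight normalization are illustrative assumptions. Samples whose continuous metadata values are close act as soft positives:

```python
import numpy as np

def y_aware_info_nce(z, y, sigma=1.0, temperature=0.1):
    """Sketch of a y-Aware InfoNCE-style loss.

    z: (n, d) embeddings; y: (n,) continuous proxy metadata (e.g. age).
    Pairs with similar metadata get larger positive weights via an RBF
    kernel w_ij = exp(-(y_i - y_j)^2 / (2 * sigma^2)).
    """
    z = z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)
    sim = (z @ z.T) / temperature
    np.fill_diagonal(sim, -np.inf)                 # exclude self-pairs
    w = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    w /= w.sum(axis=1, keepdims=True)              # normalize weights per anchor
    m = sim.max(axis=1, keepdims=True)             # stable log-softmax per row
    log_softmax = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    # Weighted cross-entropy; mask the -inf diagonal before multiplying.
    return -np.where(w > 0, w * log_softmax, 0.0).sum(axis=1).mean()

rng = np.random.default_rng(1)
loss = y_aware_info_nce(rng.normal(size=(8, 16)), rng.uniform(0, 10, size=8))
```

With a degenerate kernel that puts all weight on exact metadata matches, this reduces to standard InfoNCE, which is the sense in which the loss generalizes it.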
arXiv Detail & Related papers (2021-06-16T14:17:04Z)
- SAR: Scale-Aware Restoration Learning for 3D Tumor Segmentation [23.384259038420005]
We propose Scale-Aware Restoration (SAR) for 3D tumor segmentation.
A novel proxy task, i.e. scale discrimination, is formulated to pre-train the 3D neural network combined with the self-restoration task.
We demonstrate the effectiveness of our methods on two downstream tasks: (i) brain tumor segmentation and (ii) pancreas tumor segmentation.
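A scale-discrimination proxy task of this kind can be sketched as follows; the crop sizes and the random-crop helper are illustrative assumptions, not SAR's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
SCALES = [16, 32, 64]  # assumed cubic crop edge lengths

def random_cubic_crop(volume, edge):
    """Crop an (edge, edge, edge) cube at a uniformly random position."""
    z, y, x = (rng.integers(0, s - edge + 1) for s in volume.shape)
    return volume[z:z + edge, y:y + edge, x:x + edge]

volume = rng.normal(size=(96, 96, 96))  # toy 3D scan
crops, labels = [], []
for label, edge in enumerate(SCALES):
    crops.append(random_cubic_crop(volume, edge))
    labels.append(label)                # the scale index is the proxy label
# A 3D network would resize each crop to a common shape and be trained to
# predict `labels`, alongside the self-restoration task.
```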
arXiv Detail & Related papers (2020-10-13T01:23:17Z)
- Revisiting Rubik's Cube: Self-supervised Learning with Volume-wise Transformation for 3D Medical Image Segmentation [27.84323872782403]
We propose a novel self-supervised learning framework for volumetric medical images.
Specifically, we propose a context restoration task, i.e., Rubik's cube++, to pre-train 3D neural networks.
Compared to the strategy of training from scratch, fine-tuning from the Rubik's cube++ pre-trained weight can achieve better performance.
arXiv Detail & Related papers (2020-07-17T08:53:53Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach, which requires fewer annotations than supervised learning methods.
Our scheme considers a deep Q learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.