Revisiting Rubik's Cube: Self-supervised Learning with Volume-wise
Transformation for 3D Medical Image Segmentation
- URL: http://arxiv.org/abs/2007.08826v1
- Date: Fri, 17 Jul 2020 08:53:53 GMT
- Title: Revisiting Rubik's Cube: Self-supervised Learning with Volume-wise
Transformation for 3D Medical Image Segmentation
- Authors: Xing Tao, Yuexiang Li, Wenhui Zhou, Kai Ma, Yefeng Zheng
- Abstract summary: We propose a novel self-supervised learning framework for volumetric medical images.
Specifically, we propose a context restoration task, i.e., Rubik's cube++, to pre-train 3D neural networks.
Compared to training from scratch, fine-tuning from the Rubik's cube++ pre-trained weights achieves better performance.
- Score: 27.84323872782403
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning relies heavily on the quantity of annotated data. However, the
annotations for 3D volumetric medical data require experienced physicians to
spend hours or even days on investigation. Self-supervised learning is a
potential way to relax this strong requirement for annotated training data by
deeply exploiting the information in raw data. In this paper, we propose a novel
self-supervised learning framework for volumetric medical images. Specifically,
we propose a context restoration task, i.e., Rubik's cube++, to pre-train 3D
neural networks. Different from existing context-restoration-based
approaches, we adopt a volume-wise transformation for context permutation,
which encourages the network to better exploit the inherent 3D anatomical
information of organs. Compared to training from scratch,
fine-tuning from the Rubik's cube++ pre-trained weights achieves better
performance in various tasks such as pancreas segmentation and brain tissue
segmentation. The experimental results show that our self-supervised learning
method can significantly improve the accuracy of 3D deep learning networks on
volumetric medical datasets without the use of extra data.
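The volume-wise permutation and rotation described in the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction under assumptions of our own (a 2x2x2 sub-cube grid, 90-degree rotations about random axes), not the authors' implementation; in the actual framework, a 3D network would then be pre-trained to restore the original volume from the transformed one.

```python
import numpy as np

def rubiks_transform(volume, cubes_per_dim=2, rng=None):
    """Volume-wise Rubik's-cube-style transformation (illustrative sketch).

    Partition a 3D volume into a grid of sub-cubes, then randomly permute
    their positions and rotate each sub-cube by a random multiple of 90
    degrees about a random axis. A restoration network trained to undo this
    transformation must exploit 3D anatomical context.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = cubes_per_dim
    d, h, w = volume.shape
    sd, sh, sw = d // n, h // n, w // n
    # Rotations only preserve shape for cubic sub-blocks.
    assert d % n == 0 and h % n == 0 and w % n == 0 and sd == sh == sw

    # Slice the volume into n**3 sub-cubes in a fixed scan order.
    cubes = [volume[i*sd:(i+1)*sd, j*sh:(j+1)*sh, k*sw:(k+1)*sw]
             for i in range(n) for j in range(n) for k in range(n)]

    # Randomly permute sub-cube positions and rotate each sub-cube.
    order = rng.permutation(len(cubes))
    planes = [(0, 1), (0, 2), (1, 2)]
    transformed = [np.rot90(cubes[idx],
                            k=int(rng.integers(4)),
                            axes=planes[int(rng.integers(3))])
                   for idx in order]

    # Reassemble the transformed sub-cubes into a full volume.
    out = np.empty_like(volume)
    c = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                out[i*sd:(i+1)*sd, j*sh:(j+1)*sh, k*sw:(k+1)*sw] = transformed[c]
                c += 1
    return out
```

Pre-training would then pair each transformed volume with its original as an (input, target) restoration example before fine-tuning on a downstream segmentation task.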
Related papers
- Enhancing Weakly Supervised 3D Medical Image Segmentation through
Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z)
- Transfer learning from a sparsely annotated dataset of 3D medical images [4.477071833136902]
This study explores the use of transfer learning to improve the performance of deep convolutional neural networks for organ segmentation in medical imaging.
A base segmentation model was trained on a large and sparsely annotated dataset; its weights were used for transfer learning on four new downstream segmentation tasks.
The results showed that transfer learning from the base model was beneficial when small datasets were available.
arXiv Detail & Related papers (2023-11-08T21:31:02Z)
- BYOLMed3D: Self-Supervised Representation Learning of Medical Videos
using Gradient Accumulation Assisted 3D BYOL Framework [0.0]
Supervised learning algorithms require large volumes of balanced data to learn robust representations.
Self-supervised learning algorithms are more tolerant of data imbalance and are capable of learning robust representations.
We train a 3D BYOL self-supervised model using a gradient accumulation technique to handle the large batch sizes generally required by self-supervised algorithms.
arXiv Detail & Related papers (2022-07-31T14:48:06Z)
- Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound
Reconstruction [61.62191904755521]
3D freehand US reconstruction is a promising way to address this problem by providing a broad scan range and freeform scanning.
Existing deep learning based methods focus only on basic skill sequences.
We propose a novel approach to sensorless freehand 3D US reconstruction considering the complex skill sequences.
arXiv Detail & Related papers (2021-07-31T16:06:50Z)
- Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning methods.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- Contrastive Learning with Continuous Proxy Meta-Data for 3D MRI
Classification [1.714108629548376]
We propose to leverage continuous proxy metadata in the contrastive learning framework by introducing a new loss called the y-Aware InfoNCE loss.
A 3D CNN model pre-trained on $10^4$ multi-site healthy brain MRI scans can extract relevant features for three classification tasks.
When fine-tuned, it also outperforms 3D CNN trained from scratch on these tasks, as well as state-of-the-art self-supervised methods.
arXiv Detail & Related papers (2021-06-16T14:17:04Z)
- Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module which can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks.
We will release our code and pre-trained models for further research.
arXiv Detail & Related papers (2021-02-01T20:58:45Z)
- PointContrast: Unsupervised Pre-training for 3D Point Cloud
Understanding [107.02479689909164]
In this work, we aim at facilitating research on 3D representation learning.
We measure the effect of unsupervised pre-training on a large source set of 3D scenes.
arXiv Detail & Related papers (2020-07-21T17:59:22Z)
- Embedding Task Knowledge into 3D Neural Networks via Self-supervised
Learning [21.902313057142905]
Self-supervised learning (SSL) is a potential solution for deficient annotated data.
We propose a novel SSL approach for 3D medical image classification, namely Task-related Contrastive Prediction Coding (TCPC).
TCPC embeds task knowledge into training 3D neural networks.
arXiv Detail & Related papers (2020-06-10T12:37:39Z)
- 3D Self-Supervised Methods for Medical Imaging [7.65168530693281]
We propose 3D versions of five different self-supervised methods, in the form of proxy tasks.
Our methods facilitate neural network feature learning from unlabeled 3D images, aiming to reduce the required cost for expert annotation.
The developed algorithms are 3D Contrastive Predictive Coding, 3D Rotation prediction, 3D Jigsaw puzzles, Relative 3D patch location, and 3D Exemplar networks.
arXiv Detail & Related papers (2020-06-06T09:56:58Z)
- 3D medical image segmentation with labeled and unlabeled data using
autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.