Self-supervised TransUNet for Ultrasound regional segmentation of the
distal radius in children
- URL: http://arxiv.org/abs/2309.09490v1
- Date: Mon, 18 Sep 2023 05:23:33 GMT
- Title: Self-supervised TransUNet for Ultrasound regional segmentation of the
distal radius in children
- Authors: Yuyue Zhou, Jessica Knight, Banafshe Felfeliyan, Christopher Keen,
Abhilash Rakkunedeth Hareendranathan, Jacob L. Jaremko
- Abstract summary: This paper investigates the feasibility of deploying the Masked Autoencoder for SSL (SSL-MAE) of TransUNet for segmenting bony regions from children's wrist ultrasound scans.
- Score: 0.6291443816903801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised deep learning offers great promise for automating the analysis of medical images, from segmentation to diagnosis. However, its performance depends heavily on the quality and quantity of data annotation, and curating large annotated medical-image datasets requires a high level of expertise that makes the process time-consuming and expensive. Recently, self-supervised learning (SSL) methods that exploit unlabeled domain-specific data have attracted attention as a way to reduce this reliance on large, high-quality annotated datasets; an SSL method that needs only minimal labeled data therefore has far-reaching significance for medical imaging. This paper investigates the feasibility of deploying the Masked Autoencoder for SSL (SSL-MAE) of TransUNet for segmenting bony regions from children's wrist ultrasound scans. We found that changing the embedding and loss function in SSL-MAE can produce better downstream results than the original SSL-MAE. We also determined that pretraining only the TransUNet embedding and encoder with SSL-MAE does not perform as well on the downstream segmentation task as TransUNet trained without SSL-MAE pretraining.
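To make the workflow concrete, below is a minimal, hedged sketch (not the authors' code) of the two-stage setup the abstract describes: MAE-style masked-patch pretraining of a ViT-like embedding and encoder on unlabeled ultrasound frames, followed by reusing that encoder inside a small segmentation network fine-tuned on labeled wrist scans. The TinyEncoder and SegHead modules, patch size, masking ratio, image size, and loss choices are illustrative assumptions, not the paper's exact TransUNet configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH = 16          # assumed patch size (illustrative)
MASK_RATIO = 0.75   # common MAE masking ratio (assumption)
DIM = 128           # assumed token dimension

class TinyEncoder(nn.Module):
    """Stand-in for the TransUNet patch embedding + transformer encoder."""
    def __init__(self, dim=DIM):
        super().__init__()
        self.embed = nn.Conv2d(1, dim, kernel_size=PATCH, stride=PATCH)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        return self.encoder(tokens)

def mae_pretrain_step(enc, decoder, images, opt):
    """One MAE-style step: mask patch tokens, reconstruct pixels of the masked patches."""
    tokens = enc.embed(images).flatten(2).transpose(1, 2)            # (B, N, dim)
    mask = torch.rand(tokens.shape[:2], device=images.device) < MASK_RATIO
    latent = enc.encoder(tokens.masked_fill(mask.unsqueeze(-1), 0.0))
    pred = decoder(latent)                                           # (B, N, PATCH*PATCH)
    target = F.unfold(images, PATCH, stride=PATCH).transpose(1, 2)   # pixels per patch
    loss = ((pred - target) ** 2)[mask].mean()                       # loss on masked patches only
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

class SegHead(nn.Module):
    """Toy upsampling head that turns encoder tokens into a per-pixel bone mask."""
    def __init__(self, dim=DIM, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.proj = nn.Conv2d(dim, 1, kernel_size=1)

    def forward(self, tokens):
        B, N, D = tokens.shape
        side = int(N ** 0.5)
        feat = tokens.transpose(1, 2).reshape(B, D, side, side)
        return F.interpolate(self.proj(feat), size=(self.img_size,) * 2,
                             mode="bilinear", align_corners=False)

# Stage 1: MAE-style pretraining on unlabeled frames (random tensors as placeholders).
enc, dec = TinyEncoder(), nn.Linear(DIM, PATCH * PATCH)
opt = torch.optim.AdamW(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
unlabeled = torch.randn(8, 1, 64, 64)
mae_pretrain_step(enc, dec, unlabeled, opt)

# Stage 2: reuse the pretrained embedding + encoder in a segmentation network and
# fine-tune on the small labeled set (the paper compares this against no pretraining).
head = SegHead()
labeled = torch.randn(4, 1, 64, 64)
masks = torch.randint(0, 2, (4, 1, 64, 64)).float()
seg_loss = F.binary_cross_entropy_with_logits(head(enc(labeled)), masks)
```

In this setup only the embedding and encoder weights carry over from pretraining, which mirrors the configuration the paper reports as underperforming TransUNet trained from scratch on the downstream segmentation task.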
Related papers
- Self-Supervised Multiple Instance Learning for Acute Myeloid Leukemia Classification [1.1874560263468232]
Diseases like Acute Myeloid Leukemia (AML) pose challenges due to scarce and costly annotations at the single-cell level.
Multiple Instance Learning (MIL) addresses weakly labeled scenarios but necessitates powerful encoders typically trained with labeled data.
In this study, we explore Self-Supervised Learning (SSL) as a pre-training approach for MIL-based subtype AML classification from blood smears.
arXiv Detail & Related papers (2024-03-08T15:16:15Z) - ASLseg: Adapting SAM in the Loop for Semi-supervised Liver Tumor Segmentation [2.3617131367115705]
Liver tumor segmentation is essential for computer-aided diagnosis, surgical planning, and prognosis evaluation.
Semi-Supervised Learning (SSL) is a common technique to address these challenges.
We propose a novel semi-supervised framework, named ASLseg, which can effectively adapt the SAM to the SSL setting.
arXiv Detail & Related papers (2023-12-13T08:31:26Z) - Self-Supervised Learning for Endoscopic Video Analysis [16.873220533299573]
Self-supervised learning (SSL) has led to important breakthroughs in computer vision by allowing learning from large amounts of unlabeled data.
We study the use of a leading SSL framework, namely Masked Siamese Networks (MSNs), for endoscopic video analysis such as colonoscopy and laparoscopy.
arXiv Detail & Related papers (2023-08-23T19:27:59Z) - CroSSL: Cross-modal Self-Supervised Learning for Time-series through
Latent Masking [11.616031590118014]
CroSSL allows for handling missing modalities and end-to-end cross-modal learning.
We evaluate our method on a wide range of data, including motion sensors.
arXiv Detail & Related papers (2023-07-31T17:10:10Z) - PCRLv2: A Unified Visual Information Preservation Framework for
Self-supervised Pre-training in Medical Image Analysis [56.63327669853693]
We propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics.
We also address the preservation of scale information, a powerful tool in aiding image understanding.
The proposed unified SSL framework surpasses its self-supervised counterparts on various tasks.
arXiv Detail & Related papers (2023-01-02T17:47:27Z) - OpenLDN: Learning to Discover Novel Classes for Open-World
Semi-Supervised Learning [110.40285771431687]
Semi-supervised learning (SSL) is one of the dominant approaches to address the annotation bottleneck of supervised learning.
Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data.
This work introduces OpenLDN that utilizes a pairwise similarity loss to discover novel classes.
arXiv Detail & Related papers (2022-07-05T18:51:05Z) - Collaborative Intelligence Orchestration: Inconsistency-Based Fusion of
Semi-Supervised Learning and Active Learning [60.26659373318915]
Active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, means to alleviate the data-hungry problem.
We propose an innovative Inconsistency-based virtual aDvErsarial algorithm to further investigate SSL-AL's potential superiority.
Two real-world case studies visualize the practical industrial value of applying and deploying the proposed data sampling algorithm.
arXiv Detail & Related papers (2022-06-07T13:28:43Z) - DATA: Domain-Aware and Task-Aware Pre-training [94.62676913928831]
We present DATA, a simple yet effective neural architecture search (NAS) approach specialized for self-supervised learning (SSL).
Our method achieves promising results across a wide range of computation costs on downstream tasks, including image classification, object detection and semantic segmentation.
arXiv Detail & Related papers (2022-03-17T02:38:49Z) - Self-supervised Learning is More Robust to Dataset Imbalance [65.84339596595383]
We investigate self-supervised learning under dataset imbalance.
Off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations.
We devise a re-weighted regularization technique that consistently improves the SSL representation quality on imbalanced datasets.
arXiv Detail & Related papers (2021-10-11T06:29:56Z) - Medical Instrument Segmentation in 3D US by Hybrid Constrained
Semi-Supervised Learning [62.13520959168732]
We propose a semi-supervised learning framework for instrument segmentation in 3D US.
To achieve semi-supervised learning, a Dual-UNet is proposed to segment the instrument.
The proposed method achieves a Dice score of about 68.6%-69.1% and an inference time of about 1 second per volume.
arXiv Detail & Related papers (2021-07-30T07:59:45Z) - Semi-supervised Medical Image Classification with Global Latent Mixing [8.330337646455957]
Computer-aided diagnosis via deep learning relies on large-scale annotated data sets.
Semi-supervised learning mitigates this challenge by leveraging unlabeled data.
We present a novel SSL approach that trains the neural network on linear mixing of labeled and unlabeled data (a mixup-style interpolation; see the sketch after this list).
arXiv Detail & Related papers (2020-05-22T14:49:13Z)
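As a generic illustration of the "linear mixing of labeled and unlabeled data" idea mentioned in the last entry, the sketch below mixes a labeled batch with a pseudo-labeled unlabeled batch using a Beta-sampled coefficient. It is not the paper's exact Global Latent Mixing method; the pseudo-labeling step, the Beta(alpha, alpha) sampling, and the toy classifier are common choices assumed here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mixed_ssl_step(model, x_lab, y_lab, x_unlab, opt, alpha=0.75):
    """One training step on a linear mix of labeled and pseudo-labeled unlabeled data."""
    model.train()
    with torch.no_grad():
        # Pseudo-labels for the unlabeled batch from the current model.
        y_unlab = F.softmax(model(x_unlab), dim=1)
    y_lab_1h = F.one_hot(y_lab, num_classes=y_unlab.shape[1]).float()

    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)                # keep the mix closer to the labeled sample
    n = min(x_lab.shape[0], x_unlab.shape[0])
    x_mix = lam * x_lab[:n] + (1.0 - lam) * x_unlab[:n]
    y_mix = lam * y_lab_1h[:n] + (1.0 - lam) * y_unlab[:n]

    logits = model(x_mix)
    # Soft cross-entropy against the mixed target distribution.
    loss = torch.mean(torch.sum(-y_mix * F.log_softmax(logits, dim=1), dim=1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage with a toy classifier and random tensors standing in for images.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_l, y_l = torch.randn(8, 32, 32), torch.randint(0, 3, (8,))
x_u = torch.randn(8, 32, 32)
mixed_ssl_step(model, x_l, y_l, x_u, opt)
```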
This list is automatically generated from the titles and abstracts of the papers on this site.