W-Net: Dense Semantic Segmentation of Subcutaneous Tissue in Ultrasound
Images by Expanding U-Net to Incorporate Ultrasound RF Waveform Data
- URL: http://arxiv.org/abs/2008.12413v2
- Date: Wed, 2 Sep 2020 09:14:27 GMT
- Title: W-Net: Dense Semantic Segmentation of Subcutaneous Tissue in Ultrasound
Images by Expanding U-Net to Incorporate Ultrasound RF Waveform Data
- Authors: Gautam Rajendrakumar Gare, Jiayuan Li, Rohan Joshi, Mrunal Prashant
Vaze, Rishikesh Magar, Michael Yousefpour, Ricardo Luis Rodriguez and John
Micheal Galeotti
- Abstract summary: We present W-Net, a novel Convolutional Neural Network (CNN) framework that employs raw ultrasound waveforms from each A-scan.
We seek to label every pixel in the image, without the use of a background class.
We present analysis as to why the Muscle fascia and Fat fascia/stroma are the most difficult tissues to label.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present W-Net, a novel Convolutional Neural Network (CNN) framework that
employs raw ultrasound waveforms from each A-scan, typically referred to as
ultrasound Radio Frequency (RF) data, in addition to the gray ultrasound image
to semantically segment and label tissues. Unlike prior work, we seek to label
every pixel in the image, without the use of a background class. To the best of
our knowledge, this is also the first deep-learning or CNN approach for
segmentation that analyses ultrasound raw RF data along with the gray image.
International patent(s) pending [PCT/US20/37519]. We chose subcutaneous tissue
(SubQ) segmentation as our initial clinical goal since it has diverse
intermixed tissues, is challenging to segment, and is an underrepresented
research area. SubQ potential applications include plastic surgery, adipose
stem-cell harvesting, lymphatic monitoring, and possibly detection/treatment of
certain types of tumors. A custom dataset of images hand-labeled by an expert
clinician and trainees is used for the experimentation, currently labeled into
the following categories: skin, fat, fat fascia/stroma, muscle, and
muscle fascia. We compared our results with U-Net and Attention U-Net. Our
novel \emph{W-Net}'s RF-Waveform input and architecture increased mIoU accuracy
(averaged across all tissue classes) by 4.5\% and 4.9\% compared to regular
U-Net and Attention U-Net, respectively. We present analysis as to why the
Muscle fascia and Fat fascia/stroma are the most difficult tissues to label.
Muscle fascia, the most difficult anatomic class for both humans and AI
algorithms to recognize, saw mIoU improvements of 13\% and 16\% with W-Net over
U-Net and Attention U-Net, respectively.
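The gains above are reported in mean Intersection over Union (mIoU), averaged across the tissue classes. A minimal sketch of that metric, assuming the standard per-class IoU averaging; the label maps and class count below are toy values, not from the paper:

```python
# Hypothetical mIoU computation over flat per-pixel label lists.
# Class ids are placeholders (e.g. 0 = fat, 1 = muscle), not the
# paper's actual label encoding.

def miou(pred, target, num_classes):
    """Mean IoU over classes present in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 2-class example over 4 pixels.
pred = [0, 0, 1, 1]
target = [0, 1, 1, 1]
# class 0: inter=1, union=2 -> 0.5; class 1: inter=2, union=3 -> 0.667
print(miou(pred, target, 2))  # ~0.5833
```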
Related papers
- Modifying the U-Net's Encoder-Decoder Architecture for Segmentation of Tumors in Breast Ultrasound Images
We propose a Neural Network (NN) based on U-Net and an encoder-decoder architecture.
Our network (CResU-Net) obtained 76.88%, 71.5%, 90.3%, and 97.4% in terms of Dice similarity coefficients (DSC), Intersection over Union (IoU), Area under curve (AUC), and global accuracy (ACC), respectively, on BUSI dataset.
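The DSC and IoU figures above measure overlap in closely related ways: for any single pair of binary masks, Dice = 2·IoU / (1 + IoU). A small self-contained sketch of both metrics on toy masks (not the BUSI data):

```python
# Hedged sketch of the two overlap metrics on one binary mask pair.
def iou(a, b):
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union

def dice(a, b):
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]
i = iou(a, b)   # inter=2, union=4 -> 0.5
d = dice(a, b)  # 2*2 / (3+3)     -> 0.667
assert abs(d - 2 * i / (1 + i)) < 1e-9  # per-pair identity
```

Note that the identity holds per mask pair; dataset-averaged DSC and IoU, like the percentages reported above, need not satisfy it exactly.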
arXiv Detail & Related papers (2024-09-01T07:47:48Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- WATUNet: A Deep Neural Network for Segmentation of Volumetric Sweep Imaging Ultrasound
Volume sweep imaging (VSI) is an innovative approach that enables untrained operators to capture quality ultrasound images.
We present a novel segmentation model known as Wavelet_Attention_UNet (WATUNet).
In this model, we incorporate wavelet gates (WGs) and attention gates (AGs) between the encoder and decoder, instead of a simple connection, to overcome the limitations mentioned.
arXiv Detail & Related papers (2023-11-17T20:32:37Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
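The patch-based self-attention mentioned above begins by cutting the image into fixed-size patches that become tokens. A toy sketch of that first step, with illustrative sizes not taken from any specific ViT:

```python
# Minimal patch extraction as used before ViT self-attention:
# split an H x W image into p x p patches and flatten each into a token.
def patchify(img, p):
    """img: H x W list of lists; returns list of flattened p x p patches."""
    h, w = len(img), len(img[0])
    patches = []
    for i in range(0, h, p):
        for j in range(0, w, p):
            patches.append([img[r][c] for r in range(i, i + p)
                                      for c in range(j, j + p)])
    return patches

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 "image"
tokens = patchify(img, 2)      # four 2x2 patches -> four 4-dim tokens
print(len(tokens), tokens[0])  # 4 [0, 1, 4, 5]
```

In a real ViT each token is then linearly projected and fed, with positional embeddings, into the transformer encoder.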
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
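As a rough illustration of the idea behind a DWT-based encoding, a one-level 1-D Haar transform splits a signal into averages (low-frequency content) and details (high-frequency content). This is a generic sketch of the transform, not the paper's method:

```python
# One-level 1-D Haar decomposition (unnormalized averaging variant).
def haar_1d(x):
    """Split x (even length) into low-pass averages and high-pass details."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    det = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg, det

sig = [4, 6, 10, 12]
low, high = haar_1d(sig)
print(low, high)  # [5.0, 11.0] [-1.0, -1.0]
# Perfect reconstruction: x[2i] = low[i] + high[i], x[2i+1] = low[i] - high[i]
```

A 2-D DWT applies this split along rows and columns, which is how wavelet methods separate the high-frequency detail that standard downsampling tends to discard.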
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Omni-Seg: A Single Dynamic Network for Multi-label Renal Pathology Image Segmentation using Partially Labeled Data
In non-cancer pathology, the learning algorithms can be asked to examine more comprehensive tissue types simultaneously.
Prior approaches needed to train multiple segmentation networks in order to match the domain-specific knowledge.
By learning from 150,000 patch-wise pathological images, the proposed Omni-Seg network achieved superior segmentation accuracy and less resource consumption.
arXiv Detail & Related papers (2021-12-23T16:02:03Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Automatic Segmentation of the Prostate on 3D Trans-rectal Ultrasound Images using Statistical Shape Models and Convolutional Neural Networks
We propose to segment the prostate on a dataset of trans-rectal ultrasound (TRUS) images using convolutional neural networks (CNNs) and statistical shape models (SSMs)
TRUS has limited soft-tissue contrast and signal-to-noise ratio, which makes the task of segmenting the prostate challenging.
arXiv Detail & Related papers (2021-06-17T17:11:53Z)
- Global Guidance Network for Breast Lesion Segmentation in Ultrasound Images
We develop a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection modules.
Our network outperforms other medical image segmentation methods and the recent semantic segmentation methods on breast ultrasound lesion segmentation.
arXiv Detail & Related papers (2021-04-05T13:15:22Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- A Comparative Study of U-Net Topologies for Background Removal in Histopathology Images
We perform experiments on U-Net architecture with different network backbones to remove the background as well as artifacts from Whole Slide Images.
We trained and evaluated the network on a manually labeled subset of The Cancer Genome Atlas (TCGA) dataset.
arXiv Detail & Related papers (2020-06-08T16:41:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.