Multiview and Multiclass Image Segmentation using Deep Learning in Fetal
Echocardiography
- URL: http://arxiv.org/abs/2103.12245v1
- Date: Tue, 23 Mar 2021 00:33:23 GMT
- Authors: Ken C. L. Wong, Elena S. Sinkovskaya, Alfred Z. Abuhamad, Tanveer
Syeda-Mahmood
- Abstract summary: Congenital heart disease (CHD) is the most common congenital abnormality associated with birth defects in the United States.
Computer-aided detection of CHD can play a critical role in prenatal care by improving screening and diagnosis.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Congenital heart disease (CHD) is the most common congenital abnormality
associated with birth defects in the United States. Despite training efforts
and substantial advancement in ultrasound technology over the past years, CHD
remains an abnormality that is frequently missed during prenatal
ultrasonography. Therefore, computer-aided detection of CHD can play a critical
role in prenatal care by improving screening and diagnosis. Since many CHDs
involve structural abnormalities, automatic segmentation of anatomical
structures is an important step in the analysis of fetal echocardiograms. While
existing methods mainly focus on the four-chamber view with a small number of
structures, here we present a more comprehensive deep learning segmentation
framework covering 14 anatomical structures in both three-vessel trachea and
four-chamber views. Specifically, our framework enhances the V-Net with spatial
dropout, group normalization, and deep supervision to train a segmentation
model that can be applied on both views regardless of abnormalities. By
identifying the pitfall of using the Dice loss when some labels are unavailable
in some images, this framework integrates information from multiple views and
is robust to missing structures due to anatomical anomalies, achieving an
average Dice score of 79%.
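The Dice-loss pitfall described above, where the loss penalizes predictions for structures whose labels are simply unavailable in a given view or anomalous anatomy, can be avoided by averaging the per-class Dice only over classes that are actually labeled. The sketch below is a minimal NumPy illustration of this masking idea, not the authors' exact implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def masked_dice_loss(probs, onehot, available, eps=1e-6):
    """Dice loss averaged only over classes whose labels are available.

    probs:     (C, H, W) softmax probabilities
    onehot:    (C, H, W) one-hot ground truth (all zeros for unavailable classes)
    available: (C,) boolean mask, True where the class is labeled in this image
    """
    inter = (probs * onehot).sum(axis=(1, 2))
    denom = probs.sum(axis=(1, 2)) + onehot.sum(axis=(1, 2))
    dice = (2.0 * inter + eps) / (denom + eps)
    # Average only over available classes, so structures missing from the
    # annotation (e.g. view-specific anatomy) do not drag down the loss.
    return 1.0 - dice[available].mean()
```

With the mask applied, a correct foreground prediction for an unlabeled class is not punished; naive averaging over all classes would treat the empty ground-truth channel as "structure absent" and assign a large loss to the same prediction.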
Related papers
- Multi-task learning for joint weakly-supervised segmentation and aortic arch anomaly classification in fetal cardiac MRI [2.7962860265843563]
We present a framework for automated fetal vessel segmentation from 3D black blood T2w MRI and anomaly classification.
We target 11 cardiac vessels and three distinct aortic arch anomalies: double aortic arch, right aortic arch, and suspected coarctation of the aorta.
Our results showcase that our proposed training strategy significantly outperforms label propagation and a network trained exclusively on propagated labels.
arXiv Detail & Related papers (2023-11-13T10:54:53Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- Mixed Attention with Deep Supervision for Delineation of COVID Infection in Lung CT [0.24366811507669117]
A novel deep learning architecture, Mixed Attention Deeply Supervised Network (MiADS-Net), is proposed for delineating the infected regions of the lung from CT images.
MiADS-Net outperforms several state-of-the-art architectures in the COVID-19 lesion segmentation task.
arXiv Detail & Related papers (2023-01-17T15:36:27Z)
- Factored Attention and Embedding for Unstructured-view Topic-related Ultrasound Report Generation [70.7778938191405]
We propose a novel factored attention and embedding model (termed FAE-Gen) for the unstructured-view topic-related ultrasound report generation.
The proposed FAE-Gen mainly consists of two modules, i.e., view-guided factored attention and topic-oriented factored embedding, which capture the homogeneous and heterogeneous morphological characteristics across different views.
arXiv Detail & Related papers (2022-03-12T15:24:03Z)
- Towards A Device-Independent Deep Learning Approach for the Automated Segmentation of Sonographic Fetal Brain Structures: A Multi-Center and Multi-Device Validation [0.0]
We propose a DL-based segmentation framework for the automated segmentation of 10 key fetal brain structures from two axial planes of 2D fetal brain ultrasound (USG) images.
The proposed DL system offers promising and generalizable performance across multiple centers and devices, and also presents evidence of device-induced variation in image quality.
arXiv Detail & Related papers (2022-02-28T05:42:03Z)
- SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns; at inference, it can identify anomalies (unseen/modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
- Spatio-spectral deep learning methods for in-vivo hyperspectral laryngeal cancer detection [49.32653090178743]
Early detection of head and neck tumors is crucial for patient survival.
Hyperspectral imaging (HSI) can be used for non-invasive detection of head and neck tumors.
We present multiple deep learning techniques for in-vivo laryngeal cancer detection based on HSI.
arXiv Detail & Related papers (2020-04-21T17:07:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.