Weakly Supervised Context Encoder using DICOM metadata in Ultrasound
Imaging
- URL: http://arxiv.org/abs/2003.09070v1
- Date: Fri, 20 Mar 2020 02:17:03 GMT
- Title: Weakly Supervised Context Encoder using DICOM metadata in Ultrasound
Imaging
- Authors: Szu-Yeu Hu, Shuhang Wang, Wei-Hung Weng, JingChao Wang, XiaoHong Wang,
Arinc Ozturk, Qian Li, Viksit Kumar, Anthony E. Samir
- Abstract summary: We leverage DICOM metadata from ultrasound images to help learn representations of the ultrasound image.
We demonstrate that the proposed method outperforms the non-metadata based approaches across different downstream tasks.
- Score: 7.370841471918351
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern deep learning algorithms geared towards clinical adoption rely on a
significant amount of high-fidelity labeled data. In low-resource settings,
acquiring such data is challenging and becomes the bottleneck for developing
artificial intelligence applications. Ultrasound images, stored in the Digital
Imaging and Communications in Medicine (DICOM) format, carry additional
metadata corresponding to ultrasound imaging parameters and medical exams. In
this work, we leverage DICOM metadata from ultrasound images to help learn
representations of the ultrasound image. We demonstrate that the proposed
method outperforms non-metadata-based approaches across different downstream
tasks.
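As a concrete illustration of the approach, the sketch below uses a DICOM tag as a weak label to pretrain an image encoder. It assumes pydicom and PyTorch; the BodyPartExamined tag, the toy backbone, and all layer sizes are illustrative stand-ins, not the authors' configuration.

```python
# Minimal sketch: DICOM metadata as weak labels for pretraining an
# ultrasound image encoder. The tag choice and network are assumptions.
import pydicom
import torch
import torch.nn as nn

def weak_label_from_dicom(path, vocab):
    """Map a metadata tag of one DICOM file to an integer class id."""
    ds = pydicom.dcmread(path)
    # BodyPartExamined is one plausible weak-supervision tag (hypothetical).
    value = str(getattr(ds, "BodyPartExamined", "UNKNOWN"))
    return vocab.setdefault(value, len(vocab))

class MetadataPretrainer(nn.Module):
    """Encoder plus a classification head over metadata-derived labels."""
    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(            # stand-in CNN backbone
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

# One training step: cross-entropy against the weak metadata labels.
model = MetadataPretrainer(num_classes=10)
images = torch.randn(4, 1, 128, 128)             # dummy B-mode batch
labels = torch.randint(0, 10, (4,))              # stand-ins for weak labels
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
```

After pretraining, the head would be discarded and the encoder fine-tuned on the downstream task.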
Related papers
- S-CycleGAN: Semantic Segmentation Enhanced CT-Ultrasound Image-to-Image Translation for Robotic Ultrasonography [2.07180164747172]
We introduce an advanced deep learning model, dubbed S-CycleGAN, which generates high-quality synthetic ultrasound images from computed tomography (CT) data.
The synthetic images are utilized to enhance various aspects of our development of the robot-assisted ultrasound scanning system.
arXiv Detail & Related papers (2024-06-03T10:53:45Z)
- Automatic classification of prostate MR series type using image content and metadata [1.0959281779554237]
We propose a deep-learning method for classification of prostate cancer scanning sequences based on a combination of image data and DICOM metadata.
We demonstrate superior results compared to metadata or image data alone.
arXiv Detail & Related papers (2024-04-16T20:30:16Z)
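A minimal sketch of the kind of late fusion the prostate MR paper above describes, concatenating CNN image features with a vector of DICOM-derived metadata features. It assumes PyTorch; the branch sizes and metadata fields are hypothetical, not the authors' model.

```python
# Minimal sketch: late fusion of image features and DICOM metadata for
# series-type classification. All dimensions are assumptions.
import torch
import torch.nn as nn

class ImageMetadataClassifier(nn.Module):
    def __init__(self, num_meta_features, num_classes):
        super().__init__()
        self.cnn = nn.Sequential(                # stand-in image branch
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.meta = nn.Sequential(               # metadata branch; fields
            nn.Linear(num_meta_features, 16),    # (e.g. echo time) assumed
            nn.ReLU(),
        )
        self.classifier = nn.Linear(8 + 16, num_classes)

    def forward(self, image, metadata):
        fused = torch.cat([self.cnn(image), self.meta(metadata)], dim=1)
        return self.classifier(fused)

model = ImageMetadataClassifier(num_meta_features=4, num_classes=5)
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 4))
```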
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
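A minimal sketch of conditioning a transformer decoder on both visual features and demographic information, loosely following the description above. It assumes PyTorch; the patch extraction, the three demographic inputs, and all dimensions are assumptions rather than the paper's architecture.

```python
# Minimal sketch: report generation conditioned on image features and
# non-imaging data. Sizes and demographic encoding are assumptions.
import torch
import torch.nn as nn

class ConditionedReportGenerator(nn.Module):
    def __init__(self, vocab_size, d_model=256):
        super().__init__()
        self.visual = nn.Sequential(             # stand-in CXR patch features
            nn.Conv2d(1, d_model, 16, stride=16), nn.Flatten(2),
        )
        self.demo = nn.Linear(3, d_model)        # e.g. age, sex, view (assumed)
        self.tokens = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, image, demographics, report_tokens):
        vis = self.visual(image).transpose(1, 2)      # (B, patches, d_model)
        demo = self.demo(demographics).unsqueeze(1)   # (B, 1, d_model)
        memory = torch.cat([vis, demo], dim=1)        # condition on both
        # Causal masking is omitted here for brevity.
        return self.out(self.decoder(self.tokens(report_tokens), memory))

model = ConditionedReportGenerator(vocab_size=1000)
logits = model(torch.randn(2, 1, 224, 224),           # dummy CXR batch
               torch.randn(2, 3),                     # dummy demographics
               torch.randint(0, 1000, (2, 12)))       # dummy report tokens
```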
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
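A minimal sketch of a two-branch classifier in the spirit of the voice-assisted labelling paper above: image features fused with a bag-of-words encoding of the transcribed comment. It assumes PyTorch and an upstream speech-to-text step; the text encoding and layer sizes are assumptions.

```python
# Minimal sketch: label an ultrasound frame from the image plus the
# clinician's transcribed comment. Encoding choices are assumptions.
import torch
import torch.nn as nn

class VoiceAssistedLabeller(nn.Module):
    def __init__(self, vocab_size, num_labels):
        super().__init__()
        self.image_branch = nn.Sequential(       # stand-in image CNN
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Mean-pooled bag of words over the transcript (assumed encoding).
        self.text_branch = nn.EmbeddingBag(vocab_size, 32)
        self.classifier = nn.Linear(16 + 32, num_labels)

    def forward(self, image, comment_token_ids):
        feats = torch.cat(
            [self.image_branch(image), self.text_branch(comment_token_ids)],
            dim=1,
        )
        return self.classifier(feats)

model = VoiceAssistedLabeller(vocab_size=500, num_labels=5)
logits = model(torch.randn(2, 1, 64, 64), torch.randint(0, 500, (2, 6)))
```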
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
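For context on the pipeline stage that learned beamformers replace or augment, here is a minimal NumPy sketch of classical delay-and-sum beamforming for a single image pixel. The plane-wave transmit geometry, sampling rate, and sound speed are illustrative assumptions.

```python
# Minimal sketch: delay-and-sum beamforming of one pixel from raw
# channel data, assuming a plane-wave transmit at normal incidence.
import numpy as np

def delay_and_sum(channel_data, element_x, pixel_x, pixel_z,
                  fs=40e6, c=1540.0):
    """channel_data: (n_elements, n_samples) RF data for one transmit."""
    # Two-way path: transmit depth plus receive distance to each element.
    rx_dist = np.sqrt((element_x - pixel_x) ** 2 + pixel_z ** 2)
    delays = (pixel_z + rx_dist) / c             # seconds, per element
    samples = np.round(delays * fs).astype(int)  # nearest-sample lookup
    n_elem, n_samp = channel_data.shape
    valid = samples < n_samp                     # drop out-of-range reads
    # Sum the delay-aligned samples across the aperture (uniform apodization).
    return channel_data[np.arange(n_elem)[valid], samples[valid]].sum()

# Toy usage: 64-element aperture, 2000 samples of random RF data.
rf = np.random.randn(64, 2000)
xs = np.linspace(-9.6e-3, 9.6e-3, 64)            # element positions (m)
pixel = delay_and_sum(rf, xs, pixel_x=0.0, pixel_z=20e-3)
```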
- Semantic segmentation of multispectral photoacoustic images using deep learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z)
- Ultrasound Image Classification using ACGAN with Small Training Dataset [0.0]
Training deep learning models requires large labeled datasets, which is often unavailable for ultrasound images.
We exploit an Auxiliary Classifier Generative Adversarial Network (ACGAN) that combines the benefits of data augmentation and transfer learning.
We conduct experiments on a dataset of breast ultrasound images that show the effectiveness of the proposed approach.
arXiv Detail & Related papers (2021-01-31T11:11:24Z)
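A minimal sketch of the ACGAN discriminator idea referenced above: a shared feature extractor feeding both an adversarial real/fake head and an auxiliary class head, so the generator is pushed to produce class-consistent images. It assumes PyTorch; layer sizes and the two-class setup are assumptions.

```python
# Minimal sketch: ACGAN-style discriminator with an auxiliary
# classification head. Architecture details are assumptions.
import torch
import torch.nn as nn

class ACDiscriminator(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.real_fake = nn.Linear(64, 1)             # adversarial head
        self.aux_class = nn.Linear(64, num_classes)   # auxiliary classifier

    def forward(self, x):
        h = self.features(x)
        return self.real_fake(h), self.aux_class(h)

disc = ACDiscriminator(num_classes=2)        # e.g. benign vs. malignant
rf_logit, cls_logits = disc(torch.randn(4, 1, 64, 64))
# The discriminator loss would combine a real/fake term (e.g. BCE on
# rf_logit) with cross-entropy on cls_logits.
```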
- Deep data compression for approximate ultrasonic image formation [1.0266286487433585]
In ultrasonic imaging systems, data acquisition and image formation are performed on separate computing devices.
Deep neural networks are optimized to preserve the image quality of a particular image formation method.
arXiv Detail & Related papers (2020-09-04T16:43:12Z)