Breast lesion segmentation in ultrasound images with limited annotated data
- URL: http://arxiv.org/abs/2001.07322v1
- Date: Tue, 21 Jan 2020 03:34:42 GMT
- Title: Breast lesion segmentation in ultrasound images with limited annotated data
- Authors: Bahareh Behboodi, Mina Amiri, Rupert Brooks, Hassan Rivaz
- Abstract summary: We propose the use of simulated US images and natural images as auxiliary datasets in order to pre-train our segmentation network.
We show that fine-tuning the pre-trained network improves the Dice score by 21% compared to training from scratch.
- Score: 2.905751301655124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) is one of the most commonly used imaging modalities in both
diagnosis and surgical interventions due to its low cost, safety, and
non-invasive nature. US image segmentation remains a unique challenge because
of the presence of speckle noise. As manual segmentation requires considerable
effort and time, the development of automatic segmentation algorithms has
attracted researchers' attention. Although recent methodologies based on
convolutional neural networks have shown promising performance, their success
relies on the availability of a large amount of training data, which is
prohibitively difficult to obtain for many applications. Therefore, in this
study we propose the use of simulated US images and natural images as auxiliary
datasets in order to pre-train our segmentation network, which is then
fine-tuned with limited in vivo data. We show that with as few as 19 in vivo
images, fine-tuning the pre-trained network improves the Dice score by 21%
compared to training from scratch. We also demonstrate that if the same number
of natural and simulated US images is available, pre-training on the simulated
data is preferable.
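The pre-train-then-fine-tune workflow described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the network (`TinySegNet`), the random placeholder tensors standing in for the simulated-US and in vivo datasets, and all hyper-parameters are assumptions made for the sketch.

```python
# Minimal sketch: pre-train a segmentation network on auxiliary (simulated) data,
# then fine-tune on a handful of in vivo images, scored with the Dice metric.
# TinySegNet, the placeholder tensors, and the hyper-parameters are illustrative
# assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def dice_score(pred, target, eps=1e-6):
    """Soft Dice coefficient: 2*|P∩T| / (|P| + |T|), averaged over the batch."""
    pred, target = pred.flatten(1), target.flatten(1)
    inter = (pred * target).sum(dim=1)
    return ((2 * inter + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)).mean()

class TinySegNet(nn.Module):
    """Stand-in encoder-decoder; the paper's network is a larger segmentation CNN."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for img, mask in loader:
            opt.zero_grad()
            loss = 1 - dice_score(model(img), mask)  # Dice loss
            loss.backward()
            opt.step()

# Random tensors stand in for the simulated-US (auxiliary) and in vivo datasets.
sim = TensorDataset(torch.rand(64, 1, 128, 128),
                    torch.randint(0, 2, (64, 1, 128, 128)).float())
vivo = TensorDataset(torch.rand(19, 1, 128, 128),
                     torch.randint(0, 2, (19, 1, 128, 128)).float())

net = TinySegNet()
train(net, DataLoader(sim, batch_size=8), epochs=5, lr=1e-3)    # pre-train on auxiliary data
train(net, DataLoader(vivo, batch_size=4), epochs=20, lr=1e-4)  # fine-tune on 19 in vivo images
```

In the paper, this recipe is compared against training from scratch on the same small in vivo set, which is where the reported 21% Dice improvement comes from.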
Related papers
- Semantic Segmentation Refiner for Ultrasound Applications with Zero-Shot Foundation Models [1.8142288667655782]
We propose a prompt-less segmentation method harnessing the ability of segmentation foundation models to segment abstract shapes.
Our method's advantages are brought to light in experiments on a small-scale musculoskeletal ultrasound images dataset.
arXiv Detail & Related papers (2024-04-25T04:21:57Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - Cardiac ultrasound simulation for autonomous ultrasound navigation [4.036497185262817]
We propose a method to generate large numbers of ultrasound images from other modalities and from arbitrary positions.
We present a novel simulation pipeline which uses segmentations from other modalities, an optimized data representation and GPU-accelerated Monte Carlo path tracing.
The proposed approach allows for fast and accurate patient-specific ultrasound image generation, and its usability for training networks for navigation-related tasks is demonstrated.
arXiv Detail & Related papers (2024-02-09T15:14:48Z) - LOTUS: Learning to Optimize Task-based US representations [39.81131738128329]
Anatomical segmentation of organs in ultrasound images is essential to many clinical applications.
Existing deep neural networks require a large amount of labeled data for training in order to achieve clinically acceptable performance.
In this paper, we propose a novel approach for learning to optimize task-based ultrasound image representations.
arXiv Detail & Related papers (2023-07-29T16:29:39Z) - Self-Supervised Endoscopic Image Key-Points Matching [1.3764085113103222]
This paper proposes a novel self-supervised approach for endoscopic image matching based on deep learning techniques.
Our method outperformed standard hand-crafted local feature descriptors in terms of precision and recall.
arXiv Detail & Related papers (2022-08-24T10:47:21Z) - Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z) - Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z) - Weakly-supervised Learning For Catheter Segmentation in 3D Frustum Ultrasound [74.22397862400177]
We propose a novel Frustum ultrasound based catheter segmentation method.
The proposed method achieved the state-of-the-art performance with an efficiency of 0.25 second per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z) - Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy loss to aid the cross-attention process, and that is able to overcome the imbalance between classes and easy-dominated samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z) - Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach whose goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach makes it possible to train image segmentation models without acquiring expensive annotations.
We test the proposed method on the Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.