A deep learning pipeline for identification of motor units in
musculoskeletal ultrasound
- URL: http://arxiv.org/abs/2010.03028v1
- Date: Wed, 23 Sep 2020 20:44:29 GMT
- Title: A deep learning pipeline for identification of motor units in
musculoskeletal ultrasound
- Authors: Hazrat Ali, Johannes Umander, Robin Rohlén and Christer Grönlund
- Abstract summary: It has been shown that ultrafast ultrasound can be used to record and analyze the mechanical response of individual motor units (MUs).
We present an alternative method - a deep learning pipeline - to identify active MUs in ultrasound image sequences.
We train and evaluate the model using simulated data mimicking complex activation patterns and overlapping territories.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound imaging provides information from a large part of the muscle. It
has recently been shown that ultrafast ultrasound imaging can be used to record
and analyze the mechanical response of individual MUs using blind source
separation. In this work, we present an alternative method - a deep learning
pipeline - to identify active MUs in ultrasound image sequences, including
segmentation of their territories and signal estimation of their mechanical
responses (twitch train). We train and evaluate the model using simulated data
mimicking the complex activation pattern of tens of activated MUs with
overlapping territories and partially synchronized activation patterns. Using a
slow fusion approach (based on 3D CNNs), we transform the spatiotemporal image
sequence data to 2D representations and apply a deep neural network
architecture for segmentation. Next, we employ a second deep neural network
architecture for signal estimation. The results show that the proposed pipeline
can effectively identify individual MUs, estimate their territories, and
estimate their twitch train signal at low contraction forces. The framework can
retain spatio-temporal consistencies and information of the mechanical response
of MU activity even when the ultrasound image sequences are transformed into a
2D representation for compatibility with more traditional computer vision and
image processing techniques. The proposed pipeline is potentially useful to
identify simultaneously active MUs in whole muscles in ultrasound image
sequences of voluntary skeletal muscle contractions at low force levels.
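As a rough illustration of the slow-fusion idea described above (not the authors' implementation), the sketch below collapses a (T, H, W) ultrasound image sequence into a single 2D map by repeatedly fusing small temporal windows instead of averaging all frames at once. The learned 3D convolutions of the actual pipeline are replaced here by plain temporal means, and all names, shapes, and the window size are illustrative assumptions.

```python
import numpy as np

def slow_fusion_to_2d(seq, window=2):
    """Collapse a (T, H, W) image sequence to a (H, W) map by slow fusion:
    adjacent frames are merged in small temporal windows over several
    stages, so temporal information is reduced gradually rather than in
    one step. Each fusion step here is a plain temporal mean; in the
    paper's pipeline it would be a learned 3D convolution."""
    x = seq.astype(np.float64)
    while x.shape[0] > 1:
        t = x.shape[0]
        # pad temporally (repeat last frame) so t is divisible by window
        pad = (-t) % window
        if pad:
            x = np.concatenate([x, np.repeat(x[-1:], pad, axis=0)], axis=0)
        # fuse adjacent frames window-by-window
        x = x.reshape(-1, window, *x.shape[1:]).mean(axis=1)
    # the resulting 2D representation would feed a 2D segmentation network
    return x[0]

# toy sequence: 8 frames of 4x4 "images"
seq = np.arange(8 * 4 * 4, dtype=float).reshape(8, 4, 4)
flat = slow_fusion_to_2d(seq)
print(flat.shape)  # (4, 4)
```

Because every fusion stage halves the temporal axis, an 8-frame clip is reduced in three stages (8 → 4 → 2 → 1), which is the staged temporal merging that distinguishes slow fusion from early fusion (one big temporal collapse) or late fusion (per-frame processing merged at the end).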
Related papers
- Vascular Segmentation of Functional Ultrasound Images using Deep Learning
We introduce the first deep learning-based segmentation tool for functional ultrasound (fUS) images.
We achieve competitive segmentation performance (90% accuracy, 71% robustness, and an IoU of 0.59) using only 100 temporal frames from an fUS stack.
This work offers a non-invasive, cost-effective alternative to localization microscopy, enhancing fUS data interpretation and improving understanding of vessel function.
arXiv Detail & Related papers (2024-10-28T09:00:28Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Real-Time Model-Based Quantitative Ultrasound and Radar
We propose a neural network based on the physical model of wave propagation, which defines the relationship between the received signals and physical properties.
Our network can reconstruct multiple physical properties in less than one second for complex and realistic scenarios.
arXiv Detail & Related papers (2024-02-16T09:09:16Z)
- Cardiac ultrasound simulation for autonomous ultrasound navigation
We propose a method to generate large amounts of ultrasound images from other modalities and from arbitrary positions.
We present a novel simulation pipeline which uses segmentations from other modalities, an optimized data representation and GPU-accelerated Monte Carlo path tracing.
The proposed approach allows for fast and accurate patient-specific ultrasound image generation, and its usability for training networks for navigation-related tasks is demonstrated.
arXiv Detail & Related papers (2024-02-09T15:14:48Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- LOTUS: Learning to Optimize Task-based US representations
Anatomical segmentation of organs in ultrasound images is essential to many clinical applications.
Existing deep neural networks require a large amount of labeled data for training in order to achieve clinically acceptable performance.
In this paper, we propose a novel approach for learning to optimize task-based ultrasound image representations.
arXiv Detail & Related papers (2023-07-29T16:29:39Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Deep Learning for Ultrasound Beamforming
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
- Deep learning facilitates fully automated brain image registration of optoacoustic tomography and magnetic resonance imaging
Multi-spectral optoacoustic tomography (MSOT) is an emerging optical imaging method providing multiplex molecular and functional information from the rodent brain.
It can be greatly augmented by magnetic resonance imaging (MRI) that offers excellent soft-tissue contrast and high-resolution brain anatomy.
However, registration of multi-modal images remains challenging, chiefly due to the entirely different image contrast rendered by these modalities.
Here we propose a fully automated registration method for MSOT-MRI multimodal imaging empowered by deep learning.
arXiv Detail & Related papers (2021-09-04T14:50:44Z)
- A Spatiotemporal Volumetric Interpolation Network for 4D Dynamic Medical Image
We introduce a spatiotemporal volumetric interpolation network (SVIN) designed for 4D dynamic medical images.
Experimental results demonstrated that our SVIN outperformed state-of-the-art temporal medical interpolation methods and natural video interpolation methods.
arXiv Detail & Related papers (2020-02-28T12:40:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.