Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound
Imaging
- URL: http://arxiv.org/abs/2212.07867v1
- Date: Thu, 15 Dec 2022 14:34:12 GMT
- Title: Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound
Imaging
- Authors: Jianzhi Long, Jicang Cai, Abdullah Al-Battal, Shiwei Jin, Jing Zhang,
Dacheng Tao, Truong Nguyen
- Abstract summary: With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69)° for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
- Score: 61.60067283680348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound is progressing toward becoming an affordable and versatile
solution to medical imaging. With the advent of the COVID-19 global pandemic,
there is a need to fully automate ultrasound imaging, as it requires trained
operators in close proximity to patients for long periods of time. In this work, we
investigate the important yet seldom-studied problem of scan target
localization, under the setting of lung ultrasound imaging. We propose a purely
vision-based, data-driven method that incorporates learning-based computer
vision techniques. We combine a human pose estimation model with a specially
designed regression model to predict the lung ultrasound scan targets, and
deploy multiview stereo vision to enhance the consistency of 3D target
localization. While related works mostly focus on phantom experiments, we
collect data from 30 human subjects for testing. Our method attains an accuracy
level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69)° for probe
orientation, with a success rate above 80% under an error threshold of 25mm for
all scan targets. Moreover, our approach can serve as a general solution to
other types of ultrasound modalities. The code for implementation has been
released.
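The pipeline described above predicts 2D scan-target keypoints in multiple camera views and fuses them with multiview stereo to obtain consistent 3D target locations. A minimal sketch of that fusion step, using linear (DLT) two-view triangulation with synthetic camera matrices; the function and camera parameters here are illustrative assumptions, not the paper's released implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D pixel coordinates of the same target in each view.
    Returns the 3D point minimizing the algebraic reprojection error.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 for the homogeneous point X via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def proj(P, X):
    """Project a 3D point X through camera matrix P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: identity pose and a 1 m baseline along x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

target = np.array([0.2, -0.1, 2.0])  # hypothetical ground-truth scan target (m)
est = triangulate(P1, P2, proj(P1, target), proj(P2, target))
print(np.round(est, 6))
```

With noise-free projections the triangulated point recovers the ground truth exactly; with noisy keypoint detections, averaging over more view pairs is what gives the consistency benefit the abstract refers to.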
Related papers
- FUSC: Fetal Ultrasound Semantic Clustering of Second Trimester Scans
Using Deep Self-supervised Learning [1.0819408603463427]
More than 140M fetuses are born yearly, resulting in numerous scans.
The availability of a large volume of fetal ultrasound scans presents the opportunity to train robust machine learning models.
This study presents an unsupervised approach for automatically clustering ultrasound images into a large range of fetal views.
arXiv Detail & Related papers (2023-10-19T09:11:23Z)
- RUSOpt: Robotic UltraSound Probe Normalization with Bayesian Optimization for In-plane and Out-plane Scanning [4.420121239028863]
Proper orientation of the robotized probe plays a crucial role in governing the quality of ultrasound images.
We propose a sample-efficient method to automatically adjust the orientation of the ultrasound probe normal to the point of contact on the scanning surface.
arXiv Detail & Related papers (2023-10-05T09:22:16Z)
- Towards Realistic Ultrasound Fetal Brain Imaging Synthesis [0.7315240103690552]
There are few public ultrasound fetal imaging datasets due to insufficient amounts of clinical data, patient privacy, rare occurrence of abnormalities in general practice, and limited experts for data collection and validation.
To address such data scarcity, we proposed generative adversarial networks (GAN)-based models, diffusion-super-resolution-GAN and transformer-based-GAN, to synthesise images of fetal ultrasound brain planes from one public dataset.
arXiv Detail & Related papers (2023-04-08T07:07:20Z)
- COVID-Net USPro: An Open-Source Explainable Few-Shot Deep Prototypical Network to Monitor and Detect COVID-19 Infection from Point-of-Care Ultrasound Images [66.63200823918429]
COVID-Net USPro monitors and detects COVID-19 positive cases with high precision and recall from minimal ultrasound images.
The network achieves 99.65% overall accuracy, 99.7% recall and 99.67% precision for COVID-19 positive cases when trained with only 5 shots.
arXiv Detail & Related papers (2023-01-04T16:05:51Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
- Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification [55.96221340756895]
Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up.
We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data.
Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe is combined with an image classifier using a recurrent neural network to generate two sets of predictions in real-time.
The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7° (1.2°) from
arXiv Detail & Related papers (2020-10-06T13:55:02Z)
- Automatic Probe Movement Guidance for Freehand Obstetric Ultrasound [4.893896929103368]
The system employs an artificial neural network that receives the ultrasound video signal and the motion signal of an inertial measurement unit (IMU) that is attached to the probe.
The network termed US-GuideNet predicts either the movement towards the standard plane position (goal prediction), or the next movement that an expert sonographer would perform.
arXiv Detail & Related papers (2020-07-08T23:58:41Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
- POCOVID-Net: Automatic Detection of COVID-19 From a New Lung Ultrasound Imaging Dataset (POCUS) [0.5330327625867509]
We advocate a more prominent role of point-of-care ultrasound imaging to guide COVID-19 detection.
We gather a lung ultrasound (POCUS) dataset consisting of 1103 images (654 COVID-19, 277 bacterial pneumonia and 172 healthy controls), sampled from 64 videos.
We train a deep convolutional neural network (POCOVID-Net) on this 3-class dataset and achieve an accuracy of 89% and, by a majority vote, a video accuracy of 92%.
arXiv Detail & Related papers (2020-04-25T08:41:24Z)
- A Deep Learning Approach for Motion Forecasting Using 4D OCT Data [69.62333053044712]
We propose 4D-temporal deep learning for end-to-end motion forecasting and estimation using a stream of OCT volumes.
Our best performing 4D method achieves motion forecasting with an overall average correlation of 97.41%, while also improving motion estimation performance by a factor of 2.5 compared to a previous 3D approach.
arXiv Detail & Related papers (2020-04-21T15:59:53Z)
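Several entries above aggregate per-frame classifier outputs into a single video-level decision by majority vote (e.g. POCOVID-Net's 92% video accuracy). A minimal sketch of that aggregation step, with made-up class labels; the function name and labels are illustrative assumptions:

```python
from collections import Counter

def video_label(frame_preds):
    """Aggregate per-frame class predictions into one video-level label
    by majority vote; ties resolve to the earliest-seen class."""
    return Counter(frame_preds).most_common(1)[0][0]

# Hypothetical per-frame predictions for one lung ultrasound video.
frames = ["covid", "covid", "healthy", "covid", "pneumonia"]
print(video_label(frames))  # prints "covid"
```

Video-level accuracy is then just frame-level accuracy computed after this per-video aggregation, which is why it can exceed the raw frame accuracy when errors are scattered across videos.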
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.