Pose-GuideNet: Automatic Scanning Guidance for Fetal Head Ultrasound from Pose Estimation
- URL: http://arxiv.org/abs/2408.09931v1
- Date: Mon, 19 Aug 2024 12:11:50 GMT
- Title: Pose-GuideNet: Automatic Scanning Guidance for Fetal Head Ultrasound from Pose Estimation
- Authors: Qianhui Men, Xiaoqing Guo, Aris T. Papageorghiou, J. Alison Noble
- Abstract summary: 3D pose estimation from a 2D cross-sectional view enables healthcare professionals to navigate through the 3D space.
In this work, we investigate how estimating 3D fetal pose from freehand 2D ultrasound scanning can guide a sonographer to locate a head standard plane.
- Score: 13.187011661009459
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D pose estimation from a 2D cross-sectional view enables healthcare professionals to navigate through the 3D space, and such techniques initiate automatic guidance in many image-guided radiology applications. In this work, we investigate how estimating 3D fetal pose from freehand 2D ultrasound scanning can guide a sonographer to locate a head standard plane. Fetal head pose is estimated by the proposed Pose-GuideNet, a novel 2D/3D registration approach to align freehand 2D ultrasound to a 3D anatomical atlas without the acquisition of 3D ultrasound. To facilitate the 2D to 3D cross-dimensional projection, we exploit the prior knowledge in the atlas to align the standard plane frame in a freehand scan. A semantic-aware contrastive-based approach is further proposed to align the frames that are off standard planes based on their anatomical similarity. In the experiment, we enhance the existing assessment of freehand image localization by comparing the transformation of its estimated pose towards standard plane with the corresponding probe motion, which reflects the actual view change in 3D anatomy. Extensive results on two clinical head biometry tasks show that Pose-GuideNet not only accurately predicts pose but also successfully predicts the direction of the fetal head. Evaluations with probe motions further demonstrate the feasibility of adopting Pose-GuideNet for freehand ultrasound-assisted navigation in a sensor-free environment.
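The abstract gives no implementation details, but its two key ingredients lend themselves to a compact sketch: a network that regresses a rigid pose (rotation plus translation) placing a 2D frame in atlas coordinates, and a contrastive term whose soft targets encode anatomical similarity. The PyTorch sketch below is one illustrative reading, not the authors' code; the architecture, the 6D rotation representation, the similarity matrix, and all hyper-parameters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseGuideSketch(nn.Module):
    """Toy stand-in: maps a 2D ultrasound frame to a rigid pose in atlas space."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(  # placeholder for a real image encoder
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # 6D rotation representation + 3D translation in atlas coordinates
        self.pose_head = nn.Linear(feat_dim, 6 + 3)
        self.embed_head = nn.Linear(feat_dim, feat_dim)

    def forward(self, frames):                      # frames: (B, 1, H, W)
        h = self.backbone(frames)
        pose = self.pose_head(h)
        z = F.normalize(self.embed_head(h), dim=1)  # embedding for contrastive term
        return pose[:, :6], pose[:, 6:], z

def semantic_contrastive_loss(z, sim, tau=0.1):
    """InfoNCE-style loss with soft targets given by anatomical similarity.

    z: (B, D) L2-normalised embeddings; sim: (B, B) similarity in [0, 1].
    """
    logits = z @ z.t() / tau
    mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(mask, -1e9)         # ignore self-pairs
    target = sim.masked_fill(mask, 0.0)
    target = target / target.sum(dim=1, keepdim=True).clamp_min(1e-8)
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

model = PoseGuideSketch()
frames = torch.randn(8, 1, 64, 64)                  # a mini-batch of scan frames
sim = torch.rand(8, 8)
sim = (sim + sim.t()) / 2                           # placeholder anatomical similarity
rot6d, trans, z = model(frames)
loss = semantic_contrastive_loss(z, sim)            # pose losses would be added too
```

Per the abstract, standard-plane frames are aligned using the atlas prior while off-plane frames rely on the contrastive term, so a full training objective would combine both.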
Related papers
- Structure-aware World Model for Probe Guidance via Large-scale Self-supervised Pre-train [66.35766658717205]
Successful echocardiography requires a thorough understanding of the structures on the two-dimensional plane and the spatial relationships between planes in three-dimensional space.
We propose a large-scale self-supervised pre-training method to acquire a cardiac structure-aware world model.
arXiv Detail & Related papers (2024-06-28T08:54:44Z)
- Neural Voting Field for Camera-Space 3D Hand Pose Estimation [106.34750803910714]
We present a unified framework for camera-space 3D hand pose estimation from a single RGB image based on 3D implicit representation.
We propose a novel unified 3D dense regression scheme to estimate camera-space 3D hand pose via dense 3D point-wise voting in camera frustum.
arXiv Detail & Related papers (2023-05-07T16:51:34Z)
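As a rough illustration of dense point-wise voting (not the paper's actual model; the MLP, feature source, and aggregation below are all assumptions), each sampled frustum point can score every joint, and each joint estimate becomes the score-weighted average of the points:

```python
import torch
import torch.nn as nn

class VotingField(nn.Module):
    """Scores each frustum point per joint; joints = score-weighted point averages."""
    def __init__(self, num_joints=21, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_joints),             # per-point, per-joint logits
        )

    def forward(self, points, feats):
        # points: (N, 3) samples in the camera frustum; feats: (N, F) image features
        logits = self.mlp(torch.cat([points, feats], dim=1))  # (N, J)
        weights = torch.softmax(logits, dim=0)                # normalise over points
        return weights.t() @ points                           # (J, 3) joint estimates

field = VotingField()
points = torch.rand(1024, 3)                        # sampled 3D frustum points
feats = torch.rand(1024, 64)                        # pixel-aligned features (assumed)
joints = field(points, feats)                       # camera-space 3D hand joints
```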
- Ultrasound Plane Pose Regression: Assessing Generalized Pose Coordinates in the Fetal Brain [9.465965149145559]
We aim to build a US plane localization system for 3D visualization, training, and guidance without integrating additional sensors.
This work builds on our previous approach, which predicts the six-dimensional (6D) pose of arbitrarily oriented US planes slicing the fetal brain.
We investigate the impact of registration quality in the training and testing data and its subsequent effect on trained models.
arXiv Detail & Related papers (2023-01-19T21:16:36Z)
- Adaptive 3D Localization of 2D Freehand Ultrasound Brain Images [18.997300579859978]
We propose AdLocUI, a framework that Adaptively Localizes 2D Ultrasound Images in the 3D anatomical atlas.
We first train a convolutional neural network with 2D slices sampled from co-aligned 3D ultrasound volumes to predict their locations.
We fine-tune it with 2D freehand ultrasound images using a novel unsupervised cycle consistency.
arXiv Detail & Related papers (2022-09-12T17:59:41Z)
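A minimal sketch of the cycle-consistency idea, assuming a localizer that outputs an affine plane pose and a differentiable atlas resampler; none of this is the AdLocUI code, and all shapes and names are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyLocalizer(nn.Module):
    """Stand-in CNN mapping a 2D slice to a (1, 3, 4) affine plane pose."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 12),
        )

    def forward(self, x):
        return self.net(x).view(-1, 3, 4)

def slice_atlas(atlas, theta, size=64):
    # atlas: (1, 1, D, H, W); theta: (1, 3, 4) maps plane coords into the volume
    grid = F.affine_grid(theta, (1, 1, 1, size, size), align_corners=False)
    return F.grid_sample(atlas, grid, align_corners=False)[:, :, 0]   # (1, 1, S, S)

def cycle_loss(localizer, frame, atlas):
    theta = localizer(frame)                    # pose of the freehand slice
    resampled = slice_atlas(atlas, theta)       # synthetic slice at that pose
    return F.mse_loss(localizer(resampled), theta.detach())

atlas = torch.randn(1, 1, 32, 64, 64)           # toy 3D anatomical atlas
frame = torch.randn(1, 1, 64, 64)               # one freehand 2D image
loss = cycle_loss(ToyLocalizer(), frame, atlas)
```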
- Comparison of Depth Estimation Setups from Stereo Endoscopy and Optical Tracking for Point Measurements [1.1084983279967584]
To support minimally-invasive mitral valve repair, quantitative measurements from the valve can be obtained using an infra-red tracked stylus.
A hand-eye calibration linking the two coordinate systems is required and is a prerequisite for projecting the measured points onto the image plane.
A complementary approach is to use a vision-based endoscopic stereo-setup to detect and triangulate points of interest and obtain their 3D coordinates.
Preliminary results indicate that 3D landmark estimation, either labeled manually or through partly automated detection with a deep learning approach, provides more accurate triangulated depth measurements when performed with a tailored image-based method than with the optically tracked stylus.
arXiv Detail & Related papers (2022-01-26T10:15:46Z)
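For context, the generic building block behind such stereo depth measurements is two-view triangulation. A minimal linear (DLT) version is sketched below; it is standard multi-view geometry, not this paper's specific pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) two-view triangulation.

    P1, P2: 3x4 camera projection matrices; x1, x2: matched 2D points (pixels).
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)                 # null-space gives the 3D point
    X = vt[-1]
    return X[:3] / X[3]                         # homogeneous -> Euclidean

# toy example: identity camera and a camera shifted 0.1 m along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
point = triangulate(P1, P2, np.array([0.0, 0.0]), np.array([-0.2, 0.0]))
```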
- PONet: Robust 3D Human Pose Estimation via Learning Orientations Only [116.1502793612437]
We propose a novel Pose Orientation Net (PONet) that robustly estimates 3D pose by learning orientations only.
PONet estimates the 3D orientation of body limbs from local image evidence to recover the 3D pose.
We evaluate our method on multiple datasets, including Human3.6M, MPII, MPI-INF-3DHP, and 3DPW.
arXiv Detail & Related papers (2021-12-21T12:48:48Z)
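The orientation-only idea can be illustrated with a toy kinematic chain: predict a unit direction per bone and rebuild the joints from fixed bone lengths, so no per-joint depth regression is needed. The tree, lengths, and names below are assumptions, not PONet's definition:

```python
import torch

# toy kinematic chain: (child, parent) pairs with fixed bone lengths (metres)
BONES = [(1, 0), (2, 1), (3, 2)]
LENGTHS = torch.tensor([0.45, 0.40, 0.25])

def pose_from_orientations(dirs, root=None):
    """dirs: (num_bones, 3) predicted bone directions; returns (num_joints, 3)."""
    dirs = dirs / dirs.norm(dim=1, keepdim=True)    # unit orientations
    joints = {0: root if root is not None else torch.zeros(3)}
    for (child, parent), length, d in zip(BONES, LENGTHS, dirs):
        joints[child] = joints[parent] + length * d # walk the kinematic tree
    return torch.stack([joints[i] for i in range(len(BONES) + 1)])

pose = pose_from_orientations(torch.randn(3, 3))    # orientations -> 3D joints
```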
- IGCN: Image-to-graph Convolutional Network for 2D/3D Deformable Registration [1.2246649738388387]
We propose an image-to-graph convolutional network that achieves deformable registration of a 3D organ mesh for a single-viewpoint 2D projection image.
We show that shape prediction considering relationships among multiple organs can be used to predict respiratory motion and deformation from radiographs with clinically acceptable accuracy.
arXiv Detail & Related papers (2021-10-31T12:48:37Z)
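A compact sketch of the image-to-graph pattern, with every detail assumed rather than taken from IGCN: a CNN encodes the 2D projection, the resulting feature is attached to each mesh vertex, and graph convolutions over the mesh adjacency predict per-vertex displacements:

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, din, dout):
        super().__init__()
        self.lin = nn.Linear(din, dout)

    def forward(self, x, adj):                      # x: (V, din); adj: (V, V)
        return torch.relu(self.lin(adj @ x))        # aggregate neighbours, transform

class ImageToGraph(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.gc = GraphConv(3 + feat_dim, 64)
        self.out = nn.Linear(64, 3)                 # per-vertex displacement

    def forward(self, image, verts, adj):
        f = self.cnn(image).expand(verts.shape[0], -1)  # image feature per vertex
        h = self.gc(torch.cat([verts, f], dim=1), adj)
        return verts + self.out(h)                  # deformed mesh vertices

verts = torch.rand(50, 3)                           # template organ mesh (toy)
adj = torch.eye(50)                                 # placeholder normalised adjacency
mesh = ImageToGraph()(torch.randn(1, 1, 64, 64), verts, adj)
```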
- 3D Reconstruction and Alignment by Consumer RGB-D Sensors and Fiducial Planar Markers for Patient Positioning in Radiation Therapy [1.7744342894757368]
This paper proposes a fast, low-cost patient positioning method based on consumer-level RGB-D sensors.
The proposed method relies on a 3D reconstruction approach that fuses, in real-time, artificial and natural visual landmarks recorded from a hand-held RGB-D sensor.
arXiv Detail & Related papers (2021-03-22T20:20:59Z)
- Model-based 3D Hand Reconstruction via Self-Supervised Learning [72.0817813032385]
Reconstructing a 3D hand from a single-view RGB image is challenging due to various hand configurations and depth ambiguity.
We propose S2HAND, a self-supervised 3D hand reconstruction network that can jointly estimate pose, shape, texture, and the camera viewpoint.
For the first time, we demonstrate the feasibility of training an accurate 3D hand reconstruction network without relying on manual annotations.
arXiv Detail & Related papers (2021-03-22T10:12:43Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
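The pseudo-3D idea is straightforward to sketch: factorize a 3x3x3 convolution into a 1x3x3 in-plane convolution (which can inherit 2D pre-trained weights) followed by a 3x1x1 convolution across slices. The block below is a generic illustration, not the MP3D FPN definition:

```python
import torch
import torch.nn as nn

class P3DBlock(nn.Module):
    """3x3x3 conv factorised into in-plane (1x3x3) and cross-slice (3x1x1) parts."""
    def __init__(self, cin, cout):
        super().__init__()
        self.spatial = nn.Conv3d(cin, cout, (1, 3, 3), padding=(0, 1, 1))
        self.axial = nn.Conv3d(cout, cout, (3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):                           # x: (N, C, slices, H, W)
        return torch.relu(self.axial(torch.relu(self.spatial(x))))

block = P3DBlock(1, 16)
ct = torch.randn(1, 1, 9, 64, 64)                   # 9 neighbouring CT slices
out = block(ct)                                     # 3D-context-enhanced features
```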
- Tattoo tomography: Freehand 3D photoacoustic image reconstruction with an optical pattern [49.240017254888336]
Photoacoustic tomography (PAT) is a novel imaging technique that can resolve both morphological and functional tissue properties.
A current drawback is the limited field-of-view provided by the conventionally applied 2D probes.
We present a novel approach to 3D reconstruction of PAT data that does not require an external tracking system.
arXiv Detail & Related papers (2020-11-10T09:27:56Z)