Adaptive 3D Localization of 2D Freehand Ultrasound Brain Images
- URL: http://arxiv.org/abs/2209.05477v1
- Date: Mon, 12 Sep 2022 17:59:41 GMT
- Title: Adaptive 3D Localization of 2D Freehand Ultrasound Brain Images
- Authors: Pak-Hei Yeung, Moska Aliasi, Monique Haak, The INTERGROWTH-21st
Consortium, Weidi Xie, Ana I.L. Namburete
- Abstract summary: We propose AdLocUI, a framework that Adaptively Localizes 2D Ultrasound Images in the 3D anatomical atlas.
We first train a convolutional neural network with 2D slices sampled from co-aligned 3D ultrasound volumes to predict their locations.
We fine-tune it with 2D freehand ultrasound images using a novel unsupervised cycle consistency.
- Score: 18.997300579859978
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two-dimensional (2D) freehand ultrasound is the mainstay in prenatal care and
fetal growth monitoring. The task of matching corresponding cross-sectional
planes in the 3D anatomy for a given 2D ultrasound brain scan is essential in
freehand scanning, but challenging. We propose AdLocUI, a framework that
Adaptively Localizes 2D Ultrasound Images in the 3D anatomical atlas without
using any external tracking sensor. We first train a convolutional neural
network with 2D slices sampled from co-aligned 3D ultrasound volumes to predict
their locations in the 3D anatomical atlas. Next, we fine-tune it with 2D
freehand ultrasound images using a novel unsupervised cycle consistency, which
utilizes the fact that the overall displacement of a sequence of images in the
3D anatomical atlas is equal to the displacement from the first image to the
last in that sequence. We demonstrate that AdLocUI can adapt to three different
ultrasound datasets, acquired with different machines and protocols, and
achieves significantly better localization accuracy than the baselines. AdLocUI
can be used for sensorless 2D freehand ultrasound guidance by the bedside. The
source code is available at https://github.com/pakheiyeung/AdLocUI.
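As a concrete illustration of the cycle-consistency constraint above, here is a minimal PyTorch sketch. It assumes a hypothetical pairwise network `disp_net` that predicts the 3D displacement between two frames; the authors' exact parameterization may differ (see the linked repository).

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(disp_net, frames):
    """Cycle-consistency loss over one freehand sequence.

    frames: (N, 1, H, W) consecutive 2D ultrasound frames.
    disp_net(a, b) -> (1, 3) predicted displacement from frame a to
    frame b in the 3D atlas. The accumulated consecutive displacements
    should equal the directly predicted first-to-last displacement.
    """
    steps = [disp_net(frames[i:i + 1], frames[i + 1:i + 2])
             for i in range(frames.shape[0] - 1)]
    accumulated = torch.stack(steps).sum(dim=0)  # (1, 3)
    direct = disp_net(frames[:1], frames[-1:])   # (1, 3)
    return F.mse_loss(accumulated, direct)
```

Because the constraint needs no ground-truth locations, it can be optimized on unlabeled freehand sequences, which is what enables the adaptation to new machines and protocols.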
Related papers
- Pose-GuideNet: Automatic Scanning Guidance for Fetal Head Ultrasound from Pose Estimation [13.187011661009459]
3D pose estimation from a 2D cross-sectional view enables healthcare professionals to navigate through 3D space.
In this work, we investigate how estimating 3D fetal pose from freehand 2D ultrasound scanning can guide a sonographer to locate a head standard plane.
arXiv Detail & Related papers (2024-08-19T12:11:50Z)
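To make the Pose-GuideNet idea concrete, below is a hypothetical PyTorch sketch of regressing a rigid 3D pose (axis-angle rotation plus translation) from a single 2D frame; the paper's actual architecture and pose parameterization may differ. Guidance can then be derived from the relative transform between the predicted pose and the pose of the target standard plane.

```python
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Illustrative CNN mapping a 2D ultrasound frame to a rigid 3D pose
    (3 axis-angle rotation + 3 translation values); not the authors' model."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 6)

    def forward(self, frame):                    # frame: (B, 1, H, W)
        return self.head(self.backbone(frame))   # (B, 6) rigid pose
```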
- Cross-Dimensional Medical Self-Supervised Representation Learning Based on a Pseudo-3D Transformation [68.60747298865394]
We propose a new cross-dimensional SSL framework based on a pseudo-3D transformation (CDSSL-P3D).
Specifically, we introduce an image transformation based on the im2col algorithm, which converts 2D images into a format consistent with 3D data.
This transformation enables seamless integration of 2D and 3D data, and facilitates cross-dimensional self-supervised learning for 3D medical image analysis.
arXiv Detail & Related papers (2024-06-03T02:57:25Z)
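The im2col-based rearrangement can be sketched with PyTorch's `unfold`, which implements im2col. The patch size `k` and the exact layout of the depth axis are assumptions; the paper's transformation may differ in detail.

```python
import torch
import torch.nn.functional as F

def im2col_pseudo3d(img, k=4):
    """Rearrange a 2D image into a pseudo-3D volume via im2col.

    img: (B, C, H, W) with H and W divisible by k. Non-overlapping
    k x k patches are extracted with im2col (torch's `unfold`) and the
    k*k intra-patch positions are stacked as a depth axis, giving a
    (B, C, k*k, H//k, W//k) tensor that 3D operators can consume.
    """
    B, C, H, W = img.shape
    cols = F.unfold(img, kernel_size=k, stride=k)   # (B, C*k*k, L)
    return cols.view(B, C, k * k, H // k, W // k)
```

The resulting 5-D tensor has the same layout as genuine volumetric data, so 2D images and 3D scans can pass through one shared 3D encoder during self-supervised pretraining.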
- RapidVol: Rapid Reconstruction of 3D Ultrasound Volumes from Sensorless 2D Scans [12.837508334426529]
We propose RapidVol: a neural representation framework to speed up slice-to-volume ultrasound reconstruction.
A set of 2D ultrasound scans with their ground-truth (or estimated) 3D positions and orientations (poses) is all that is required to form a complete 3D reconstruction.
Compared with prior approaches, our method is over 3x faster, 46% more accurate, and more robust when given inaccurate poses.
arXiv Detail & Related papers (2024-04-16T17:50:09Z)
- Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, termed the informed slice, which serves as the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z)
- DSGN++: Exploiting Visual-Spatial Relation for Stereo-based 3D Detectors [60.88824519770208]
Camera-based 3D object detectors are attractive owing to their wider deployability and lower price compared with LiDAR sensors.
We revisit the stereo volume construction of the prior stereo model DSGN, which represents both 3D geometry and semantics.
We propose our approach, DSGN++, which aims to improve information flow throughout the 2D-to-3D pipeline.
arXiv Detail & Related papers (2022-04-06T18:43:54Z)
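DSGN-style stereo volumes pair left-image features with disparity-shifted right-image features. A generic plane-sweep construction (concatenation variant) is sketched below; it is not DSGN++'s exact volume.

```python
import torch

def build_stereo_volume(feat_l, feat_r, max_disp=48):
    """Minimal plane-sweep stereo volume (concatenation variant).

    feat_l, feat_r: (B, C, H, W) features from a rectified stereo pair.
    Returns (B, 2C, max_disp, H, W): for each candidate disparity d,
    left features are paired with right features shifted by d pixels.
    """
    B, C, H, W = feat_l.shape
    vol = feat_l.new_zeros(B, 2 * C, max_disp, H, W)
    for d in range(max_disp):
        vol[:, :C, d] = feat_l
        if d > 0:
            vol[:, C:, d, :, d:] = feat_r[..., :-d]
        else:
            vol[:, C:, d] = feat_r
    return vol
```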
- 3D-OOCS: Learning Prostate Segmentation with Inductive Bias [5.907824204733372]
We introduce OOCS-enhanced networks, a novel architecture inspired by the innate structure of visual processing in vertebrates.
With different 3D U-Net variants as the base, we add two 3D residual components to the second encoder blocks: on- and off-center-surround.
OOCS helps 3D U-Nets to scrutinise and delineate anatomical structures present in 3D images with increased accuracy.
arXiv Detail & Related papers (2021-10-29T10:14:56Z)
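On/off center-surround responses can be approximated with fixed 3D difference-of-Gaussians filters applied as residual signals. The following sketch illustrates that inductive bias; the kernel size, sigmas, and placement are assumptions, not the paper's exact OOCS blocks.

```python
import torch
import torch.nn.functional as F

def dog_kernel3d(size=5, sigma_c=0.5, sigma_s=1.0):
    """Fixed 3D difference-of-Gaussians kernel (center minus surround)."""
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    z, y, x = torch.meshgrid(ax, ax, ax, indexing="ij")
    r2 = x ** 2 + y ** 2 + z ** 2

    def gauss(sigma):
        g = torch.exp(-r2 / (2 * sigma ** 2))
        return g / g.sum()

    return (gauss(sigma_c) - gauss(sigma_s)).view(1, 1, size, size, size)

def center_surround_residual(feat, kernel):
    """Add on- and off-center-surround responses to 3D features.

    feat: (B, C, D, H, W). The fixed kernel filters each channel
    (depthwise); the off pathway uses the sign-flipped kernel.
    """
    C, k = feat.shape[1], kernel.shape[-1]
    w = kernel.expand(C, 1, k, k, k)
    on = F.relu(F.conv3d(feat, w, padding=k // 2, groups=C))
    off = F.relu(F.conv3d(feat, -w, padding=k // 2, groups=C))
    return feat + on + off
```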
- ImplicitVol: Sensorless 3D Ultrasound Reconstruction with Deep Implicit Representation [13.71137201718831]
The objective of this work is to achieve sensorless reconstruction of a 3D volume from a set of 2D freehand ultrasound images with deep implicit representation.
In contrast to the conventional representation of a 3D volume as a discrete voxel grid, we parameterize it as the zero level-set of a continuous function.
Our proposed model, named ImplicitVol, takes a set of 2D scans and their estimated locations in 3D as input, jointly refining the estimated 3D locations and learning a full reconstruction of the 3D volume.
arXiv Detail & Related papers (2021-09-24T17:59:18Z)
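The core of such an implicit representation is a coordinate network queried at the 3D positions of slice pixels. A minimal sketch follows, with the network depth, the absence of positional encoding, and the pose parameterization all assumed rather than taken from the paper.

```python
import torch.nn as nn

class IntensityField(nn.Module):
    """Coordinate MLP mapping 3D atlas points to image intensities.
    A generic implicit representation in the spirit of ImplicitVol."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):                # xyz: (N, 3)
        return self.net(xyz).squeeze(-1)   # (N,) intensities

# Joint refinement sketch: make the slice poses learnable so the
# estimated 3D locations are optimized together with the volume:
#   poses = nn.Parameter(initial_pose_estimates)
#   pred  = field(lift_to_3d(pixel_grid_2d, poses))
#   loss  = F.mse_loss(pred, observed_pixel_intensities)
```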
- FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection [78.00922683083776]
It is non-trivial to make a general-purpose, adapted 2D detector work in this 3D task.
In this technical report, we study the problem with a practical method built on a fully convolutional single-stage detector.
Our solution achieves 1st place out of all the vision-only methods in the nuScenes 3D detection challenge of NeurIPS 2020.
arXiv Detail & Related papers (2021-04-22T09:35:35Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that enables us to leverage 3D features extracted from a large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
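The first step (3D-to-2D feature distillation) can be sketched as an L2 loss between the 2D network's features at projected pixels and the frozen 3D network's point features. The correspondence tensor `pix_idx` and the loss form are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(feat2d, feat3d_points, pix_idx):
    """3D-to-2D feature distillation, sketched.

    feat2d: (B, C, H, W) features from the trainable 2D network.
    feat3d_points: (M, C) features from the frozen, pretrained 3D network.
    pix_idx: (M, 3) long tensor of (batch, row, col) giving each 3D
    point's projection into the images; correspondences assumed given.
    The 2D network learns to reproduce the 3D features at the projected
    pixels ("simulated 3D features").
    """
    b, r, c = pix_idx.unbind(dim=1)
    pred = feat2d[b, :, r, c]                     # (M, C)
    return F.mse_loss(pred, feat3d_points.detach())
```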
- Bidirectional Projection Network for Cross Dimension Scene Understanding [69.29443390126805]
We present a bidirectional projection network (BPNet) for joint 2D and 3D reasoning in an end-to-end manner.
Via the BPM, complementary 2D and 3D information can interact with each other at multiple architectural levels.
Our BPNet achieves top performance on the ScanNetV2 benchmark for both 2D and 3D semantic segmentation.
arXiv Detail & Related papers (2021-03-26T08:31:39Z)
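A bidirectional projection between pixels and points can be sketched with gather and scatter operations over a precomputed pixel-point link, as below; this is a minimal stand-in for BPNet's actual module, whose fusion details the summary does not specify.

```python
import torch

def pixels_to_points(feat2d, links):
    """Gather 2D features for linked 3D points (2D -> 3D direction).

    feat2d: (C, H, W); links: (M, 2) long tensor of (row, col) giving
    each 3D point's projected pixel. Returns (M, C).
    """
    return feat2d[:, links[:, 0], links[:, 1]].t()

def points_to_pixels(feat3d, links, hw):
    """Scatter per-point features onto the image grid (3D -> 2D, sum-pooled)."""
    C, (H, W) = feat3d.shape[1], hw
    out = feat3d.new_zeros(C, H * W)
    out.index_add_(1, links[:, 0] * W + links[:, 1], feat3d.t())
    return out.view(C, H, W)
```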