Sensorless Freehand 3D Ultrasound Reconstruction via Deep Contextual Learning
- URL: http://arxiv.org/abs/2006.07694v1
- Date: Sat, 13 Jun 2020 18:37:30 GMT
- Title: Sensorless Freehand 3D Ultrasound Reconstruction via Deep Contextual Learning
- Authors: Hengtao Guo, Sheng Xu, Bradford Wood, Pingkun Yan
- Abstract summary: Current methods for 3D volume reconstruction from freehand US scans require external tracking devices to provide spatial position for every frame.
We propose a deep contextual learning network (DCL-Net), which can efficiently exploit the image feature relationship between US frames and reconstruct 3D US volumes without any tracking device.
- Score: 13.844630500061378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transrectal ultrasound (US) is the most commonly used imaging modality to
guide prostate biopsy and its 3D volume provides even richer context
information. Current methods for 3D volume reconstruction from freehand US
scans require external tracking devices to provide spatial position for every
frame. In this paper, we propose a deep contextual learning network (DCL-Net),
which can efficiently exploit the image feature relationship between US frames
and reconstruct 3D US volumes without any tracking device. The proposed DCL-Net
utilizes 3D convolutions over a US video segment for feature extraction. An
embedded self-attention module makes the network focus on the speckle-rich
areas for better spatial movement prediction. We also propose a novel case-wise
correlation loss to stabilize the training process for improved accuracy.
Highly promising results have been obtained by using the developed method. The
experiments with ablation studies demonstrate superior performance of the
proposed method by comparing against other state-of-the-art methods. Source
code of this work is publicly available at
https://github.com/DIAL-RPI/FreehandUSRecon.
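The abstract names a case-wise correlation loss but does not define it; a minimal numpy sketch, assuming it means one minus the mean Pearson correlation between predicted and ground-truth motion parameters across the frames of a single scan (the released source code is the authoritative definition):

```python
import numpy as np

def case_wise_correlation_loss(pred, target, eps=1e-8):
    """Loss = 1 - mean Pearson correlation between predicted and
    ground-truth motion parameters over the frames of one case.

    pred, target: (N, 6) arrays of per-frame transformation
    parameters (3 translations + 3 rotations) for a single scan.
    """
    corrs = []
    for k in range(pred.shape[1]):
        # Center each parameter's trajectory before correlating.
        p = pred[:, k] - pred[:, k].mean()
        t = target[:, k] - target[:, k].mean()
        denom = np.sqrt((p * p).sum() * (t * t).sum()) + eps
        corrs.append(float((p * t).sum() / denom))
    return 1.0 - float(np.mean(corrs))
```

A loss of this shape rewards predictions whose motion trend follows the ground truth within each case, which is one plausible way to stabilize training as the abstract describes.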
Related papers
- OV-Uni3DETR: Towards Unified Open-Vocabulary 3D Object Detection via Cycle-Modality Propagation [67.56268991234371]
OV-Uni3DETR achieves the state-of-the-art performance on various scenarios, surpassing existing methods by more than 6% on average.
Code and pre-trained models will be released later.
arXiv Detail & Related papers (2024-03-28T17:05:04Z)
- PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm [114.47216525866435]
We introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representation.
For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, implying its effectiveness.
arXiv Detail & Related papers (2023-10-12T17:59:57Z)
- HoloPOCUS: Portable Mixed-Reality 3D Ultrasound Tracking, Reconstruction and Overlay [2.069072041357411]
HoloPOCUS is a mixed reality US system that overlays rich US information onto the user's vision in a point-of-care setting.
We validated a tracking pipeline that demonstrates higher accuracy compared to existing MR-US works.
arXiv Detail & Related papers (2023-08-26T09:28:20Z)
- NeRF-Det: Learning Geometry-Aware Volumetric Representation for Multi-View 3D Object Detection [65.02633277884911]
We present NeRF-Det, a novel method for indoor 3D detection with posed RGB images as input.
Our method makes use of NeRF in an end-to-end manner to explicitly estimate 3D geometry, thereby improving 3D detection performance.
arXiv Detail & Related papers (2023-07-27T04:36:16Z)
- 3DVNet: Multi-View Depth Prediction and Volumetric Refinement [68.68537312256144]
3DVNet is a novel multi-view stereo (MVS) depth-prediction method.
Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions.
We show that our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics.
arXiv Detail & Related papers (2021-12-01T00:52:42Z)
- 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether using increased spatial context by using MRI volumes combined with spatial erasing leads to improved unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of dataset size on performance.
Our best performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
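The entry above does not spell out its 3D input-erasing scheme; a minimal numpy sketch, assuming erasing means zeroing one randomly placed cuboid inside the MRI volume (the paper's exact scheme may differ):

```python
import numpy as np

def erase_3d(volume, frac=0.25, seed=None):
    """Zero out one randomly placed cuboid whose side lengths are
    `frac` of each volume dimension, as a simple 3D input-erasing
    augmentation. Returns a new array; the input is left intact."""
    rng = np.random.default_rng(seed)
    out = volume.copy()
    # Cuboid extent along each axis (at least one voxel).
    dz, dy, dx = (max(1, int(s * frac)) for s in volume.shape)
    z0 = rng.integers(0, volume.shape[0] - dz + 1)
    y0 = rng.integers(0, volume.shape[1] - dy + 1)
    x0 = rng.integers(0, volume.shape[2] - dx + 1)
    out[z0:z0 + dz, y0:y0 + dy, x0:x0 + dx] = 0.0
    return out
```

Erasing forces the autoencoder to reconstruct plausible anatomy from surrounding context, which is the spatial-context effect the entry credits for the DICE improvement.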
arXiv Detail & Related papers (2021-09-14T09:17:27Z)
- Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound Reconstruction [61.62191904755521]
3D freehand US reconstruction is promising for addressing this problem, as it provides a broad scan range and freeform scanning.
Existing deep learning based methods focus only on basic scanning-skill sequences.
We propose a novel approach to sensorless freehand 3D US reconstruction that accounts for complex skill sequences.
arXiv Detail & Related papers (2021-07-31T16:06:50Z)
- Planar 3D Transfer Learning for End to End Unimodal MRI Unbalanced Data Segmentation [0.0]
We present a novel approach of 2D to 3D transfer learning based on mapping pre-trained 2D convolutional neural network weights into planar 3D kernels.
The method is validated by the proposed planar 3D res-u-net network with encoder transferred from the 2D VGG-16.
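The weight-mapping idea above can be made concrete with a minimal numpy sketch, assuming a "planar 3D kernel" is simply the pre-trained 2D kernel given a singleton depth axis (the paper's exact mapping may differ):

```python
import numpy as np

def planar_3d_kernel(w2d):
    """Map pre-trained 2D conv weights (Cout, Cin, kH, kW) into
    planar 3D kernels (Cout, Cin, 1, kH, kW). With depth 1, the 3D
    convolution applies the learned 2D filter slice by slice, so
    features from a 2D network such as VGG-16 transfer unchanged."""
    return w2d[:, :, np.newaxis, :, :].copy()
```

In a framework such as PyTorch the resulting array would be loaded as the weight of a `Conv3d` layer with kernel size (1, kH, kW), letting the 3D encoder start from the 2D pre-training.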
arXiv Detail & Related papers (2020-11-23T17:11:50Z)
- 3D B-mode ultrasound speckle reduction using deep learning for 3D registration applications [8.797635433767423]
We show that our deep learning framework achieves speckle suppression and a mean preservation index (1.066) comparable to conventional filtering approaches.
It is found that the speckle reduction using our deep learning model contributes to improving the 3D registration performance.
arXiv Detail & Related papers (2020-08-03T19:29:59Z)
- 3D Self-Supervised Methods for Medical Imaging [7.65168530693281]
We propose 3D versions for five different self-supervised methods, in the form of proxy tasks.
Our methods facilitate neural network feature learning from unlabeled 3D images, aiming to reduce the required cost for expert annotation.
The developed algorithms are 3D Contrastive Predictive Coding, 3D Rotation prediction, 3D Jigsaw puzzles, Relative 3D patch location, and 3D Exemplar networks.
arXiv Detail & Related papers (2020-06-06T09:56:58Z)
- Region Proposal Network with Graph Prior and IoU-Balance Loss for Landmark Detection in 3D Ultrasound [16.523977092204813]
3D ultrasound (US) can facilitate detailed prenatal examinations for fetal growth monitoring.
To analyze a 3D US volume, it is fundamental to identify anatomical landmarks accurately.
We exploit an object detection framework to detect landmarks in 3D fetal facial US volumes.
arXiv Detail & Related papers (2020-04-01T03:00:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.