Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound
Reconstruction
- URL: http://arxiv.org/abs/2108.00274v1
- Date: Sat, 31 Jul 2021 16:06:50 GMT
- Title: Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound
Reconstruction
- Authors: Mingyuan Luo, Xin Yang, Xiaoqiong Huang, Yuhao Huang, Yuxin Zou, Xindi
Hu, Nishant Ravikumar, Alejandro F Frangi, Dong Ni
- Abstract summary: 3D freehand US reconstruction promises to address this problem by providing a broad field of view and freeform scanning.
Existing deep learning based methods focus only on basic skill sequences.
We propose a novel approach to sensorless freehand 3D US reconstruction that handles complex skill sequences.
- Score: 61.62191904755521
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D ultrasound (US) is widely used for its rich diagnostic information.
However, it is criticized for its limited field of view. 3D freehand US
reconstruction promises to address this problem by providing a broad field of
view and freeform scanning. Existing deep learning based methods focus only on
basic skill sequences, and their models rely heavily on the training data.
Sequences in real clinical practice mix diverse skills and follow complex
scanning paths. Moreover, deep models should adapt to the test cases using
prior knowledge for better robustness, rather than merely fit the training
cases. In this paper, we propose a novel approach to
sensorless freehand 3D US reconstruction considering the complex skill
sequences. Our contribution is three-fold. First, we advance a novel online
learning framework by designing a differentiable reconstruction algorithm. It
realizes an end-to-end optimization from section sequences to the reconstructed
volume. Second, a self-supervised learning method is developed to exploit the
contextual information reconstructed from the test data itself, improving the
model's perception. Third, inspired by the effectiveness of shape
prior, we also introduce adversarial training to strengthen the learning of
anatomical shape prior in the reconstructed volume. By mining the context and
structural cues of the testing data, our online learning methods can drive the
model to handle complex skill sequences. Experimental results on developmental
dysplasia of the hip US and fetal US datasets show that our proposed method
outperforms state-of-the-art methods in terms of shift errors and path
similarity.
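The first contribution above, a differentiable reconstruction algorithm, hinges on composing per-frame relative transforms into absolute probe poses so that a loss on the reconstructed volume can be back-propagated to the transform predictor. A minimal sketch of the pose-chaining step is shown below; the 4x4 matrix representation and the toy elevational-shift sequence are illustrative assumptions, not the paper's actual implementation (which would use an autodiff framework so gradients flow through the chain).

```python
import numpy as np

def compose_poses(relative_poses):
    """Chain per-frame relative rigid transforms (4x4 homogeneous matrices)
    into absolute poses. In a differentiable reconstruction pipeline this
    chaining is done inside an autodiff framework, so gradients flow from
    the reconstructed volume back to the network predicting each transform."""
    poses = [np.eye(4)]  # first frame defines the reference coordinate system
    for T in relative_poses:
        poses.append(poses[-1] @ T)  # accumulate motion frame by frame
    return poses

def translation(tx, ty, tz):
    """Build a pure-translation rigid transform."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# toy sequence: three frames, each shifted 1 mm along the elevational axis
rel = [translation(0.0, 0.0, 1.0) for _ in range(3)]
abs_poses = compose_poses(rel)
print(abs_poses[-1][2, 3])  # cumulative elevational displacement: 3.0
```

Because every absolute pose is a smooth function of the predicted relative transforms, errors measured on the assembled volume (or on its adversarially judged shape) can supervise the per-frame predictions end to end.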
Related papers
- Simulator-Based Self-Supervision for Learned 3D Tomography
Reconstruction [34.93595625809309]
Prior machine learning approaches require reference reconstructions computed by another algorithm for training.
We train our model in a fully self-supervised manner using only noisy 2D X-ray data.
Our results show significantly higher visual fidelity and better PSNR over techniques that rely on existing reconstructions.
arXiv Detail & Related papers (2022-12-14T13:21:37Z)
- Anatomy-guided domain adaptation for 3D in-bed human pose estimation [62.3463429269385]
3D human pose estimation is a key component of clinical monitoring systems.
We present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain.
Our method consistently outperforms various state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-11-22T11:34:51Z)
- BYOLMed3D: Self-Supervised Representation Learning of Medical Videos using Gradient Accumulation Assisted 3D BYOL Framework [0.0]
Supervised learning algorithms require large volumes of balanced data to learn robust representations.
Self-supervised learning algorithms tolerate data imbalance and can still learn robust representations.
We train a 3D BYOL self-supervised model using gradient accumulation to handle the large batch sizes generally required by self-supervised algorithms.
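Gradient accumulation, as used in the entry above, sums gradients over several micro-batches before applying one averaged parameter update, simulating a large effective batch on limited memory. The sketch below is a hedged illustration with a toy quadratic loss and plain SGD; the optimizer, loss, and batch structure are assumptions for demonstration, not the paper's actual BYOL training setup.

```python
import numpy as np

def sgd_with_accumulation(w, micro_batches, grad_fn, lr=0.1, accum_steps=4):
    """Sum gradients over `accum_steps` micro-batches, then apply a single
    averaged SGD update -- one update per simulated large batch."""
    acc = np.zeros_like(w)
    for i, batch in enumerate(micro_batches, 1):
        acc += grad_fn(w, batch)          # accumulate, do not update yet
        if i % accum_steps == 0:
            w = w - lr * acc / accum_steps  # averaged update
            acc = np.zeros_like(w)          # reset the accumulator
    return w

# toy quadratic loss L(w) = 0.5 * (w - mean(batch))**2, gradient w - mean(batch)
grad_fn = lambda w, b: w - np.mean(b)
batches = [np.array([1.0])] * 4
w_new = sgd_with_accumulation(np.array([0.0]), batches, grad_fn)
print(w_new)  # one averaged step of size lr toward the target
```

The key property is that the parameters are frozen while gradients accumulate, so the averaged update is identical to one computed on the concatenated large batch.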
arXiv Detail & Related papers (2022-07-31T14:48:06Z)
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
- Advancing 3D Medical Image Analysis with Variable Dimension Transform based Supervised 3D Pre-training [45.90045513731704]
This paper revisits an innovative yet simple fully-supervised 3D network pre-training framework.
With a redesigned 3D network architecture, reformulated natural images are used to address the problem of data scarcity.
Comprehensive experiments on four benchmark datasets demonstrate that the proposed pre-trained models can effectively accelerate convergence.
arXiv Detail & Related papers (2022-01-05T03:11:21Z)
- Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module which can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for a lot of tasks.
We will release our code and pre-trained models for further research.
arXiv Detail & Related papers (2021-02-01T20:58:45Z)
- Deep Optimized Priors for 3D Shape Modeling and Reconstruction [38.79018852887249]
We introduce a new learning framework for 3D modeling and reconstruction.
We show that the proposed strategy effectively breaks the barriers constrained by the pre-trained priors.
arXiv Detail & Related papers (2020-12-14T03:56:31Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people from an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.