3D B-mode ultrasound speckle reduction using deep learning for 3D
registration applications
- URL: http://arxiv.org/abs/2008.01147v1
- Date: Mon, 3 Aug 2020 19:29:59 GMT
- Title: 3D B-mode ultrasound speckle reduction using deep learning for 3D
registration applications
- Authors: Hongliang Li, Tal Mezheritsky, Liset Vazquez Romaguera, Samuel Kadoury
- Abstract summary: We show that our deep learning framework achieves a speckle suppression and mean preservation index (1.066) similar to that of conventional filtering approaches.
Speckle reduction with our deep learning model is also found to improve 3D registration performance.
- Score: 8.797635433767423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) speckle is a granular pattern that can impede image
post-processing tasks, such as image segmentation and registration.
Conventional filtering approaches are commonly used to remove US speckle, but
their main drawback is a long run-time in 3D scenarios. Although a few studies
have applied deep learning to 2D US speckle removal, to our knowledge no study
has performed speckle reduction of 3D B-mode US using deep learning. In this
study, we propose a 3D dense U-Net model to process 3D US B-mode data from a
clinical US system, and apply the model's output to 3D registration. We show
that our deep learning framework achieves a speckle suppression and mean
preservation index (1.066) comparable to conventional filtering approaches
(0.978), while reducing the run-time by two orders of magnitude. Moreover,
speckle reduction with our deep learning model improves 3D registration
performance: the mean square error of 3D registration using 3D U-Net speckle
reduction is half of that obtained with speckled data.
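As a point of reference, below is a minimal sketch of how a speckle suppression and mean preservation index (SMPI) of this kind can be computed. The abstract does not spell out the exact formulation, so the sketch assumes the common Shamsoddini-Trinder definition; the function, the synthetic gamma-speckle volume, and the stand-in "denoised" output are illustrative assumptions, not the paper's code.

    import numpy as np

    def smpi(original, filtered):
        # Speckle suppression and mean preservation index; lower is better.
        # Assumed formulation (Shamsoddini & Trinder): a mean-shift penalty
        # times the ratio of standard deviations after vs. before filtering.
        original = np.asarray(original, dtype=np.float64)
        filtered = np.asarray(filtered, dtype=np.float64)
        q = 1.0 + abs(original.mean() - filtered.mean())
        return q * filtered.std() / original.std()

    # Toy usage on a synthetic speckled volume (illustration only).
    rng = np.random.default_rng(0)
    clean = np.full((32, 32, 32), 100.0)
    speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
    denoised = 0.9 * clean + 0.1 * speckled  # stand-in for a model's output
    print(f"SMPI: {smpi(speckled, denoised):.3f}")

Under this definition the penalty term q grows when filtering biases the mean intensity, while the standard-deviation ratio shrinks as speckle is suppressed, so a lower SMPI indicates better joint suppression and mean preservation.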
Related papers
- DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is trained directly on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z)
- UniPAD: A Universal Pre-training Paradigm for Autonomous Driving [74.34701012543968]
We present UniPAD, a novel self-supervised learning paradigm applying 3D differentiable rendering.
UniPAD implicitly encodes 3D space, facilitating the reconstruction of continuous 3D shape structures.
Our method significantly improves lidar-, camera-, and lidar-camera-based baselines by 9.1, 7.7, and 6.9 NDS, respectively.
arXiv Detail & Related papers (2023-10-12T14:39:58Z)
- Data Efficient 3D Learner via Knowledge Transferred from 2D Model [30.077342050473515]
We deal with the data scarcity challenge of 3D tasks by transferring knowledge from strong 2D models via RGB-D images.
We utilize a strong, well-trained semantic segmentation model for 2D images to augment RGB-D images with pseudo-labels.
Our method already outperforms existing state-of-the-art methods tailored for 3D label efficiency.
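As a hypothetical illustration of that pseudo-labeling step (the specific teacher model, tensor shapes, and helper below are assumptions, not details from the paper):

    import torch
    from torchvision.models.segmentation import fcn_resnet50

    # A pre-trained 2D semantic segmentation network acts as the teacher.
    model = fcn_resnet50(weights="DEFAULT").eval()

    @torch.no_grad()
    def pseudo_label(rgb, depth):
        # rgb: (3, H, W) float tensor in [0, 1]; depth: (H, W) tensor.
        logits = model(rgb.unsqueeze(0))["out"]   # (1, C, H, W) class scores
        labels = logits.argmax(dim=1).squeeze(0)  # (H, W) per-pixel classes
        # Each depth pixel now carries a semantic pseudo-label, giving
        # "free" supervision for a downstream 3D network.
        return labels, depth

    labels, depth = pseudo_label(torch.rand(3, 240, 320), torch.rand(240, 320))
    print(labels.shape)  # torch.Size([240, 320])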
arXiv Detail & Related papers (2022-03-16T09:14:44Z)
- Fast mesh denoising with data driven normal filtering using deep variational autoencoders [6.25118865553438]
We propose a fast and robust denoising method for dense 3D scanned industrial models.
The proposed approach employs conditional variational autoencoders to effectively filter face normals.
For 3D models with more than 1e4 faces, the presented pipeline is twice as fast as methods with equivalent reconstruction error.
arXiv Detail & Related papers (2021-11-24T20:25:15Z)
- 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether using increased spatial context by using MRI volumes combined with spatial erasing leads to improved unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of data set size on performance.
Our best performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
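For context, the DICE score reported above is the standard overlap metric; a minimal sketch of its computation on binary 3D anomaly masks (illustrative code, not the paper's):

    import numpy as np

    def dice_score(pred, target, eps=1e-8):
        # Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks.
        pred, target = np.asarray(pred, bool), np.asarray(target, bool)
        intersection = np.logical_and(pred, target).sum()
        return 2.0 * intersection / (pred.sum() + target.sum() + eps)

    # Toy 3D masks: predicted vs. ground-truth anomaly voxels.
    pred = np.zeros((8, 8, 8), bool); pred[2:5, 2:5, 2:5] = True
    truth = np.zeros((8, 8, 8), bool); truth[3:6, 3:6, 3:6] = True
    print(f"Dice: {dice_score(pred, truth):.4f}")  # ~0.2963 here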
arXiv Detail & Related papers (2021-09-14T09:17:27Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that enables us to leverage 3D features extracted from large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
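A generic sketch of the distillation in the first step above: an L2 loss pulls the 2D student's features toward those of a frozen 3D teacher. The tensor shapes and loss choice are assumptions; the paper's actual losses and normalization scheme are not given here:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_feats, teacher_feats):
        # L2 distance between the 2D student's features and features derived
        # from a frozen 3D teacher; detach() stops gradients to the teacher.
        return F.mse_loss(student_feats, teacher_feats.detach())

    # Toy tensors standing in for (batch, channels, H, W) feature maps.
    student = torch.randn(2, 64, 32, 32, requires_grad=True)
    teacher = torch.randn(2, 64, 32, 32)
    loss = distillation_loss(student, teacher)
    loss.backward()  # gradients flow only into the student features
    print(loss.item())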
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
- Synthetic Training for Monocular Human Mesh Recovery [100.38109761268639]
This paper aims to estimate the 3D mesh of multiple body parts with large-scale differences from a single RGB image.
The main challenge is the lack of training data with complete 3D annotations of all body parts in 2D images.
We propose a depth-to-scale (D2S) projection to incorporate the depth difference into the projection function to derive per-joint scale variants.
arXiv Detail & Related papers (2020-10-27T03:31:35Z)
- Sensorless Freehand 3D Ultrasound Reconstruction via Deep Contextual Learning [13.844630500061378]
Current methods for 3D volume reconstruction from freehand US scans require external tracking devices to provide spatial position for every frame.
We propose a deep contextual learning network (DCL-Net), which can efficiently exploit the image feature relationship between US frames and reconstruct 3D US volumes without any tracking device.
arXiv Detail & Related papers (2020-06-13T18:37:30Z)
- 3D Self-Supervised Methods for Medical Imaging [7.65168530693281]
We propose 3D versions for five different self-supervised methods, in the form of proxy tasks.
Our methods facilitate neural network feature learning from unlabeled 3D images, aiming to reduce the required cost for expert annotation.
The developed algorithms are 3D Contrastive Predictive Coding, 3D Rotation prediction, 3D Jigsaw puzzles, Relative 3D patch location, and 3D Exemplar networks.
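As a toy illustration of one of these proxy tasks, 3D rotation prediction can be posed as classification over a fixed rotation set; the four 90-degree in-plane rotations below are an assumed label set for the sketch, not necessarily the paper's formulation:

    import numpy as np

    def make_rotation_sample(volume, rng):
        # Rotate a 3D volume by a random multiple of 90 degrees in one plane;
        # the proxy task is to classify which rotation was applied.
        k = int(rng.integers(0, 4))  # label in {0, 1, 2, 3}
        return np.rot90(volume, k=k, axes=(0, 1)).copy(), k

    rng = np.random.default_rng(42)
    volume = rng.random((16, 16, 16)).astype(np.float32)
    x, y = make_rotation_sample(volume, rng)
    print(x.shape, "rotation class:", y)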
arXiv Detail & Related papers (2020-06-06T09:56:58Z)
- Exemplar Fine-Tuning for 3D Human Model Fitting Towards In-the-Wild 3D Human Pose Estimation [107.07047303858664]
Large-scale human datasets with 3D ground-truth annotations are difficult to obtain in the wild.
We address this problem by augmenting existing 2D datasets with high-quality 3D pose fits.
The resulting annotations are sufficient to train from scratch 3D pose regressor networks that outperform the current state-of-the-art on in-the-wild benchmarks.
arXiv Detail & Related papers (2020-04-07T20:21:18Z)