3D Convolutional Neural Networks for Dendrite Segmentation Using
Fine-Tuning and Hyperparameter Optimization
- URL: http://arxiv.org/abs/2205.01167v1
- Date: Mon, 2 May 2022 19:20:05 GMT
- Title: 3D Convolutional Neural Networks for Dendrite Segmentation Using
Fine-Tuning and Hyperparameter Optimization
- Authors: Jim James, Nathan Pruyne, Tiberiu Stan, Marcus Schwarting, Jiwon Yeom,
Seungbum Hong, Peter Voorhees, Ben Blaiszik, Ian Foster
- Abstract summary: We train 3D convolutional neural networks (CNNs) to segment 3D datasets.
The trained 3D CNNs are able to segment entire 852 x 852 x 250 voxel 3D volumes in only ~60 seconds.
- Score: 0.06323908398583082
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dendritic microstructures are ubiquitous in nature and are the primary
solidification morphologies in metallic materials. Techniques such as x-ray
computed tomography (XCT) have provided new insights into dendritic phase
transformation phenomena. However, manual identification of dendritic
morphologies in microscopy data can be both labor intensive and potentially
ambiguous. The analysis of 3D datasets is particularly challenging due to their
large sizes (terabytes) and the presence of artifacts scattered within the
imaged volumes. In this study, we trained 3D convolutional neural networks
(CNNs) to segment 3D datasets. Three CNN architectures were investigated,
including a new 3D version of FCDense. We show that using hyperparameter
optimization (HPO) and fine-tuning techniques, both 2D and 3D CNN architectures
can be trained to outperform the previous state of the art. The 3D U-Net
architecture trained in this study produced the best segmentations according to
quantitative metrics (pixel-wise accuracy of 99.84% and a boundary displacement
error of 0.58 pixels), while 3D FCDense produced the smoothest boundaries and
best segmentations according to visual inspection. The trained 3D CNNs are able
to segment entire 852 x 852 x 250 voxel 3D volumes in only ~60 seconds, thus
hastening the progress towards a deeper understanding of phase transformation
phenomena such as dendritic solidification.
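The abstract's key operation, a 3D convolution sweeping a kernel through a voxel volume, can be sketched minimally as follows. This is an illustrative toy, not the authors' trained networks: the volume size, kernel (a simple 3x3x3 average), and threshold are arbitrary placeholder values, and a real 3D U-Net or FCDense stacks many learned multi-channel layers.

```python
import numpy as np

def conv3d_single_channel(volume, kernel):
    """Naive valid-mode 3D convolution of one volume with one kernel."""
    kd, kh, kw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # Weighted sum over the local 3D neighborhood.
                out[z, y, x] = np.sum(volume[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

# Toy 8x8x8 volume and a 3x3x3 averaging kernel (placeholders, not trained weights).
volume = np.random.rand(8, 8, 8)
kernel = np.ones((3, 3, 3)) / 27.0
features = conv3d_single_channel(volume, kernel)

# Thresholding the feature map yields a binary segmentation mask,
# analogous to the final voxel-wise classification step.
segmentation = (features > 0.5).astype(np.uint8)
print(features.shape)  # (6, 6, 6): valid convolution shrinks each axis by 2
```

In practice such convolutions run on GPU-optimized libraries rather than Python loops, which is why the trained networks can process a full 852 x 852 x 250 volume in about a minute.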
Related papers
- Simultaneous Alignment and Surface Regression Using Hybrid 2D-3D
Networks for 3D Coherent Layer Segmentation of Retinal OCT Images with Full
and Sparse Annotations [32.69359482975795]
This work presents a novel framework based on hybrid 2D-3D convolutional neural networks (CNNs) to obtain continuous 3D retinal layer surfaces from OCT volumes.
Experiments on a synthetic dataset and three public clinical datasets show that our framework can effectively align the B-scans for potential motion correction.
arXiv Detail & Related papers (2023-12-04T08:32:31Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
- Simultaneous Alignment and Surface Regression Using Hybrid 2D-3D Networks for 3D Coherent Layer Segmentation of Retina OCT Images [33.99874168018807]
In this study, a novel framework based on hybrid 2D-3D convolutional neural networks (CNNs) is proposed to obtain continuous 3D retinal layer surfaces from OCT volumes.
Our framework achieves superior results to state-of-the-art 2D methods in terms of both layer segmentation accuracy and cross-B-scan 3D continuity.
arXiv Detail & Related papers (2022-03-04T15:55:09Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- A modular U-Net for automated segmentation of X-ray tomography images in composite materials [0.0]
Deep learning has demonstrated success in many image processing tasks, including material science applications.
In this paper, a modular interpretation of U-Net is proposed and trained to segment 3D tomography images of a three-phase glass fiber-reinforced Polyamide 66 composite.
We observe that human-comparable results can be achieved even with only 10 annotated layers, and that a shallow U-Net yields better results than a deeper one.
arXiv Detail & Related papers (2021-07-15T17:15:24Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Uniformizing Techniques to Process CT scans with 3D CNNs for Tuberculosis Prediction [5.270882613122642]
A common approach to medical image analysis on volumetric data uses deep 2D convolutional neural networks (CNNs).
Dealing with the individual slices independently in 2D CNNs deliberately discards the depth information, which results in poor performance for the intended task.
We evaluate a set of volume uniformizing methods to address the aforementioned issues.
We report an area under the curve (AUC) of 73% and a binary classification accuracy (ACC) of 67.5% on the test set, beating all methods that leveraged only image information.
arXiv Detail & Related papers (2020-07-26T21:53:47Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
- 2.75D: Boosting learning by representing 3D Medical imaging to 2D features for small data [54.223614679807994]
3D convolutional neural networks (CNNs) have started to show superior performance to 2D CNNs in numerous deep learning tasks.
Applying transfer learning on 3D CNN is challenging due to a lack of publicly available pre-trained 3D models.
In this work, we propose a novel strategic 2D representation of volumetric data, namely 2.75D.
As a result, 2D CNN networks can also be used to learn volumetric information.
arXiv Detail & Related papers (2020-02-11T08:24:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.