Semantic Segmentation of Neuronal Bodies in Fluorescence Microscopy
Using a 2D+3D CNN Training Strategy with Sparsely Annotated Data
- URL: http://arxiv.org/abs/2009.00029v2
- Date: Wed, 2 Sep 2020 00:37:53 GMT
- Title: Semantic Segmentation of Neuronal Bodies in Fluorescence Microscopy
Using a 2D+3D CNN Training Strategy with Sparsely Annotated Data
- Authors: Filippo Maria Castelli, Matteo Roffilli, Giacomo Mazzamuto, Irene
Costantini, Ludovico Silvestri and Francesco Saverio Pavone
- Abstract summary: Bidimensional CNNs yield good results in neuron localization but lead to inaccurate surface reconstruction.
3D CNNs would require manually annotated data on a large scale and hence considerable human effort.
We propose a two-phase strategy for training native 3D CNN models on sparse 2D annotations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Semantic segmentation of neuronal structures in 3D high-resolution
fluorescence microscopy imaging of the human brain cortex can take advantage of
bidimensional CNNs, which yield good results in neuron localization but lead to
inaccurate surface reconstruction. 3D CNNs, on the other hand, would require
manually annotated volumetric data on a large scale and hence considerable
human effort. Semi-supervised alternative strategies that make use only of
sparse annotations suffer from longer training times, and the resulting models
tend to have increased capacity compared to 2D CNNs, needing more ground truth
data to attain similar results. To overcome these issues, we propose a two-phase
strategy for training native 3D CNN models on sparse 2D annotations where
missing labels are inferred by a 2D CNN model and combined with manual
annotations in a weighted manner during loss calculation.
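The abstract does not spell out the exact loss formulation, but one plausible reading of this weighted combination is a per-voxel weighted cross-entropy in which manually annotated voxels receive full weight and voxels labelled by the 2D CNN receive a reduced weight. The sketch below is a minimal illustration under that assumption; the tensor layout, the `pseudo_weight` value, and the function name are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def weighted_sparse_loss(logits, manual_labels, manual_mask, pseudo_labels,
                         pseudo_weight=0.3):
    """Per-voxel weighted cross-entropy mixing sparse manual annotations with
    labels inferred by a 2D CNN (illustrative formulation, not the paper's).

    logits:        (B, C, D, H, W) raw 3D CNN outputs
    manual_labels: (B, D, H, W) integer labels, valid where manual_mask == 1
    manual_mask:   (B, D, H, W) 1 on manually annotated voxels, 0 elsewhere
    pseudo_labels: (B, D, H, W) integer labels predicted slice-wise by the 2D CNN
    pseudo_weight: down-weighting factor for the noisier inferred labels
    """
    # Use the manual label where one exists, the 2D-CNN prediction elsewhere.
    targets = torch.where(manual_mask.bool(), manual_labels, pseudo_labels)

    # Full weight on manual annotations, reduced weight on pseudo-labels.
    weights = torch.where(
        manual_mask.bool(),
        torch.ones_like(manual_mask, dtype=torch.float32),
        torch.full_like(manual_mask, pseudo_weight, dtype=torch.float32),
    )

    per_voxel = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_voxel).sum() / weights.sum()
```

In this reading, phase one trains the 2D CNN on the sparsely annotated slices, and phase two trains the native 3D CNN on whole volumes whose unannotated voxels carry the down-weighted 2D predictions.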
Related papers
- Self-supervised learning via inter-modal reconstruction and feature projection networks for label-efficient 3D-to-2D segmentation [4.5206601127476445]
We propose a novel convolutional neural network (CNN) and self-supervised learning (SSL) method for label-efficient 3D-to-2D segmentation.
Results on different datasets demonstrate that the proposed CNN significantly improves the state of the art in scenarios with limited labeled data by up to 8% in Dice score.
arXiv Detail & Related papers (2023-07-06T14:16:25Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- Decomposing 3D Neuroimaging into 2+1D Processing for Schizophrenia Recognition [25.80846093248797]
We propose to process the 3D data by a 2+1D framework so that we can exploit the powerful deep 2D Convolutional Neural Network (CNN) networks pre-trained on the huge ImageNet dataset for 3D neuroimaging recognition.
Specifically, 3D volumes of Magnetic Resonance Imaging (MRI) metrics are decomposed to 2D slices according to neighboring voxel positions.
Global pooling is applied to remove redundant information as the activation patterns are sparsely distributed over feature maps.
Channel-wise and slice-wise convolutions are proposed to aggregate the contextual information in the third dimension left unprocessed by the 2D CNN model (a minimal sketch of this 2+1D idea appears after this list).
arXiv Detail & Related papers (2022-11-21T15:22:59Z)
- Enforcing connectivity of 3D linear structures using their 2D projections [54.0598511446694]
We propose to improve the 3D connectivity of our results by minimizing a sum of topology-aware losses on their 2D projections.
This suffices to increase accuracy and to reduce the effort required to provide annotated training data.
arXiv Detail & Related papers (2022-07-14T11:42:18Z)
- Hyperspectral Image Classification: Artifacts of Dimension Reduction on Hybrid CNN [1.2875323263074796]
2D and 3D CNN models have proved highly efficient in exploiting the spatial and spectral information of Hyperspectral Images.
This work proposes a lightweight CNN (3D followed by 2D CNN) model that significantly reduces the computational cost.
arXiv Detail & Related papers (2021-01-25T18:43:57Z)
- Learning Hybrid Representations for Automatic 3D Vessel Centerline Extraction [57.74609918453932]
Automatic blood vessel extraction from 3D medical images is crucial for vascular disease diagnoses.
Existing methods may suffer from discontinuities of extracted vessels when segmenting such thin tubular structures from 3D images.
We argue that preserving the continuity of extracted vessels requires taking the global geometry into account.
We propose a hybrid representation learning approach to address this challenge.
arXiv Detail & Related papers (2020-12-14T05:22:49Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a framework based on 3D cylinder partition and 3D cylinder convolution, termed Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
- DDU-Nets: Distributed Dense Model for 3D MRI Brain Tumor Segmentation [27.547646527286886]
Three patterns of distributed dense connections (DDCs) are proposed to enhance feature reuse and propagation of CNNs.
To better detect and segment brain tumors from 3D MR images, CNN-based models embedded with DDCs (DDU-Nets) are trained efficiently from pixel to pixel.
The proposed method is evaluated on the BraTS 2019 dataset with results demonstrating the effectiveness of the DDU-Nets.
arXiv Detail & Related papers (2020-03-03T05:08:34Z)
- 2.75D: Boosting learning by representing 3D Medical imaging to 2D features for small data [54.223614679807994]
3D convolutional neural networks (CNNs) have started to show superior performance to 2D CNNs in numerous deep learning tasks.
Applying transfer learning to 3D CNNs is challenging due to a lack of publicly available pre-trained 3D models.
In this work, we propose a novel 2D strategic representation of volumetric data, namely 2.75D.
As a result, 2D CNNs can also be used to learn volumetric information.
arXiv Detail & Related papers (2020-02-11T08:24:19Z)
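The 2+1D entry above describes a concrete pipeline: decompose the volume into 2D slices, extract per-slice features with a 2D CNN pre-trained on ImageNet, then aggregate context across slices. The sketch below only illustrates that idea; the backbone choice, feature sizes, and aggregation details are assumptions made here, not those of the cited paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoPlusOneD(nn.Module):
    """Illustrative 2+1D classifier: an ImageNet-pretrained 2D backbone
    processes each slice of a 3D volume, and a slice-wise 1D convolution
    aggregates context along the third dimension."""

    def __init__(self, num_classes=2):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()          # keep the 512-d slice features
        self.backbone = backbone
        self.slice_conv = nn.Conv1d(512, 128, kernel_size=3, padding=1)
        self.head = nn.Linear(128, num_classes)

    def forward(self, volume):                # volume: (B, D, H, W)
        b, d, h, w = volume.shape
        # Treat each slice as a grayscale image repeated over 3 channels.
        slices = volume.reshape(b * d, 1, h, w).repeat(1, 3, 1, 1)
        feats = self.backbone(slices)                       # (B*D, 512)
        feats = feats.reshape(b, d, 512).permute(0, 2, 1)   # (B, 512, D)
        feats = self.slice_conv(feats)                      # (B, 128, D)
        pooled = feats.mean(dim=2)            # global pooling over slices
        return self.head(pooled)
```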
This list is automatically generated from the titles and abstracts of the papers in this site.