3D WaveUNet: 3D Wavelet Integrated Encoder-Decoder Network for Neuron
Segmentation
- URL: http://arxiv.org/abs/2106.00259v1
- Date: Tue, 1 Jun 2021 06:46:50 GMT
- Title: 3D WaveUNet: 3D Wavelet Integrated Encoder-Decoder Network for Neuron
Segmentation
- Authors: Qiufu Li and Linlin Shen
- Abstract summary: We propose a 3D neuron segmentation method based on 3D wavelets and deep learning.
The integrated 3D wavelets can effectively improve the performance of 3D neuron segmentation and reconstruction.
- Score: 24.708228159529824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D neuron segmentation is a key step in digital neuron reconstruction,
which is essential for exploring brain circuits and understanding brain
functions. However, the fine, line-shaped nerve fibers of a neuron can spread
over a large region, which makes segmentation in 3D neuronal images
computationally expensive. Meanwhile, strong noise and disconnected nerve
fibers in the images pose great challenges to the task. In this paper, we
propose a 3D neuron segmentation method based on 3D wavelets and deep
learning. The neuronal image is first partitioned into neuronal cubes to
simplify the segmentation task. Then, we design 3D WaveUNet, the first 3D
wavelet-integrated encoder-decoder network, to segment the nerve fibers in the
cubes; the wavelets assist the deep networks in suppressing data noise and
connecting broken fibers. We also produce a Neuronal Cube Dataset (NeuCuDa)
from the largest available annotated neuronal image dataset, BigNeuron, to
train 3D WaveUNet. Finally, the nerve fibers segmented in the cubes are
assembled into the complete neuron, which is digitally reconstructed using an
available automatic tracing algorithm. The experimental results show that our
neuron segmentation method can completely extract the target neuron from noisy
neuronal images, and that the integrated 3D wavelets effectively improve the
performance of 3D neuron segmentation and reconstruction. The code and
pre-trained models for this work will be available at
https://github.com/LiQiufu/3D-WaveUNet.
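As a rough illustration of the wavelet-integrated idea, the sketch below implements a single-level 3D Haar DWT and its inverse in PyTorch, which could stand in for the pooling and up-sampling operators of a 3D encoder-decoder. This is not the authors' implementation: the abstract does not fix the wavelet family, and the helper names (`haar_dwt3d`, `haar_idwt3d`) as well as the routing of the high-frequency subbands are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): a single-level 3D Haar DWT/IDWT pair
# that a 3D encoder-decoder could use in place of pooling and up-sampling.
import torch


def haar_dwt3d(x):
    """Single-level 3D Haar DWT of a volume x with shape (B, C, D, H, W).

    D, H and W must be even. Returns the low-frequency band LLL and the
    seven high-frequency bands, each of shape (B, C, D/2, H/2, W/2).
    """
    s = 2 ** -1.5  # orthonormal scaling: 1/sqrt(2) per spatial axis
    # The eight interleaved octants of the volume.
    x000 = x[:, :, 0::2, 0::2, 0::2]
    x001 = x[:, :, 0::2, 0::2, 1::2]
    x010 = x[:, :, 0::2, 1::2, 0::2]
    x011 = x[:, :, 0::2, 1::2, 1::2]
    x100 = x[:, :, 1::2, 0::2, 0::2]
    x101 = x[:, :, 1::2, 0::2, 1::2]
    x110 = x[:, :, 1::2, 1::2, 0::2]
    x111 = x[:, :, 1::2, 1::2, 1::2]
    lll = s * (x000 + x001 + x010 + x011 + x100 + x101 + x110 + x111)
    highs = (
        s * (x000 - x001 + x010 - x011 + x100 - x101 + x110 - x111),  # LLH
        s * (x000 + x001 - x010 - x011 + x100 + x101 - x110 - x111),  # LHL
        s * (x000 - x001 - x010 + x011 + x100 - x101 - x110 + x111),  # LHH
        s * (x000 + x001 + x010 + x011 - x100 - x101 - x110 - x111),  # HLL
        s * (x000 - x001 + x010 - x011 - x100 + x101 - x110 + x111),  # HLH
        s * (x000 + x001 - x010 - x011 - x100 - x101 + x110 + x111),  # HHL
        s * (x000 - x001 - x010 + x011 - x100 + x101 + x110 - x111),  # HHH
    )
    return lll, highs


def haar_idwt3d(lll, highs):
    """Exact inverse of haar_dwt3d: rebuilds the full-resolution volume."""
    s = 2 ** -1.5
    llh, lhl, lhh, hll, hlh, hhl, hhh = highs
    b, c, d, h, w = lll.shape
    x = lll.new_zeros(b, c, 2 * d, 2 * h, 2 * w)
    x[:, :, 0::2, 0::2, 0::2] = s * (lll + llh + lhl + lhh + hll + hlh + hhl + hhh)
    x[:, :, 0::2, 0::2, 1::2] = s * (lll - llh + lhl - lhh + hll - hlh + hhl - hhh)
    x[:, :, 0::2, 1::2, 0::2] = s * (lll + llh - lhl - lhh + hll + hlh - hhl - hhh)
    x[:, :, 0::2, 1::2, 1::2] = s * (lll - llh - lhl + lhh + hll - hlh - hhl + hhh)
    x[:, :, 1::2, 0::2, 0::2] = s * (lll + llh + lhl + lhh - hll - hlh - hhl - hhh)
    x[:, :, 1::2, 0::2, 1::2] = s * (lll - llh + lhl - lhh - hll + hlh - hhl + hhh)
    x[:, :, 1::2, 1::2, 0::2] = s * (lll + llh - lhl - lhh - hll - hlh + hhl + hhh)
    x[:, :, 1::2, 1::2, 1::2] = s * (lll - llh - lhl + lhh - hll + hlh + hhl - hhh)
    return x


if __name__ == "__main__":
    cube = torch.randn(1, 1, 32, 128, 128)       # a hypothetical neuronal cube
    lll, highs = haar_dwt3d(cube)                 # encoder: wavelet "pooling"
    print(lll.shape)                              # torch.Size([1, 1, 16, 64, 64])
    rec = haar_idwt3d(lll, highs)                 # decoder: wavelet up-sampling
    print(torch.allclose(rec, cube, atol=1e-5))   # True: perfect reconstruction
```

In an encoder, `lll` would take the place of a max-pooled feature map, while the seven high-frequency bands could be carried over to the matching decoder stage and recombined by the inverse transform; whether and how the network filters those bands before recombination is not specified by the abstract and is left open here.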
Related papers
- NeuroFly: A framework for whole-brain single neuron reconstruction [17.93211301158225]
We introduce NeuroFly, a validated framework for large-scale automatic single neuron reconstruction.
NeuroFly breaks down the process into three distinct stages: segmentation, connection, and proofreading.
Our goal is to foster collaboration among researchers to address the neuron reconstruction challenge.
arXiv Detail & Related papers (2024-11-07T13:56:13Z)
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of the human vision system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- Boosting 3D Neuron Segmentation with 2D Vision Transformer Pre-trained on Natural Images [10.790999324557179]
We propose a novel training paradigm that leverages a 2D Vision Transformer model pre-trained on large-scale natural images.
Our method builds a knowledge-sharing connection between the abundant natural image domain and the scarce neuron image domain to improve 3D neuron segmentation ability.
Evaluated on a popular benchmark, BigNeuron, our method enhances neuron segmentation performance by 8.71% over the model trained from scratch.
arXiv Detail & Related papers (2024-05-04T14:57:28Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- SegNeRF: 3D Part Segmentation with Neural Radiance Fields [63.12841224024818]
SegNeRF is a neural field representation that integrates a semantic field along with the usual radiance field.
SegNeRF is capable of simultaneously predicting geometry, appearance, and semantic information from posed images, even for unseen objects.
SegNeRF is able to generate an explicit 3D model from a single image of an object taken in the wild, with its corresponding part segmentation.
arXiv Detail & Related papers (2022-11-21T07:16:03Z)
- PointNeuron: 3D Neuron Reconstruction via Geometry and Topology Learning of Point Clouds [18.738943602529805]
We propose a novel framework for 3D neuron reconstruction.
Our key idea is to use the geometric representation power of the point cloud to better explore the intrinsic structural information of neurons.
arXiv Detail & Related papers (2022-10-15T14:11:56Z)
- Voxel-wise Cross-Volume Representation Learning for 3D Neuron Reconstruction [27.836007480393953]
We propose a novel voxel-level cross-volume representation learning paradigm on the basis of an encoder-decoder segmentation model.
Our method introduces no extra cost during inference.
Evaluated on 42 3D neuron images from the BigNeuron project, our proposed method is demonstrated to improve the learning ability of the original segmentation model.
arXiv Detail & Related papers (2021-08-14T12:17:45Z)
- Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images [56.652027072552606]
We propose a novel framework for single-view and multi-view 3D object reconstruction, named Pix2Vox++.
By using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image.
A multi-scale context-aware fusion module is then introduced to adaptively select high-quality reconstructions for different parts from all coarse 3D volumes to obtain a fused 3D volume.
arXiv Detail & Related papers (2020-06-22T13:48:09Z)
- Self-Supervised Feature Extraction for 3D Axon Segmentation [7.181047714452116]
Existing learning-based methods to automatically trace axons in 3D brain imagery often rely on manually annotated segmentation labels.
We propose a self-supervised auxiliary task that utilizes the tube-like structure of axons to build a feature extractor from unlabeled data.
We demonstrate improved segmentation performance over the 3D U-Net model on both the SHIELD PVGPe dataset and the single-neuron Janelia dataset from the BigNeuron project.
arXiv Detail & Related papers (2020-04-20T20:46:04Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.