Voxel-wise Cross-Volume Representation Learning for 3D Neuron
Reconstruction
- URL: http://arxiv.org/abs/2108.06522v1
- Date: Sat, 14 Aug 2021 12:17:45 GMT
- Title: Voxel-wise Cross-Volume Representation Learning for 3D Neuron
Reconstruction
- Authors: Heng Wang, Chaoyi Zhang, Jianhui Yu, Yang Song, Siqi Liu, Wojciech
Chrzanowski, Weidong Cai
- Abstract summary: We propose a novel voxel-level cross-volume representation learning paradigm on the basis of an encoder-decoder segmentation model.
Our method introduces no extra cost during inference.
Evaluated on 42 3D neuron images from the BigNeuron project, our proposed method is demonstrated to improve the learning ability of the original segmentation model.
- Score: 27.836007480393953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic 3D neuron reconstruction is critical for analysing the morphology
and functionality of neurons in brain circuit activities. However, the
performance of existing tracing algorithms is limited by low image quality.
Recently, a series of deep learning based segmentation methods have been
proposed to improve the quality of raw 3D optical image stacks by removing
noise and restoring neuronal structures from low-contrast backgrounds. Due to
the variety of neuron morphology and the lack of large neuron datasets, most
current neuron segmentation models rely on introducing complex and
specially-designed submodules to a base architecture with the aim of encoding
better feature representations. Though successful, these additions impose an
extra computational burden during inference. Therefore, rather than modifying the base
network, we shift our focus to the dataset itself. The encoder-decoder backbone
used in most neuron segmentation models attends only to intra-volume voxel points
to learn structural features of neurons but neglects the shared intrinsic
semantic features of voxels belonging to the same category among different
volumes, which is also important for expressive representation learning. Hence,
to better utilise the scarce dataset, we propose to explicitly exploit such
intrinsic features of voxels through a novel voxel-level cross-volume
representation learning paradigm on the basis of an encoder-decoder
segmentation model. Our method introduces no extra cost during inference.
Evaluated on 42 3D neuron images from the BigNeuron project, our proposed method is
demonstrated to improve the learning ability of the original segmentation model
and further enhance the reconstruction performance.
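The abstract does not give the exact formulation, but the described paradigm maps naturally onto an auxiliary contrastive objective over voxels drawn from different volumes. The sketch below (PyTorch, with hypothetical function and variable names, not the authors' released code) samples voxel features from two training volumes and pulls same-class voxels together across volumes while pushing foreground and background apart; because the term is only a training-time loss on intermediate features, dropping it at test time leaves inference cost unchanged, consistent with the claim above.

```python
# Minimal sketch of a voxel-wise cross-volume contrastive term added to a
# standard encoder-decoder segmentation loss (assumed formulation, not the
# paper's exact method). Voxels of the same class from *different* volumes
# are treated as positives; the auxiliary term is dropped at inference.
import torch
import torch.nn.functional as F

def cross_volume_contrastive_loss(feats_a, feats_b, labels_a, labels_b,
                                  num_samples=256, temperature=0.1):
    """feats_*: (C, D, H, W) decoder features for two different volumes.
    labels_*: (D, H, W) binary voxel labels (1 = neuron, 0 = background)."""
    def sample(feats, labels):
        flat_f = feats.flatten(1).t()          # (N_voxels, C)
        flat_l = labels.flatten()              # (N_voxels,)
        idx = torch.randperm(flat_l.numel())[:num_samples]
        return F.normalize(flat_f[idx], dim=1), flat_l[idx]

    za, ya = sample(feats_a, labels_a)
    zb, yb = sample(feats_b, labels_b)

    # Cosine similarity between voxels of volume A and voxels of volume B.
    sim = za @ zb.t() / temperature            # (num_samples, num_samples)
    pos_mask = (ya.unsqueeze(1) == yb.unsqueeze(0)).float()

    # InfoNCE-style loss: same-class voxels in the other volume are positives,
    # different-class voxels are negatives.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()

# Training step (hypothetical names): total loss = per-volume segmentation
# loss plus a small weight on the cross-volume term, e.g.
#   loss = seg_loss(pred_a, labels_a) + seg_loss(pred_b, labels_b) \
#          + 0.1 * cross_volume_contrastive_loss(feat_a, feat_b,
#                                                labels_a, labels_b)
```

The weighting factor and the number of sampled voxels are illustrative; the key point is that only the segmentation branch is kept at test time, so the deployed network is identical to the unmodified baseline.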
Related papers
- DINeuro: Distilling Knowledge from 2D Natural Images via Deformable Tubular Transferring Strategy for 3D Neuron Reconstruction [10.100192103585925]
Reconstructing neuron morphology from 3D light microscope imaging data is critical to aid neuroscientists in analyzing brain networks and neuroanatomy.
We propose a deformable tubular transferring strategy that adapts the pre-trained 2D natural knowledge to the inherent tubular characteristics of neuronal structure in the latent embedding space.
arXiv Detail & Related papers (2024-10-29T14:36:03Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Boosting 3D Neuron Segmentation with 2D Vision Transformer Pre-trained on Natural Images [10.790999324557179]
We propose a novel training paradigm that leverages a 2D Vision Transformer model pre-trained on large-scale natural images.
Our method builds a knowledge sharing connection between the abundant natural and the scarce neuron image domains to improve the 3D neuron segmentation ability.
Evaluated on a popular benchmark, BigNeuron, our method enhances neuron segmentation performance by 8.71% over the model trained from scratch.
arXiv Detail & Related papers (2024-05-04T14:57:28Z) - MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z) - Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - Deep Learning for real-time neural decoding of grasp [0.0]
We present a Deep Learning-based approach to the decoding of neural signals for grasp type classification.
The main goal of the presented approach is to improve over state-of-the-art decoding accuracy without relying on any prior neuroscience knowledge.
arXiv Detail & Related papers (2023-11-02T08:26:29Z) - Convolutional Neural Generative Coding: Scaling Predictive Coding to
Natural Images [79.07468367923619]
We develop convolutional neural generative coding (Conv-NGC)
We implement a flexible neurobiologically-motivated algorithm that progressively refines latent state maps.
We study the effectiveness of our brain-inspired neural system on the tasks of reconstruction and image denoising.
arXiv Detail & Related papers (2022-11-22T06:42:41Z) - PointNeuron: 3D Neuron Reconstruction via Geometry and Topology Learning
of Point Clouds [18.738943602529805]
We propose a novel framework for 3D neuron reconstruction.
Our key idea is to use the geometric representation power of the point cloud to better explore the intrinsic structural information of neurons.
arXiv Detail & Related papers (2022-10-15T14:11:56Z) - Dynamic Neural Diversification: Path to Computationally Sustainable
Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z) - Single Neuron Segmentation using Graph-based Global Reasoning with
Auxiliary Skeleton Loss from 3D Optical Microscope Images [30.539098538610013]
We present an end-to-end segmentation network by jointly considering the local appearance and the global geometry traits.
The evaluation results on the Janelia dataset from the BigNeuron project demonstrate that our proposed method outperforms the counterpart algorithms.
arXiv Detail & Related papers (2021-01-22T01:27:14Z) - Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)