Automated Atlas-based Segmentation of Single Coronal Mouse Brain Slices using Linear 2D-2D Registration
- URL: http://arxiv.org/abs/2111.08705v1
- Date: Tue, 16 Nov 2021 12:33:09 GMT
- Title: Automated Atlas-based Segmentation of Single Coronal Mouse Brain Slices using Linear 2D-2D Registration
- Authors: Sébastien Piluso, Nicolas Souedet, Caroline Jan, Cédric Clouchoux, Thierry Delzescaux
- Abstract summary: This paper proposes a strategy to automatically segment single 2D coronal slices within a 3D atlas volume using linear registration.
We validated its robustness and performance using an exploratory approach at whole-brain scale.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A significant challenge for brain histological data analysis is to precisely
identify anatomical regions in order to perform accurate local quantifications
and evaluate therapeutic solutions. Usually, this task is performed manually,
which makes it tedious and subjective. Another option is to use automatic
or semi-automatic methods, among which is segmentation by co-registration with
a digital atlas. However, most available atlases are 3D, whereas digitized
histological data are 2D. Methods to perform such 2D-3D segmentation from an
atlas are required. This paper proposes a strategy to automatically and
accurately segment single 2D coronal slices within a 3D atlas volume using
linear registration. We validated its robustness and performance using an
exploratory approach at whole-brain scale.
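As a rough illustration of how such a segmentation-by-registration strategy can be organized, the sketch below exhaustively registers an experimental 2D slice against every coronal plane of the atlas with a linear (affine) 2D-2D transform, keeps the best-scoring plane, and propagates the corresponding atlas labels. This is a minimal sketch, not the authors' exact pipeline: the helper name `segment_slice_with_atlas`, the use of SimpleITK, and the mutual-information/affine settings are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact pipeline): localize a single 2D coronal
# slice in a 3D atlas by exhaustive linear 2D-2D registration, then propagate labels.
import SimpleITK as sitk

def segment_slice_with_atlas(slice_2d, atlas_vol, atlas_labels):
    """slice_2d: 2D sitk.Image; atlas_vol / atlas_labels: 3D sitk.Images on the same grid,
    with the third axis assumed to index coronal planes."""
    best = (None, None, float("inf"))  # (plane index, transform, metric value)
    for z in range(atlas_vol.GetSize()[2]):
        atlas_slice = atlas_vol[:, :, z]          # candidate coronal plane
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInitialTransform(
            sitk.CenteredTransformInitializer(
                slice_2d, atlas_slice, sitk.AffineTransform(2),
                sitk.CenteredTransformInitializerFilter.GEOMETRY),
            inPlace=False)
        reg.SetInterpolator(sitk.sitkLinear)
        try:
            tfm = reg.Execute(sitk.Cast(slice_2d, sitk.sitkFloat32),
                              sitk.Cast(atlas_slice, sitk.sitkFloat32))
        except RuntimeError:
            continue  # registration failed to converge for this plane
        # Mattes MI is negated in ITK, so a lower final value means a better match;
        # comparing this value across planes is a crude but simple selection criterion.
        metric = reg.GetMetricValue()
        if metric < best[2]:
            best = (z, tfm, metric)
    z, tfm, _ = best
    # Resample the matching atlas label plane onto the experimental slice grid.
    seg = sitk.Resample(atlas_labels[:, :, z], slice_2d, tfm,
                        sitk.sitkNearestNeighbor, 0, atlas_labels.GetPixelID())
    return z, seg
```

In practice one would restrict or coarsen the exhaustive search, add multi-resolution levels, and adapt the metric to the staining, but the structure above captures the 2D-2D linear registration idea.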
Related papers
- Rigid Single-Slice-in-Volume registration via rotation-equivariant 2D/3D feature matching [3.041742847777409]
We propose a self-supervised 2D/3D registration approach to match a single 2D slice to the corresponding 3D volume.
Results demonstrate the robustness of the proposed slice-in-volume registration on the NSCLC-Radiomics CT and KIRBY21 MRI datasets.
arXiv Detail & Related papers (2024-10-24T12:24:27Z)
- Label-Efficient 3D Brain Segmentation via Complementary 2D Diffusion Models with Orthogonal Views [10.944692719150071]
We propose a novel 3D brain segmentation approach using complementary 2D diffusion models.
Our goal is to achieve reliable segmentation quality without requiring complete labels for each individual subject.
arXiv Detail & Related papers (2024-07-17T06:14:53Z)
- Segment3D: Learning Fine-Grained Class-Agnostic 3D Segmentation without Manual Labels [141.23836433191624]
Current 3D scene segmentation methods are heavily dependent on manually annotated 3D training datasets.
We propose Segment3D, a method for class-agnostic 3D scene segmentation that produces high-quality 3D segmentation masks.
arXiv Detail & Related papers (2023-12-28T18:57:11Z)
- ALSTER: A Local Spatio-Temporal Expert for Online 3D Semantic Reconstruction [62.599588577671796]
We propose an online 3D semantic segmentation method that incrementally reconstructs a 3D semantic map from a stream of RGB-D frames.
Unlike offline methods, ours is directly applicable to scenarios with real-time constraints, such as robotics or mixed reality.
arXiv Detail & Related papers (2023-11-29T20:30:18Z)
- Self-supervised learning via inter-modal reconstruction and feature projection networks for label-efficient 3D-to-2D segmentation [4.5206601127476445]
We propose a novel convolutional neural network (CNN) and self-supervised learning (SSL) method for label-efficient 3D-to-2D segmentation.
Results on different datasets demonstrate that the proposed CNN significantly improves the state of the art in scenarios with limited labeled data by up to 8% in Dice score.
arXiv Detail & Related papers (2023-07-06T14:16:25Z)
- CORPS: Cost-free Rigorous Pseudo-labeling based on Similarity-ranking for Brain MRI Segmentation [3.1657395760137406]
We propose a semi-supervised segmentation framework built upon a novel atlas-based pseudo-labeling method and a 3D deep convolutional neural network (DCNN) for 3D brain MRI segmentation.
The experimental results demonstrate the superiority of the proposed framework over the baseline method both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-05-19T14:42:49Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize 3D voxelization and 3D convolution networks.
We propose a new framework for outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern (a minimal coordinate-partition sketch follows this list).
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in 2D space.
A straightforward solution to the 3D-to-2D projection issue is to keep the 3D representation and process the points in 3D space.
We develop a framework based on 3D cylinder partition and 3D cylinder convolution, termed Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
- SeqXY2SeqZ: Structure Learning for 3D Shapes by Sequentially Predicting 1D Occupancy Segments From 2D Coordinates [61.04823927283092]
We propose to represent 3D shapes using 2D functions, where the output of the function at each 2D location is a sequence of line segments inside the shape.
We implement this approach using a Seq2Seq model with attention, called SeqXY2SeqZ, which learns the mapping from a sequence of 2D coordinates along two arbitrary axes to a sequence of 1D locations along the third axis.
Our experiments show that SeqXY2SeqZ outperforms the state-of-the-art methods on widely used benchmarks.
arXiv Detail & Related papers (2020-03-12T00:24:36Z)
- 2D Convolutional Neural Networks for 3D Digital Breast Tomosynthesis Classification [20.245580301060418]
Key challenges in developing automated methods for classification are handling the variable number of slices and retaining slice-to-slice changes.
We propose a novel deep 2D convolutional neural network (CNN) architecture for classification that simultaneously overcomes both challenges.
Our approach operates on the full volume, regardless of the number of slices, and allows the use of pre-trained 2D CNNs for feature extraction.
arXiv Detail & Related papers (2020-02-27T18:32:52Z)
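For the cylindrical partition mentioned in the LiDAR-based perception entries above, the sketch below shows the underlying coordinate step: points are binned by radius, azimuth, and height rather than by x, y, and z, which keeps distant, sparse regions in coarser cells. It is a minimal NumPy sketch under assumed bin counts and ranges; the function name `cylindrical_voxel_indices` and all parameter values are illustrative, not taken from the cited papers.

```python
# Minimal sketch of cylindrical partitioning for LiDAR point clouds
# (illustrative only; bin counts and ranges are assumptions, not the papers' settings).
import numpy as np

def cylindrical_voxel_indices(points, n_rho=480, n_phi=360, n_z=32,
                              rho_max=50.0, z_min=-4.0, z_max=2.0):
    """points: (N, 3) array of x, y, z coordinates. Returns (N, 3) integer voxel indices."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x**2 + y**2)                    # radial distance from the sensor
    phi = np.arctan2(y, x)                        # azimuth in (-pi, pi]
    i_rho = np.clip((rho / rho_max * n_rho).astype(int), 0, n_rho - 1)
    i_phi = np.clip(((phi + np.pi) / (2 * np.pi) * n_phi).astype(int), 0, n_phi - 1)
    i_z = np.clip(((z - z_min) / (z_max - z_min) * n_z).astype(int), 0, n_z - 1)
    return np.stack([i_rho, i_phi, i_z], axis=1)
```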