AnyStar: Domain randomized universal star-convex 3D instance
segmentation
- URL: http://arxiv.org/abs/2307.07044v1
- Date: Thu, 13 Jul 2023 20:01:26 GMT
- Title: AnyStar: Domain randomized universal star-convex 3D instance
segmentation
- Authors: Neel Dey, S. Mazdak Abulnaga, Benjamin Billot, Esra Abaci Turk, P.
Ellen Grant, Adrian V. Dalca, Polina Golland
- Abstract summary: We present AnyStar, a domain-randomized generative model that simulates synthetic training data of blob-like objects with randomized appearance, environments, and imaging physics.
As a result, networks trained using our generative model do not require annotated images from unseen datasets.
- Score: 8.670653580154895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Star-convex shapes arise across bio-microscopy and radiology in the form of
nuclei, nodules, metastases, and other units. Existing instance segmentation
networks for such structures train on densely labeled instances for each
dataset, which requires substantial and often impractical manual annotation
effort. Further, significant reengineering or finetuning is needed when
presented with new datasets and imaging modalities due to changes in contrast,
shape, orientation, resolution, and density. We present AnyStar, a
domain-randomized generative model that simulates synthetic training data of
blob-like objects with randomized appearance, environments, and imaging physics
to train general-purpose star-convex instance segmentation networks. As a
result, networks trained using our generative model do not require annotated
images from unseen datasets. A single network trained on our synthesized data
accurately 3D segments C. elegans and P. dumerilii nuclei in fluorescence
microscopy, mouse cortical nuclei in micro-CT, zebrafish brain nuclei in EM,
and placental cotyledons in human fetal MRI, all without any retraining,
finetuning, transfer learning, or domain adaptation. Code is available at
https://github.com/neel-dey/AnyStar.
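To make the star-convex idea concrete, the sketch below rasterizes a random 2D blob whose boundary is a single-valued radius function of angle, so every ray from the center crosses it exactly once. This is an illustrative toy, not AnyStar's actual generative pipeline, which additionally randomizes appearance, environments, and imaging physics in 3D; all function and parameter names here are hypothetical.

```python
import numpy as np

def random_star_convex_blob(size=64, r_mean=20.0, r_jitter=4.0, seed=0):
    """Rasterize a random 2D star-convex blob: the boundary is a
    single-valued radius function of angle, so the shape is
    star-convex with respect to its center by construction.
    Illustrative sketch only, not AnyStar's generative model."""
    rng = np.random.default_rng(seed)
    # Low-frequency random perturbation of the radius over angle.
    freqs = np.arange(1, 5)
    amps = rng.normal(0.0, r_jitter / freqs)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    yy, xx = np.mgrid[:size, :size]
    cy = cx = (size - 1) / 2.0
    ang = np.arctan2(yy - cy, xx - cx)
    radius = r_mean + sum(a * np.cos(f * ang + p)
                          for a, f, p in zip(amps, freqs, phases))
    radius = np.clip(radius, 1.0, None)  # keep the radius positive
    return np.hypot(yy - cy, xx - cx) <= radius
```

Sampling many such blobs with randomized size, count, and intensity is one plausible starting point for synthesizing the kind of annotated training data the paper describes.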
Related papers
- Learning General-Purpose Biomedical Volume Representations using Randomized Synthesis [9.355513913682794]
Current biomedical foundation models struggle to generalize as public 3D datasets are small.
We propose a data engine that synthesizes highly variable training samples that enable generalization to new biomedical contexts.
To then train a single 3D network for any voxel-level task, we develop a contrastive learning method that pretrains the network to be stable against nuisance imaging variation simulated by the data engine.
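A minimal numpy sketch of the contrastive idea described above: an InfoNCE-style loss that treats two nuisance-augmented views of the same volume as a positive pair and all other volumes in the batch as negatives. This is a generic illustration of the technique, not the paper's implementation; names and the temperature value are assumptions.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss on paired embeddings.
    Row i of z1 and z2 are embeddings of two views of the same volume
    under simulated nuisance imaging variation; the loss pulls matched
    rows together and pushes mismatched rows apart. A sketch of the
    general technique, not the paper's implementation."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature            # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))     # matched pairs on diagonal
```

Minimizing this loss makes the network's embedding stable under the simulated imaging variation, which is the stated goal of the pretraining step.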
arXiv Detail & Related papers (2024-11-04T18:40:46Z)
- μ-Net: A Deep Learning-Based Architecture for μ-CT Segmentation [2.012378666405002]
X-ray computed microtomography (μ-CT) is a non-destructive technique that can generate high-resolution 3D images of the internal anatomy of medical and biological samples.
However, extracting relevant information from these 3D images requires semantic segmentation of the regions of interest.
We propose a novel framework that uses a convolutional neural network (CNN) to automatically segment the full morphology of the heart of Carassius auratus.
arXiv Detail & Related papers (2024-06-24T15:29:08Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embeddings.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- 3D Mitochondria Instance Segmentation with Spatio-Temporal Transformers [101.44668514239959]
We propose a hybrid encoder-decoder framework that efficiently computes spatial and temporal attentions in parallel.
We also introduce a semantic clutter-background adversarial loss during training that aids in delineating mitochondria instances from the background.
arXiv Detail & Related papers (2023-03-21T17:58:49Z)
- NASDM: Nuclei-Aware Semantic Histopathology Image Generation Using Diffusion Models [3.2996723916635267]
We present NASDM, a first-of-its-kind nuclei-aware semantic tissue generation framework.
NASDM can synthesize realistic tissue samples given a semantic instance mask of up to six different nuclei types.
These synthetic images are useful for pathology applications, model validation, and supplementing existing nuclei segmentation datasets.
arXiv Detail & Related papers (2023-03-20T22:16:03Z)
- Semi-Supervised Segmentation of Mitochondria from Electron Microscopy Images Using Spatial Continuity [3.631638087834872]
We propose a semi-supervised deep learning model that segments mitochondria by leveraging the spatial continuity of their structural, morphological, and contextual information.
Our model achieves performance similar to that of state-of-the-art fully supervised models but requires only 20% of their annotated training data.
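One simple way to exploit spatial continuity, sketched below under assumed names, is an unsupervised regularizer that penalizes disagreement between predicted probability maps of adjacent z-slices, since mitochondria extend smoothly across neighboring EM sections. This illustrates the idea only; the paper's actual loss may differ.

```python
import numpy as np

def slice_continuity_loss(probs):
    """Unsupervised regularizer exploiting spatial continuity:
    mitochondria extend across adjacent EM sections, so predicted
    soft foreground maps of neighboring z-slices should agree.
    probs: (Z, H, W) array of per-voxel foreground probabilities.
    A sketch of the spatial-continuity idea, not the paper's loss."""
    return float(np.mean((probs[1:] - probs[:-1]) ** 2))
```

Added to a supervised loss on the small labeled subset, a term like this lets the unlabeled slices constrain the network, which is how such semi-supervised schemes reduce annotation requirements.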
arXiv Detail & Related papers (2022-06-06T06:52:19Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Learning Signal-Agnostic Manifolds of Neural Fields [50.066449953522685]
We leverage neural fields to capture the underlying structure in image, shape, audio and cross-modal audiovisual domains.
We show that by walking across the underlying manifold of GEM, we may generate new samples in our signal domains.
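"Walking" a learned latent manifold typically means interpolating between latent codes and decoding each intermediate point. The sketch below shows the interpolation step only, with hypothetical names and the decoder omitted; it illustrates the general manifold-traversal idea, not GEM's actual procedure.

```python
import numpy as np

def walk_latent(z_a, z_b, n_steps=5):
    """Interpolate linearly between two latent codes; decoding each
    row of the returned (n_steps, dim) path would yield a sequence of
    new samples. Hypothetical helper sketching manifold traversal,
    not GEM's actual procedure."""
    t = np.linspace(0.0, 1.0, n_steps)[:, None]
    return (1.0 - t) * z_a[None, :] + t * z_b[None, :]
```

For example, `walk_latent(z_a, z_b, 5)` returns a path whose first and last rows are the two endpoints, with evenly spaced codes in between.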
arXiv Detail & Related papers (2021-11-11T18:57:40Z)
- 3D fluorescence microscopy data synthesis for segmentation and benchmarking [0.9922927990501083]
Conditional generative adversarial networks can be utilized to generate realistic image data for 3D fluorescence microscopy.
An additional positional conditioning of the cellular structures enables the reconstruction of position-dependent intensity characteristics.
A patch-wise working principle and a subsequent full-size reassembly strategy are used to generate image data of arbitrary size for different organisms.
arXiv Detail & Related papers (2021-07-21T16:08:56Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel machine learning architecture that allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.