Multimodal registration of FISH and nanoSIMS images using convolutional
neural network models
- URL: http://arxiv.org/abs/2201.05545v1
- Date: Fri, 14 Jan 2022 16:35:10 GMT
- Title: Multimodal registration of FISH and nanoSIMS images using convolutional
neural network models
- Authors: Xiaojia He, Christof Meile, Suchendra M. Bhandarkar
- Abstract summary: Multimodal registration of FISH and nanoSIMS images is challenging given the morphological distortion and background noise in both images.
We use convolutional neural networks (CNNs) for multiscale feature extraction and shape context to compute minimum-transformation-cost feature matching.
- Score: 14.71992435706872
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Nanoscale secondary ion mass spectrometry (nanoSIMS) and fluorescence in situ
hybridization (FISH) microscopy provide high-resolution, multimodal image
representations of the cell activity and identity, respectively, of targeted
microbial communities in microbiological research. Despite its importance to
microbiologists, multimodal registration of FISH and nanoSIMS images is
challenging given the morphological distortion and background noise in both
images. In this study, we use convolutional neural networks (CNNs) for
multiscale feature extraction, shape context for minimum-transformation-cost
feature matching, and the thin-plate spline (TPS) model for multimodal
registration of the FISH and nanoSIMS images. All six tested CNN models
(VGG16, VGG19, GoogLeNet, ShuffleNet, ResNet18, and ResNet101) performed well,
demonstrating the utility of CNNs in registering multimodal images with
significant background noise and morphological distortion.
We also show that aggregate shape, preserved by binarization, is a robust
feature for registering multimodal microbiology-related images.
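The matching-and-warping stages of the pipeline (shape-context descriptors, minimum-cost correspondence, thin-plate spline transform) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the CNN feature-extraction stage is omitted, the input is assumed to be already-extracted landmark points from binarized aggregates, and `shape_context` and `match_and_tps` are hypothetical helper names.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.interpolate import RBFInterpolator


def shape_context(points, n_r=5, n_theta=12):
    """Log-polar histograms of relative point positions (basic shape context)."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    theta = np.arctan2(diff[..., 1], diff[..., 0])
    mean_d = r[r > 0].mean()  # normalize radial bins by mean pairwise distance
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    hists = []
    for i in range(len(pts)):
        mask = np.arange(len(pts)) != i  # exclude the point itself
        rb = np.clip(np.searchsorted(r_edges, r[i, mask]) - 1, 0, n_r - 1)
        tb = ((theta[i, mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        h = np.zeros((n_r, n_theta))
        np.add.at(h, (rb, tb), 1.0)
        hists.append(h.ravel() / h.sum())
    return np.array(hists)


def match_and_tps(src_pts, dst_pts):
    """Match points by chi-squared descriptor cost, then fit a TPS mapping."""
    hs, hd = shape_context(src_pts), shape_context(dst_pts)
    eps = 1e-10
    # chi-squared distance between every pair of shape-context histograms
    cost = 0.5 * (((hs[:, None] - hd[None]) ** 2) / (hs[:, None] + hd[None] + eps)).sum(-1)
    ri, ci = linear_sum_assignment(cost)  # minimum-total-cost correspondence
    # thin-plate spline interpolant mapping matched source points to targets
    return RBFInterpolator(np.asarray(src_pts, float)[ri],
                           np.asarray(dst_pts, float)[ci],
                           kernel='thin_plate_spline')
```

Because the shape-context descriptor is built from relative point positions, it is translation-invariant, so a translated copy of a point set is matched point-for-point and the fitted TPS recovers the shift exactly at the landmarks.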
Related papers
- Mew: Multiplexed Immunofluorescence Image Analysis through an Efficient Multiplex Network [84.88767228835928]
We introduce Mew, a novel framework designed to efficiently process mIF images through the lens of multiplex networks.
Mew innovatively constructs a multiplex network comprising two distinct layers: a Voronoi network for geometric information and a Cell-type network for capturing cell-wise homogeneity.
The framework is equipped with a scalable and efficient Graph Neural Network (GNN) capable of processing the entire graph during training.
arXiv Detail & Related papers (2024-07-25T08:22:30Z)
- I2I-Mamba: Multi-modal medical image synthesis via selective state space modeling [8.909355958696414]
We propose a novel adversarial model for medical image synthesis, I2I-Mamba, to efficiently capture long-range context.
I2I-Mamba offers superior performance against state-of-the-art CNN- and transformer-based methods in synthesizing target-modality images.
arXiv Detail & Related papers (2024-05-22T21:55:58Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- CoNeS: Conditional neural fields with shift modulation for multi-sequence MRI translation [5.662694302758443]
Multi-sequence magnetic resonance imaging (MRI) has found wide applications in both modern clinical studies and deep learning research.
It frequently occurs that one or more of the MRI sequences are missing due to different image acquisition protocols or contrast agent contraindications of patients.
One promising approach is to leverage generative models to synthesize the missing sequences, which can serve as a surrogate acquisition.
arXiv Detail & Related papers (2023-09-06T19:01:58Z)
- STEM image analysis based on deep learning: identification of vacancy defects and polymorphs of ${MoS_2}$ [0.49583061314078714]
We apply a fully convolutional network (FCN) for identification of important structural features of two-dimensional crystals.
FCN is trained with simulated images in the presence of different levels of noise, aberrations, and carbon contamination.
The accuracy of the FCN models on extensive experimental STEM images is comparable to that of careful hands-on analysis.
arXiv Detail & Related papers (2022-06-09T04:43:56Z)
- Three-dimensional microstructure generation using generative adversarial neural networks in the context of continuum micromechanics [77.34726150561087]
This work proposes a generative adversarial network tailored towards three-dimensional microstructure generation.
The lightweight algorithm is able to learn the underlying properties of the material from a single microCT scan without the need for explicit descriptors.
arXiv Detail & Related papers (2022-05-31T13:26:51Z)
- Understanding the Influence of Receptive Field and Network Complexity in Neural-Network-Guided TEM Image Analysis [0.0]
We systematically examine how architecture choices affect how neural networks segment transmission electron microscopy (TEM) images.
We find that for low-resolution TEM images which rely on amplitude contrast to distinguish nanoparticles from background, the receptive field does not significantly influence segmentation performance.
On the other hand, for high-resolution TEM images which rely on a combination of amplitude and phase contrast changes to identify nanoparticles, receptive field is a key parameter for increased performance.
arXiv Detail & Related papers (2022-04-08T18:45:15Z)
- Super-resolution reconstruction of cytoskeleton image based on A-net deep learning network [7.967593061012609]
We proposed an A-net network and showed that the resolution of cytoskeleton images can be significantly improved by combining the A-net deep learning network with the DWDC algorithm based on degradation model.
We successfully removed the noise and flocculent structures that originally interfered with the cellular structure in the raw image, and improved the spatial resolution by 10 times using a relatively small dataset.
arXiv Detail & Related papers (2021-12-17T15:33:47Z)
- Multimodal Face Synthesis from Visual Attributes [85.87796260802223]
We propose a novel generative adversarial network that simultaneously synthesizes identity preserving multimodal face images.
Multimodal stretch-in modules are introduced in the discriminator, which distinguishes between real and fake images.
arXiv Detail & Related papers (2021-04-09T13:47:23Z)
- Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis [143.55901940771568]
We propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis.
In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality.
A multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality.
arXiv Detail & Related papers (2020-02-11T08:26:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.