Self-supervised machine learning model for analysis of nanowire
morphologies from transmission electron microscopy images
- URL: http://arxiv.org/abs/2203.13875v1
- Date: Fri, 25 Mar 2022 19:32:03 GMT
- Title: Self-supervised machine learning model for analysis of nanowire
morphologies from transmission electron microscopy images
- Authors: Shizhao Lu, Brian Montz, Todd Emrick, Arthi Jayaraman
- Abstract summary: We present a self-supervised transfer learning approach that uses a small number of labeled microscopy images for training.
Specifically, we train an image encoder with unlabeled images and use that encoder for transfer learning of different downstream image tasks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the field of soft materials, microscopy is the first and often only
accessible method for structural characterization. There is a growing interest
in the development of machine learning methods that can automate the analysis
and interpretation of microscopy images. Typically, training machine learning
models requires large numbers of images with associated structural labels;
however, manual labeling of images requires domain knowledge and is prone to
human error and subjectivity. To overcome these limitations, we present a
self-supervised transfer learning approach that uses a small number of labeled
microscopy images for training and performs as effectively as methods trained
on significantly larger data sets. Specifically, we train an image encoder with
unlabeled images and use that encoder for transfer learning of different
downstream image tasks (classification and segmentation) with a minimal number
of labeled images for training.
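The abstract describes a two-stage workflow: an image encoder is first pretrained on unlabeled micrographs, then reused, with only a handful of labeled images, for downstream classification or segmentation. Below is a minimal sketch of that pretrain-then-transfer pattern. It assumes PyTorch/torchvision and a SimCLR-style contrastive objective; the loss function, data loaders, and class count are illustrative placeholders, not the authors' implementation.
```python
# Sketch only: generic self-supervised pretraining followed by few-label transfer.
# Assumes PyTorch/torchvision. `unlabeled_loader`, `small_labeled_loader`, and the
# SimCLR-style objective are placeholders, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent) loss over two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2N, d)
    sim = z @ z.t() / temperature                                # pairwise similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                         # positives are the paired views


# Encoder: ResNet backbone with its classification head removed, plus a small
# projection head that is only used during pretraining.
encoder = models.resnet50(weights=None)
feat_dim = encoder.fc.in_features
encoder.fc = nn.Identity()
projector = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 128))

# Stage 1: self-supervised pretraining on unlabeled micrographs.
opt = torch.optim.Adam(list(encoder.parameters()) + list(projector.parameters()), lr=1e-3)
for view1, view2 in unlabeled_loader:                            # two augmentations per image
    loss = nt_xent_loss(projector(encoder(view1)), projector(encoder(view2)))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: transfer to a downstream task using only a few labeled images.
num_classes = 4                                                  # hypothetical morphology classes
classifier = nn.Linear(feat_dim, num_classes)
opt2 = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for images, labels in small_labeled_loader:
    with torch.no_grad():                                        # keep the pretrained encoder frozen
        feats = encoder(images)
    loss = F.cross_entropy(classifier(feats), labels)
    opt2.zero_grad()
    loss.backward()
    opt2.step()
```
For the segmentation task mentioned in the abstract, the linear classifier in stage 2 would be replaced by a decoder head over the encoder's feature maps; the frozen-encoder transfer pattern stays the same.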
Related papers
- DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images [105.46086313858062]
We introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks.
We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods.
arXiv Detail & Related papers (2024-10-04T00:38:29Z)
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, unsupervised learning on diffusion-generated images remains underexplored.
We introduce customized solutions by fully exploiting the aforementioned free attention masks.
arXiv Detail & Related papers (2023-08-13T10:07:46Z)
- Multi-domain learning CNN model for microscopy image classification [3.2835754110596236]
We present a multi-domain learning architecture for the classification of microscopy images.
Unlike previous, computationally intensive methods, we develop a compact model called Mobincep.
It surpasses state-of-the-art results and remains robust with limited labeled data.
arXiv Detail & Related papers (2023-04-20T19:32:23Z)
- Optimizations of Autoencoders for Analysis and Classification of Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze require an unsupervised learning model, for which we employ deep learning autoencoders, a type of artificial neural network.
arXiv Detail & Related papers (2023-04-19T13:45:28Z)
- Learning with minimal effort: leveraging in silico labeling for cell and nucleus segmentation [0.6465251961564605]
We propose to use In Silico Labeling (ISL) as a pretraining scheme for segmentation tasks.
By comparing segmentation performance across several training set sizes, we show that such a scheme can dramatically reduce the number of required annotations.
arXiv Detail & Related papers (2023-01-10T11:35:14Z)
- Learning multi-scale functional representations of proteins from single-cell microscopy data [77.34726150561087]
We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
We also propose a robust evaluation strategy to assess quality of protein representations across different scales of biological function.
arXiv Detail & Related papers (2022-05-24T00:00:07Z)
- Digital Fingerprinting of Microstructures [44.139970905896504]
Finding efficient means of fingerprinting microstructural information is a critical step towards harnessing data-centric machine learning approaches.
Here, we consider microstructure classification and utilise the resulting features over a range of related machine learning tasks.
In particular, methods that leverage transfer learning with convolutional neural networks (CNNs), pretrained on the ImageNet dataset, are generally shown to outperform other methods.
arXiv Detail & Related papers (2022-03-25T15:40:44Z)
- 3D fluorescence microscopy data synthesis for segmentation and benchmarking [0.9922927990501083]
Conditional generative adversarial networks can be utilized to generate realistic image data for 3D fluorescence microscopy.
An additional positional conditioning of the cellular structures enables the reconstruction of position-dependent intensity characteristics.
A patch-wise working principle and a subsequent full-size reassembly strategy are used to generate image data of arbitrary size for different organisms.
arXiv Detail & Related papers (2021-07-21T16:08:56Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- Self supervised contrastive learning for digital histopathology [0.0]
We use a contrastive self-supervised learning method called SimCLR that achieved state-of-the-art results on natural-scene images.
We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features.
Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet-pretrained networks (a sketch of this linear-probe protocol appears after this list).
arXiv Detail & Related papers (2020-11-27T19:18:45Z)
- Deep Low-Shot Learning for Biological Image Classification and Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z)
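Several of the related papers above (e.g., the digital histopathology SimCLR study and the microstructure fingerprinting work) evaluate pretrained representations with a linear probe: features are extracted from a frozen encoder and a simple linear classifier is trained on top of them. The sketch below, referenced from the histopathology entry, illustrates that protocol. It assumes PyTorch/torchvision and scikit-learn; the data loaders are placeholders and nothing here is taken from those papers' own code.
```python
# Sketch only: linear-probe evaluation of frozen, pretrained CNN features.
# `train_loader` and `test_loader` are assumed labeled microscopy DataLoaders.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

# ImageNet-pretrained backbone with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()
backbone.eval()


@torch.no_grad()
def extract_features(loader):
    """Run the frozen backbone over a DataLoader and collect (features, labels)."""
    feats, labels = [], []
    for images, y in loader:
        feats.append(backbone(images).cpu().numpy())
        labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)


X_train, y_train = extract_features(train_loader)
X_test, y_test = extract_features(test_loader)

# Linear probe: representation quality is measured by how well a simple linear
# classifier separates the classes from the frozen features.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("linear-probe accuracy:", probe.score(X_test, y_test))
```
Swapping in a different frozen encoder (e.g., one pretrained on domain-specific images instead of ImageNet) and comparing probe accuracy is how such studies rank the learned representations.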