Combining Variational Autoencoders and Physical Bias for Improved
Microscopy Data Analysis
- URL: http://arxiv.org/abs/2302.04216v2
- Date: Wed, 7 Jun 2023 20:13:14 GMT
- Title: Combining Variational Autoencoders and Physical Bias for Improved
Microscopy Data Analysis
- Authors: Arpan Biswas, Maxim Ziatdinov and Sergei V. Kalinin
- Abstract summary: We present a physics augmented machine learning method which disentangles factors of variability within the data.
Our method is applied to various materials, including NiO-LSMO, BiFeO3, and graphene.
The results demonstrate the effectiveness of our approach in extracting meaningful information from large volumes of imaging data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electron and scanning probe microscopy produce vast amounts of data in the
form of images or hyperspectral data, such as EELS or 4D STEM, that contain
information on a wide range of structural, physical, and chemical properties of
materials. To extract valuable insights from these data, it is crucial to
identify physically separate regions in the data, such as phases, ferroic
variants, and boundaries between them. To derive an easily interpretable
feature analysis with well-defined boundaries in a principled and
unsupervised manner, here we present a physics-augmented machine learning
method which combines the capability of Variational Autoencoders to
disentangle factors of variability within the data with a physics-driven
loss function that seeks to minimize the total length of the discontinuities
in images corresponding to latent representations. Our method is applied to
various materials, including NiO-LSMO, BiFeO3, and graphene. The results
demonstrate the effectiveness of our approach in extracting meaningful
information from large volumes of imaging data. The full notebook
containing the code implementation and analysis workflow is available at
https://github.com/arpanbiswas52/PaperNotebooks
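The physics-driven loss described above penalizes the total length of discontinuities in the images formed by the latent representations. A minimal NumPy sketch of one plausible form of such a penalty is an anisotropic total-variation term on the latent map; this is an illustrative reading, not necessarily the authors' exact formulation, and the latent maps below are hypothetical:

```python
import numpy as np

def tv_penalty(z):
    # Anisotropic total variation of a latent map z of shape (H, W, d):
    # the sum of absolute differences between neighbouring pixels, which
    # acts as a proxy for the total length of discontinuities in the image.
    dy = np.abs(np.diff(z, axis=0)).sum()
    dx = np.abs(np.diff(z, axis=1)).sum()
    return float(dy + dx)

# Two hypothetical 8x8 one-channel latent maps encoding the same two-phase split.
straight = np.zeros((8, 8, 1))
straight[:, 4:] = 1.0               # phase boundary is a straight vertical line

jagged = np.zeros((8, 8, 1))
for i in range(8):
    jagged[i, 4 + (i % 2):] = 1.0   # same split, but with a zig-zag boundary

print(tv_penalty(straight))  # 8.0
print(tv_penalty(jagged))    # 15.0
```

Minimizing this term alongside the usual VAE reconstruction and KL objectives favours latent maps whose phase boundaries are short and smooth, as the lower penalty for the straight boundary illustrates.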
Related papers
- DiffRenderGAN: Addressing Training Data Scarcity in Deep Segmentation Networks for Quantitative Nanomaterial Analysis through Differentiable Rendering and Generative Modelling [0.1135917885955104]
Deep learning segmentation networks enable automated insights and replace subjective methods with precise quantitative analysis.
We introduce DiffRenderGAN, a novel generative model designed to produce annotated synthetic data.
This approach reduces the need for manual intervention and enhances segmentation performance compared to existing synthetic data methods.
arXiv Detail & Related papers (2025-02-13T16:41:44Z)
- PolSAM: Polarimetric Scattering Mechanism Informed Segment Anything Model [76.95536611263356]
PolSAR data presents unique challenges due to its rich and complex characteristics.
Existing data representations, such as complex-valued data, polarimetric features, and amplitude images, are widely used.
Most feature extraction networks for PolSAR are small, limiting their ability to capture features effectively.
We propose the Polarimetric Scattering Mechanism-Informed SAM (PolSAM), an enhanced Segment Anything Model (SAM) that integrates domain-specific scattering characteristics and a novel prompt generation strategy.
arXiv Detail & Related papers (2024-12-17T09:59:53Z)
- MaskTerial: A Foundation Model for Automated 2D Material Flake Detection [48.73213960205105]
We present a deep learning model, called MaskTerial, that uses an instance segmentation network to reliably identify 2D material flakes.
The model is extensively pre-trained using a synthetic data generator that produces realistic microscopy images from unlabeled data.
We demonstrate significant improvements over existing techniques in the detection of low-contrast materials such as hexagonal boron nitride.
arXiv Detail & Related papers (2024-09-07T17:32:21Z)
- Unlocking Potential Binders: Multimodal Pretraining DEL-Fusion for Denoising DNA-Encoded Libraries [51.72836644350993]
We present the Multimodal Pretraining DEL-Fusion model (MPDF) for denoising DNA-encoded libraries.
We develop pretraining tasks applying contrastive objectives between different compound representations and their text descriptions.
We propose a novel DEL-fusion framework that amalgamates compound information at the atomic, submolecular, and molecular levels.
arXiv Detail & Related papers (2024-08-01T01:48:46Z)
- Invariant Discovery of Features Across Multiple Length Scales: Applications in Microscopy and Autonomous Materials Characterization [3.386918190302773]
Variational Autoencoders (VAEs) have emerged as powerful tools for identifying underlying factors of variation in image data.
We introduce the scale-invariant VAE approach (SI-VAE) based on the progressive training of the VAE with the descriptors sampled at different length scales.
arXiv Detail & Related papers (2024-08-01T01:48:46Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Deep Learning of Crystalline Defects from TEM images: A Solution for the Problem of "Never Enough Training Data" [0.0]
In-situ TEM experiments can provide important insights into how dislocations behave and move.
The analysis of individual video frames can provide useful insights but is limited by the capabilities of automated identification.
In this work, a parametric model for generating synthetic training data for segmentation of dislocations is developed.
arXiv Detail & Related papers (2023-07-12T17:37:46Z)
- Physics and Chemistry from Parsimonious Representations: Image Analysis via Invariant Variational Autoencoders [0.0]
Variational autoencoders (VAEs) are emerging as a powerful paradigm for unsupervised data analysis.
This article summarizes recent developments in VAEs, covering the basic principles and intuition behind them.
arXiv Detail & Related papers (2023-03-30T03:16:27Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis [72.85526892440251]
We introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis.
The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum gripper.
We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into 5 difficulty levels and an unseen object set to evaluate different object and layout properties.
arXiv Detail & Related papers (2022-08-08T08:15:34Z)
- Synthetic Image Rendering Solves Annotation Problem in Deep Learning Nanoparticle Segmentation [5.927116192179681]
We show that using rendering software makes it possible to generate realistic, synthetic training data to train a state-of-the-art deep neural network.
We achieve a segmentation accuracy that is comparable to manual annotations for toxicologically relevant metal-oxide nanoparticle ensembles.
arXiv Detail & Related papers (2020-11-20T17:05:36Z)
- Data-Driven Discovery of Molecular Photoswitches with Multioutput Gaussian Processes [51.17758371472664]
Photoswitchable molecules display two or more isomeric forms that may be accessed using light.
We present a data-driven discovery pipeline for molecular photoswitches underpinned by dataset curation and multitask learning.
We validate our proposed approach experimentally by screening a library of commercially available photoswitchable molecules.
arXiv Detail & Related papers (2020-06-28T20:59:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.