Combining Variational Autoencoders and Physical Bias for Improved
Microscopy Data Analysis
- URL: http://arxiv.org/abs/2302.04216v2
- Date: Wed, 7 Jun 2023 20:13:14 GMT
- Title: Combining Variational Autoencoders and Physical Bias for Improved
Microscopy Data Analysis
- Authors: Arpan Biswas, Maxim Ziatdinov and Sergei V. Kalinin
- Abstract summary: We present a physics-augmented machine learning method which disentangles factors of variability within the data.
Our method is applied to various materials, including NiO-LSMO, BiFeO3, and graphene.
The results demonstrate the effectiveness of our approach in extracting meaningful information from large volumes of imaging data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electron and scanning probe microscopy produce vast amounts of data in the
form of images or hyperspectral data, such as EELS or 4D STEM, that contain
information on a wide range of structural, physical, and chemical properties of
materials. To extract valuable insights from these data, it is crucial to
identify physically separate regions in the data, such as phases, ferroic
variants, and boundaries between them. To derive an easily interpretable
feature analysis with well-defined boundaries in a principled and unsupervised
manner, we present a physics-augmented machine learning method that combines
the capability of Variational Autoencoders to disentangle factors of
variability within the data with a physics-driven loss function that seeks to
minimize the total length of the discontinuities in images corresponding to
latent representations. Our method is applied to
various materials, including NiO-LSMO, BiFeO3, and graphene. The results
demonstrate the effectiveness of our approach in extracting meaningful
information from large volumes of imaging data. The full notebook containing
the implementation of the code and analysis workflow is available at
https://github.com/arpanbiswas52/PaperNotebooks
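The physics-driven term described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation (the notebook at the repository above is authoritative); it is a hypothetical NumPy illustration in which the "total length of discontinuities" is approximated by an anisotropic total-variation penalty on a 2D latent map, and `augmented_loss`, `tv_weight`, `recon_error`, and `kl_div` are assumed names for illustration only.

```python
import numpy as np

def total_variation(latent_map):
    """Anisotropic total variation: sum of absolute differences between
    neighboring pixels. Penalizing this encourages piecewise-constant
    latent maps whose boundaries between regions are short."""
    dy = np.abs(np.diff(latent_map, axis=0)).sum()
    dx = np.abs(np.diff(latent_map, axis=1)).sum()
    return dx + dy

def augmented_loss(recon_error, kl_div, latent_map, tv_weight=0.1):
    """Hypothetical combined objective: standard VAE terms (reconstruction
    error and KL divergence) plus the boundary-length penalty."""
    return recon_error + kl_div + tv_weight * total_variation(latent_map)

# Toy example: a latent map with one straight phase boundary vs. a noisy one.
clean = np.zeros((8, 8))
clean[:, 4:] = 1.0                                  # single vertical boundary
noisy = np.random.default_rng(0).random((8, 8))     # no spatial structure
print(total_variation(clean))                       # 8.0 (one edge per row)
print(total_variation(noisy) > total_variation(clean))  # True
```

The idea is that minimizing such a penalty alongside the usual VAE objective biases the latent representation toward spatially coherent regions (phases, ferroic variants) separated by short, well-defined boundaries.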
Related papers
- Unlocking Potential Binders: Multimodal Pretraining DEL-Fusion for Denoising DNA-Encoded Libraries [51.72836644350993]
Multimodal Pretraining DEL-Fusion model (MPDF)
We develop pretraining tasks applying contrastive objectives between different compound representations and their text descriptions.
We propose a novel DEL-fusion framework that amalgamates compound information at the atomic, submolecular, and molecular levels.
arXiv Detail & Related papers (2024-09-07T17:32:21Z) - Invariant Discovery of Features Across Multiple Length Scales: Applications in Microscopy and Autonomous Materials Characterization [3.386918190302773]
Variational Autoencoders (VAEs) have emerged as powerful tools for identifying underlying factors of variation in image data.
We introduce the scale-invariant VAE approach (SI-VAE) based on the progressive training of the VAE with the descriptors sampled at different length scales.
arXiv Detail & Related papers (2024-08-01T01:48:46Z) - Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments expanding the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - A Multi-scale Information Integration Framework for Infrared and Visible
Image Fusion [50.84746752058516]
Infrared and visible image fusion aims at generating a fused image containing intensity and detail information of source images.
Existing methods mostly adopt a simple weight in the loss function to decide the information retention of each modality.
We propose a multi-scale dual attention (MDA) framework for infrared and visible image fusion.
arXiv Detail & Related papers (2023-12-07T14:40:05Z) - Instance Segmentation of Dislocations in TEM Images [0.0]
In materials science, the knowledge about the location and movement of dislocations is important for creating novel materials with superior properties.
In this work, we quantitatively compare state-of-the-art instance segmentation methods, including Mask R-CNN and YOLOv8.
The dislocation masks as the results of the instance segmentation are converted to mathematical lines, enabling quantitative analysis of dislocation length and geometry.
arXiv Detail & Related papers (2023-09-07T06:17:31Z) - Deep Learning of Crystalline Defects from TEM images: A Solution for the
Problem of "Never Enough Training Data" [0.0]
In-situ TEM experiments can provide important insights into how dislocations behave and move.
The analysis of individual video frames can provide useful insights but is limited by the capabilities of automated identification.
In this work, a parametric model for generating synthetic training data for segmentation of dislocations is developed.
arXiv Detail & Related papers (2023-07-12T17:37:46Z) - Physics and Chemistry from Parsimonious Representations: Image Analysis
via Invariant Variational Autoencoders [0.0]
Variational autoencoders (VAEs) are emerging as a powerful paradigm for the unsupervised data analysis.
This article summarizes recent developments in VAEs, covering the basic principles and intuition behind the VAEs.
arXiv Detail & Related papers (2023-03-30T03:16:27Z) - MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware
Ambidextrous Bin Picking via Physics-based Metaverse Synthesis [72.85526892440251]
We introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis.
The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum gripper.
We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into 5 levels of difficulties and an unseen object set to evaluate different object and layout properties.
arXiv Detail & Related papers (2022-08-08T08:15:34Z) - Synthetic Image Rendering Solves Annotation Problem in Deep Learning
Nanoparticle Segmentation [5.927116192179681]
We show that using a rendering software allows to generate realistic, synthetic training data to train a state-of-the art deep neural network.
We derive a segmentation accuracy that is comparable to man-made annotations for toxicologically relevant metal-oxide nanoparticles ensembles.
arXiv Detail & Related papers (2020-11-20T17:05:36Z) - Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z) - Data-Driven Discovery of Molecular Photoswitches with Multioutput
Gaussian Processes [51.17758371472664]
Photoswitchable molecules display two or more isomeric forms that may be accessed using light.
We present a data-driven discovery pipeline for molecular photoswitches underpinned by dataset curation and multitask learning.
We validate our proposed approach experimentally by screening a library of commercially available photoswitchable molecules.
arXiv Detail & Related papers (2020-06-28T20:59:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.