Physics and Chemistry from Parsimonious Representations: Image Analysis
via Invariant Variational Autoencoders
- URL: http://arxiv.org/abs/2303.18236v1
- Date: Thu, 30 Mar 2023 03:16:27 GMT
- Authors: Mani Valleti, Yongtao Liu, Sergei Kalinin
- Abstract summary: Variational autoencoders (VAEs) are emerging as a powerful paradigm for unsupervised data analysis.
This article summarizes recent developments in VAEs, covering the basic principles and the intuition behind them.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electron, optical, and scanning probe microscopy methods are generating an
ever-increasing volume of image data containing information on atomic and mesoscale
structures and functionalities. This necessitates the development of machine
learning methods for the discovery of physical and chemical phenomena from the
data, such as manifestations of symmetry breaking in electron and scanning
tunneling microscopy images, or the variability of nanoparticles. Variational
autoencoders (VAEs) are emerging as a powerful paradigm for unsupervised data
analysis, allowing one to disentangle the factors of variability and discover
optimal parsimonious representations. Here, we summarize recent developments in
VAEs, covering the basic principles and the intuition behind them. Invariant
VAEs are introduced as an approach to accommodate the scale and translation
invariances present in imaging data and to separate known factors of variation
from the ones to be discovered. We further describe the opportunities enabled
by control over the VAE architecture, including conditional, semi-supervised,
and joint VAEs. Several case studies of VAE applications to toy models and
experimental data sets in Scanning Transmission Electron Microscopy are
discussed, emphasizing the deep connection between VAEs and basic physical
principles. All the codes used here are available at
https://github.com/saimani5/VAE-tutorials, and this article can be used as an
application guide when applying these methods to one's own data sets.
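The core VAE machinery the abstract refers to (the reparameterization trick plus the closed-form Gaussian KL divergence that together make up the ELBO) can be sketched in a few lines of NumPy. This is a minimal illustration, not the accompanying tutorial code: the linear encoder/decoder, the weight shapes, and the unit-variance Gaussian likelihood are all simplifying assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_elbo(x, enc_w, dec_w, latent_dim=2):
    """One-sample Monte Carlo estimate of the ELBO for a toy linear Gaussian VAE."""
    # Encoder: a single linear map producing mean and log-variance of q(z|x).
    h = x @ enc_w
    mu, log_var = h[:latent_dim], h[latent_dim:]
    # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
    eps = rng.standard_normal(latent_dim)
    z = mu + np.exp(0.5 * log_var) * eps
    # Decoder: linear map back to data space, unit-variance Gaussian likelihood.
    x_hat = z @ dec_w
    recon = -0.5 * np.sum((x - x_hat) ** 2)  # log p(x|z) up to an additive constant
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    return recon - kl

x = rng.standard_normal(8)
enc_w = rng.standard_normal((8, 4)) * 0.1
dec_w = rng.standard_normal((2, 8)) * 0.1
elbo = vae_elbo(x, enc_w, dec_w)
```

Training a real VAE maximizes this quantity (equivalently, minimizes its negative) over encoder and decoder parameters; here the weights are random, so the value only illustrates how the two terms combine.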
Related papers
- Invariant Discovery of Features Across Multiple Length Scales: Applications in Microscopy and Autonomous Materials Characterization [3.386918190302773]
Variational Autoencoders (VAEs) have emerged as powerful tools for identifying underlying factors of variation in image data.
We introduce the scale-invariant VAE approach (SI-VAE) based on the progressive training of the VAE with the descriptors sampled at different length scales.
arXiv Detail & Related papers (2024-08-01T01:48:46Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Learning and Controlling Silicon Dopant Transitions in Graphene using Scanning Transmission Electron Microscopy [58.51812955462815]
We introduce a machine learning approach to determine the transition dynamics of silicon atoms on a single layer of carbon atoms.
The data samples are processed and filtered to produce symbolic representations, which we use to train a neural network to predict transition probabilities.
These learned transition dynamics are then leveraged to guide a single silicon atom throughout the lattice to pre-determined target destinations.
arXiv Detail & Related papers (2023-11-21T21:51:00Z)
- Interpretable Joint Event-Particle Reconstruction for Neutrino Physics at NOvA with Sparse CNNs and Transformers [124.29621071934693]
We present a novel neural network architecture that combines the spatial learning enabled by convolutions with the contextual learning enabled by attention.
TransformerCVN simultaneously classifies each event and reconstructs every individual particle's identity.
This architecture enables us to perform several interpretability studies which provide insights into the network's predictions.
arXiv Detail & Related papers (2023-03-10T20:36:23Z)
- Combining Variational Autoencoders and Physical Bias for Improved Microscopy Data Analysis [0.0]
We present a physics augmented machine learning method which disentangles factors of variability within the data.
Our method is applied to various materials, including NiO-LSMO, BiFeO3, and graphene.
The results demonstrate the effectiveness of our approach in extracting meaningful information from large volumes of imaging data.
arXiv Detail & Related papers (2023-02-08T17:35:38Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Semi-supervised learning of images with strong rotational disorder: assembling nanoparticle libraries [0.0]
In most cases, experimental data streams contain images having arbitrary rotations and translations within the image.
We develop an approach that allows generalizing from a small subset of labeled data to a large unlabeled dataset.
arXiv Detail & Related papers (2021-05-24T18:01:57Z)
- AtomAI: A Deep Learning Framework for Analysis of Image and Spectroscopy Data in (Scanning) Transmission Electron Microscopy and Beyond [0.0]
AtomAI is an open-source software package bridging instrument-specific Python libraries, deep learning, and simulation tools into a single ecosystem.
AtomAI allows direct applications of the deep convolutional neural networks for atomic and mesoscopic image segmentation.
AtomAI provides utilities for mapping structure-property relationships via im2spec and spec2im type of encoder-decoder models.
arXiv Detail & Related papers (2021-05-16T17:44:59Z)
- Robust Feature Disentanglement in Imaging Data via Joint Invariant Variational Autoencoders: from Cards to Atoms [0.0]
We introduce a joint rotationally (and translationally) invariant variational autoencoder (j-trVAE)
The performance of this method is validated on several synthetic data sets and extended to high-resolution imaging data of electron and scanning probe microscopy.
We show that latent space behaviors directly comport to the known physics of ferroelectric materials and quantum systems.
arXiv Detail & Related papers (2021-04-20T18:01:55Z)
- Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding [52.48298164494608]
Variational autoencoder (VAE) estimates the posterior parameters of latent variables corresponding to each input data.
This paper provides a quantitative understanding of VAE property through the differential geometric and information-theoretic interpretations of VAE.
arXiv Detail & Related papers (2020-07-30T02:37:46Z)
- Data-Driven Discovery of Molecular Photoswitches with Multioutput Gaussian Processes [51.17758371472664]
Photoswitchable molecules display two or more isomeric forms that may be accessed using light.
We present a data-driven discovery pipeline for molecular photoswitches underpinned by dataset curation and multitask learning.
We validate our proposed approach experimentally by screening a library of commercially available photoswitchable molecules.
arXiv Detail & Related papers (2020-06-28T20:59:03Z)
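Several of the papers above (j-trVAE, the rotational-disorder work) rest on the same coordinate trick: the decoder is treated as a function of pixel coordinates, and the grid it is evaluated on is transformed by a latent rotation angle, so the remaining latent variables no longer have to encode orientation. A minimal sketch of that transformation, with grid size and angle chosen arbitrarily for illustration:

```python
import numpy as np

def rotated_grid(size, theta):
    """Square pixel-coordinate grid on [-1, 1]^2, rotated by angle theta.

    In a rotationally invariant VAE, the decoder f(z, coord) reconstructs the
    image by evaluating f on this rotated grid; theta is inferred per image,
    which factors orientation out of the content latents z.
    """
    ax = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(ax, ax)
    coords = np.stack([xx.ravel(), yy.ravel()], axis=1)  # shape (size*size, 2)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])                      # 2-D rotation matrix
    return coords @ R.T

grid0 = rotated_grid(4, 0.0)
grid90 = rotated_grid(4, np.pi / 2)
```

Because only the sampling grid moves while the decoder weights stay fixed, two copies of the same particle at different orientations map to (nearly) the same content code, differing only in the inferred angle.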
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.