Towards Augmented Microscopy with Reinforcement Learning-Enhanced
Workflows
- URL: http://arxiv.org/abs/2208.02865v1
- Date: Thu, 4 Aug 2022 20:02:21 GMT
- Title: Towards Augmented Microscopy with Reinforcement Learning-Enhanced
Workflows
- Authors: Michael Xu, Abinash Kumar, and James M. LeBeau
- Abstract summary: We design a virtual environment to test and develop a network that autonomously aligns the electron beam without prior knowledge.
We deploy a successful model on the microscope to validate the approach and demonstrate the value of designing appropriate virtual environments.
Overall, the results highlight that by taking advantage of RL, microscope operations can be automated without the need for extensive algorithm design.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Here, we report a case study implementation of reinforcement learning (RL) to
automate operations in the scanning transmission electron microscopy (STEM)
workflow. To do so, we design a virtual, prototypical RL environment to test
and develop a network to autonomously align the electron beam without prior
knowledge. Using this simulator, we evaluate the impact of environment design
and algorithm hyperparameters on alignment accuracy and learning convergence,
showing robust convergence across a wide hyperparameter space. Additionally, we
deploy a successful model on the microscope to validate the approach and
demonstrate the value of designing appropriate virtual environments. Consistent
with simulated results, the on-microscope RL model achieves convergence to the
goal alignment after minimal training. Overall, the results highlight that by
taking advantage of RL, microscope operations can be automated without the need
for extensive algorithm design, taking another step towards augmenting electron
microscopy with machine learning methods.
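The abstract describes the approach only at a high level; as a rough, hypothetical illustration of what such a virtual beam-alignment environment could look like, the sketch below defines a toy Gym-style task in Python. All names and design choices here (BeamAlignEnv, a 2-D beam-offset state, discrete deflector nudges, a distance-based reward, gymnasium as the interface) are assumptions for illustration and are not taken from the paper.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class BeamAlignEnv(gym.Env):
        """Toy virtual beam-alignment task (illustrative only, not the authors' code).

        State: current (x, y) offset of the beam from the goal alignment.
        Actions: discrete nudges to the simulated deflector settings.
        Reward: negative distance to the aligned position, so the agent learns
        to center the beam without prior knowledge of the optics.
        """

        def __init__(self, max_offset=1.0, step_size=0.05, tol=0.02, max_steps=100):
            self.max_offset, self.step_size = max_offset, step_size
            self.tol, self.max_steps = tol, max_steps
            self.action_space = spaces.Discrete(4)  # +/- x and +/- y nudges
            self.observation_space = spaces.Box(-max_offset, max_offset,
                                                shape=(2,), dtype=np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            # Start from a random misalignment, as the agent has no prior knowledge.
            self._offset = self.np_random.uniform(
                -self.max_offset, self.max_offset, size=2).astype(np.float32)
            self._steps = 0
            return self._offset.copy(), {}

        def step(self, action):
            moves = {0: (+self.step_size, 0.0), 1: (-self.step_size, 0.0),
                     2: (0.0, +self.step_size), 3: (0.0, -self.step_size)}
            self._offset = np.clip(
                self._offset + np.asarray(moves[int(action)], dtype=np.float32),
                -self.max_offset, self.max_offset)
            self._steps += 1
            dist = float(np.linalg.norm(self._offset))
            terminated = dist < self.tol             # goal alignment reached
            truncated = self._steps >= self.max_steps
            return self._offset.copy(), -dist, terminated, truncated, {}

A standard off-the-shelf agent (for example DQN or PPO from a library such as stable-baselines3) could then be trained against a simulator of this kind and, once converged, its policy transferred to drive the real deflector controls, mirroring the simulate-then-deploy workflow the abstract describes.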
Related papers
- Self-Supervised Learning with Generative Adversarial Networks for Electron Microscopy [0.0]
We show how self-supervised pretraining facilitates efficient fine-tuning for a spectrum of downstream tasks.
We demonstrate the versatility of self-supervised pretraining across various downstream tasks in the context of electron microscopy.
arXiv Detail & Related papers (2024-02-28T12:25:01Z)
- Closing the loop: Autonomous experiments enabled by machine-learning-based online data analysis in synchrotron beamline environments [80.49514665620008]
Machine learning can be used to enhance research involving large or rapidly generated datasets.
In this study, we describe the incorporation of ML into a closed-loop workflow for X-ray reflectometry (XRR).
We present solutions that provide elementary data analysis in real time during the experiment without introducing additional software dependencies in the beamline control software environment.
arXiv Detail & Related papers (2023-06-20T21:21:19Z)
- Deep Learning for Automated Experimentation in Scanning Transmission Electron Microscopy [0.0]
Machine learning (ML) has become critical for post-acquisition data analysis in (scanning) transmission electron microscopy, (S)TEM, imaging and spectroscopy.
We discuss the associated challenges with the transition to active ML, including sequential data analysis and out-of-distribution drift effects.
These considerations will collectively inform the operationalization of ML in next-generation experimentation.
arXiv Detail & Related papers (2023-04-04T18:01:56Z)
- Leveraging generative adversarial networks to create realistic scanning transmission electron microscopy images [2.5954872177280346]
Machine learning could revolutionize materials research through autonomous data collection and processing.
We employ a cycle generative adversarial network (CycleGAN) with a reciprocal space discriminator to augment simulated data with realistic spatial frequency information.
We showcase our approach by training a fully convolutional network (FCN) to identify single atom defects in a 4.5 million atom data set.
arXiv Detail & Related papers (2023-01-18T19:19:27Z)
- SAM-RL: Sensing-Aware Model-Based Reinforcement Learning via Differentiable Physics-Based Simulation and Rendering [49.78647219715034]
We propose a sensing-aware model-based reinforcement learning system called SAM-RL.
With the sensing-aware learning pipeline, SAM-RL allows a robot to select an informative viewpoint to monitor the task process.
We apply our framework to real world experiments for accomplishing three manipulation tasks: robotic assembly, tool manipulation, and deformable object manipulation.
arXiv Detail & Related papers (2022-10-27T05:30:43Z)
- Microscopy is All You Need [0.0]
We argue that a promising pathway for the development of machine learning methods is via the route of domain-specific deployable algorithms.
This will benefit both fundamental physical studies and serve as a test bed for more complex autonomous systems such as robotics and manufacturing.
arXiv Detail & Related papers (2022-10-12T18:41:40Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the agent's expected performance by selecting promising trajectories that solve prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern deep learning framework is used to autonomously correct these setup incoherences, thus improving the quality of the ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Global Voxel Transformer Networks for Augmented Microscopy [54.730707387866076]
We introduce global voxel transformer networks (GVTNets), an advanced deep learning tool for augmented microscopy.
GVTNets are built on global voxel transformer operators (GVTOs), which are able to aggregate global information.
We apply the proposed methods on existing datasets for three different augmented microscopy tasks under various settings.
arXiv Detail & Related papers (2020-08-05T20:11:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.