Active learning in open experimental environments: selecting the right
information channel(s) based on predictability in deep kernel learning
- URL: http://arxiv.org/abs/2203.10181v1
- Date: Fri, 18 Mar 2022 22:36:40 GMT
- Title: Active learning in open experimental environments: selecting the right
information channel(s) based on predictability in deep kernel learning
- Authors: Maxim Ziatdinov, Yongtao Liu, Sergei V. Kalinin
- Abstract summary: One of the key tasks in experimental studies is establishing which of these channels is predictive of the behaviors of interest.
Here we explore the problem of discovery of the optimal predictive channel for structure-property relationships in microscopy.
This approach can be directly applicable to similar active learning tasks in automated synthesis and the discovery of quantitative structure-activity relations in molecular systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning methods are rapidly becoming the integral component of
automated experiment workflows in imaging, materials synthesis, and
computation. The distinctive aspect of many experimental scenarios is the
presence of multiple information channels, including both the intrinsic
modalities of the measurement system and the exogenous environment and noise
signals. One of the key tasks in experimental studies is hence establishing
which of these channels is predictive of the behaviors of interest. Here we
explore the problem of discovery of the optimal predictive channel for
structure-property relationships (in microscopy) using deep kernel learning for
modality selection in an active experiment setting. We further posit that this
approach is directly applicable to similar active learning tasks in
automated synthesis and the discovery of quantitative structure-activity
relations in molecular systems.
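To make the channel-selection idea concrete, here is a minimal sketch (not from the paper) that ranks candidate information channels by how well a Gaussian process surrogate predicts the target property from each channel alone. A plain scikit-learn GP and cross-validated error stand in for the paper's deep kernel learning and its predictability measure; all function and variable names are illustrative.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

def rank_channels(channels, target, cv=5):
    """Rank candidate information channels by how well a GP surrogate
    predicts the target property from each channel's features alone.

    channels: dict mapping channel name -> (n_samples, n_features) array
    target:   (n_samples,) array of the measured property of interest
    """
    scores = {}
    for name, X in channels.items():
        gp = GaussianProcessRegressor(
            kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2),
            normalize_y=True,
        )
        # Cross-validated predictive error as a proxy for "predictability"
        scores[name] = cross_val_score(
            gp, X, target, cv=cv, scoring="neg_mean_squared_error"
        ).mean()
    best = max(scores, key=scores.get)  # least negative error = most predictive
    return best, scores
```

In an active experiment, a ranking of this kind would be recomputed as new measurements arrive, so the preferred channel can change as the dataset grows.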
Related papers
- A dynamic Bayesian optimized active recommender system for
curiosity-driven Human-in-the-loop automated experiments [8.780395483188242]
We present the development of a new type of human-in-the-loop experimental workflow via a Bayesian optimized active recommender system (BOARS).
This work shows the utility of human-augmented machine learning approaches for curiosity-driven exploration of systems across experimental domains.
arXiv Detail & Related papers (2023-04-05T14:54:34Z)
- Sample-Efficient Reinforcement Learning in the Presence of Exogenous
Information [77.19830787312743]
In real-world reinforcement learning applications, the learner's observation space is ubiquitously high-dimensional, with both relevant and irrelevant information about the task at hand.
We introduce a new problem setting for reinforcement learning, the Exogenous Decision Process (ExoMDP), in which the state space admits an (unknown) factorization into a small controllable component and a large irrelevant component.
We provide a new algorithm, ExoRL, which learns a near-optimal policy with sample complexity that scales with the size of the endogenous component.
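As a rough illustration of the ExoMDP factorization (a toy sketch under our own assumptions, not the paper's benchmark), the environment below concatenates a small action-controlled endogenous state with a large self-evolving exogenous block; the reward depends only on the former.

```python
import numpy as np

class ToyExoMDP:
    """Toy factored MDP: the reward depends only on a small endogenous state
    that the actions control; a large exogenous block evolves on its own."""

    def __init__(self, endo_dim=2, exo_dim=50, seed=0):
        self.rng = np.random.default_rng(seed)
        self.endo = np.zeros(endo_dim)
        self.exo = self.rng.normal(size=exo_dim)
        self.goal = np.ones(endo_dim)

    def step(self, action):
        # Endogenous part responds to the action...
        self.endo = np.clip(self.endo + 0.1 * np.asarray(action), -2.0, 2.0)
        # ...while the exogenous part drifts independently of it (pure noise).
        self.exo = 0.95 * self.exo + self.rng.normal(scale=0.1, size=self.exo.shape)
        reward = -np.linalg.norm(self.endo - self.goal)       # depends on endo only
        observation = np.concatenate([self.endo, self.exo])   # learner sees both
        return observation, reward
```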
arXiv Detail & Related papers (2022-06-09T05:19:32Z)
- Bayesian Active Learning for Scanning Probe Microscopy: from Gaussian
Processes to Hypothesis Learning [0.0]
We discuss the basic principles of Bayesian active learning and illustrate its applications for scanning probe microscopes (SPMs).
These frameworks allow for the use of prior data, the discovery of specific functionalities as encoded in spectral data, and exploration of physical laws manifesting during the experiment.
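A minimal, generic version of such a Gaussian-process-driven measurement loop (our own illustration, not the paper's code) repeatedly queries the candidate point where the surrogate is least certain; `measure` is a hypothetical callable standing in for the instrument.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def uncertainty_driven_loop(X_pool, measure, n_init=5, n_steps=20, seed=0):
    """Repeatedly measure the candidate point where the GP surrogate is least certain.

    X_pool:  (n_candidates, n_features) grid of possible measurement locations
    measure: callable returning the scalar measured value at a given location
    """
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(X_pool), size=n_init, replace=False))
    y = [measure(X_pool[i]) for i in idx]
    for _ in range(n_steps):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X_pool[idx], np.asarray(y))
        _, std = gp.predict(X_pool, return_std=True)
        std[idx] = -np.inf               # never re-measure known points
        nxt = int(np.argmax(std))        # largest predictive uncertainty
        idx.append(nxt)
        y.append(measure(X_pool[nxt]))
    return np.asarray(idx), np.asarray(y)
```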
arXiv Detail & Related papers (2022-05-30T23:01:41Z)
- Benchmarking Active Learning Strategies for Materials Optimization and
Discovery [17.8738267360992]
We present a reference dataset to benchmark active learning strategies in the form of various acquisition functions.
We discuss the relationship between algorithm performance, materials search space, complexity, and the incorporation of prior knowledge.
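For reference, the acquisition functions typically benchmarked in this setting (our own summary, not the paper's implementation) score candidates from the surrogate's posterior mean and standard deviation, for example:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y, xi=0.01):
    """EI for maximization, given the surrogate's posterior mean/std on candidates."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - best_y - xi) / sigma
    return (mu - best_y - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def upper_confidence_bound(mu, sigma, kappa=2.0):
    """UCB: trade off exploitation (mu) against exploration (sigma)."""
    return mu + kappa * sigma

def max_variance(mu, sigma):
    """Pure exploration: query wherever the surrogate is most uncertain."""
    return sigma

# A benchmark run scores each strategy by how quickly the best value found
# on the candidate grid approaches the true optimum of the reference dataset.
```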
arXiv Detail & Related papers (2022-04-12T14:27:33Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Ensemble learning and iterative training (ELIT) machine learning:
applications towards uncertainty quantification and automated experiment in
atom-resolved microscopy [0.0]
Deep learning has emerged as a technique of choice for rapid feature extraction across imaging disciplines.
Here we explore the application of deep learning for feature extraction in atom-resolved electron microscopy.
This approach both brings uncertainty quantification into deep learning analysis and enables automated experiments, in which retraining of the network to compensate for out-of-distribution drift due to changing imaging conditions is replaced by human or programmatic selection of networks from the ensemble.
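A minimal deep-ensemble sketch of this idea (illustrative only; the actual ELIT models are fully convolutional segmentation networks, and `train_step` is a hypothetical training callback) trains several networks from different random seeds and uses their disagreement as a per-pixel uncertainty map:

```python
import torch
import torch.nn as nn

def make_model():
    # Tiny stand-in for an atom-finding network; the real ELIT models are
    # fully convolutional segmentation networks.
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

def train_ensemble(train_step, n_models=10, n_iters=200):
    """Train n_models networks from different random initializations;
    train_step(model) is assumed to run one optimization step on your data."""
    ensemble = []
    for seed in range(n_models):
        torch.manual_seed(seed)
        model = make_model()
        for _ in range(n_iters):
            train_step(model)
        ensemble.append(model)
    return ensemble

@torch.no_grad()
def predict_with_uncertainty(ensemble, image):
    # Mean prediction and ensemble disagreement (a per-pixel uncertainty map)
    preds = torch.stack([m(image) for m in ensemble])
    return preds.mean(dim=0), preds.std(dim=0)
```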
arXiv Detail & Related papers (2021-01-21T05:29:26Z)
- Towards Interaction Detection Using Topological Analysis on Neural
Networks [55.74562391439507]
In neural networks, any interacting features must follow a strongly weighted connection to common hidden units.
We propose a new measure for quantifying interaction strength, based upon the well-received theory of persistent homology.
A Persistence Interaction Detection (PID) algorithm is developed to efficiently detect interactions.
arXiv Detail & Related papers (2020-10-25T02:15:24Z)
- Identifying Learning Rules From Neural Network Observables [26.96375335939315]
We show that different classes of learning rules can be separated solely on the basis of aggregate statistics of the weights, activations, or instantaneous layer-wise activity changes.
Our results suggest that activation patterns, available from electrophysiological recordings of post-synaptic activities, may provide a good basis on which to identify learning rules.
arXiv Detail & Related papers (2020-10-22T14:36:54Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in
Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel machine learning architecture, which allows us to infuse a deep neural network with human-powered abstraction at the level of the data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its
Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
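A simplified sketch of such an optimal-transport pooling layer (our own illustration, with the paper's feature embedding omitted and Sinkhorn run directly in the input feature space) computes a transport plan between the input set and a trainable reference and aggregates features accordingly:

```python
import torch
import torch.nn as nn

class OTPooling(nn.Module):
    """Pool a variable-size set of features into a fixed-size embedding by
    softly assigning set elements to a small trainable reference via Sinkhorn."""

    def __init__(self, dim, n_ref=8, eps=0.1, n_iter=20):
        super().__init__()
        self.ref = nn.Parameter(torch.randn(n_ref, dim))
        self.eps, self.n_iter = eps, n_iter

    def forward(self, x):                      # x: (n, dim) set of features
        n, m = x.shape[0], self.ref.shape[0]
        cost = torch.cdist(x, self.ref) ** 2   # (n, m) squared distances
        K = torch.exp(-cost / self.eps)        # entropic (Gibbs) kernel
        a = torch.full((n,), 1.0 / n, device=x.device, dtype=x.dtype)
        b = torch.full((m,), 1.0 / m, device=x.device, dtype=x.dtype)
        u, v = torch.ones_like(a), torch.ones_like(b)
        for _ in range(self.n_iter):           # Sinkhorn scaling iterations
            u = a / (K @ v)
            v = b / (K.T @ u)
        plan = u[:, None] * K * v[None, :]     # (n, m) transport plan
        pooled = plan.T @ x                    # aggregate features per reference slot
        return pooled.flatten()                # fixed-size (m * dim) embedding
```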
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.