Machine learning enabling high-throughput and remote operations at
large-scale user facilities
- URL: http://arxiv.org/abs/2201.03550v1
- Date: Sun, 9 Jan 2022 17:43:03 GMT
- Title: Machine learning enabling high-throughput and remote operations at
large-scale user facilities
- Authors: Tatiana Konstantinova, Phillip M. Maffettone, Bruce Ravel, Stuart I.
Campbell, Andi M. Barbour, Daniel Olds
- Abstract summary: Machine learning (ML) methods are regularly developed to process and interpret large datasets in real time, concurrently with measurements.
We demonstrate a variety of archetypal ML models for on-the-fly analysis at multiple beamlines at the National Synchrotron Light Source II (NSLS-II).
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Imaging, scattering, and spectroscopy are fundamental in understanding and
discovering new functional materials. Contemporary innovations in automation
and experimental techniques have led to these measurements being performed much
faster and with higher resolution, thus producing vast amounts of data for
analysis. These innovations are particularly pronounced at user facilities and
synchrotron light sources. Machine learning (ML) methods are regularly developed to process and interpret these large datasets in real time, concurrently with the measurements. However, conceptual barriers to entry remain for the facility general user community, who often lack expertise in ML, along with technical barriers to deploying ML models. Herein, we demonstrate a variety of
archetypal ML models for on-the-fly analysis at multiple beamlines at the
National Synchrotron Light Source II (NSLS-II). We describe these examples
instructively, with a focus on integrating the models into existing
experimental workflows, such that the reader can easily incorporate their own ML techniques into experiments at NSLS-II or at facilities with a common infrastructure. The framework presented here shows how, with little effort, diverse ML models can operate in conjunction with feedback loops by integrating into the existing Bluesky Suite for experimental orchestration and data management.
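To make the integration concrete, here is a minimal sketch (not taken from the paper) of how an ML model can run on-the-fly against a Bluesky document stream: a callback accumulates readings as Event documents arrive and fits a model once the run closes. The specific model (scikit-learn's KMeans, standing in for any archetypal unsupervised model) and the simulated detector and motor from ophyd.sim are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from bluesky import RunEngine
from bluesky.plans import scan
from bluesky.callbacks import CallbackBase
from ophyd.sim import det, motor  # simulated hardware shipped with ophyd
from sklearn.cluster import KMeans  # stand-in for any archetypal unsupervised model


class OnTheFlyAnalysis(CallbackBase):
    """Accumulate readings from the document stream and fit a model at run end."""

    def start(self, doc):
        self.readings = []

    def event(self, doc):
        # Each Event document carries one scan point: motor position and detector value.
        self.readings.append([doc["data"]["motor"], doc["data"]["det"]])

    def stop(self, doc):
        # Fit the placeholder model once the run has closed.
        X = np.asarray(self.readings)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
        print("on-the-fly cluster labels:", labels)


RE = RunEngine({})
RE.subscribe(OnTheFlyAnalysis())  # receives every document the RunEngine emits

# A 21-point scan of the simulated motor; the callback analyzes the stream automatically.
RE(scan([det], motor, -5, 5, 21))
```

A feedback loop follows the same pattern inside a plan rather than a callback: bluesky's built-in adaptive_scan, for example, uses each new reading to choose the next motor position, and a custom plan can substitute any ML decision rule for that step.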