Virtual Scientific Companion for Synchrotron Beamlines: A Prototype
- URL: http://arxiv.org/abs/2312.17180v1
- Date: Thu, 28 Dec 2023 18:12:42 GMT
- Title: Virtual Scientific Companion for Synchrotron Beamlines: A Prototype
- Authors: Daniel Potemkin, Carlos Soto, Ruipeng Li, Kevin Yager, and Esther Tsai
- Abstract summary: We introduce the prototype of the virtual scientific companion (VISION) and demonstrate that it is possible to control basic beamline operations through natural language with an open-source language model and the limited computational resources at the beamline.
- Score: 1.836557889514696
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The extraordinarily high X-ray flux and specialized instrumentation at
synchrotron beamlines have enabled versatile in-situ and high throughput
studies that are impossible elsewhere. Dexterous and efficient control of
experiments is thus crucial for effective beamline operation. Artificial
intelligence and machine learning methods are constantly being developed to
enhance facility performance, but the full potential of these developments can
only be reached with efficient human-computer-interaction. Natural language is
the most intuitive and efficient way for humans to communicate. However, the
low credibility and reproducibility of existing large language models and tools
demand extensive development before they can deliver robust and reliable
performance for scientific purposes. In this work, we introduce the prototype
of the virtual scientific companion (VISION) and demonstrate that it is
possible to control basic beamline operations through natural language with an
open-source language model and the limited computational resources at the
beamline. The human-AI nature of VISION leverages the existing automation
systems and data frameworks at synchrotron beamlines.
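The abstract's core claim is that a language model can translate free-form operator requests into beamline commands. A minimal sketch of that pipeline is shown below; the intent labels, command templates, and names such as `sam.x` and `detector` are illustrative assumptions, not VISION's actual interfaces, and a simple rule-based stand-in replaces the language model.

```python
import re

# Hypothetical mapping from intents to beamline command templates.
# These names are placeholders, not the real VISION/beamline API.
COMMANDS = {
    "move_sample": "sam.x.move({value})",
    "set_exposure": "detector.set_exposure({value})",
    "take_measurement": "detector.acquire()",
}

def parse_instruction(text: str):
    """Tiny rule-based stand-in for the LLM intent classifier."""
    text = text.lower()
    number = re.search(r"[-+]?\d*\.?\d+", text)
    value = number.group() if number else None
    if "move" in text and "sample" in text:
        return "move_sample", value
    if "exposure" in text:
        return "set_exposure", value
    if "measure" in text or "scan" in text:
        return "take_measurement", None
    return None, None

def to_command(text: str):
    """Translate a natural-language request into a command string, or None."""
    intent, value = parse_instruction(text)
    if intent is None:
        return None
    return COMMANDS[intent].format(value=value if value is not None else "")
```

For example, `to_command("move the sample to 2.5 mm")` yields `"sam.x.move(2.5)"`. In the actual system an LLM would perform the intent extraction, which is what lets the approach run on the limited computational resources available at the beamline workstation.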
Related papers
- Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics [50.191655141020505]
We introduce a novel framework for learning world models.
By providing a scalable and robust framework, we pave the way for adaptive and efficient robotic systems in real-world applications.
arXiv Detail & Related papers (2025-01-17T10:39:09Z) - VISION: A Modular AI Assistant for Natural Human-Instrument Interaction at Scientific User Facilities [0.19736111241221438]
Generative AI presents an opportunity to bridge this knowledge gap.
We present a modular architecture for the Virtual Scientific Companion (VISION).
With VISION, we performed LLM-based operation on the beamline workstation with low latency and demonstrated the first voice-controlled experiment at an X-ray scattering beamline.
arXiv Detail & Related papers (2024-12-24T04:37:07Z) - Darkit: A User-Friendly Software Toolkit for Spiking Large Language Model [50.37090759139591]
Large language models (LLMs) have been widely applied in various practical applications, typically comprising billions of parameters.
The human brain, employing bio-plausible spiking mechanisms, can accomplish the same tasks while significantly reducing energy consumption.
We are releasing a software toolkit named DarwinKit (Darkit) to accelerate the adoption of brain-inspired large language models.
arXiv Detail & Related papers (2024-12-20T07:50:08Z) - MatPilot: an LLM-enabled AI Materials Scientist under the Framework of Human-Machine Collaboration [13.689620109856783]
We developed an AI materials scientist named MatPilot, which has shown encouraging abilities in the discovery of new materials.
The core strength of MatPilot is its natural language interactive human-machine collaboration.
MatPilot integrates unique cognitive abilities, extensive accumulated experience, and the ongoing curiosity of human beings.
arXiv Detail & Related papers (2024-11-10T12:23:44Z) - DiffGen: Robot Demonstration Generation via Differentiable Physics Simulation, Differentiable Rendering, and Vision-Language Model [72.66465487508556]
DiffGen is a novel framework that integrates differentiable physics simulation, differentiable rendering, and a vision-language model.
It can generate realistic robot demonstrations by minimizing the distance between the embedding of the language instruction and the embedding of the simulated observation.
Experiments demonstrate that with DiffGen, we could efficiently and effectively generate robot data with minimal human effort or training time.
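The DiffGen blurb describes an objective of minimizing the distance between the embedding of a language instruction and the embedding of the simulated observation. The toy sketch below illustrates that objective under strong simplifying assumptions: the "simulator" is the identity, the "encoders" are fixed random linear maps, and gradient descent is done by hand. None of this reflects DiffGen's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)
W_obs = rng.normal(size=(4, 2))   # stand-in observation encoder f
target = rng.normal(size=4)       # stand-in instruction embedding g(text)

def simulate(action):
    """Stand-in differentiable simulator: observation equals the action."""
    return action

def loss_and_grad(action):
    """Loss ||f(sim(a)) - g(text)||^2 and its gradient wrt the action."""
    diff = W_obs @ simulate(action) - target
    # For a linear encoder, grad of ||W a - t||^2 wrt a is 2 W^T (W a - t).
    return diff @ diff, 2.0 * W_obs.T @ diff

action = np.zeros(2)
for _ in range(500):
    _, grad = loss_and_grad(action)
    action -= 0.02 * grad  # plain gradient descent on the alignment loss
```

In the real framework, differentiable physics and differentiable rendering make the whole chain from action to observation embedding differentiable, so the same kind of gradient signal can drive demonstration generation.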
arXiv Detail & Related papers (2024-05-12T15:38:17Z) - Engineering A Large Language Model From Scratch [0.0]
Atinuke is a Transformer-based neural network that optimises performance across various language tasks.
It can emulate human-like language by extracting features and learning complex mappings.
The system achieves state-of-the-art results on natural language tasks whilst remaining interpretable and robust.
arXiv Detail & Related papers (2024-01-30T04:29:48Z) - Accelerating Reinforcement Learning of Robotic Manipulations via Feedback from Large Language Models [21.052532074815765]
We introduce the Lafite-RL (Language agent feedback interactive Reinforcement Learning) framework.
It enables RL agents to learn robotic tasks efficiently by taking advantage of Large Language Models' timely feedback.
It outperforms the baseline in terms of both learning efficiency and success rate.
arXiv Detail & Related papers (2023-11-04T11:21:38Z) - Large Language Models for Scientific Synthesis, Inference and Explanation [56.41963802804953]
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment this "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z) - Lemur: Harmonizing Natural Language and Code for Language Agents [105.43564788499901]
We introduce Lemur and Lemur-Chat, open-source language models optimized for both natural language and coding capabilities.
Our models achieve state-of-the-art averaged performance across diverse text and coding benchmarks.
The harmonization between natural and programming languages enables Lemur-Chat to significantly narrow the gap with proprietary models on agent abilities.
arXiv Detail & Related papers (2023-10-10T17:57:45Z) - What Matters in Language Conditioned Robotic Imitation Learning [26.92329260907805]
We study the most critical challenges in learning language conditioned policies from offline free-form imitation datasets.
We present a novel approach that significantly outperforms the state of the art on the challenging language conditioned long-horizon robot manipulation CALVIN benchmark.
arXiv Detail & Related papers (2022-04-13T08:45:32Z) - GenNI: Human-AI Collaboration for Data-Backed Text Generation [102.08127062293111]
Table2Text systems generate textual output based on structured data utilizing machine learning.
GenNI (Generation Negotiation Interface) is an interactive visual system for high-level human-AI collaboration in producing descriptive text.
arXiv Detail & Related papers (2021-10-19T18:07:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.