Agent-based Learning of Materials Datasets from Scientific Literature
- URL: http://arxiv.org/abs/2312.11690v1
- Date: Mon, 18 Dec 2023 20:29:58 GMT
- Title: Agent-based Learning of Materials Datasets from Scientific Literature
- Authors: Mehrad Ansari and Seyed Mohamad Moosavi
- Abstract summary: We develop a chemist AI agent, powered by large language models (LLMs), to create structured datasets from natural language text.
Our chemist AI agent, Eunomia, can plan and execute actions by leveraging the existing knowledge from decades of scientific research articles.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advancements in machine learning and artificial intelligence are transforming
materials discovery. Yet, the availability of structured experimental data
remains a bottleneck. The vast corpus of scientific literature presents a
valuable and rich resource of such data. However, manual dataset creation from
these resources is challenging due to issues in maintaining quality and
consistency, scalability limitations, and the risk of human error and bias.
Therefore, in this work, we develop a chemist AI agent, powered by large
language models (LLMs), to overcome these challenges by autonomously creating
structured datasets from natural language text, ranging from sentences and
paragraphs to extensive scientific research articles. Our chemist AI agent,
Eunomia, can plan and execute actions by leveraging the existing knowledge from
decades of scientific research articles, scientists, the Internet and other
tools altogether. We benchmark the performance of our approach in three
different information extraction tasks with various levels of complexity,
including solid-state impurity doping, metal-organic framework (MOF) chemical
formula, and property relations. Our results demonstrate that our zero-shot
agent, with the appropriate tools, is capable of attaining performance that is
either superior or comparable to the state-of-the-art fine-tuned materials
information extraction methods. This approach simplifies the compilation of
machine-learning-ready datasets for various materials discovery applications
and significantly eases access to advanced natural language processing tools
for novice users. The methodology developed in this work is released as
open-source software at https://github.com/AI4ChemS/Eunomia.
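The abstract describes the agent only at a high level; purely as a hypothetical sketch (not the actual Eunomia implementation from https://github.com/AI4ChemS/Eunomia), an LLM-driven extraction step for the solid-state doping task might look like the following. The function name, prompt wording, and JSON schema are invented for illustration, and any chat-completion backend can stand in for the `llm` callable.

```python
import json
from typing import Callable, Dict, List


def extract_doping_records(paragraph: str, llm: Callable[[str], str]) -> List[Dict]:
    """Ask an LLM to turn a text passage into structured host/dopant records.

    `llm` is any callable mapping a prompt string to the model's text reply,
    e.g. a thin wrapper around a chat-completion endpoint.
    """
    prompt = (
        "Extract every solid-state doping statement from the passage below.\n"
        "Return a JSON list of objects with keys 'host', 'dopant', and "
        "'evidence' (the supporting sentence). Return [] if none are present.\n\n"
        f"Passage:\n{paragraph}"
    )
    reply = llm(prompt)
    try:
        records = json.loads(reply)
    except json.JSONDecodeError:
        return []  # malformed reply: treat as no extraction
    if not isinstance(records, list):
        return []
    # Keep only well-formed records that name both a host material and a dopant.
    return [r for r in records if isinstance(r, dict) and {"host", "dopant"} <= r.keys()]


if __name__ == "__main__":
    # Toy stand-in for a real model so the sketch runs as written.
    def fake_llm(prompt: str) -> str:
        return '[{"host": "TiO2", "dopant": "N", "evidence": "N-doped TiO2 films ..."}]'

    text = "Nitrogen-doped TiO2 films showed enhanced visible-light absorption."
    print(extract_doping_records(text, fake_llm))
```

In the paper's setting, the agent additionally plans and executes multi-step actions with external tools (literature, the Internet, chemistry resources) rather than issuing a single prompt as above.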
Related papers
- Probing the limitations of multimodal language models for chemistry and materials research [3.422786943576035]
We introduce MaCBench, a benchmark for evaluating how vision-language models handle real-world chemistry and materials science tasks.
We find that while these systems show promising capabilities in basic perception tasks, they exhibit fundamental limitations in spatial reasoning, cross-modal information synthesis, and logical inference.
Our insights have important implications beyond chemistry and materials science, suggesting that developing reliable multimodal AI scientific assistants may require advances in curating suitable training data and approaches to training those models.
arXiv Detail & Related papers (2024-11-25T21:51:45Z)
- Human-artificial intelligence teaming for scientific information extraction from data-driven additive manufacturing research using large language models [3.0061386772253784]
Data-driven research in Additive Manufacturing (AM) has gained significant success in recent years.
This has led to a plethora of scientific literature emerging.
It requires substantial effort and time to extract scientific information from these works.
We propose a framework that enables collaboration between AM and AI experts to continuously extract scientific information from data-driven AM literature.
arXiv Detail & Related papers (2024-07-26T15:43:52Z)
- From Text to Insight: Large Language Models for Materials Science Data Extraction [4.08853418443192]
The vast majority of materials science knowledge exists in unstructured natural language.
Structured data is crucial for innovative and systematic materials design.
The advent of large language models (LLMs) represents a significant shift.
arXiv Detail & Related papers (2024-07-23T22:23:47Z)
- MMSci: A Dataset for Graduate-Level Multi-Discipline Multimodal Scientific Understanding [59.41495657570397]
This dataset includes figures such as schematic diagrams, simulated images, macroscopic/microscopic photos, and experimental visualizations.
We developed benchmarks for scientific figure captioning and multiple-choice questions, evaluating six proprietary and over ten open-source models.
The dataset and benchmarks will be released to support further research.
arXiv Detail & Related papers (2024-07-06T00:40:53Z)
- Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z)
- EndToEndML: An Open-Source End-to-End Pipeline for Machine Learning Applications [0.2826977330147589]
We propose a web-based end-to-end pipeline that is capable of preprocessing, training, evaluating, and visualizing machine learning models.
Our library assists in recognizing, classifying, clustering, and predicting a wide range of multi-modal, multi-sensor datasets.
arXiv Detail & Related papers (2024-03-27T02:24:38Z)
- An Autonomous Large Language Model Agent for Chemical Literature Data Mining [60.85177362167166]
We introduce an end-to-end AI agent framework capable of high-fidelity extraction from extensive chemical literature.
Our framework's efficacy is evaluated using accuracy, recall, and F1 score of reaction condition data.
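Precision-style metrics like these are standard for extraction evaluation; purely as a reminder of how they are computed (not this paper's evaluation code), here is a minimal exact-match sketch over hypothetical reaction-condition records:

```python
def precision_recall_f1(predicted: set, gold: set) -> tuple:
    """Exact-match precision, recall, and F1 between predicted and gold record sets."""
    tp = len(predicted & gold)  # records that were extracted and are correct
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1


# Hypothetical reaction-condition tuples: (reaction_id, field, value)
gold = {("rxn1", "temperature", "80 C"), ("rxn1", "solvent", "DMF")}
pred = {("rxn1", "temperature", "80 C"), ("rxn1", "solvent", "toluene")}
print(precision_recall_f1(pred, gold))  # (0.5, 0.5, 0.5)
```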
arXiv Detail & Related papers (2024-02-20T13:21:46Z)
- Text2Data: Low-Resource Data Generation with Textual Control [104.38011760992637]
Natural language serves as a common and straightforward control signal for humans to interact seamlessly with machines.
We propose Text2Data, a novel approach that utilizes unlabeled data to understand the underlying data distribution through an unsupervised diffusion model.
It undergoes controllable finetuning via a novel constraint optimization-based learning objective that ensures controllability and effectively counteracts catastrophic forgetting.
arXiv Detail & Related papers (2024-02-08T03:41:39Z)
- Large Language Models as Master Key: Unlocking the Secrets of Materials Science with GPT [9.33544942080883]
This article presents a new natural language processing (NLP) task called structured information inference (SII) to address the complexities of information extraction at the device level in materials science.
We accomplished this task by tuning GPT-3 on an existing perovskite solar cell FAIR dataset with 91.8% F1-score and extended the dataset with data published since its release.
We also designed experiments to predict the electrical performance of solar cells and to design materials or devices with targeted parameters using large language models (LLMs).
arXiv Detail & Related papers (2023-04-05T04:01:52Z)
- What Matters in Learning from Offline Human Demonstrations for Robot Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z)
- Machine Learning in Nano-Scale Biomedical Engineering [77.75587007080894]
We review the existing research regarding the use of machine learning in nano-scale biomedical engineering.
The main challenges that can be formulated as ML problems are classified into three main categories.
For each of the presented methodologies, special emphasis is given to its principles, applications, and limitations.
arXiv Detail & Related papers (2020-08-05T15:45:54Z)