ChemCrow: Augmenting large-language models with chemistry tools
- URL: http://arxiv.org/abs/2304.05376v5
- Date: Mon, 2 Oct 2023 17:03:01 GMT
- Title: ChemCrow: Augmenting large-language models with chemistry tools
- Authors: Andres M Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D
White, Philippe Schwaller
- Abstract summary: Large-language models (LLMs) have shown strong performance in tasks across domains, but struggle with chemistry-related problems.
In this study, we introduce ChemCrow, an LLM chemistry agent designed to accomplish tasks across organic synthesis, drug discovery, and materials design.
Our agent autonomously planned and executed the syntheses of an insect repellent, three organocatalysts, and guided the discovery of a novel chromophore.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last decades, excellent computational chemistry tools have been
developed. Integrating them into a single platform with enhanced accessibility
could help reach their full potential by overcoming steep learning curves.
Recently, large-language models (LLMs) have shown strong performance in tasks
across domains, but struggle with chemistry-related problems. Moreover, these
models lack access to external knowledge sources, limiting their usefulness in
scientific applications. In this study, we introduce ChemCrow, an LLM chemistry
agent designed to accomplish tasks across organic synthesis, drug discovery,
and materials design. By integrating 18 expert-designed tools, ChemCrow
augments the LLM performance in chemistry, and new capabilities emerge. Our
agent autonomously planned and executed the syntheses of an insect repellent,
three organocatalysts, and guided the discovery of a novel chromophore. Our
evaluation, including both LLM and expert assessments, demonstrates ChemCrow's
effectiveness in automating a diverse set of chemical tasks. Surprisingly, we
find that GPT-4 as an evaluator cannot distinguish between clearly wrong GPT-4
completions and ChemCrow's performance. Our work not only aids expert chemists
and lowers barriers for non-experts, but also fosters scientific advancement by
bridging the gap between experimental and computational chemistry.
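The core loop behind such a tool-augmented agent can be summarized as follows: the LLM reasons about the task, selects one of the registered tools, receives the tool's output as an observation, and repeats until it produces a final answer. The Python sketch below is a minimal illustration of that pattern only; the tool names (MolecularWeight, SynthesisPlanner) and the stubbed LLM are hypothetical placeholders, not ChemCrow's actual 18-tool implementation.

```python
# Minimal sketch of a tool-augmented agent loop in the spirit of ChemCrow.
# The tools and the stub LLM are illustrative assumptions, not the paper's code.

from typing import Callable, Dict

def molecular_weight(smiles: str) -> str:
    """Hypothetical tool: look up a molecular weight for a SMILES string."""
    return f"Molecular weight lookup for {smiles} (stubbed)."

def synthesis_planner(target: str) -> str:
    """Hypothetical tool: propose a retrosynthetic route for a target."""
    return f"Proposed route for {target} (stubbed)."

TOOLS: Dict[str, Callable[[str], str]] = {
    "MolecularWeight": molecular_weight,
    "SynthesisPlanner": synthesis_planner,
}

def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real agent would query a chat model here."""
    if "Observation:" not in prompt:
        return "Action: SynthesisPlanner\nAction Input: DEET"
    return "Final Answer: Route planned using the tool observation above."

def run_agent(task: str, llm: Callable[[str], str], max_steps: int = 5) -> str:
    """ReAct-style loop: the LLM picks a tool, we run it, feed back the result."""
    prompt = f"Task: {task}\nAvailable tools: {', '.join(TOOLS)}\n"
    for _ in range(max_steps):
        reply = llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        action = reply.split("Action:")[1].split("\n")[0].strip()
        arg = reply.split("Action Input:")[1].strip()
        observation = TOOLS[action](arg)  # execute the chosen tool
        prompt += f"{reply}\nObservation: {observation}\n"
    return "Stopped: step limit reached."

if __name__ == "__main__":
    print(run_agent("Plan a synthesis of the insect repellent DEET", fake_llm))
```

In the paper's setting, the stubbed LLM call would be replaced by GPT-4 and the two placeholder tools by the 18 expert-designed chemistry tools described in the abstract.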
Related papers
- ChemEval: A Comprehensive Multi-Level Chemical Evaluation for Large Language Models [62.37850540570268]
Existing benchmarks in this domain fail to adequately meet the specific requirements of chemical research professionals.
ChemEval identifies 4 crucial progressive levels in chemistry, assessing 12 dimensions of LLMs across 42 distinct chemical tasks.
Results show that while general LLMs excel in literature understanding and instruction following, they fall short in tasks demanding advanced chemical knowledge.
arXiv Detail & Related papers (2024-09-21T02:50:43Z)
- BatGPT-Chem: A Foundation Large Model For Retrosynthesis Prediction [65.93303145891628]
BatGPT-Chem is a large language model with 15 billion parameters, tailored for enhanced retrosynthesis prediction.
Our model captures a broad spectrum of chemical knowledge, enabling precise prediction of reaction conditions.
This development empowers chemists to adeptly address novel compounds, potentially expediting the innovation cycle in drug manufacturing and materials science.
arXiv Detail & Related papers (2024-08-19T05:17:40Z)
- ChemVLM: Exploring the Power of Multimodal Large Language Models in Chemistry Area [50.15254966969718]
We introduce ChemVLM, an open-source chemical multimodal large language model for chemical applications.
ChemVLM is trained on a carefully curated bilingual dataset that enhances its ability to understand both textual and visual chemical information.
We benchmark ChemVLM against a range of open-source and proprietary multimodal large language models on various tasks.
arXiv Detail & Related papers (2024-08-14T01:16:40Z)
- A Review of Large Language Models and Autonomous Agents in Chemistry [0.7184549921674758]
Large language models (LLMs) have emerged as powerful tools in chemistry.
This review highlights LLM capabilities in chemistry and their potential to accelerate scientific discovery through automation.
As agents are an emerging topic, we extend the scope of our review of agents beyond chemistry.
arXiv Detail & Related papers (2024-06-26T17:33:21Z)
- Are large language models superhuman chemists? [4.87961182129702]
Large language models (LLMs) have gained widespread interest due to their ability to process human language and perform tasks on which they have not been explicitly trained.
Here, we introduce "ChemBench," an automated framework for evaluating the chemical knowledge and reasoning abilities of state-of-the-art LLMs.
We curated more than 2,700 question-answer pairs, evaluated leading open- and closed-source LLMs, and found that the best models outperformed the best human chemists.
arXiv Detail & Related papers (2024-04-01T20:56:25Z)
- LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset [13.063678216852473]
We show that large language models (LLMs) can achieve very strong results on a comprehensive set of chemistry tasks.
We propose SMolInstruct, a large-scale, comprehensive, and high-quality dataset for instruction tuning.
Using SMolInstruct, we fine-tune a set of open-source LLMs, among which we find that Mistral serves as the best base model for chemistry tasks.
arXiv Detail & Related papers (2024-02-14T18:42:25Z)
- ChemLLM: A Chemical Large Language Model [49.308528569982805]
Large language models (LLMs) have made impressive progress in chemistry applications.
However, the community lacks an LLM specifically designed for chemistry.
Here, we introduce ChemLLM, a comprehensive framework that features the first LLM dedicated to chemistry.
arXiv Detail & Related papers (2024-02-10T01:11:59Z)
- Structured Chemistry Reasoning with Large Language Models [70.13959639460015]
Large Language Models (LLMs) excel in diverse areas, yet struggle with complex scientific reasoning, especially in chemistry.
We introduce StructChem, a simple yet effective prompting strategy that offers the desired guidance and substantially boosts the LLMs' chemical reasoning capability.
In tests across four chemistry areas (quantum chemistry, mechanics, physical chemistry, and kinetics), StructChem substantially enhances GPT-4's performance, with up to a 30% peak improvement.
arXiv Detail & Related papers (2023-11-16T08:20:36Z)
- Chemist-X: Large Language Model-empowered Agent for Reaction Condition Recommendation in Chemical Synthesis [57.70772230913099]
Chemist-X automates the reaction condition recommendation (RCR) task in chemical synthesis with retrieval-augmented generation (RAG) technology.
Chemist-X interrogates online molecular databases and distills critical data from the latest literature database.
Chemist-X considerably reduces chemists' workload and allows them to focus on more fundamental and creative problems.
arXiv Detail & Related papers (2023-11-16T01:21:33Z)