14 Examples of How LLMs Can Transform Materials Science and Chemistry: A
Reflection on a Large Language Model Hackathon
- URL: http://arxiv.org/abs/2306.06283v4
- Date: Fri, 14 Jul 2023 13:24:43 GMT
- Title: 14 Examples of How LLMs Can Transform Materials Science and Chemistry: A
Reflection on a Large Language Model Hackathon
- Authors: Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti
Badhwar, Joshua D. Bocarsly, Andres M Bran, Stefan Bringuier, L. Catherine
Brinson, Kamal Choudhary, Defne Circi, Sam Cox, Wibe A. de Jong, Matthew L.
Evans, Nicolas Gastellu, Jerome Genzling, María Victoria Gil, Ankur K.
Gupta, Zhi Hong, Alishba Imran, Sabine Kruschwitz, Anne Labarre, Jakub
Lála, Tao Liu, Steven Ma, Sauradeep Majumdar, Garrett W. Merz, Nicolas
Moitessier, Elias Moubarak, Beatriz Mouriño, Brenden Pelkie, Michael
Pieler, Mayk Caldas Ramos, Bojana Ranković, Samuel G. Rodriques, Jacob N.
Sanders, Philippe Schwaller, Marcus Schwarting, Jiale Shi, Berend Smit, Ben
E. Smith, Joren Van Herck, Christoph Völker, Logan Ward, Sean Warren,
Benjamin Weiser, Sylvester Zhang, Xiaoqi Zhang, Ghezal Ahmad Zia, Aristana
Scourtas, KJ Schmidt, Ian Foster, Andrew D. White, Ben Blaiszik
- Abstract summary: Large language models (LLMs) could be useful in chemistry and materials science.
To explore these possibilities, we organized a hackathon.
This article chronicles the projects built as part of the hackathon.
- Score: 30.978561315637307
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) such as GPT-4 have caught the interest
of many scientists. Recent studies suggest that these models could be useful in
chemistry and materials science. To explore these possibilities, we organized a
hackathon.
This article chronicles the projects built as part of this hackathon.
Participants employed LLMs for various applications, including predicting
properties of molecules and materials, designing novel interfaces for tools,
extracting knowledge from unstructured data, and developing new educational
applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
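To make one of these application patterns concrete, the sketch below shows how a hackathon-style prototype might prompt an LLM to extract structured synthesis data from free text. This is a minimal illustration, not code from any hackathon project: it assumes the `openai` Python package (v1+), an `OPENAI_API_KEY` environment variable, and a model that answers with JSON only; the schema and the helper `extract_synthesis_record` are hypothetical.

```python
# Minimal sketch: LLM-based extraction of structured data from an
# unstructured synthesis description. Assumes the `openai` package
# (>= 1.0) and an OPENAI_API_KEY environment variable; the schema
# and prompt wording are illustrative, not any project's own code.
import json

from openai import OpenAI

PROMPT_TEMPLATE = """Extract the following fields from the synthesis
description below and answer with JSON only:
  "precursors": list of chemical formulas
  "temperature_C": number or null
  "duration_h": number or null

Description: {text}
"""

def extract_synthesis_record(text: str, model: str = "gpt-4") -> dict:
    """Ask the model for a JSON record and parse its reply."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(text=text)}],
        temperature=0,  # deterministic replies suit extraction tasks
    )
    # Assumes the model complied with the "JSON only" instruction.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    record = extract_synthesis_record(
        "ZnO nanoparticles were obtained by calcining zinc acetate "
        "at 450 C for 2 h."
    )
    print(record)
```

Setting the temperature to zero keeps the extraction as reproducible as the API allows, which matters more here than creative variation.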
Related papers
- Reflections from the 2024 Large Language Model (LLM) Hackathon for Applications in Materials Science and Chemistry [68.72590517877455]
We present the outcomes from the second Large Language Model (LLM) Hackathon for Applications in Materials Science and Chemistry.
The event engaged participants across global hybrid locations, resulting in 34 team submissions.
The submissions spanned seven key application areas and demonstrated the diverse utility of LLMs for applications in materials science and chemistry.
arXiv Detail & Related papers (2024-11-20T23:08:01Z)
- ChemEval: A Comprehensive Multi-Level Chemical Evaluation for Large Language Models [62.37850540570268]
Existing benchmarks in this domain fail to adequately meet the specific requirements of chemical research professionals.
ChemEval identifies 4 crucial progressive levels in chemistry, assessing 12 dimensions of LLMs across 42 distinct chemical tasks.
Results show that while general LLMs excel in literature understanding and instruction following, they fall short in tasks demanding advanced chemical knowledge.
arXiv Detail & Related papers (2024-09-21T02:50:43Z)
- HoneyComb: A Flexible LLM-Based Agent System for Materials Science [31.173615509567885]
HoneyComb is the first large language model system specifically designed for materials science.
MatSciKB is a curated, structured knowledge collection based on reliable literature.
ToolHub employs an Inductive Tool Construction method to generate, decompose, and refine API tools for materials science.
arXiv Detail & Related papers (2024-08-29T15:38:40Z)
- Many-Shot In-Context Learning for Molecular Inverse Design [56.65345962071059]
Large Language Models (LLMs) have demonstrated great performance in few-shot In-Context Learning (ICL).
We develop a new semi-supervised learning method that overcomes the lack of experimental data available for many-shot ICL.
As we show, the new method greatly improves upon existing ICL methods for molecular design while being accessible and easy to use for scientists (a minimal sketch of the underlying prompt-construction idea appears after this list).
arXiv Detail & Related papers (2024-07-26T21:10:50Z)
- A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery [68.48094108571432]
Large language models (LLMs) have revolutionized the way text and other modalities of data are handled.
We aim to provide a more holistic view of the research landscape by unveiling cross-field and cross-modal connections between scientific LLMs.
arXiv Detail & Related papers (2024-06-16T08:03:24Z)
- Materials science in the era of large language models: a perspective [0.0]
Large Language Models (LLMs) have garnered considerable interest due to their impressive capabilities.
This paper argues that their ability to handle ambiguous requirements across a range of tasks and disciplines means they could be a powerful tool to aid researchers.
arXiv Detail & Related papers (2024-03-11T17:34:25Z)
- Are LLMs Ready for Real-World Materials Discovery? [10.87312197950899]
Large Language Models (LLMs) create exciting possibilities for powerful language processing tools to accelerate research in materials science.
While LLMs have great potential to accelerate materials understanding and discovery, they currently fall short in being practical materials science tools.
We show relevant failure cases that reveal current limitations of LLMs in comprehending and reasoning over complex, interconnected materials science knowledge.
arXiv Detail & Related papers (2024-02-07T19:10:36Z)
- Scientific Large Language Models: A Survey on Biological & Chemical Domains [47.97810890521825]
Large Language Models (LLMs) have emerged as a transformative power in enhancing natural language comprehension.
The application of LLMs extends beyond conventional linguistic boundaries, encompassing specialized linguistic systems developed within various scientific disciplines.
As a burgeoning area in the community of AI for Science, scientific LLMs warrant comprehensive exploration.
arXiv Detail & Related papers (2024-01-26T05:33:34Z)
- Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective [53.300288393173204]
Large Language Models (LLMs) have shown remarkable performance in various cross-modal tasks.
In this work, we propose MolReGPT, an In-Context Few-Shot Molecule Learning paradigm for molecule-caption translation.
We evaluate the effectiveness of MolReGPT on molecule-caption translation, including molecule understanding and text-based molecule generation.
arXiv Detail & Related papers (2023-06-11T08:16:25Z)
- What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks [41.9830989458936]
Large Language Models (LLMs) with strong natural language processing abilities have emerged.
We aim to evaluate the capabilities of LLMs in a wide range of tasks across the chemistry domain.
arXiv Detail & Related papers (2023-05-27T14:17:33Z)
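Several of the papers above (many-shot ICL for molecular inverse design, MolReGPT's few-shot molecule learning) rely on the same basic mechanic: labelled examples are serialized directly into the prompt. The sketch below shows how such a many-shot prompt for property-conditioned molecule generation could be assembled; the data, property values, and prompt wording are hypothetical and are not taken from any of the cited papers.

```python
# Minimal sketch of many-shot in-context learning for molecular inverse
# design: labelled (SMILES, property) examples are serialized into the
# prompt, and the model is asked to propose a molecule with a target
# property value. All data and wording here are illustrative only.

EXAMPLES = [  # (SMILES, property value) pairs; values are made up
    ("CCO", 0.31),
    ("c1ccccc1", 0.72),
    ("CC(=O)O", 0.18),
]

def build_inverse_design_prompt(examples, target: float) -> str:
    """Serialize labelled examples into a many-shot prompt string."""
    lines = ["Each line pairs a molecule (SMILES) with a property value."]
    lines += [f"SMILES: {smi}  property: {y:.2f}" for smi, y in examples]
    lines.append(
        f"Propose a new SMILES string whose property is close to {target:.2f}. "
        "Answer with the SMILES string only."
    )
    return "\n".join(lines)

# The resulting string can be sent to any chat-completion endpoint,
# as in the extraction sketch earlier on this page.
print(build_inverse_design_prompt(EXAMPLES, target=0.50))
```

In the many-shot setting the example list simply grows to hundreds or thousands of pairs, limited only by the model's context window; the prompt-construction code stays the same.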
This list is automatically generated from the titles and abstracts of the papers on this site.