Leveraging Chemistry Foundation Models to Facilitate Structure Focused Retrieval Augmented Generation in Multi-Agent Workflows for Catalyst and Materials Design
- URL: http://arxiv.org/abs/2408.11793v2
- Date: Fri, 13 Dec 2024 01:11:25 GMT
- Title: Leveraging Chemistry Foundation Models to Facilitate Structure Focused Retrieval Augmented Generation in Multi-Agent Workflows for Catalyst and Materials Design
- Authors: Nathaniel H. Park, Tiffany J. Callahan, James L. Hedrick, Tim Erdmann, Sara Capponi
- Abstract summary: We show that chemistry foundation models can serve as a basis for enabling structure-focused, semantic chemistry information retrieval.
We also show that chemistry foundation models, used in conjunction with multi-modal models such as OpenCLIP, enable information retrieval across multiple characterization data domains.
- Abstract: Molecular property prediction and generative design via deep learning models have been the subject of intense research given their potential to accelerate the development of new, high-performance materials. More recently, these workflows have been significantly augmented by the advent of large language models (LLMs) and systems of autonomous agents capable of utilizing pre-trained models to make predictions in the context of more complex research tasks. While effective, agentic systems still leave substantial room for improvement in the retrieval of salient information for materials design tasks. Within this context, alternative uses of predictive deep learning models, such as leveraging their latent representations to facilitate cross-modal retrieval augmented generation within agentic systems for task-specific materials design, have remained unexplored. Herein, we demonstrate that large, pre-trained chemistry foundation models can serve as a basis for structure-focused, semantic chemistry information retrieval across small molecules, complex polymeric materials, and reactions. Additionally, we show that chemistry foundation models used in conjunction with multi-modal models such as OpenCLIP facilitate unprecedented queries and information retrieval across multiple characterization-data domains. Finally, we demonstrate the integration of these models within multi-agent systems to facilitate structure- and topology-based natural language queries and information retrieval for different research tasks.
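As a rough illustration of the retrieval step the abstract describes, the sketch below embeds a small SMILES library with a stand-in encoder and retrieves nearest neighbors by cosine similarity in the latent space. The `embed_smiles` placeholder, the tiny molecule library, and the 256-dimensional vectors are illustrative assumptions only; the paper's actual foundation models and datasets are not reproduced here.

```python
import numpy as np

def embed_smiles(smiles_list):
    """Placeholder for a pre-trained chemistry foundation model encoder.
    In practice this would run a transformer over SMILES strings; here each
    SMILES is hashed to a deterministic random vector so the sketch runs."""
    vecs = []
    for smi in smiles_list:
        seed = int.from_bytes(smi.encode("utf-8"), "little") % (2**32)
        vecs.append(np.random.default_rng(seed).normal(size=256))
    return np.stack(vecs)

# Small reference library an agent might search over (illustrative only).
library = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "Oc1ccccc1"]
library_vecs = embed_smiles(library)
library_vecs /= np.linalg.norm(library_vecs, axis=1, keepdims=True)

def retrieve(query_smiles, k=2):
    """Return the k library molecules closest to the query in latent space."""
    q = embed_smiles([query_smiles])[0]
    q /= np.linalg.norm(q)
    scores = library_vecs @ q          # cosine similarity against the library
    top = np.argsort(-scores)[:k]
    return [(library[i], float(scores[i])) for i in top]

if __name__ == "__main__":
    # An agent tool would call retrieve() and pass the hits, with any linked
    # property or characterization records, into the LLM context.
    print(retrieve("Oc1ccccc1C"))
```

In the multi-agent workflows described in the abstract, a tool-calling agent would invoke such a retriever and inject the returned structures and their linked characterization data into the LLM context; the cross-modal variant would swap the SMILES encoder for an image/text model such as OpenCLIP over spectra or micrograph data.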
Related papers
- A Survey of Model Architectures in Information Retrieval [64.75808744228067]
We focus on two key aspects: backbone models for feature extraction and end-to-end system architectures for relevance estimation.
We trace the development from traditional term-based methods to modern neural approaches, particularly highlighting the impact of transformer-based models and subsequent large language models (LLMs)
We conclude by discussing emerging challenges and future directions, including architectural optimizations for performance and scalability, handling of multimodal, multilingual data, and adaptation to novel application domains beyond traditional search paradigms.
arXiv Detail & Related papers (2025-02-20T18:42:58Z) - GUI Agents with Foundation Models: A Comprehensive Survey [91.97447457550703]
This survey consolidates recent research on (M)LLM-based GUI agents.
We identify key challenges and propose future research directions.
We hope this survey will inspire further advancements in the field of (M)LLM-based GUI agents.
arXiv Detail & Related papers (2024-11-07T17:28:10Z) - Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various ML domains under a consistent notation that is missing from the current literature.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z) - AtomAgents: Alloy design and discovery through physics-aware multi-modal multi-agent artificial intelligence [0.0]
The proposed physics-aware generative AI platform, AtomAgents, synergizes the intelligence of large language models (LLMs) with multi-modal, multi-agent collaboration.
Our results enable accurate prediction of key characteristics across alloys and highlight the crucial role of solid solution alloying to steer the development of advanced metallic alloys.
arXiv Detail & Related papers (2024-07-13T22:46:02Z) - MeLM, a generative pretrained language modeling framework that solves forward and inverse mechanics problems [0.0]
We report a flexible multi-modal mechanics language model, MeLM, applied to solve various nonlinear forward and inverse problems.
The framework is applied to various examples including bio-inspired hierarchical honeycomb design and carbon nanotube mechanics.
arXiv Detail & Related papers (2023-06-30T10:28:20Z) - Bidirectional Generation of Structure and Properties Through a Single Molecular Foundation Model [44.60174246341653]
We present a novel multimodal molecular pre-trained model that incorporates the modalities of structure and biochemical properties.
Our proposed model pipeline of data handling and training objectives aligns the structure/property features in a common embedding space.
These contributions yield synergistic knowledge, allowing us to tackle both multimodal and unimodal downstream tasks through a single model.
arXiv Detail & Related papers (2022-11-19T05:16:08Z) - Artificial intelligence approaches for materials-by-design of energetic materials: state-of-the-art, challenges, and future directions [0.0]
We review advances in AI-driven materials-by-design and their applications to energetic materials.
We evaluate methods in the literature in terms of their capacity to learn from a small/limited number of data.
We suggest a few promising future research directions for EM materials-by-design, such as meta-learning, active learning, Bayesian learning, and semi-/weakly-supervised learning.
arXiv Detail & Related papers (2022-11-15T14:41:11Z) - Retrieval-based Controllable Molecule Generation [63.44583084888342]
We propose a new retrieval-based framework for controllable molecule generation.
We use a small set of molecules to steer the pre-trained generative model towards synthesizing molecules that satisfy the given design criteria.
Our approach is agnostic to the choice of generative models and requires no task-specific fine-tuning.
arXiv Detail & Related papers (2022-08-23T17:01:16Z) - Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks [53.09649785009528]
In this paper, we explore a paradigm that does not require training to obtain new models.
Similar to how CNNs were inspired by receptive fields in the biological visual system, we propose Model Disassembling and Assembling.
For model assembling, we present the alignment padding strategy and parameter scaling strategy to construct a new model tailored for a specific task.
arXiv Detail & Related papers (2022-03-25T05:27:28Z) - Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z) - Multimodal Graph-based Transformer Framework for Biomedical Relation Extraction [21.858440542249934]
We introduce a novel framework that enables the model to learn multi-omics biological information about entities (proteins) with the help of additional multi-modal cues like molecular structure.
We evaluate our proposed method on the Protein-Protein Interaction task from a biomedical corpus.
arXiv Detail & Related papers (2021-07-01T16:37:17Z)