Generative retrieval-augmented ontologic graph and multi-agent
strategies for interpretive large language model-based materials design
- URL: http://arxiv.org/abs/2310.19998v1
- Date: Mon, 30 Oct 2023 20:31:50 GMT
- Title: Generative retrieval-augmented ontologic graph and multi-agent
strategies for interpretive large language model-based materials design
- Authors: Markus J. Buehler
- Abstract summary: Transformer neural networks show promising capabilities, in particular for uses in materials analysis, design and manufacturing.
Here we explore the use of large language models (LLMs) as a tool that can support engineering analysis of materials.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Transformer neural networks show promising capabilities, in particular for
uses in materials analysis, design and manufacturing, including their capacity to work effectively with human language, symbols, code, and numerical
data. Here we explore the use of large language models (LLMs) as a tool that
can support engineering analysis of materials, applied to retrieving key
information about subject areas, developing research hypotheses, discovery of
mechanistic relationships across disparate areas of knowledge, and writing and
executing simulation codes for active knowledge generation based on physical
ground truths. When used as sets of AI agents with specific features,
capabilities, and instructions, LLMs can provide powerful problem solution
strategies for applications in analysis and design problems. Our experiments
focus on using a fine-tuned model, MechGPT, developed based on training data in
the mechanics of materials domain. We first show that fine-tuning endows LLMs with a reasonable understanding of domain knowledge. However, when queried outside the context of the learned material, LLMs can have difficulty recalling correct information. We show how this can be addressed using
retrieval-augmented Ontological Knowledge Graph strategies that discern how the
model understands what concepts are important and how they are related.
Illustrated for a use case of relating distinct areas of knowledge (here, music and proteins), such strategies can also provide an interpretable graph
structure with rich information at the node, edge and subgraph level. We
discuss nonlinear sampling strategies and agent-based modeling applied to
complex question answering, code generation and execution in the context of
automated force field development from actively learned Density Functional
Theory (DFT) modeling, and data analysis.
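
The abstract above describes the retrieval-augmented Ontological Knowledge Graph idea only in words; the snippet below is a minimal sketch of how such a pipeline could be wired together. It is not the paper's implementation: the `llm` callable stands in for a fine-tuned model such as MechGPT, the token-overlap retriever stands in for a proper vector store, and the prompt wording and pipe-separated triple format are assumptions made for this example.

```python
from typing import Callable, List, Tuple

import networkx as nx  # assumed dependency, used only as the graph container


def retrieve(query: str, corpus: List[str], k: int = 3) -> List[str]:
    """Rank passages by token overlap with the query (a stand-in for a vector-store retriever)."""
    q_tokens = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(q_tokens & set(p.lower().split())))
    return ranked[:k]


def extract_triples(llm: Callable[[str], str], passage: str) -> List[Tuple[str, str, str]]:
    """Ask the LLM for one 'subject | relation | object' triple per line and parse the reply."""
    prompt = (
        "Extract the key concept relationships from the text below as triples, "
        "one per line, formatted as: subject | relation | object\n\n" + passage
    )
    triples = []
    for line in llm(prompt).splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            triples.append((parts[0], parts[1], parts[2]))
    return triples


def build_graph(llm: Callable[[str], str], query: str, corpus: List[str]) -> nx.DiGraph:
    """Retrieval-augmented construction: only retrieved passages feed the extraction step,
    so every edge stays grounded in a source passage."""
    graph = nx.DiGraph()
    for passage in retrieve(query, corpus):
        for subj, rel, obj in extract_triples(llm, passage):
            graph.add_edge(subj, obj, relation=rel, source=passage[:80])
    return graph
```

Node-, edge-, and subgraph-level analyses (for example, centrality or community detection on the networkx graph) can then be applied to the result, in the spirit of the interpretable graph structure the abstract describes.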
Related papers
- GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework that integrates parametric and non-parametric memories.
Our method facilitates a more logical and step-wise reasoning approach akin to experts' problem-solving, rather than gold answer retrieval.
arXiv Detail & Related papers (2024-10-11T03:05:06Z)
- Data Analysis in the Era of Generative AI
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of the data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Knowledge Tagging with Large Language Model based Multi-Agent System
This paper investigates the use of a multi-agent system to address the limitations of previous algorithms.
We highlight the significant potential of an LLM-based multi-agent system in overcoming the challenges that previous methods have encountered.
arXiv Detail & Related papers (2024-09-12T21:39:01Z)
- TopoChat: Enhancing Topological Materials Retrieval With Large Language Model and Multi-Source Knowledge
Large language models (LLMs) have demonstrated impressive performance in the text generation task.
We develop a specialized dialogue system for topological materials called TopoChat.
TopoChat exhibits superior performance in structural and property querying, material recommendation, and complex relational reasoning.
arXiv Detail & Related papers (2024-09-10T06:01:16Z)
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show strong zero- and few-shot performance on math-question knowledge tagging tasks.
By proposing a reinforcement learning-based demonstration retriever, we exploit the potential of different-sized LLMs.
arXiv Detail & Related papers (2024-06-19T23:30:01Z)
- Towards Next-Generation Urban Decision Support Systems through AI-Powered Construction of Scientific Ontology using Large Language Models -- A Case in Optimizing Intermodal Freight Transportation
This study investigates the potential of leveraging pre-trained Large Language Models (LLMs) to construct scientific ontologies.
By adopting ChatGPT API as the reasoning core, we outline an integrated workflow that encompasses natural language processing, methontology-based prompt tuning, and transformers.
The outcomes of our methodology are knowledge graphs in widely adopted ontology languages (e.g., OWL, RDF, SPARQL).
arXiv Detail & Related papers (2024-05-29T16:40:31Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- MechGPT, a language-based strategy for mechanics and materials modeling that connects knowledge across scales, disciplines and modalities
We use a Large Language Model (LLM) to distill question-answer pairs from raw sources followed by fine-tuning.
The resulting MechGPT LLM foundation model is used in a series of computational experiments to explore its capacity for knowledge retrieval, various language tasks, hypothesis generation, and connecting knowledge across disparate areas.
arXiv Detail & Related papers (2023-10-16T14:29:35Z)
- Iterative Zero-Shot LLM Prompting for Knowledge Graph Construction
This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models.
The approach is conveyed in a pipeline that comprises novel iterative zero-shot and external knowledge-agnostic strategies.
We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts.
arXiv Detail & Related papers (2023-07-03T16:01:45Z)
- A Study of Situational Reasoning for Traffic Understanding
We devise three novel text-based tasks for situational reasoning in the traffic domain.
We adopt four knowledge-enhanced methods that have shown generalization capability across language reasoning tasks in prior work.
We provide in-depth analyses of model performance on data partitions and examine model predictions categorically.
arXiv Detail & Related papers (2023-06-05T01:01:12Z)
- Exploring In-Context Learning Capabilities of Foundation Models for Generating Knowledge Graphs from Text
This paper aims to improve the state of the art of automatic construction and completion of knowledge graphs from text.
In this context, one emerging paradigm is in-context learning, where a language model is used as-is with a prompt.
arXiv Detail & Related papers (2023-05-15T17:10:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.