ChatMOF: An Autonomous AI System for Predicting and Generating
Metal-Organic Frameworks
- URL: http://arxiv.org/abs/2308.01423v2
- Date: Fri, 25 Aug 2023 15:13:46 GMT
- Title: ChatMOF: An Autonomous AI System for Predicting and Generating
Metal-Organic Frameworks
- Authors: Yeonghun Kang, Jihan Kim
- Abstract summary: ChatMOF is an autonomous Artificial Intelligence (AI) system built to predict and generate metal-organic frameworks (MOFs).
By leveraging a large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key details from textual inputs and delivers appropriate responses.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: ChatMOF is an autonomous Artificial Intelligence (AI) system that is built to
predict and generate metal-organic frameworks (MOFs). By leveraging a
large-scale language model (GPT-4 and GPT-3.5-turbo), ChatMOF extracts key
details from textual inputs and delivers appropriate responses, thus
eliminating the necessity for rigid structured queries. The system comprises
three core components (an agent, a toolkit, and an evaluator) and forms a
robust pipeline that manages a variety of tasks, including data retrieval,
property prediction, and structure generation. The study further explores the
merits and constraints of using an LLM-based AI system in materials science
and showcases its transformative potential for future advancements.
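The agent/toolkit/evaluator pipeline described in the abstract can be sketched as a simple control loop: the agent routes a query to a tool, and the evaluator accepts or rejects the result. This is a minimal illustration only; all class, function, and task names below are hypothetical stand-ins, not the actual ChatMOF API.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A "toolkit" maps task names to callables (retrieval, prediction, generation).
Toolkit = Dict[str, Callable[[str], str]]

@dataclass
class Evaluator:
    """Checks whether a tool's output answers the user's query."""
    def is_satisfactory(self, query: str, output: str) -> bool:
        # Placeholder criterion; a real system would ask the LLM to judge.
        return len(output) > 0

def agent(query: str, toolkit: Toolkit, evaluator: Evaluator) -> str:
    """Pick a tool from keywords in the query, run it, and let the
    evaluator accept or reject the result."""
    if "generate" in query:
        task = "structure_generation"
    elif "predict" in query:
        task = "property_prediction"
    else:
        task = "data_retrieval"
    output = toolkit[task](query)
    return output if evaluator.is_satisfactory(query, output) else "retry"

# Stub tools standing in for real database-search and ML components:
toolkit: Toolkit = {
    "data_retrieval": lambda q: "looked up MOF database",
    "property_prediction": lambda q: "predicted H2 uptake",
    "structure_generation": lambda q: "generated candidate MOF",
}
print(agent("predict the surface area of HKUST-1", toolkit, Evaluator()))
```

In the paper's actual system, the routing and judging steps are handled by GPT-4 or GPT-3.5-turbo rather than keyword matching; the sketch only shows the control flow among the three components.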
Related papers
- Generative AI Systems: A Systems-based Perspective on Generative AI [12.400966570867322]
Large Language Models (LLMs) have revolutionized AI systems by enabling communication with machines using natural language.
Recent developments in Generative AI (GenAI) have shown great promise in using LLMs as multimodal systems.
This paper aims to explore and state new research directions in Generative AI Systems.
arXiv Detail & Related papers (2024-06-25T12:51:47Z) - Towards Next-Generation Urban Decision Support Systems through AI-Powered Construction of Scientific Ontology using Large Language Models -- A Case in Optimizing Intermodal Freight Transportation [1.6230958216521798]
This study investigates the potential of leveraging pre-trained Large Language Models (LLMs) to construct scientific ontologies.
By adopting the ChatGPT API as the reasoning core, we outline an integrated workflow that encompasses natural language processing, Methontology-based prompt tuning, and transformers.
The outcomes of our methodology are knowledge graphs in widely adopted ontology languages (e.g., OWL, RDF, SPARQL).
arXiv Detail & Related papers (2024-05-29T16:40:31Z) - Octopus v3: Technical Report for On-device Sub-billion Multimodal AI Agent [10.998608318944985]
A multimodal AI agent is characterized by its ability to process and learn from various types of data.
We introduce a multimodal model that incorporates the concept of functional token specifically designed for AI agent applications.
We demonstrate that this model is capable of operating efficiently on a wide range of edge devices, including devices as constrained as a Raspberry Pi.
arXiv Detail & Related papers (2024-04-17T15:07:06Z) - LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models [50.259006481656094]
We present a novel interactive application aimed at understanding the internal mechanisms of large vision-language models.
Our interface is designed to enhance the interpretability of the image patches, which are instrumental in generating an answer.
We present a case study of how our application can aid in understanding failure mechanisms in a popular large multi-modal model: LLaVA.
arXiv Detail & Related papers (2024-04-03T23:57:34Z) - An Interactive Agent Foundation Model [49.77861810045509]
We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents.
Our training paradigm unifies diverse pre-training strategies, including visual masked auto-encoders, language modeling, and next-action prediction.
We demonstrate the performance of our framework across three separate domains -- Robotics, Gaming AI, and Healthcare.
arXiv Detail & Related papers (2024-02-08T18:58:02Z) - Forging Vision Foundation Models for Autonomous Driving: Challenges,
Methodologies, and Opportunities [59.02391344178202]
Vision foundation models (VFMs) serve as potent building blocks for a wide range of AI applications.
The scarcity of comprehensive training data, the need for multi-sensor integration, and the diverse task-specific architectures pose significant obstacles to the development of VFMs.
This paper delves into the critical challenge of forging VFMs tailored specifically for autonomous driving, while also outlining future directions.
arXiv Detail & Related papers (2024-01-16T01:57:24Z) - Large Language Models for Information Retrieval: A Survey [58.30439850203101]
Information retrieval has evolved from term-based methods to its integration with advanced neural models.
Recent research has sought to leverage large language models (LLMs) to improve IR systems.
We delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers.
arXiv Detail & Related papers (2023-08-14T12:47:22Z) - AutoML-GPT: Automatic Machine Learning with GPT [74.30699827690596]
We propose developing task-oriented prompts and automatically utilizing large language models (LLMs) to automate the training pipeline.
We present AutoML-GPT, which employs GPT as the bridge to diverse AI models and dynamically trains models with optimized hyperparameters.
This approach achieves remarkable results in computer vision, natural language processing, and other challenging areas.
arXiv Detail & Related papers (2023-05-04T02:09:43Z) - From Natural Language to Simulations: Applying GPT-3 Codex to Automate
Simulation Modeling of Logistics Systems [0.0]
This work is the first attempt to apply Natural Language Processing to automate the development of simulation models of systems vitally important for logistics.
We demonstrated that the framework built on top of the fine-tuned GPT-3 Codex, a Transformer-based language model, could produce functionally valid simulations of queuing and inventory control systems given the verbal description.
arXiv Detail & Related papers (2022-02-24T14:01:50Z) - Speech Emotion Recognition using Self-Supervised Features [14.954994969217998]
We introduce a modular End-to-End (E2E) SER system based on an Upstream + Downstream architecture paradigm.
Several SER experiments for predicting categorical emotion classes from the IEMOCAP dataset are performed.
The proposed monomodal, speech-only system not only achieves SOTA results but also highlights the potential of powerful, well-finetuned self-supervised acoustic features.
arXiv Detail & Related papers (2022-02-07T00:50:07Z) - Text Modular Networks: Learning to Decompose Tasks in the Language of
Existing Models [61.480085460269514]
We propose a framework for building interpretable systems that learn to solve complex tasks by decomposing them into simpler ones solvable by existing models.
We use this framework to build ModularQA, a system that can answer multi-hop reasoning questions by decomposing them into sub-questions answerable by a neural factoid single-span QA model and a symbolic calculator.
arXiv Detail & Related papers (2020-09-01T23:45:42Z)
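The ModularQA decomposition above can be illustrated with a toy pipeline: a multi-hop question is split into sub-questions, each routed to a simple existing module. The decomposer and sub-models below are hand-written stubs for one example question, not the paper's learned components.

```python
def factoid_qa(question: str) -> str:
    # Stub single-span QA model backed by a tiny hypothetical fact table.
    facts = {
        "When was the Eiffel Tower built?": "1889",
        "When was the Brooklyn Bridge built?": "1883",
    }
    return facts[question]

def calculator(expression: str) -> str:
    # Symbolic calculator module for the numeric step.
    a, op, b = expression.split()
    return str(int(a) - int(b)) if op == "-" else str(int(a) + int(b))

def modular_answer(question: str) -> str:
    """Decompose a multi-hop question into steps solvable by the modules.
    A learned decomposer would emit these sub-questions; here they are
    hard-coded for the single example question."""
    y1 = factoid_qa("When was the Eiffel Tower built?")
    y2 = factoid_qa("When was the Brooklyn Bridge built?")
    return calculator(f"{y1} - {y2}")

print(modular_answer(
    "How many years after the Brooklyn Bridge was the Eiffel Tower built?"
))  # 1889 - 1883 = 6
```

The interpretability claim comes from this structure: each intermediate sub-question and sub-answer is human-readable, unlike a single opaque end-to-end model.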
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.