Extracting human interpretable structure-property relationships in
chemistry using XAI and large language models
- URL: http://arxiv.org/abs/2311.04047v1
- Date: Tue, 7 Nov 2023 15:02:32 GMT
- Authors: Geemi P. Wellawatte and Philippe Schwaller
- Abstract summary: We propose the XpertAI framework that integrates XAI methods with large language models (LLMs) accessing scientific literature to generate natural language explanations of raw chemical data automatically.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Explainable Artificial Intelligence (XAI) is an emerging field in AI that
aims to address the opaque nature of machine learning models. Furthermore, it
has been shown that XAI can be used to extract input-output relationships,
making it a useful tool in chemistry for understanding structure-property
relationships. However, one of the main limitations of XAI methods is that they
are developed for technically oriented users. We propose the XpertAI framework
that integrates XAI methods with large language models (LLMs) accessing
scientific literature to generate accessible natural language explanations of
raw chemical data automatically. We conducted five case studies to evaluate the
performance of XpertAI. Our results show that XpertAI combines the strengths of
LLMs and XAI tools in generating specific, scientific, and interpretable
explanations.
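As a concrete illustration of the kind of pipeline the abstract describes, here is a minimal sketch in the spirit of XpertAI, not the authors' released code: fit a surrogate model on molecular descriptors, extract SHAP attributions, and fold the ranked attributions into a prompt for a literature-aware LLM. The descriptor names, the toy solubility target, and the final LLM call are all illustrative assumptions.

```python
# Minimal sketch of an XpertAI-style pipeline (illustrative, not the
# authors' code): surrogate model -> XAI attributions -> LLM prompt.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data: rows are molecules, columns are assumed descriptor names.
descriptors = ["logP", "molecular_weight", "num_h_donors", "tpsa"]
rng = np.random.default_rng(0)
X = rng.random((200, len(descriptors)))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.1, 200)  # toy "solubility"

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Global attributions: mean |SHAP value| per descriptor.
shap_values = shap.TreeExplainer(model).shap_values(X)
ranked = sorted(zip(descriptors, np.abs(shap_values).mean(axis=0)),
                key=lambda t: -t[1])

prompt = (
    "These molecular descriptors most influence predicted solubility "
    "(mean |SHAP| in parentheses): "
    + "; ".join(f"{name} ({score:.3f})" for name, score in ranked)
    + ". Drawing on the chemistry literature, explain the likely "
    "structure-property relationship in plain language."
)
# A literature-aware LLM call would consume `prompt` here; stubbed out.
print(prompt)
```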
Related papers
- LLMs for XAI: Future Directions for Explaining Explanations (2024-05-09)
We focus on refining explanations computed using existing XAI algorithms.
Initial experiments and a user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI.
- XAIport: A Service Framework for the Early Adoption of XAI in AI Model Development (2024-03-25)
We propose the early adoption of Explainable AI (XAI) with a focus on three properties, including the quality of explanations and the consistency of explanation summaries across multiple XAI methods.
We present XAIport, a framework that encapsulates XAI into Open APIs to deliver early explanations as observations for learning-model quality assurance.
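The summary does not specify the API surface, but a client of such a service might look like the following hedged sketch; the endpoint URL, route, and payload fields are all invented for illustration and are not the paper's actual API.

```python
# Hypothetical client for an XAI-as-a-service endpoint in the spirit of
# XAIport; the URL and JSON fields below are assumptions, not the paper's API.
import requests

payload = {
    "model_id": "demo-classifier",            # assumed model identifier
    "input_ref": "s3://bucket/sample_001",    # assumed input reference
    "xai_methods": ["gradcam", "lime"],       # request several methods to
                                              # compare explanation consistency
}
resp = requests.post("https://xai.example.org/v1/explanations",
                     json=payload, timeout=30)
resp.raise_for_status()
for item in resp.json().get("explanations", []):
    print(item.get("method"), "->", item.get("summary"))
```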
- Towards a general framework for improving the performance of classifiers using XAI methods (2024-03-15)
This paper proposes a framework for automatically improving the performance of pre-trained Deep Learning (DL) classifiers using XAI methods.
We describe two variants, which we will call auto-encoder-based and encoder-decoder-based, and discuss their key aspects.
- Usable XAI: 10 Strategies Towards Exploiting Explainability in the LLM Era (2024-03-13)
XAI is being extended towards Large Language Models (LLMs).
This paper analyzes how XAI can benefit LLMs and AI systems.
We introduce 10 strategies, outlining the key techniques for each and discussing their associated challenges.
- XAI for All: Can Large Language Models Simplify Explainable AI? (2024-01-23)
"x-[plAIn]" is a new approach to make XAI more accessible to a wider audience through a custom Large Language Model.
Our goal was to design a model that can generate clear, concise summaries of various XAI methods.
Results from our use-case studies show that our model is effective in providing easy-to-understand, audience-specific explanations.
- Agent-based Learning of Materials Datasets from Scientific Literature (2023-12-18)
We develop a chemist AI agent, powered by large language models (LLMs), to create structured datasets from natural language text.
Our chemist AI agent, Eunomia, can plan and execute actions by leveraging the existing knowledge from decades of scientific research articles.
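As a rough sketch of what such agent-driven extraction involves, assuming a JSON record schema and an `ask_llm` helper that are purely illustrative (this is not the Eunomia implementation):

```python
# Minimal sketch of agent-style dataset extraction from literature text,
# in the spirit of Eunomia (not the authors' implementation). The schema
# and the `ask_llm` stub are illustrative assumptions.
import json

passage = (
    "The MOF ZIF-8 exhibited a BET surface area of 1630 m2/g and a "
    "CO2 uptake of 0.8 mmol/g at 298 K."
)

schema = {"material": "str", "property": "str", "value": "float", "unit": "str"}
prompt = (
    "Extract every material property mentioned in the passage as a JSON list "
    f"of objects with keys {list(schema)}. Passage: {passage}"
)

def ask_llm(prompt: str) -> str:
    # Placeholder for a real LLM call; returns a canned response here so the
    # sketch runs end to end.
    return json.dumps([
        {"material": "ZIF-8", "property": "BET surface area",
         "value": 1630.0, "unit": "m2/g"},
        {"material": "ZIF-8", "property": "CO2 uptake",
         "value": 0.8, "unit": "mmol/g"},
    ])

records = json.loads(ask_llm(prompt))
print(records[0]["material"], records[0]["value"], records[0]["unit"])
```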
- Large Language Models for Scientific Synthesis, Inference and Explanation (2023-10-12)
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment the machine learning system's "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI (2022-06-22)
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction (2021-09-27)
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
- A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI (2021-03-08)
XAI aims to produce a demonstrative factor of trust, which for human subjects is achieved through communicative means.
The ideology behind trusting a machine to tend towards the livelihood of a human poses an ethical conundrum.
XAI methods produce visualizations of feature contributions toward a given model's output at both the local and global level.
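To make the local/global distinction concrete, here is a small sketch using SHAP on a toy classifier; the EHR-like feature names and the outcome are invented for illustration.

```python
# Local vs. global feature contributions with SHAP on a toy classifier
# (EHR-like feature names are invented for illustration).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

features = ["age", "heart_rate", "creatinine", "wbc_count"]
rng = np.random.default_rng(0)
X = rng.random((300, len(features)))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # toy outcome

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
sv = shap.TreeExplainer(clf).shap_values(X)
# Older shap returns a list of per-class arrays; newer returns one 3-D array.
sv = sv[1] if isinstance(sv, list) else sv[..., 1]  # class-1 contributions

# Local: why did the model score patient 0 the way it did?
print("patient 0:", dict(zip(features, np.round(sv[0], 3))))
# Global: which features matter most across the whole cohort?
print("global:", dict(zip(features, np.round(np.abs(sv).mean(axis=0), 3))))
```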
- A general framework for scientifically inspired explanations in AI (2020-03-02)
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
This list is automatically generated from the titles and abstracts of the papers on this site.