Neuro-Symbolic Bi-Directional Translation -- Deep Learning
Explainability for Climate Tipping Point Research
- URL: http://arxiv.org/abs/2306.11161v1
- Date: Mon, 19 Jun 2023 21:06:18 GMT
- Title: Neuro-Symbolic Bi-Directional Translation -- Deep Learning
Explainability for Climate Tipping Point Research
- Authors: Chace Ashcraft, Jennifer Sleeman, Caroline Tang, Jay Brett, Anand
Gnanadesikan
- Abstract summary: We propose a neuro-symbolic approach called Neuro-Symbolic Question-Answer Program Translator, or NS-QAPT, to address explainability and interpretability for deep learning climate simulation.
The NS-QAPT method includes a bidirectional encoder-decoder architecture that translates between domain-specific questions and executable programs used to direct the climate simulation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been an increase in using deep learning for
climate and weather modeling. Though results have been impressive,
explainability and interpretability of deep learning models are still a
challenge. A third wave of Artificial Intelligence (AI), which includes logic
and reasoning, has been described as a way to address these issues.
Neuro-symbolic AI is a key component of this integration of logic and reasoning
with deep learning. In this work we propose a neuro-symbolic approach called
Neuro-Symbolic Question-Answer Program Translator, or NS-QAPT, to address
explainability and interpretability for deep learning climate simulation,
applied to climate tipping point discovery. The NS-QAPT method includes a
bidirectional encoder-decoder architecture that translates between
domain-specific questions and executable programs used to direct the climate
simulation, acting as a bridge between climate scientists and deep learning
models. We show early compelling results of this translation method and
introduce a domain-specific language and associated executable programs for a
commonly known tipping point, the collapse of the Atlantic Meridional
Overturning Circulation (AMOC).
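The bidirectional translation layer described above can be illustrated with a toy sketch. NS-QAPT learns this mapping with an encoder-decoder network; here a hand-written lookup table stands in for the learned model purely to show the interface. The DSL primitives (`set_param`, `run_sim`, `check`) and the parameter names are hypothetical examples, not the paper's actual domain-specific language.

```python
# Toy stand-in for NS-QAPT's bidirectional translation. The real method
# learns this mapping; this table only illustrates the two directions.
# DSL primitives and parameter names are hypothetical, not the paper's.
QUESTION_TO_PROGRAM = {
    "does amoc collapse if freshwater forcing doubles?":
        "set_param('freshwater_flux', 2.0); run_sim(); check('amoc_collapsed')",
    "does amoc collapse under present-day forcing?":
        "set_param('freshwater_flux', 1.0); run_sim(); check('amoc_collapsed')",
}
PROGRAM_TO_QUESTION = {p: q for q, p in QUESTION_TO_PROGRAM.items()}

def question_to_program(question: str) -> str:
    """Forward direction: a scientist's question -> an executable program."""
    return QUESTION_TO_PROGRAM[question.strip().lower()]

def program_to_question(program: str) -> str:
    """Backward direction: a program -> its natural-language reading."""
    return PROGRAM_TO_QUESTION[program]

prog = question_to_program("Does AMOC collapse if freshwater forcing doubles?")
assert program_to_question(prog) == "does amoc collapse if freshwater forcing doubles?"
```

The round trip is what makes the layer act as a bridge: scientists pose questions in their own terms, and programs executed against the simulator can be rendered back as questions they recognize.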
Related papers
- Reasoning in Neurosymbolic AI [2.25467522343563]
Principled integration of reasoning and learning in neural networks is a main objective of the area of neurosymbolic Artificial Intelligence.
A simple energy-based neurosymbolic AI system is described that can represent and reason formally about any propositional logic formula.
arXiv Detail & Related papers (2025-05-22T11:57:04Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- Characterizing climate pathways using feature importance on echo state networks [0.0]
The echo state network (ESN) is a computationally efficient neural network variant designed for temporal data.
ESNs are non-interpretable black-box models, which poses a hurdle for understanding variable relationships.
We conduct a simulation study to assess and compare the feature importance techniques, and we demonstrate the approach on reanalysis climate data.
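The ESN described in this entry can be sketched in a few lines: a fixed random reservoir driven by the input, plus a linear readout that is the only trained component. This is a generic textbook ESN, assuming numpy; the reservoir size, spectral radius, and ridge penalty are illustrative choices, not the paper's configuration.

```python
import numpy as np

# Minimal echo state network: fixed random reservoir + trained linear
# readout. All hyperparameters below are illustrative, not the paper's.
rng = np.random.default_rng(0)
n_res = 100                                      # reservoir size
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))        # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

def reservoir_states(u_seq):
    """Drive the reservoir with a scalar sequence; collect its states."""
    x = np.zeros(n_res)
    states = np.empty((len(u_seq), n_res))
    for t, u in enumerate(u_seq):
        x = np.tanh(W_in[:, 0] * u + W @ x)
        states[t] = x
    return states

# One-step-ahead prediction of a sine wave. Only the readout is trained,
# here by ridge regression on reservoir states after a washout period.
u = np.sin(0.1 * np.arange(400))
states = reservoir_states(u[:-1])
washout = 50                                     # discard initial transient
X, y = states[washout:], u[1 + washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
mse = np.mean((X @ W_out - y) ** 2)
print(f"one-step MSE: {mse:.2e}")
```

Because the reservoir is fixed and only `W_out` is fit, training reduces to one linear solve, which is the computational-efficiency point the entry makes; the opacity of the random reservoir is also why feature-importance methods are needed to interpret it.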
arXiv Detail & Related papers (2023-10-12T16:55:04Z)
- Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN proposes a graph convolutional network assisted by knowledge from large-scale language models (LLMs).
arXiv Detail & Related papers (2023-05-21T08:29:16Z)
- Using Artificial Intelligence to aid Scientific Discovery of Climate Tipping Points [1.521140899164062]
We propose a hybrid Artificial Intelligence (AI) climate modeling approach that supports climate modelers in scientific discovery.
We describe how this methodology can be applied to the discovery of climate tipping points and, in particular, the collapse of the Atlantic Meridional Overturning Circulation (AMOC).
We show preliminary results of neuro-symbolic method performance when translating between natural language questions and symbolically learned representations.
arXiv Detail & Related papers (2023-02-14T06:00:39Z)
- Climate Intervention Analysis using AI Model Guided by Statistical Physics Principles [6.824166358727082]
We propose a novel solution by utilizing a principle from statistical physics known as the Fluctuation-Dissipation Theorem (FDT).
By leveraging the FDT, we are able to extract information encoded in a large dataset produced by Earth System Models.
Our model, AiBEDO, is capable of capturing the complex, multi-timescale effects of radiation perturbations on global and regional surface climate.
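The FDT idea this entry leverages has a standard quasi-Gaussian form: for near-linear dynamics, the response of the system to a perturbation can be estimated from lagged covariances of unperturbed data as L = C(1) C(0)^{-1}. The sketch below demonstrates that formula on a synthetic 2x2 linear system, where the estimate recovers the true dynamics matrix; it is a textbook illustration under that linear-Gaussian assumption, not AiBEDO's actual implementation.

```python
import numpy as np

# FDT-style response estimate from unperturbed statistics: for linear
# Gaussian dynamics x_{t+1} = A x_t + noise, the lag-1/lag-0 covariance
# ratio C(1) C(0)^{-1} recovers A. The 2x2 system is synthetic and
# stands in for climate fields; this is not the paper's model.
rng = np.random.default_rng(1)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])             # true dynamics, to be recovered
n = 100_000
x = np.zeros((n, 2))
for t in range(n - 1):                 # simulate the unperturbed system
    x[t + 1] = A @ x[t] + rng.normal(size=2)

C0 = x[:-1].T @ x[:-1] / (n - 1)       # lag-0 covariance
C1 = x[1:].T @ x[:-1] / (n - 1)        # lag-1 covariance
L = C1 @ np.linalg.inv(C0)             # FDT-style response operator
print(np.round(L, 2))                  # close to the true matrix A
```

The appeal for climate intervention analysis is that `L` is built entirely from an existing unperturbed dataset, so perturbation responses can be estimated without rerunning an expensive Earth System Model for every candidate intervention.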
arXiv Detail & Related papers (2023-02-07T05:09:10Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience [0.02578242050187029]
Methods of explainable artificial intelligence (XAI) are gaining popularity as a means to explain the decision-making strategies of CNNs.
Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications.
arXiv Detail & Related papers (2022-02-07T18:47:15Z)
- On-board Volcanic Eruption Detection through CNNs and Satellite Multispectral Imagery [59.442493247857755]
The authors propose a first prototype and a feasibility study for an AI model to be 'loaded' on board.
As a case study, the authors decided to investigate the detection of volcanic eruptions as a method to swiftly produce alerts.
Two Convolutional Neural Networks are proposed, along with a demonstration of how to implement them correctly on real hardware.
arXiv Detail & Related papers (2021-06-29T11:52:43Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.