A Field Guide to Scientific XAI: Transparent and Interpretable Deep
Learning for Bioinformatics Research
- URL: http://arxiv.org/abs/2110.08253v1
- Date: Wed, 13 Oct 2021 07:02:58 GMT
- Title: A Field Guide to Scientific XAI: Transparent and Interpretable Deep
Learning for Bioinformatics Research
- Authors: Thomas P Quinn, Sunil Gupta, Svetha Venkatesh, Vuong Le
- Abstract summary: This article is a field guide to transparent model design.
It provides a taxonomy of transparent model design concepts, a practical workflow for putting design concepts into practice, and a general template for reporting design choices.
- Score: 48.587021833307574
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning has become popular because of its potential to achieve high
accuracy in prediction tasks. However, accuracy is not always the only goal of
statistical modelling, especially for models developed as part of scientific
research. Rather, many scientific models are developed to facilitate scientific
discovery, by which we mean to abstract a human-understandable representation
of the natural world. Unfortunately, the opacity of deep neural networks limits
their role in scientific discovery, creating a new demand for models that are
transparently interpretable. This article is a field guide to transparent model
design. It provides a taxonomy of transparent model design concepts, a
practical workflow for putting design concepts into practice, and a general
template for reporting design choices. We hope this field guide will help
researchers more effectively design transparently interpretable models, and
thus enable them to use deep learning for scientific discovery.
Related papers
- Predicting New Research Directions in Materials Science using Large Language Models and Concept Graphs [30.813288388998256]
We show that large language models (LLMs) can extract concepts more efficiently than automated keyword extraction methods. A machine learning model is trained to predict emerging combinations of concepts, based on historical data. We show that the model can inspire materials scientists in their creative thinking process by predicting innovative combinations of topics that have not yet been investigated.
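As an illustration of that prediction step, here is a minimal sketch that trains a classifier to score whether a pair of concepts is likely to be combined in future work. The pair features (concept degrees, years since first appearance) and the synthetic data are assumptions made for illustration, not the authors' setup.

```python
# Toy sketch of "predict emerging concept combinations": features for a concept pair
# are hypothetical, and the label marks whether the pair later co-occurs in a paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# toy historical features: [degree(concept_a), degree(concept_b), years_known_a, years_known_b]
X = rng.poisson(lam=[20, 20, 5, 5], size=(500, 4)).astype(float)
# toy label: well-connected, established concept pairs are more likely to combine
y = (X[:, 0] + X[:, 1] + rng.normal(0, 5, 500) > 45).astype(int)

clf = LogisticRegression().fit(X, y)
print("probability a new pair will be studied:", clf.predict_proba([[30, 25, 6, 4]])[0, 1])
```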
arXiv Detail & Related papers (2025-06-20T08:26:12Z)
- Scientifically-Interpretable Reasoning Network (ScIReN): Uncovering the Black-Box of Nature [19.959751739424785]
We propose a fully-transparent framework that combines interpretable neural and process-based reasoning. An interpretable encoder predicts scientifically-meaningful latent parameters, which are then passed through a differentiable process-based decoder. ScIReN outperforms black-box networks in predictive accuracy while providing substantial scientific interpretability.
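The encoder-decoder pattern described above can be sketched in a few lines. This is a minimal illustration that assumes a single latent rate constant and a toy first-order decay law as the process model; it is not the authors' implementation.

```python
# Hybrid "interpretable encoder -> differentiable process-based decoder" sketch.
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        # Interpretable encoder: maps observations to a scientifically meaningful
        # latent parameter (here a single rate constant), squashed into a plausible range.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
        )

    def forward(self, x):
        k = 0.01 + 0.99 * self.encoder(x)      # latent rate constant in (0.01, 1.0)
        # Process-based decoder: a hand-written first-order decay law stands in for
        # the mechanistic model; gradients still flow through it during training.
        t = 10.0
        y_hat = x[:, :1] * torch.exp(-k * t)   # predicted quantity after time t
        return y_hat, k                        # prediction plus the interpretable parameter

model = HybridModel(n_features=5)
y_hat, k = model(torch.randn(8, 5))
print(y_hat.shape, k.shape)
```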
arXiv Detail & Related papers (2025-06-16T23:21:37Z)
- SciMantify -- A Hybrid Approach for the Evolving Semantification of Scientific Knowledge [0.4499833362998487]
We propose an evolution model of knowledge representation, inspired by the 5-star Linked Open Data (LOD) model. We develop a hybrid approach, called SciMantify, to support its evolving semantification. We implement the approach in the Open Research Knowledge Graph (ORKG), an established platform for improving the findability, accessibility, interoperability, and reusability of scientific knowledge.
arXiv Detail & Related papers (2025-04-14T07:57:55Z)
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Knowledge AI: Fine-tuning NLP Models for Facilitating Scientific Knowledge Extraction and Understanding [0.0]
This project investigates the efficacy of Large Language Models (LLMs) in understanding and extracting scientific knowledge across specific domains.
We employ pre-trained models and fine-tune them on datasets in the scientific domain.
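A generic fine-tuning loop in the spirit of this description might look as follows; the backbone, dataset, and classification task below are placeholders rather than the paper's actual setup.

```python
# Generic sequence-classification fine-tuning sketch with Hugging Face transformers.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # placeholder backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Stand-in corpus; a real run would use a domain-specific scientific dataset instead.
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                              padding="max_length", max_length=128),
                      batched=True)

args = TrainingArguments(output_dir="out", per_device_train_batch_size=8,
                         num_train_epochs=1, logging_steps=50)
Trainer(model=model, args=args, train_dataset=dataset).train()
```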
arXiv Detail & Related papers (2024-08-04T01:32:09Z)
- Learn From Model Beyond Fine-Tuning: A Survey [78.80920533793595]
Learn From Model (LFM) focuses on the research, modification, and design of foundation models (FM) based on the model interface.
The study of LFM techniques can be broadly categorized into five major areas: model tuning, model distillation, model reuse, meta learning and model editing.
This paper gives a comprehensive review of the current methods based on FM from the perspective of LFM.
arXiv Detail & Related papers (2023-10-12T10:20:36Z)
- Large Language Models for Scientific Synthesis, Inference and Explanation [56.41963802804953]
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment this "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z)
- Designing Novel Cognitive Diagnosis Models via Evolutionary Multi-Objective Neural Architecture Search [13.9289351255891]
We propose to automatically design novel cognitive diagnosis models via evolutionary multi-objective neural architecture search (NAS).
Experiments on two real-world datasets demonstrate that the cognitive diagnosis models found by the proposed approach perform significantly better than existing models while remaining as interpretable as human-designed models.
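A toy sketch of the evolutionary multi-objective search loop is shown below; the two objectives (a fake validation error and a parameter-count proxy for interpretability) and the mutation scheme are simplifications assumed for illustration.

```python
# Toy evolutionary multi-objective search: candidates are hidden-layer width lists,
# and the Pareto front trades off (fake) error against a complexity proxy.
import random

def evaluate(arch):
    error = 1.0 / (1 + sum(arch)) + random.random() * 0.01  # stand-in validation error
    complexity = sum(arch)                                   # proxy for lack of interpretability
    return error, complexity

def mutate(arch):
    arch = arch[:]
    i = random.randrange(len(arch))
    arch[i] = max(2, arch[i] + random.choice([-8, 8]))
    return arch

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

population = [[random.choice([8, 16, 32]) for _ in range(2)] for _ in range(12)]
for _ in range(20):
    population += [mutate(random.choice(population)) for _ in range(12)]
    scored = [(evaluate(a), a) for a in population]
    # keep only the Pareto-optimal architectures (non-dominated on both objectives)
    population = [a for s, a in scored if not any(dominates(s2, s) for s2, _ in scored)]

print("Pareto front:", population)
```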
arXiv Detail & Related papers (2023-07-10T09:09:26Z)
- Towards Interpretable Deep Reinforcement Learning Models via Inverse Reinforcement Learning [27.841725567976315]
We propose a novel framework utilizing Adversarial Inverse Reinforcement Learning.
This framework provides global explanations for decisions made by a Reinforcement Learning model.
We capture intuitive tendencies that the model follows by summarizing the model's decision-making process.
arXiv Detail & Related papers (2022-03-30T17:01:59Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
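The pruning step can be illustrated with magnitude pruning of the MLP that parameterises a neural ODE's dynamics; the ODE solver call is omitted, and this toy does not reproduce the paper's 98% figure.

```python
# Magnitude (L1) pruning of the dynamics network of a neural ODE.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

dynamics = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))  # f(t, x) for dx/dt

# prune the smallest-magnitude weights in each linear layer
for module in dynamics:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # make the sparsity permanent

total = sum(p.numel() for p in dynamics.parameters())
nonzero = sum((p != 0).sum().item() for p in dynamics.parameters())
print(f"nonzero parameters: {nonzero}/{total}")
```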
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better [0.0]
As deep learning models have progressively improved, their number of parameters, latency, and the resources required to train them have increased significantly.
We present and motivate the problem of efficiency in deep learning, followed by a thorough survey of the five core areas of model efficiency.
We believe this is the first comprehensive survey in the efficient deep learning space that covers the landscape of model efficiency from modeling techniques to hardware support.
arXiv Detail & Related papers (2021-06-16T17:31:38Z)
- Modeling the EdNet Dataset with Logistic Regression [0.0]
We describe our experience with the competition from the perspective of educational data mining.
We discuss some basic results obtained in the Kaggle system and our thoughts on how those results could have been improved.
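A stand-in for such a logistic-regression baseline is sketched below; the count-based features and synthetic responses are assumptions, not the authors' exact encoding of the EdNet data.

```python
# Toy logistic-regression baseline: predict whether a student answers correctly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# toy features: [prior attempts on this skill, prior correct on this skill, question difficulty]
X = np.column_stack([rng.integers(0, 50, 2000),
                     rng.integers(0, 50, 2000),
                     rng.uniform(0, 1, 2000)])
logit = 0.05 * X[:, 1] - 0.02 * X[:, 0] - 1.5 * X[:, 2]
y = (rng.uniform(size=2000) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))
```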
arXiv Detail & Related papers (2021-05-17T20:30:36Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
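The underlying idea, searching a GAN's latent space for a realistic sample that flips the audited classifier while staying close to the original, can be sketched as follows with untrained placeholder networks.

```python
# Latent-space counterfactual search sketch with placeholder generator and classifier.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # z -> sample
classifier = nn.Sequential(nn.Linear(32, 2))                                # audited model

z0 = torch.randn(1, 16)
z = z0.clone().requires_grad_(True)
target = torch.tensor([1])                # class the counterfactual should take
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    sample = generator(z)
    # two objectives: flip the classifier's decision, stay near the original latent code
    loss = nn.functional.cross_entropy(classifier(sample), target) + 0.1 * (z - z0).pow(2).sum()
    loss.backward()
    opt.step()

print("counterfactual class:", classifier(generator(z)).argmax(dim=1).item())
```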
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.