FOLD-SE: Scalable Explainable AI
- URL: http://arxiv.org/abs/2208.07912v1
- Date: Tue, 16 Aug 2022 19:15:11 GMT
- Title: FOLD-SE: Scalable Explainable AI
- Authors: Huaduo Wang and Gopal Gupta
- Abstract summary: We present an improvement over the FOLD-R++ algorithm, termed FOLD-SE, that provides scalable explainability (SE).
The number of learned rules and learned literals stays small and, hence, understandable by human beings, while maintaining good performance in classification.
- Score: 3.1981440103815717
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: FOLD-R++ is a highly efficient and explainable rule-based machine learning
algorithm for binary classification tasks. It generates a stratified normal
logic program as an (explainable) trained model. We present an improvement over
the FOLD-R++ algorithm, termed FOLD-SE, that provides scalable explainability
(SE) while inheriting all the merits of FOLD-R++. Scalable explainability means
that regardless of the size of the dataset, the number of learned rules and
learned literals stays small and, hence, understandable by human beings, while
maintaining good performance in classification. FOLD-SE is competitive in
performance with state-of-the-art algorithms such as XGBoost and Multi-Layer
Perceptrons (MLP). However, unlike XGBoost and MLP, the FOLD-SE algorithm
generates a model with scalable explainability. The FOLD-SE algorithm
outperforms FOLD-R++ and RIPPER algorithms in efficiency, performance, and
explainability, especially for large datasets. The FOLD-RM algorithm is an
extension of FOLD-R++ for multi-class classification tasks. An improved FOLD-RM
algorithm built upon FOLD-SE is also presented.
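The "stratified normal logic program" that FOLD-style algorithms produce can be pictured as a handful of default rules with exceptions. The Python sketch below is purely illustrative: the rules, predicate names (`survived`, `ab1`), and the Titanic-style features are hypothetical examples, not taken from the paper or its actual learned models.

```python
# Illustrative FOLD-style model: a default rule with an exception predicate.
# In logic-program form (hypothetical rules):
#   survived(X) :- sex(X, female), not ab1(X).
#   ab1(X)      :- class(X, 3), age(X, A), A > 30.

def ab1(passenger):
    # Exception ("abnormal") predicate: third-class passengers older than 30.
    return passenger["class"] == 3 and passenger["age"] > 30

def survived(passenger):
    # Default rule: predict survival for females unless an exception applies.
    return passenger["sex"] == "female" and not ab1(passenger)

p = {"sex": "female", "class": 3, "age": 25}
print(survived(p))  # True: the default rule fires and no exception holds
```

Because the whole model is just a few such rules, a prediction can be explained by naming the rule that fired and the exceptions that did (or did not) apply; scalable explainability means the rule set stays this small even on large datasets.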
Related papers
- CON-FOLD -- Explainable Machine Learning with Confidence [0.18416014644193066]
FOLD-RM is an explainable machine learning classification algorithm.
We introduce CON-FOLD which extends FOLD-RM in several ways.
We present a confidence-based pruning algorithm that uses the unique structure of FOLD-RM rules to efficiently prune rules and prevent overfitting.
arXiv Detail & Related papers (2024-08-14T23:45:21Z)
- The Power of Resets in Online Reinforcement Learning [73.64852266145387]
We explore the power of simulators through online reinforcement learning with local simulator access (or, local planning)
We show that MDPs with low coverability can be learned in a sample-efficient fashion with only $Q^\star$-realizability.
We show that the notorious Exogenous Block MDP problem is tractable under local simulator access.
arXiv Detail & Related papers (2024-04-23T18:09:53Z)
- Efficient GNN Explanation via Learning Removal-based Attribution [56.18049062940675]
We propose a framework of GNN explanation named LeArn Removal-based Attribution (LARA) to address this problem.
The explainer in LARA learns to generate removal-based attribution which enables providing explanations with high fidelity.
In particular, LARA is 3.5 times faster and achieves higher fidelity than the state-of-the-art method on the large dataset ogbn-arxiv.
arXiv Detail & Related papers (2023-06-09T08:54:20Z)
- Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning [71.05093203007357]
We propose Logical Entity RePresentation (LERP) to encode contextual information of entities in the knowledge graph.
A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph.
Our model outperforms other rule learning methods in knowledge graph completion and is comparable or even superior to state-of-the-art black-box methods.
arXiv Detail & Related papers (2023-05-22T05:59:22Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- FOLD-TR: A Scalable and Efficient Inductive Learning Algorithm for Learning To Rank [3.1981440103815717]
FOLD-R++ is a new inductive learning algorithm for binary classification tasks.
We present a customized FOLD-R++ algorithm with the ranking framework, called FOLD-TR.
arXiv Detail & Related papers (2022-06-15T04:46:49Z)
- FOLD-RM: A Scalable and Efficient Inductive Learning Algorithm for Multi-Category Classification of Mixed Data [3.1981440103815717]
FOLD-RM is an automated inductive learning algorithm for learning default rules for mixed (numerical and categorical) data.
It generates an (explainable) answer set programming (ASP) rule set for multi-category classification tasks.
arXiv Detail & Related papers (2022-02-14T18:07:54Z)
- FOLD-R++: A Toolset for Automated Inductive Learning of Default Theories from Mixed Data [2.741266294612776]
FOLD-R is an automated inductive learning algorithm for learning default rules with exceptions for mixed (numerical and categorical) data.
We present an improved FOLD-R algorithm, called FOLD-R++, that significantly increases the efficiency and scalability of FOLD-R.
arXiv Detail & Related papers (2021-10-15T03:55:13Z)
- A Clustering and Demotion Based Algorithm for Inductive Learning of Default Theories [4.640835690336653]
We present a clustering- and demotion-based algorithm called Kmeans-FOLD to induce nonmonotonic logic programs from positive and negative examples.
Our algorithm generates a more concise logic program compared to the FOLD algorithm.
Experiments on the UCI dataset show that a combination of K-Means clustering and our demotion strategy produces significant improvement for datasets with more than one cluster of positive examples.
arXiv Detail & Related papers (2021-09-26T14:50:18Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.