White-box Induction From SVM Models: Explainable AI with Logic
Programming
- URL: http://arxiv.org/abs/2008.03301v1
- Date: Sun, 9 Aug 2020 23:07:14 GMT
- Title: White-box Induction From SVM Models: Explainable AI with Logic
Programming
- Authors: Farhad Shakerin, Gopal Gupta
- Abstract summary: We focus on inducing logic programs that explain models learned by the support vector machine (SVM) algorithm.
We use SHAP, an example-specific interpreter in explainable AI, to determine a relevant set of features.
This approach yields an algorithm that captures the SVM model's underlying logic and outperforms other ILP algorithms.
- Score: 6.396288020763143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We focus on the problem of inducing logic programs that explain models
learned by the support vector machine (SVM) algorithm. The top-down sequential
covering inductive logic programming (ILP) algorithms (e.g., FOIL) apply
hill-climbing search using heuristics from information theory. A major issue
with this class of algorithms is getting stuck in a local optimum. In our new
approach, however, the data-dependent hill-climbing search is replaced with a
model-dependent search where a globally optimal SVM model is trained first,
then the algorithm looks into support vectors as the most influential data
points in the model, and induces a clause that would cover the support vector
and points that are most similar to that support vector. Instead of defining a
fixed hypothesis search space, our algorithm makes use of SHAP, an
example-specific interpreter in explainable AI, to determine a relevant set of
features. This approach yields an algorithm that captures the SVM model's
underlying logic and outperforms other ILP algorithms in terms of the number
of induced clauses and classification evaluation metrics. This paper is under
consideration for publication in the journal Theory and Practice of Logic
Programming.
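To make the described pipeline concrete, here is a minimal Python sketch of the idea, assuming scikit-learn and the shap package; it trains an SVM, takes a support vector, and ranks features by SHAP values. The clause-construction step of the authors' algorithm is omitted, and the dataset is illustrative only.

    import numpy as np
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.svm import SVC

    # Train the (globally optimized) SVM model first, as the paper's pipeline does.
    X, y = load_breast_cancer(return_X_y=True)
    model = SVC(kernel="rbf").fit(X, y)

    # Support vectors are the most influential data points in the model.
    sv = model.support_vectors_[:1]

    # Model-agnostic SHAP explainer; a small background sample keeps it tractable.
    explainer = shap.KernelExplainer(model.decision_function, shap.sample(X, 50))
    shap_values = explainer.shap_values(sv)

    # Features with the largest |SHAP| values are the example-specific
    # candidates for the body of an induced clause.
    top = np.argsort(-np.abs(shap_values[0]))[:3]
    print("Most relevant feature indices for this support vector:", top)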
Related papers
- Large Language Model-Enhanced Algorithm Selection: Towards Comprehensive Algorithm Representation [27.378185644892984]
This paper introduces Large Language Models (LLMs) into algorithm selection for the first time.
LLMs not only capture the structural and semantic aspects of an algorithm, but also demonstrate contextual awareness and library function understanding.
The selected algorithm is determined by the matching degree between a given problem and different algorithms.
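As a hedged illustration of how such a matching degree might be computed: cosine similarity between a problem embedding and per-algorithm embeddings, selecting the best match. All names and vectors below are hypothetical; the paper's actual representations come from an LLM.

    import numpy as np

    def select_algorithm(problem_emb, algo_embs, names):
        # Cosine similarity as the "matching degree" between problem and algorithms.
        sims = algo_embs @ problem_emb / (
            np.linalg.norm(algo_embs, axis=1) * np.linalg.norm(problem_emb))
        return names[int(np.argmax(sims))]

    # Hypothetical precomputed (LLM-derived) embeddings.
    problem = np.array([0.2, 0.9, 0.1])
    algos = np.array([[0.1, 0.8, 0.3], [0.9, 0.1, 0.2]])
    print(select_algorithm(problem, algos, ["tabu_search", "genetic_algorithm"]))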
arXiv Detail & Related papers (2023-11-22T06:23:18Z)
- Online Regenerative Learning [0.0]
We study a type of Online Linear Programming (OLP) problem that maximizes the objective function as inputs arrive sequentially.
The performance of various algorithms for this type of OLP is well studied when the inputs follow an i.i.d. distribution.
arXiv Detail & Related papers (2022-09-18T21:04:56Z)
- The CLRS Algorithmic Reasoning Benchmark [28.789225199559834]
Learning representations of algorithms is an emerging area of machine learning, seeking to bridge concepts from neural networks with classical algorithms.
We propose the CLRS Algorithmic Reasoning Benchmark, covering classical algorithms from the Introduction to Algorithms textbook.
Our benchmark spans a variety of algorithmic reasoning procedures, including sorting, searching, dynamic programming, graph algorithms, string algorithms and geometric algorithms.
arXiv Detail & Related papers (2022-05-31T09:56:44Z)
- A Clustering and Demotion Based Algorithm for Inductive Learning of Default Theories [4.640835690336653]
We present a clustering- and demotion-based algorithm called Kmeans-FOLD to induce nonmonotonic logic programs from positive and negative examples.
Our algorithm generates a more concise logic program compared to the FOLD algorithm.
Experiments on UCI benchmark datasets show that combining K-Means clustering with our demotion strategy produces significant improvement for datasets with more than one cluster of positive examples.
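A rough sketch of the clustering step only, assuming scikit-learn; the FOLD induction and the demotion of misclassified examples are indicated by placeholders, and the data are toy values.

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy positive examples forming two clusters.
    X_pos = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])

    # Step 1: partition the positive examples with K-Means.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X_pos)

    # Step 2: run FOLD-style induction per cluster, so each cluster of
    # positives contributes its own clause (a more concise program overall).
    for k in np.unique(labels):
        cluster = X_pos[labels == k]
        print(f"cluster {k}: induce a clause covering {len(cluster)} positives")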
arXiv Detail & Related papers (2021-09-26T14:50:18Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
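As a toy illustration (not the paper's algorithm): a lower-confidence-bound bandit over candidate algorithms where runtimes beyond a cutoff are censored, so only min(runtime, cutoff) is observed as the loss. All constants are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    true_means, cutoff, T = [2.0, 5.0, 3.0], 4.0, 500
    n_arms = len(true_means)
    counts, loss_sums = np.zeros(n_arms), np.zeros(n_arms)

    for t in range(1, T + 1):
        if t <= n_arms:
            arm = t - 1                    # pull every arm once first
        else:
            mean = loss_sums / counts
            lcb = mean - np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmin(lcb))      # smaller runtime loss is better
        # Censored observation: runtimes past the cutoff are truncated.
        runtime = min(rng.exponential(true_means[arm]), cutoff)
        counts[arm] += 1
        loss_sums[arm] += runtime

    print("pulls per candidate algorithm:", counts)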
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
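For context, the generic bilevel problem (standard formulation, not specific to this paper) is

    \min_{x} \; f\bigl(x, y^{*}(x)\bigr) \quad \text{s.t.} \quad y^{*}(x) = \arg\min_{y} \; g(x, y),

where f and g are the upper- and lower-level objectives, respectively.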
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
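The integer least-squares problem referred to here is the standard formulation

    \hat{x} = \arg\min_{x \in \mathbb{Z}^{n}} \; \| y - A x \|_{2}^{2},

where y is the observed vector and A the linear model matrix; the integrality constraint on x is what makes the problem NP-hard.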
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Performance Analysis of Meta-heuristic Algorithms for a Quadratic Assignment Problem [6.555180412600522]
A quadratic assignment problem (QAP) is an optimization problem that belongs to the class of NP-hard problems.
Heuristic and meta-heuristic algorithms are prevalent solution methods for this problem.
This paper presents a comparative study applying different meta-heuristic algorithms to solve the QAP.
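In its standard Koopmans-Beckmann form, the QAP assigns n facilities to n locations via a permutation $\pi$, minimizing

    \min_{\pi \in S_{n}} \sum_{i=1}^{n} \sum_{j=1}^{n} f_{ij} \, d_{\pi(i)\pi(j)},

where $f_{ij}$ is the flow between facilities i and j and $d_{kl}$ the distance between locations k and l.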
arXiv Detail & Related papers (2020-07-29T15:02:07Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
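Run2Survive fits richer distributional runtime models; the toy sketch below only shows how censored runtimes feed a survival estimator, assuming the lifelines package and synthetic data.

    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(0)
    cutoff = 4.0
    runtimes = rng.exponential(3.0, size=200)

    # Runs hitting the cutoff are censored: we record the cutoff and flag
    # the event (termination) as unobserved.
    observed = np.minimum(runtimes, cutoff)
    terminated = runtimes < cutoff

    kmf = KaplanMeierFitter()
    kmf.fit(observed, event_observed=terminated)
    print("estimated median runtime:", kmf.median_survival_time_)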
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- Model Selection in Contextual Stochastic Bandit Problems [51.94632035240787]
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-03-03T18:46:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.