A Clustering and Demotion Based Algorithm for Inductive Learning of
Default Theories
- URL: http://arxiv.org/abs/2109.12624v1
- Date: Sun, 26 Sep 2021 14:50:18 GMT
- Title: A Clustering and Demotion Based Algorithm for Inductive Learning of
Default Theories
- Authors: Huaduo Wang, Farhad Shakerin, Gopal Gupta
- Abstract summary: We present a clustering- and demotion-based algorithm called Kmeans-FOLD to induce nonmonotonic logic programs from positive and negative examples.
Our algorithm generates more concise logic programs than the FOLD algorithm.
Experiments on UCI benchmark datasets show that combining K-Means clustering with our demotion strategy produces a significant improvement on datasets with more than one cluster of positive examples.
- Score: 4.640835690336653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a clustering- and demotion-based algorithm called Kmeans-FOLD to
induce nonmonotonic logic programs from positive and negative examples. Our
algorithm improves upon, and is inspired by, the FOLD algorithm, which is itself
an improvement over the FOIL algorithm. Our algorithm generates more concise
logic programs than the FOLD algorithm. It uses K-means clustering to group the
input positive examples before applying the FOLD algorithm. Positive examples
that are covered by the partially learned program in intermediate steps are not
discarded as in the FOLD algorithm; rather, they are demoted, i.e., their
weights are reduced in subsequent iterations of the algorithm. Our experiments
on UCI benchmark datasets show that the combination of K-means clustering and
our demotion strategy produces a significant improvement on datasets with more
than one cluster of positive examples. The resulting induced program is also
more concise, and therefore easier to understand, than the programs induced by
the FOLD and ALEPH systems, two state-of-the-art inductive logic programming
(ILP) systems.
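The clustering-and-demotion loop described above (cluster the positive examples with K-means, then learn rules cluster by cluster while down-weighting rather than discarding covered positives) can be summarized in a short sketch. The following is a minimal illustration only, not the authors' implementation: `learn_fold_rule` and `Rule` are hypothetical stand-ins for the actual FOLD rule learner, and the cluster count `k`, the `demotion` factor, and `max_iters` are illustrative assumptions.

```python
# Minimal sketch of the clustering-and-demotion loop described in the abstract.
# `learn_fold_rule` and `Rule` are hypothetical stand-ins for the real FOLD
# rule learner; k, demotion, and max_iters are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Optional

import numpy as np
from sklearn.cluster import KMeans


@dataclass
class Rule:
    """Stand-in for a learned default rule; `covers` returns a boolean mask."""
    predicate: Callable[[np.ndarray], np.ndarray]

    def covers(self, examples: np.ndarray) -> np.ndarray:
        return self.predicate(examples)


def learn_fold_rule(pos: np.ndarray, neg: np.ndarray,
                    weights: np.ndarray) -> Optional[Rule]:
    """Hypothetical placeholder for one FOLD refinement step.  It merely
    thresholds the first feature at the weighted mean of the positives; the
    real FOLD algorithm searches literals and exceptions (abnormality
    predicates) guided by information gain over the positives and negatives."""
    if len(pos) == 0 or weights.sum() == 0:
        return None
    t = np.average(pos[:, 0], weights=weights)
    return Rule(lambda x, t=t: x[:, 0] >= t)


def kmeans_fold_sketch(pos: np.ndarray, neg: np.ndarray, k: int = 2,
                       demotion: float = 0.5, max_iters: int = 5) -> List[Rule]:
    """Cluster positives, then learn rules cluster by cluster, demoting
    (down-weighting) covered positives instead of discarding them."""
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(pos)
    weights = np.ones(len(pos))              # every positive starts at weight 1
    program: List[Rule] = []

    for c in range(k):
        members = np.where(clusters == c)[0]
        for _ in range(max_iters):
            if weights[members].sum() < 1e-3:    # cluster essentially covered
                break
            rule = learn_fold_rule(pos[members], neg, weights[members])
            if rule is None:
                break
            program.append(rule)
            covered = rule.covers(pos[members])
            # Demotion: reduce the weights of covered positives so later rules
            # can still refine their coverage, instead of removing them.
            weights[members[covered]] *= demotion
    return program
```

With `demotion = 0` the covered examples are effectively discarded, which roughly mirrors the FOLD behavior the abstract contrasts against; values between 0 and 1 keep them in play with reduced influence.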
Related papers
- Optimizing NOTEARS Objectives via Topological Swaps [41.18829644248979]
In this paper, we develop an effective method for a set of candidate algorithms.
At the inner level, given a topological order, we utilize off-the-shelf solvers.
The proposed method significantly improves the scores of other algorithms.
arXiv Detail & Related papers (2023-05-26T21:49:37Z) - Differentially-Private Hierarchical Clustering with Provable
Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$-additive error for an input dataset.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z) - HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection [75.84584400866254]
We propose a new algorithm selector leveraging special forests, combining the strengths of ranking and regression approaches while alleviating their weaknesses.
HARRIS' decisions are based on a forest model whose trees are created and optimized on a hybrid ranking and regression loss function.
arXiv Detail & Related papers (2022-10-31T14:06:11Z) - Practical, Provably-Correct Interactive Learning in the Realizable
Setting: The Power of True Believers [12.09273192079783]
We consider interactive learning in the realizable setting and develop a general framework to handle problems ranging from best arm identification to active classification.
We design novel computationally efficient algorithms for the realizable setting that match the minimax lower bound up to logarithmic factors.
arXiv Detail & Related papers (2021-11-09T02:33:36Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve the complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which generalize well to other classical control tasks, gridworld-type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
arXiv Detail & Related papers (2020-08-29T14:58:26Z) - White-box Induction From SVM Models: Explainable AI with Logic
Programming [6.396288020763143]
We focus on inducing logic programs that explain models learned by the support vector machine (SVM) algorithm.
We use SHAP, an example-specific interpreter from explainable AI, to determine a relevant set of features (a brief usage sketch appears after this list).
This approach yields an algorithm that captures the SVM model's underlying logic and outperforms the FOIL algorithm.
arXiv Detail & Related papers (2020-08-09T23:07:14Z) - DeepMP for Non-Negative Sparse Decomposition [14.790515227906257]
Non-negative signals form an important class of sparse signals.
Greedy and convex relaxation algorithms are among the most popular methods.
One such modification has been proposed for Matching Pursuit (MP) based algorithms.
arXiv Detail & Related papers (2020-07-28T14:52:06Z)
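As referenced in the "White-box Induction From SVM Models" entry above, SHAP values can be used to determine a relevant set of features for a trained SVM before inducing a logic program. The sketch below is only a minimal illustration of that idea with the shap library, not that paper's pipeline; the dataset, kernel, background-sample size, and top-5 cutoff are illustrative assumptions.

```python
# Minimal sketch: rank the features of a trained SVM by mean |SHAP value|.
# Illustration only; the dataset, kernel, background size, and top-5 cutoff
# are assumptions, not details taken from the paper referenced above.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC

data = load_breast_cancer()
X, y = data.data, data.target
model = SVC(kernel="rbf").fit(X, y)

# KernelExplainer is model-agnostic; a small background sample keeps it cheap.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.decision_function, background)
shap_values = explainer.shap_values(X[:20])   # explain a handful of examples

# Rank features by mean absolute SHAP value and keep the most relevant ones.
importance = np.abs(shap_values).mean(axis=0)
top = np.argsort(importance)[::-1][:5]
print([data.feature_names[i] for i in top])
```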
This list is automatically generated from the titles and abstracts of the papers on this site.