FOLD-R++: A Toolset for Automated Inductive Learning of Default Theories from Mixed Data
- URL: http://arxiv.org/abs/2110.07843v1
- Date: Fri, 15 Oct 2021 03:55:13 GMT
- Title: FOLD-R++: A Toolset for Automated Inductive Learning of Default Theories from Mixed Data
- Authors: Huaduo Wang and Gopal Gupta
- Abstract summary: FOLD-R is an automated inductive learning algorithm for learning default rules with exceptions for mixed (numerical and categorical) data.
We present an improved FOLD-R algorithm, called FOLD-R++, that significantly increases the efficiency and scalability of FOLD-R.
- Score: 2.741266294612776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: FOLD-R is an automated inductive learning algorithm for learning default
rules with exceptions for mixed (numerical and categorical) data. It generates
an (explainable) answer set programming (ASP) rule set for classification
tasks. We present an improved FOLD-R algorithm, called FOLD-R++, that
significantly increases the efficiency and scalability of FOLD-R. FOLD-R++
improves upon FOLD-R without compromising or losing information in the input
training data during the encoding or feature selection phase. The FOLD-R++
algorithm is competitive in performance with the widely used XGBoost algorithm;
however, unlike XGBoost, the FOLD-R++ algorithm produces an explainable model.
Next, we create a powerful toolset by combining FOLD-R++ with s(CASP), a
goal-directed ASP execution engine, to make predictions on new data samples
using the answer set program generated by FOLD-R++. The s(CASP) system also
produces a justification for the prediction. Experiments presented in this
paper show that our improved FOLD-R++ algorithm is a significant improvement
over the original design and that the s(CASP) system can make predictions in an
efficient manner as well.
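The "default rules with exceptions" that FOLD-R++ learns follow the classic negation-as-failure pattern of ASP: a rule fires when its defaults hold and no exception applies. The sketch below illustrates that semantics in plain Python on the textbook birds-fly example; the rule and data are hypothetical illustrations, not actual FOLD-R++ output or its API.

```python
# Evaluating a default rule with an exception, in the spirit of the
# ASP rules FOLD-R++ learns (hypothetical rule, not FOLD-R++ output):
#
#   fly(X) :- bird(X), not penguin(X).
#
# "not penguin(X)" is negation as failure: the exception blocks the default.

def fly(sample):
    # Default: birds fly, unless the exception (being a penguin) applies.
    return sample.get("bird", False) and not sample.get("penguin", False)

samples = [
    {"name": "tweety", "bird": True, "penguin": False},  # default applies
    {"name": "pingu",  "bird": True, "penguin": True},   # exception blocks it
]
for s in samples:
    print(s["name"], fly(s))
```

In the actual toolset, the learned ASP program plays the role of `fly` above, and s(CASP) not only evaluates it on a new sample but also returns the proof tree as a justification for the prediction.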
Related papers
- Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data in Text-Image Encoders [56.47577824219207]
In this paper, we unveil the hidden costs associated with intrusive fine-tuning techniques.
We introduce a new model reprogramming approach for fine-tuning, which we name Reprogrammer.
Our empirical evidence reveals that Reprogrammer is less intrusive and yields superior downstream models.
arXiv Detail & Related papers (2024-03-16T04:19:48Z)
- The Wisdom of Hindsight Makes Language Models Better Instruction Followers [84.9120606803906]
Reinforcement learning has seen wide success in fine-tuning large language models to better align with instructions via human feedback.
In this paper, we consider an alternative approach: converting feedback into instructions by relabeling the original instruction and training the model for better alignment in a supervised manner.
We propose Hindsight Instruction Relabeling (HIR), a novel algorithm for aligning language models with instructions.
arXiv Detail & Related papers (2023-02-10T12:16:38Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- FOLD-SE: Scalable Explainable AI [3.1981440103815717]
We present an improvement over the FOLD-R++ algorithm, termed FOLD-SE, that provides scalable explainability (SE).
The number of learned rules and literals stays small and, hence, understandable by human beings, while maintaining good classification performance.
arXiv Detail & Related papers (2022-08-16T19:15:11Z)
- FOLD-TR: A Scalable and Efficient Inductive Learning Algorithm for Learning To Rank [3.1981440103815717]
FOLD-R++ is a new inductive learning algorithm for binary classification tasks.
We present a customized FOLD-R++ algorithm with the ranking framework, called FOLD-TR.
arXiv Detail & Related papers (2022-06-15T04:46:49Z)
- A Hybrid Framework for Sequential Data Prediction with End-to-End Optimization [0.0]
We investigate nonlinear prediction in an online setting and introduce a hybrid model that effectively mitigates the need for hand-designed features and manual model selection.
We employ a recurrent neural network (LSTM) for adaptive feature extraction from sequential data and a gradient boosting machinery (soft GBDT) for effective supervised regression.
We demonstrate the learning behavior of our algorithm on synthetic data and show significant performance improvements over conventional methods on various real-life datasets.
arXiv Detail & Related papers (2022-03-25T17:13:08Z)
- FOLD-RM: A Scalable and Efficient Inductive Learning Algorithm for Multi-Category Classification of Mixed Data [3.1981440103815717]
FOLD-RM is an automated inductive learning algorithm for learning default rules for mixed (numerical and categorical) data.
It generates an (explainable) answer set programming (ASP) rule set for multi-category classification tasks.
arXiv Detail & Related papers (2022-02-14T18:07:54Z)
- A Clustering and Demotion Based Algorithm for Inductive Learning of Default Theories [4.640835690336653]
We present a clustering- and demotion-based algorithm called Kmeans-FOLD to induce nonmonotonic logic programs from positive and negative examples.
Our algorithm generates a more concise logic program compared to the FOLD algorithm.
Experiments on the UCI dataset show that a combination of K-Means clustering and our demotion strategy produces significant improvement for datasets with more than one cluster of positive examples.
arXiv Detail & Related papers (2021-09-26T14:50:18Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval (PR) is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up new possibilities for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- Discovering Reinforcement Learning Algorithms [53.72358280495428]
Reinforcement learning algorithms update an agent's parameters according to one of several possible rules.
This paper introduces a new meta-learning approach that discovers an entire update rule.
It includes both 'what to predict' (e.g. value functions) and 'how to learn from it' by interacting with a set of environments.
arXiv Detail & Related papers (2020-07-17T07:38:39Z)
- Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state of the art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.