FOLD-RM: A Scalable and Efficient Inductive Learning Algorithm for
Multi-Category Classification of Mixed Data
- URL: http://arxiv.org/abs/2202.06913v1
- Date: Mon, 14 Feb 2022 18:07:54 GMT
- Title: FOLD-RM: A Scalable and Efficient Inductive Learning Algorithm for
Multi-Category Classification of Mixed Data
- Authors: Huaduo Wang and Gopal Gupta
- Abstract summary: FOLD-RM is an automated inductive learning algorithm for learning default rules for mixed (numerical and categorical) data.
It generates an (explainable) answer set programming (ASP) rule set for multi-category classification tasks.
- Score: 3.1981440103815717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: FOLD-RM is an automated inductive learning algorithm for learning default
rules for mixed (numerical and categorical) data. It generates an (explainable)
answer set programming (ASP) rule set for multi-category classification tasks
while maintaining efficiency and scalability. The FOLD-RM algorithm is
competitive in performance with the widely used XGBoost algorithm; unlike
XGBoost, however, FOLD-RM produces an explainable model. FOLD-RM
outperforms XGBoost on some datasets, particularly large ones. FOLD-RM also
provides human-friendly explanations for predictions.
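To make concrete what an explainable, multi-category rule set of this kind looks like, here is a small hand-written Python sketch of a default theory with exceptions, the rule style FOLD-RM learns. The feature names, thresholds, and class labels are hypothetical (loosely modeled on the Iris dataset) and are not actual FOLD-RM output; the real algorithm emits ASP clauses of roughly the form class(X,'versicolor') :- petal_width(X,PW), PW =< 1.7, not ab1(X), executed under an ASP system rather than plain Python.

```python
# Hypothetical rule set in the style FOLD-RM learns: default rules plus
# "abnormality" (exception) predicates. All values are illustrative only.

def ab1(x):
    # exception to the 'versicolor' default: unusually long petals
    return x["petal_length"] > 4.9

RULES = [
    # (label, default condition, exception predicates)
    ("setosa",     lambda x: x["petal_length"] <= 1.9, []),
    ("versicolor", lambda x: x["petal_width"] <= 1.7,  [ab1]),
    ("virginica",  lambda x: True,                     []),  # fallback class
]

def classify(x):
    """Return the first label whose default fires and no exception holds."""
    for label, default, exceptions in RULES:
        if default(x) and not any(ex(x) for ex in exceptions):
            return label
    return None

print(classify({"petal_length": 1.4, "petal_width": 0.2}))  # setosa
print(classify({"petal_length": 5.1, "petal_width": 1.6}))  # virginica (ab1 fires)
```

Because each prediction is justified by one default rule and the exceptions that did or did not fire, an answer can be traced back to human-readable conditions, which is the sense in which the abstract calls the model explainable.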
Related papers
- Language Models are Graph Learners [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, including Graph Neural Networks (GNNs) and Graph Transformers (GTs).
We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z)
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
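As a rough illustration of the GDA step this entry describes, the sketch below fits a shared-covariance Gaussian Discriminant Analysis classifier on pre-extracted feature vectors. It is a generic NumPy implementation under assumed inputs (dummy features stand in for image embeddings), not the paper's training-free CLIP adaptation pipeline.

```python
import numpy as np

def fit_gda(X, y, eps=1e-6):
    """Shared-covariance GDA: per-class means, pooled covariance, class priors."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    centered = X - means[np.searchsorted(classes, y)]
    cov = centered.T @ centered / len(X) + eps * np.eye(X.shape[1])
    priors = np.array([(y == c).mean() for c in classes])
    return classes, means, np.linalg.inv(cov), priors

def predict_gda(X, classes, means, cov_inv, priors):
    """Linear discriminant score per class: x^T S^-1 mu_c - 0.5 mu_c^T S^-1 mu_c + log prior_c."""
    scores = X @ cov_inv @ means.T
    scores -= 0.5 * np.einsum("cd,de,ce->c", means, cov_inv, means)
    scores += np.log(priors)
    return classes[scores.argmax(axis=1)]

# Usage with synthetic features standing in for (e.g.) image embeddings.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(2, 1, (50, 16))])
y = np.array([0] * 50 + [1] * 50)
params = fit_gda(X, y)
print((predict_gda(X, *params) == y).mean())  # training accuracy
```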
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
- GBM-based Bregman Proximal Algorithms for Constrained Learning [3.667453772837954]
We adapt GBM for constrained learning tasks within the framework of Bregman proximal algorithms.
We introduce a new Bregman method with a global optimality guarantee when the learning objective functions are convex.
We provide substantial experimental evidence to showcase the effectiveness of the Bregman algorithm framework.
arXiv Detail & Related papers (2023-08-21T14:56:51Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are better suited to active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- FOLD-SE: Scalable Explainable AI [3.1981440103815717]
We present an improvement over the FOLD-R++ algorithm, termed FOLD-SE, that provides scalable explainability (SE).
The number of learned rules and literals stays small and, hence, understandable by human beings, while good classification performance is maintained.
arXiv Detail & Related papers (2022-08-16T19:15:11Z)
- FOLD-TR: A Scalable and Efficient Inductive Learning Algorithm for Learning To Rank [3.1981440103815717]
FOLD-R++ is a new inductive learning algorithm for binary classification tasks.
We present FOLD-TR, a customized FOLD-R++ algorithm equipped with a ranking framework.
arXiv Detail & Related papers (2022-06-15T04:46:49Z)
- FOLD-R++: A Toolset for Automated Inductive Learning of Default Theories from Mixed Data [2.741266294612776]
FOLD-R is an automated inductive learning algorithm for learning default rules with exceptions for mixed (numerical and categorical) data.
We present an improved FOLD-R algorithm, called FOLD-R++, that significantly increases the efficiency and scalability of FOLD-R.
arXiv Detail & Related papers (2021-10-15T03:55:13Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval (PR) is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up new possibilities for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- FairXGBoost: Fairness-aware Classification in XGBoost [0.0]
We propose a fair variant of XGBoost that enjoys all the advantages of XGBoost, while also matching the levels of fairness from bias-mitigation algorithms.
We provide an empirical analysis of our proposed method on standard benchmark datasets used in the fairness community.
arXiv Detail & Related papers (2020-09-03T04:08:23Z)
- Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
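As a rough sketch of the GAM structure mentioned in the last entry, the code below scores items as a sum of per-feature shape functions. The piecewise-linear shape functions, feature names, and data are invented for illustration and do not reproduce the paper's learned neural ranking GAMs.

```python
import numpy as np

def shape_fn(knots, values):
    """Return a 1-D piecewise-linear 'shape function' f_j for one feature."""
    return lambda x: np.interp(x, knots, values)

# Hypothetical per-feature shape functions; in a learned GAM these come from training.
SHAPE_FNS = [
    shape_fn([0.0, 0.5, 1.0], [0.0, 0.8, 1.0]),   # f_1: diminishing returns on a relevance feature
    shape_fn([0.0, 5.0, 10.0], [0.5, 0.2, 0.0]),  # f_2: penalty growing with staleness (days)
]

def gam_score(item):
    """GAM ranking score: additive combination s(x) = sum_j f_j(x_j)."""
    return sum(f(x_j) for f, x_j in zip(SHAPE_FNS, item))

items = [(0.9, 1.0), (0.6, 0.5), (0.9, 9.0)]       # (relevance, staleness) feature pairs
print(sorted(items, key=gam_score, reverse=True))  # ranked best-first
```

Because the score is a plain sum, each shape function f_j can be plotted on its own to show exactly how that feature contributes to the ranking, which is what makes the additive structure intrinsically interpretable.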