FOLD-RM: A Scalable and Efficient Inductive Learning Algorithm for
Multi-Category Classification of Mixed Data
- URL: http://arxiv.org/abs/2202.06913v1
- Date: Mon, 14 Feb 2022 18:07:54 GMT
- Title: FOLD-RM: A Scalable and Efficient Inductive Learning Algorithm for
Multi-Category Classification of Mixed Data
- Authors: Huaduo Wang and Gopal Gupta
- Abstract summary: FOLD-RM is an automated inductive learning algorithm for learning default rules for mixed (numerical and categorical) data.
It generates an (explainable) answer set programming (ASP) rule set for multi-category classification tasks.
- Score: 3.1981440103815717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: FOLD-RM is an automated inductive learning algorithm for learning default
rules for mixed (numerical and categorical) data. It generates an (explainable)
answer set programming (ASP) rule set for multi-category classification tasks
while maintaining efficiency and scalability. The FOLD-RM algorithm is
competitive in performance with the widely used XGBoost algorithm; however,
unlike XGBoost, FOLD-RM produces an explainable model. FOLD-RM
outperforms XGBoost on some datasets, particularly large ones. FOLD-RM also
provides human-friendly explanations for predictions.
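To make the learned model concrete: FOLD-RM produces stratified default rules with exceptions, i.e. ASP rules of the shape species(X,adelie) :- flipper_length(X,N), N=<206, not ab1(X). The Python sketch below shows how such a rule set classifies a record; the rules, feature names, and thresholds here are hypothetical illustrations, not output of the authors' toolkit or its API.

```python
# Toy evaluator for a FOLD-RM-style default-rule set. Illustrative only:
# the rules, features, and thresholds below are hypothetical.

def matches(conditions, row):
    """A condition is (feature, predicate); all must hold on the row."""
    return all(test(row.get(feature)) for feature, test in conditions)

# Rules are tried in order; a rule fires when its body holds and none of
# its exception bodies hold -- mirroring ASP's "not ab1(X)" default negation.
RULES = [
    # species = adelie if flipper_length <= 206, unless bill_length > 45.
    ("adelie",
     [("flipper_length", lambda v: v is not None and v <= 206)],
     [[("bill_length", lambda v: v is not None and v > 45)]]),
    # catch-all default for the remaining class
    ("gentoo", [], []),
]

def classify(row):
    for label, body, exceptions in RULES:
        if matches(body, row) and not any(matches(e, row) for e in exceptions):
            return label
    return None

print(classify({"flipper_length": 190, "bill_length": 39}))  # -> adelie
print(classify({"flipper_length": 190, "bill_length": 50}))  # -> gentoo (exception fired)
```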
Related papers
- DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs [70.91804882618243]
This paper proposes DSMoE, a novel approach that achieves sparsification by partitioning pre-trained FFN layers into computational blocks.
We implement adaptive expert routing using sigmoid activation and straight-through estimators, enabling tokens to flexibly access different aspects of model knowledge.
Experiments on LLaMA models demonstrate that under equivalent computational constraints, DSMoE achieves superior performance compared to existing pruning and MoE approaches.
arXiv Detail & Related papers (2025-02-18T02:37:26Z)
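The adaptive routing above can be sketched compactly: a sigmoid gate scores each FFN block per token, thresholding gives the hard 0/1 mask used in the forward pass, and a straight-through estimator (STE) lets training gradients flow through the soft scores. A minimal NumPy sketch; the function names, shapes, and the 0.5 threshold are assumptions, not DSMoE's actual configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def route_tokens(gate_logits, threshold=0.5):
    """gate_logits: (tokens, expert_blocks) scores from a small router."""
    soft = sigmoid(gate_logits)                   # differentiable gate values
    hard = (soft > threshold).astype(soft.dtype)  # 0/1 block mask used forward
    # In an autodiff framework the STE is written as
    #     mask = hard + (soft - stop_gradient(soft))
    # so the forward pass sees `hard` while gradients follow `soft`.
    return hard

mask = route_tokens(np.array([[2.0, -1.0, 0.3]]))
print(mask)  # [[1. 0. 1.]] -- this token activates blocks 0 and 2
```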
- From Point to probabilistic gradient boosting for claim frequency and severity prediction
We present a unified notation for, and contrast, all existing point and probabilistic gradient boosting algorithms for decision trees.
We compare their performance on five publicly available claim frequency and severity datasets of various sizes, comprising different numbers of (high-cardinality) categorical variables.
arXiv Detail & Related papers (2024-12-19T14:50:10Z)
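For the point-prediction side of that comparison, claim frequency (a count) is typically fit with a Poisson deviance loss; probabilistic variants instead output a full predictive distribution. A minimal scikit-learn sketch on synthetic data (the paper's models and datasets differ):

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))                # policyholder features (synthetic)
rate = np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1])  # true expected claim count
y = rng.poisson(rate)                         # observed claim counts

# Point gradient boosting for frequency: Poisson deviance loss.
model = HistGradientBoostingRegressor(loss="poisson", max_iter=200)
model.fit(X, y)
print(model.predict(X[:3]))  # predicted expected claim frequencies
```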
- How to Make LLMs Strong Node Classifiers? [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, such as Graph Neural Networks (GNNs) and Graph Transformers (GTs).
We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art (SOTA) GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z)
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to downstream classification with CLIP.
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
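GDA with a shared covariance has a closed-form solution that reduces to a linear classifier, which is why no gradient-based fine-tuning is needed. A minimal NumPy sketch, assuming CLIP image features and labels are already in hand; this follows the textbook GDA/LDA estimator, not necessarily the paper's exact variant:

```python
import numpy as np

def fit_gda(feats, labels, n_classes, eps=1e-4):
    """Shared-covariance GDA: returns linear weights and biases."""
    n, d = feats.shape
    means = np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])
    centered = feats - means[labels]                   # within-class residuals
    cov = centered.T @ centered / n + eps * np.eye(d)  # regularized pooled covariance
    precision = np.linalg.inv(cov)
    weights = means @ precision                        # (n_classes, d)
    priors = np.bincount(labels, minlength=n_classes) / n
    bias = np.log(priors) - 0.5 * np.einsum("cd,cd->c", weights, means)
    return weights, bias

def predict(feats, weights, bias):
    return np.argmax(feats @ weights.T + bias, axis=1)

# Usage with stand-in random "features" (replace with real CLIP embeddings):
feats = np.random.randn(120, 8); labels = np.random.randint(0, 3, 120)
w, b = fit_gda(feats, labels, n_classes=3)
print(predict(feats[:5], w, b))
```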
- GBM-based Bregman Proximal Algorithms for Constrained Learning [3.667453772837954]
We adapt the gradient boosting machine (GBM) for constrained learning tasks within the framework of Bregman proximal algorithms.
We introduce a new Bregman method with a global optimality guarantee when the learning objective functions are convex.
We provide substantial experimental evidence to showcase the effectiveness of the Bregman algorithm framework.
arXiv Detail & Related papers (2023-08-21T14:56:51Z)
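For context, the generic Bregman proximal step that such frameworks build on is (standard form, not the paper's notation):

```latex
x_{k+1} \in \operatorname*{arg\,min}_{x \in C}
  \Big\{ f(x) + \tfrac{1}{t_k}\, D_h(x, x_k) \Big\},
\qquad
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle,
```

where D_h is the Bregman divergence generated by a convex function h; choosing h(x) = ||x||^2 / 2 recovers the usual Euclidean proximal step.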
- FOLD-SE: Scalable Explainable AI [3.1981440103815717]
We present an improvement over the FOLD-R++ algorithm, termed FOLD-SE, that provides scalable explainability (SE).
The number of learned rules and literals stays small, keeping the model understandable by human beings, while good classification performance is maintained.
arXiv Detail & Related papers (2022-08-16T19:15:11Z)
- FOLD-TR: A Scalable and Efficient Inductive Learning Algorithm for Learning To Rank [3.1981440103815717]
FOLD-R++ is a new inductive learning algorithm for binary classification tasks.
We present a customization of the FOLD-R++ algorithm to the learning-to-rank setting, called FOLD-TR.
arXiv Detail & Related papers (2022-06-15T04:46:49Z)
- FOLD-R++: A Toolset for Automated Inductive Learning of Default Theories from Mixed Data [2.741266294612776]
FOLD-R is an automated inductive learning algorithm for learning default rules with exceptions for mixed (numerical and categorical) data.
We present an improved FOLD-R algorithm, called FOLD-R++, that significantly increases the efficiency and scalability of FOLD-R.
arXiv Detail & Related papers (2021-10-15T03:55:13Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval (PR) is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up a new possibility for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
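For reference, the standard phase retrieval problem recovers a signal from magnitude-only measurements (textbook formulation, not this paper's specific measurement model):

```latex
\text{find } \mathbf{x} \in \mathbb{C}^{n}
\quad \text{s.t.} \quad
y_i = \big| \mathbf{a}_i^{\mathsf{H}} \mathbf{x} \big|^{2} + w_i,
\qquad i = 1, \dots, m,
```

where the a_i are known sampling vectors and w_i is noise; deep unfolding unrolls the iterations of a model-based solver such as expectation-consistent signal recovery into a trainable network.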
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance on other classical control tasks, gridworld-type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
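The intrinsic interpretability comes from the additive structure: a ranking GAM scores each item as a sum of one-dimensional shape functions, so every feature's contribution can be plotted and audited separately (standard GAM form, not the paper's exact notation):

```latex
s(\mathbf{x}) = \sum_{j=1}^{d} f_j(x_j),
```

where each f_j is a learned univariate function of feature x_j and items are ranked by their scores s(x).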