A Voting Approach for Explainable Classification with Rule Learning
- URL: http://arxiv.org/abs/2311.07323v2
- Date: Fri, 8 Mar 2024 10:09:43 GMT
- Title: A Voting Approach for Explainable Classification with Rule Learning
- Authors: Albert Nössig, Tobias Hell, Georg Moser
- Abstract summary: We introduce a voting approach combining both worlds, aiming to achieve results comparable to those of (unexplainable) state-of-the-art methods.
We show that our approach not only clearly outperforms ordinary rule learning methods, but also yields results on a par with state-of-the-art outcomes.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art results in typical classification tasks are mostly achieved
by unexplainable machine learning methods, such as deep neural networks. In this
paper, by contrast, we investigate the application of rule learning methods in such a
context, so that classifications are based on comprehensible (first-order) rules
that explain the predictions made. In general, however, rule-based
classifications are less accurate than state-of-the-art results, often
significantly so. As our main contribution, we introduce a voting approach
combining both worlds, aiming to achieve results comparable to those of
(unexplainable) state-of-the-art methods, while still providing explanations in
the form of deterministic rules. On a variety of benchmark data sets, including
a use case of significant interest to the insurance industry, we show that our
approach not only clearly outperforms ordinary rule learning methods, but also
yields results on a par with state-of-the-art outcomes.
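The abstract describes the voting idea only at a high level. As a rough, hypothetical sketch of how rule votes might be combined with an unexplainable fallback model (all rules, names, and thresholds below are illustrative assumptions, not the paper's actual algorithm):

```python
from collections import Counter

# Hypothetical rules: each is a premise over a feature dict plus a label.
# Purely illustrative; the paper learns first-order rules from data.
RULES = [
    (lambda x: x["age"] > 60 and x["claims"] >= 2, "high_risk"),
    (lambda x: x["claims"] == 0, "low_risk"),
    (lambda x: x["age"] <= 25, "high_risk"),
]

def rule_votes(x):
    """Collect the labels of all rules whose premise covers x."""
    return [label for premise, label in RULES if premise(x)]

def vote_classify(x, fallback_model, min_margin=1):
    """Majority vote over the matching rules; defer to the (unexplainable)
    fallback model when no rule applies or the vote is too close."""
    votes = Counter(rule_votes(x))
    if votes:
        (top, n), *rest = votes.most_common()
        margin = n - (rest[0][1] if rest else 0)
        if margin >= min_margin:
            # Explainable path: label plus the deterministic rules that fired.
            return top, [label for label in rule_votes(x) if label == top]
    # Fallback path: black-box prediction, no rule-based explanation.
    return fallback_model(x), None

# Usage with a trivial stand-in for the black-box model.
prediction, explanation = vote_classify(
    {"age": 70, "claims": 3}, fallback_model=lambda x: "low_risk")
```

Here the first rule covers the example, the vote is unanimous, and the explainable path returns `"high_risk"` together with the rule labels that fired; an uncovered example would instead be routed to the fallback model.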
Related papers
- Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based explanations are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z)
- On the Aggregation of Rules for Knowledge Graph Completion [9.628032156001069]
Rule learning approaches for knowledge graph completion are efficient, interpretable and competitive to purely neural models.
We show that existing aggregation approaches can be expressed as marginal inference operations over the predicting rules.
We propose an efficient and overlooked baseline which combines the previous strategies and is competitive to computationally more expensive approaches.
arXiv Detail & Related papers (2023-09-01T07:32:11Z)
- RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank [54.854714257687334]
We propose a novel approach, RankCSE, for unsupervised sentence representation learning.
It incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
Extensive experiments are conducted on both semantic textual similarity (STS) and transfer (TR) tasks.
arXiv Detail & Related papers (2023-05-26T08:27:07Z)
- Efficient learning of large sets of locally optimal classification rules [0.0]
Conventional rule learning algorithms aim at finding a set of simple rules, where each rule covers as many examples as possible.
In this paper, we argue that the rules found in this way may not be the optimal explanations for each of the examples they cover.
We propose an efficient algorithm that aims at finding the best rule covering each training example in a greedy optimization consisting of one specialization and one generalization loop.
arXiv Detail & Related papers (2023-01-24T11:40:28Z)
- Bayes Point Rule Set Learning [5.065947993017157]
Interpretability plays an increasingly important role in the design of machine learning algorithms.
Disjunctive Normal Forms are arguably the most interpretable way to express a set of rules.
We propose an effective bottom-up extension of the popular FIND-S algorithm to learn DNF-type rulesets.
arXiv Detail & Related papers (2022-04-11T16:50:41Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
arXiv Detail & Related papers (2021-07-23T06:57:08Z)
- Visualization of Supervised and Self-Supervised Neural Networks via Attribution Guided Factorization [87.96102461221415]
We develop an algorithm that provides per-class explainability.
In an extensive battery of experiments, we demonstrate the ability of our methods to produce class-specific visualizations.
arXiv Detail & Related papers (2020-12-03T18:48:39Z)
- Learning explanations that are hard to vary [75.30552491694066]
We show that averaging across examples can favor memorization and 'patchwork' solutions that sew together different strategies.
We then propose and experimentally validate a simple alternative algorithm based on a logical AND.
arXiv Detail & Related papers (2020-09-01T10:17:48Z)
- SOAR: Simultaneous Or of And Rules for Classification of Positive & Negative Classes [0.0]
We present a novel and complete taxonomy of classifications that clearly captures and quantifies the inherent ambiguity in noisy binary classifications in the real world.
We show that this approach leads to a more granular formulation of the likelihood model and a simulated-annealing based optimization achieves classification performance competitive with comparable techniques.
arXiv Detail & Related papers (2020-08-25T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.