LEURN: Learning Explainable Univariate Rules with Neural Networks
- URL: http://arxiv.org/abs/2303.14937v1
- Date: Mon, 27 Mar 2023 06:34:42 GMT
- Title: LEURN: Learning Explainable Univariate Rules with Neural Networks
- Authors: Caglar Aytekin
- Abstract summary: LEURN is a neural network architecture that learns univariate decision rules.
LEURN achieves performance comparable to state-of-the-art methods across 30 tabular datasets for classification and regression problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we propose LEURN: a neural network architecture that learns
univariate decision rules. LEURN is a white-box algorithm that results in
univariate trees and makes explainable decisions at every stage. In each layer,
LEURN finds a set of univariate rules based on an embedding of the previously
checked rules and their corresponding responses. Both rule finding and final
decision mechanisms are weighted linear combinations of these embeddings, hence
the contributions of all rules are clearly formulated and explainable. LEURN can
select features, extract feature importance, provide semantic similarity
between a pair of samples, be used in a generative manner, and give a
confidence score. Thanks to a smoothness parameter, LEURN can also controllably
behave like decision trees or vanilla neural networks. Besides these
advantages, LEURN achieves comparable performance to state-of-the-art methods
across 30 tabular datasets for classification and regression problems.
Related papers
- Unveiling Options with Neural Decomposition [11.975013522386538]
In reinforcement learning, agents often learn policies for specific tasks without the ability to generalize this knowledge to related tasks.
This paper introduces an algorithm that attempts to address this limitation by decomposing neural networks encoding policies for Markov Decision Processes into reusable sub-policies.
We turn each of these sub-policies into options by wrapping them with while-loops with varying numbers of iterations.
arXiv Detail & Related papers (2024-10-15T04:36:44Z) - LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
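The LinSATNet entry builds on the classic Sinkhorn algorithm it mentions. A minimal sketch of that base algorithm, assuming a simple dense-matrix setting (this is not the paper's extended, differentiable-layer version):

```python
import numpy as np

def sinkhorn(M, n_iters=50):
    """Classic Sinkhorn normalization: alternately rescale rows and
    columns of a positive matrix so it converges toward a
    doubly-stochastic matrix (all row and column sums equal to 1)."""
    P = np.exp(M)  # make all entries positive from arbitrary scores
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # normalize row marginals
        P = P / P.sum(axis=0, keepdims=True)  # normalize column marginals
    return P
```

LinSATNet generalizes this idea to jointly encode multiple sets of marginal distributions, which is what lets general positive linear constraints be enforced inside a network.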
arXiv Detail & Related papers (2024-07-18T22:05:21Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Neural-based classification rule learning for sequential data [0.0]
We propose a novel differentiable fully interpretable method to discover both local and global patterns for rule-based binary classification.
It consists of a convolutional binary neural network with an interpretable neural filter and a training strategy based on dynamically-enforced sparsity.
We demonstrate the validity and usefulness of the approach on synthetic datasets and on an open-source peptides dataset.
arXiv Detail & Related papers (2023-02-22T11:05:05Z) - Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations [0.0]
We present an approach to the precise and global verification and explanation of Rectifier Neural Networks.
Key to our approach is the symbolic execution of these networks that allows the construction of semantically equivalent Typed Affine Decision Structures.
arXiv Detail & Related papers (2023-01-19T11:35:07Z) - Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z) - UNFIS: A Novel Neuro-Fuzzy Inference System with Unstructured Fuzzy Rules for Classification [1.0660480034605238]
This paper presents a neuro-fuzzy inference system for classification applications.
It can select different sets of input variables for constructing each fuzzy rule.
It achieves better or very close performance with a parsimonious structure consisting of unstructured fuzzy rules.
arXiv Detail & Related papers (2022-10-28T17:51:50Z) - Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z) - Learning Accurate and Interpretable Decision Rule Sets from Neural Networks [5.280792199222362]
This paper proposes a new paradigm for learning a set of independent logical rules in disjunctive normal form as an interpretable model for classification.
We consider the problem of learning an interpretable decision rule set as training a neural network in a specific, yet very simple two-layer architecture.
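The two-layer architecture that entry describes maps naturally onto a disjunctive-normal-form rule set: a conjunction (AND) layer over literals followed by a disjunction (OR) over rules. A minimal sketch of evaluating such a rule set, with hand-written rather than learned rules:

```python
def dnf_predict(x, rules):
    """x: binary feature vector (list of 0/1).
    rules: list of conjunctions, each a list of (feature_index,
    expected_value) literals. Predict 1 if any conjunction fires
    (disjunctive normal form)."""
    for conjunction in rules:
        if all(x[i] == v for i, v in conjunction):  # AND layer
            return 1                                # OR layer: any rule fires
    return 0

# Hypothetical learned rule set: (x0 AND NOT x2) OR x1
rules = [[(0, 1), (2, 0)],
         [(1, 1)]]
```

In the paper's framing, the first network layer learns which literals each conjunction includes, and the second layer implements the OR, so the trained network can be read off directly as an interpretable rule set.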
arXiv Detail & Related papers (2021-03-04T04:10:19Z) - Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - NSL: Hybrid Interpretable Learning From Noisy Raw Data [66.15862011405882]
This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics.
We demonstrate that NSL is able to learn robust rules from MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines.
arXiv Detail & Related papers (2020-12-09T13:02:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.