Neural-based classification rule learning for sequential data
- URL: http://arxiv.org/abs/2302.11286v1
- Date: Wed, 22 Feb 2023 11:05:05 GMT
- Title: Neural-based classification rule learning for sequential data
- Authors: Marine Collery, Philippe Bonnard, François Fages and Remy Kusters
- Abstract summary: We propose a novel differentiable fully interpretable method to discover both local and global patterns for rule-based binary classification.
It consists of a convolutional binary neural network with an interpretable neural filter and a training strategy based on dynamically-enforced sparsity.
We demonstrate the validity and usefulness of the approach on synthetic datasets and on an open-source peptides dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discovering interpretable patterns for classification of sequential data is of key importance for a variety of fields, ranging from genomics to fraud detection or more generally interpretable decision-making. In this paper, we propose a novel differentiable fully interpretable method to discover both local and global patterns (i.e. catching a relative or absolute temporal dependency) for rule-based binary classification. It consists of a convolutional binary neural network with an interpretable neural filter and a training strategy based on dynamically-enforced sparsity. We demonstrate the validity and usefulness of the approach on synthetic datasets and on an open-source peptides dataset. Key to this end-to-end differentiable method is that the expressive patterns used in the rules are learned alongside the rules themselves.
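As a rough, hedged illustration of the architecture sketched above (this is not the authors' code), the snippet below shows how a single binarized 1D convolutional filter over one-hot encoded sequences can act as an interpretable pattern rule, firing either anywhere in the sequence (a local, relative pattern) or at a fixed position (a global, absolute pattern). All names and hyper-parameters are assumptions made for this example; the paper's dynamically-enforced sparsity and full training strategy are omitted.

```python
# Minimal illustrative sketch (not the authors' implementation) of a binarized
# 1D convolutional pattern rule over one-hot encoded sequences.
import torch
import torch.nn as nn

class BinaryPatternRule(nn.Module):
    def __init__(self, vocab_size: int, pattern_len: int, local: bool = True):
        super().__init__()
        # Real-valued proxy weights, binarized in the forward pass.
        self.weight = nn.Parameter(torch.randn(1, vocab_size, pattern_len))
        self.local = local

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, vocab_size, seq_len) one-hot encoded sequences.
        # Straight-through binarization: forward uses {0, 1} weights,
        # gradients flow through the real-valued proxy.
        w_bin = (self.weight > 0).float()
        w_bin = w_bin + (self.weight - self.weight.detach())
        scores = nn.functional.conv1d(x, w_bin)       # pattern-match score at each offset
        hits = (scores >= w_bin.sum()).float()        # 1 where every selected literal matches
        if self.local:
            return hits.amax(dim=-1)                  # pattern found anywhere in the sequence
        return hits[..., 0]                           # pattern required at a fixed position

# Toy usage: 4 one-hot sequences over a 5-symbol alphabet, length 12.
x = nn.functional.one_hot(torch.randint(0, 5, (4, 12)), 5).permute(0, 2, 1).float()
rule = BinaryPatternRule(vocab_size=5, pattern_len=3)
print(rule(x).shape)  # torch.Size([4, 1]): one rule activation per sequence
```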
Related papers
- Neural Lineage [56.34149480207817]
We introduce a novel task known as neural lineage detection, aiming at discovering lineage relationships between parent and child models.
For practical convenience, we introduce a learning-free approach, which integrates an approximation of the finetuning process into the neural network representation similarity metrics.
For the pursuit of accuracy, we introduce a learning-based lineage detector comprising encoders and a transformer detector.
arXiv Detail & Related papers (2024-06-17T01:11:53Z) - Finding Interpretable Class-Specific Patterns through Efficient Neural Search [43.454121220860564]
We propose a novel, inherently interpretable binary neural network architecture DiffNaps that extracts differential patterns from data.
DiffNaps is scalable to hundreds of thousands of features and robust to noise.
We show on synthetic and real world data, including three biological applications, that, unlike its competitors, DiffNaps consistently yields accurate, succinct, and interpretable class descriptions.
arXiv Detail & Related papers (2023-12-07T14:09:18Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - Learning Accurate and Interpretable Decision Rule Sets from Neural Networks [5.280792199222362]
This paper proposes a new paradigm for learning a set of independent logical rules in disjunctive normal form as an interpretable model for classification.
We consider the problem of learning an interpretable decision rule set as training a neural network in a specific, yet very simple two-layer architecture.
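To make the two-layer view concrete, here is a minimal sketch of a DNF rule set evaluated as such a network (an illustration under assumed names, not the cited paper's code): the first layer computes conjunctions (AND) over selected binary features, and the second layer takes their disjunction (OR). Which features each conjunction selects is what training would have to learn; the masks below are fixed by hand.

```python
# Hedged sketch: a DNF rule set viewed as a two-layer network over binary features.
import numpy as np

def dnf_forward(x: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """x: (batch, n_features) in {0,1}; masks: (n_rules, n_features) in {0,1}."""
    # AND layer: a rule fires iff every feature selected by its mask is 1.
    conjunctions = (x @ masks.T) == masks.sum(axis=1)   # (batch, n_rules)
    # OR layer: predict positive iff any rule fires.
    return conjunctions.any(axis=1).astype(int)

# Toy usage: two rules over four binary features -> (f0 AND f2) OR (f1 AND f3).
masks = np.array([[1, 0, 1, 0],
                  [0, 1, 0, 1]])
x = np.array([[1, 0, 1, 0],   # matches rule 0 -> 1
              [0, 0, 1, 1],   # matches neither -> 0
              [0, 1, 0, 1]])  # matches rule 1 -> 1
print(dnf_forward(x, masks))  # [1 0 1]
```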
arXiv Detail & Related papers (2021-03-04T04:10:19Z) - NSL: Hybrid Interpretable Learning From Noisy Raw Data [66.15862011405882]
This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics.
We demonstrate that NSL is able to learn robust rules from MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines.
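As a hedged sketch of the kind of pipeline described here (under assumptions: `pretrained_net` and `ilp_learn` are hypothetical placeholders, and FastLAS's actual API is not reproduced), a pre-trained network first maps raw inputs to symbolic facts, and a rule learner then generalizes over those facts.

```python
# Illustrative neural-symbolic pipeline sketch; `ilp_learn` stands in for an ILP
# system such as FastLAS, whose real interface is not shown here.
import torch

def neural_symbolic_pipeline(images, labels, pretrained_net, ilp_learn):
    symbolic_examples = []
    with torch.no_grad():
        for img, label in zip(images, labels):
            # Neural stage: map each raw image to a discrete symbol (e.g. a digit class).
            digit = pretrained_net(img.unsqueeze(0)).argmax(dim=-1).item()
            # Collect labelled symbolic facts for the rule learner.
            symbolic_examples.append((f"digit({digit})", bool(label)))
    # Symbolic stage: induce interpretable rules from the extracted facts.
    return ilp_learn(symbolic_examples)
```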
arXiv Detail & Related papers (2020-12-09T13:02:44Z) - Towards Improved and Interpretable Deep Metric Learning via Attentive Grouping [103.71992720794421]
Grouping has been commonly used in deep metric learning for computing diverse features.
We propose an improved and interpretable grouping method to be integrated flexibly with any metric learning framework.
arXiv Detail & Related papers (2020-11-17T19:08:24Z) - Identifying Learning Rules From Neural Network Observables [26.96375335939315]
We show that different classes of learning rules can be separated solely on the basis of aggregate statistics of the weights, activations, or instantaneous layer-wise activity changes.
Our results suggest that activation patterns, available from electrophysiological recordings of post-synaptic activities, may provide a good basis on which to identify learning rules.
arXiv Detail & Related papers (2020-10-22T14:36:54Z) - SOAR: Simultaneous Or of And Rules for Classification of Positive & Negative Classes [0.0]
We present a novel and complete taxonomy of classifications that clearly capture and quantify the inherent ambiguity in noisy binary classifications in the real world.
We show that this approach leads to a more granular formulation of the likelihood model and that a simulated-annealing based optimization achieves classification performance competitive with comparable techniques.
arXiv Detail & Related papers (2020-08-25T20:00:27Z)