Fuzzy Logic Function as a Post-hoc Explanator of the Nonlinear
Classifier
- URL: http://arxiv.org/abs/2401.14417v1
- Date: Mon, 22 Jan 2024 13:58:03 GMT
- Title: Fuzzy Logic Function as a Post-hoc Explanator of the Nonlinear
Classifier
- Authors: Martin Klimo, Lubomir Kralik
- Abstract summary: Pattern recognition systems implemented using deep neural networks achieve better results than linear models.
However, their drawback is the black box property.
This property means that a user with no experience of nonlinear systems may struggle to understand the outcome of a decision.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pattern recognition systems implemented using deep neural networks achieve
better results than linear models. However, their drawback is the black box
property. This property means that a user with no experience of nonlinear
systems may struggle to understand how a decision was reached. Such a
situation is unacceptable to the user responsible for the final decision, who
must not only believe the decision but also understand it. Therefore,
recognisers must have an architecture whose findings can be interpreted. The
idea of post-hoc explainable classifiers is to design an interpretable
classifier in parallel to the black box classifier, one that gives the same
decisions as the black box classifier. This paper shows that the explainable
classifier achieves complete agreement with the black box classifier's
decisions on the MNIST and FashionMNIST databases when Zadeh's fuzzy logic
function forms the classifier and DeconvNet importance supplies the truth
values. Since the other tested significance measures achieved lower
performance than DeconvNet, it is, for the databases and recogniser
architecture used, the optimal transformation of feature values into the
truth values fed to the fuzzy logic function.
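
To make the pipeline described above concrete, here is a minimal, hypothetical sketch: per-feature importance scores (stand-ins for DeconvNet-style relevances) are normalised into truth values and fed to a fuzzy logic function in disjunctive normal form, whose per-class truth values yield a decision. The operators (min for AND, max for OR, 1 - x for NOT) follow Zadeh's logic; the min-max normalisation, the toy DNF terms, and names such as `importances_to_truth` and `fuzzy_dnf_score` are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

# Zadeh fuzzy logic operators: AND = min, OR = max, NOT = 1 - x.
def f_and(values):
    return float(np.min(values))

def f_or(values):
    return float(np.max(values))

def f_not(value):
    return 1.0 - value

def importances_to_truth(importances):
    """Map per-feature importance scores (stand-ins for DeconvNet-style
    relevances) to truth values in [0, 1] via min-max normalisation.
    The transformation actually used in the paper may differ."""
    lo, hi = importances.min(), importances.max()
    if hi - lo < 1e-12:
        return np.zeros_like(importances)
    return (importances - lo) / (hi - lo)

def fuzzy_dnf_score(truth, terms):
    """Evaluate a fuzzy logic function in disjunctive normal form:
    OR over terms, AND over (possibly negated) literals within a term.
    `terms` is a list of terms; each term is a list of
    (feature_index, negated) pairs."""
    term_values = []
    for term in terms:
        literals = [f_not(truth[i]) if neg else truth[i] for i, neg in term]
        term_values.append(f_and(literals))
    return f_or(term_values)

# Toy usage with hypothetical per-class DNF terms over 4 features.
rng = np.random.default_rng(0)
importances = rng.normal(size=4)          # stand-in for DeconvNet importances
truth = importances_to_truth(importances)
class_terms = {
    0: [[(0, False), (1, False)], [(2, True)]],   # (x0 AND x1) OR (NOT x2)
    1: [[(2, False), (3, False)]],                # (x2 AND x3)
}
scores = {c: fuzzy_dnf_score(truth, t) for c, t in class_terms.items()}
prediction = max(scores, key=scores.get)
print(scores, prediction)
```

The class whose fuzzy logic function attains the highest truth value is taken as the prediction; in the paper, the point is that such decisions match the black box classifier's decisions while remaining readable as logical expressions.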
Related papers
- Knowledge Trees: Gradient Boosting Decision Trees on Knowledge Neurons
as Probing Classifier [0.0]
Logistic regression on the output representation of a transformer neural network layer is most often used to probe the syntactic properties of a language model.
We show that using gradient boosting decision trees at the Knowledge Neuron layer is more advantageous than using logistic regression on the output representations of the transformer layer.
arXiv Detail & Related papers (2023-12-17T15:37:03Z) - NeSyFOLD: Neurosymbolic Framework for Interpretable Image Classification [1.3812010983144802]
We present a novel framework called NeSyFOLD to create a neurosymbolic (NeSy) model for image classification tasks.
A rule-based machine learning algorithm called FOLD-SE-M is used to derive the stratified answer set program.
A justification for the predictions made by the NeSy model can be obtained using an ASP interpreter.
arXiv Detail & Related papers (2023-01-30T05:08:05Z) - A Set Membership Approach to Discovering Feature Relevance and
Explaining Neural Classifier Decisions [0.0]
This paper introduces a novel methodology for discovering which features are considered relevant by a trained neural classifier.
Although feature relevance has received much attention in the machine learning literature, here we reconsider it in terms of nonlinear parameter estimation.
arXiv Detail & Related papers (2022-04-05T14:25:11Z) - Do We Really Need a Learnable Classifier at the End of Deep Neural
Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialised as an equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method is able to achieve similar performances on image classification for balanced datasets.
arXiv Detail & Related papers (2022-03-17T04:34:28Z) - Fair Interpretable Learning via Correction Vectors [68.29997072804537]
We propose a new framework for fair representation learning centered around the learning of "correction vectors".
The corrections are then simply added to the original features, and can therefore be analyzed as an explicit penalty or bonus to each feature.
We show experimentally that a fair representation learning problem constrained in such a way does not impact performance.
arXiv Detail & Related papers (2022-01-17T10:59:33Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - Classification and Feature Transformation with Fuzzy Cognitive Maps [0.3299672391663526]
Fuzzy Cognitive Maps (FCMs) are considered a soft computing technique combining elements of fuzzy logic and recurrent neural networks.
In this work we propose an FCM based classifier with a fully connected map structure.
Weights are learned with a gradient algorithm, with log-loss (cross-entropy) as the cost function; a minimal forward-pass sketch appears after this list.
arXiv Detail & Related papers (2021-03-08T22:26:24Z) - Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z) - How do Decisions Emerge across Layers in Neural Models? Interpretation
with Differentiable Masking [70.92463223410225]
DiffMask learns to mask-out subsets of the input while maintaining differentiability.
The decision to include or disregard an input token is made with a simple model based on intermediate hidden layers.
This lets us not only plot attribution heatmaps but also analyze how decisions are formed across network layers.
arXiv Detail & Related papers (2020-04-30T17:36:14Z) - Deep Networks as Logical Circuits: Generalization and Interpretation [10.223907995092835]
We present a hierarchical decomposition of the Deep Neural Networks (DNNs) discrete classification map into logical (AND/OR) combinations of intermediate (True/False) classifiers of the input.
We show that the learned, internal, logical computations correspond to semantically meaningful categories that allow DNN descriptions in plain English.
arXiv Detail & Related papers (2020-03-25T20:39:53Z) - A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN
Classifiers [54.996358399108566]
We investigate the performance of landmark general-purpose CNN classifiers, which achieved top results on large-scale classification datasets.
We compare them against state-of-the-art fine-grained classifiers.
We present an extensive evaluation on six datasets to determine whether the fine-grained classifiers are able to improve over the general-purpose baselines.
arXiv Detail & Related papers (2020-03-24T23:49:14Z)
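
As referenced in the Fuzzy Cognitive Maps entry above, here is a minimal sketch of how a fully connected FCM classifier's forward pass might look, assuming a sigmoid transfer function, a fixed number of map iterations, and a softmax readout over designated output concepts; the cited paper's exact formulation may differ, and all names and sizes here are illustrative.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    """Standard FCM sigmoid transfer function with steepness lam."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_forward(x, W, steps=3):
    """Forward pass of a fully connected Fuzzy Cognitive Map: concept
    activations are initialised from the input, then the map is iterated
    `steps` times with A <- sigmoid(W @ A)."""
    a = x.copy()
    for _ in range(steps):
        a = sigmoid(W @ a)
    return a

# Toy usage: 6 concepts; the last 2 are treated as class-output concepts.
rng = np.random.default_rng(0)
n_concepts, n_classes = 6, 2
W = rng.normal(scale=0.5, size=(n_concepts, n_concepts))  # learned in practice
x = np.zeros(n_concepts)
x[:4] = rng.uniform(size=4)        # input feature concepts scaled to [0, 1]
activations = fcm_forward(x, W)
logits = activations[-n_classes:]
probs = np.exp(logits) / np.exp(logits).sum()   # softmax readout
# In the cited work, W would be fitted with a gradient algorithm under a
# cross-entropy (log-loss) cost; training is omitted from this sketch.
print(probs)
```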