E Pluribus Unum Interpretable Convolutional Neural Networks
- URL: http://arxiv.org/abs/2208.05369v1
- Date: Wed, 10 Aug 2022 14:37:03 GMT
- Title: E Pluribus Unum Interpretable Convolutional Neural Networks
- Authors: George Dimas, Eirini Cholopoulou and Dimitris K. Iakovidis
- Abstract summary: We develop a novel framework for instantiating inherently interpretable CNN models, named E Pluribus Unum Interpretable CNN (EPU-CNN).
An EPU-CNN model consists of CNN sub-networks, each of which receives a different representation of an input image expressing a perceptual feature, such as color or texture.
We show that EPU-CNN models can achieve a comparable or better classification performance than other CNN architectures while providing humanly perceivable interpretations.
- Score: 6.45481313278967
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The adoption of Convolutional Neural Network (CNN) models in high-stake
domains is hindered by their inability to meet society's demand for
transparency in decision-making. So far, a growing number of methodologies have
emerged for developing CNN models that are interpretable by design. However,
such models are typically unable to provide interpretations in accordance with
human perception while maintaining competent performance. In this paper, we
tackle these challenges with a novel, general framework for instantiating
inherently interpretable CNN models, named E Pluribus Unum Interpretable CNN
(EPU-CNN). An EPU-CNN model consists of CNN sub-networks, each of which
receives a different representation of an input image expressing a perceptual
feature, such as color or texture. The output of an EPU-CNN model consists of
the classification prediction and its interpretation, in terms of relative
contributions of perceptual features in different regions of the input image.
EPU-CNN models have been extensively evaluated on various publicly available
datasets, as well as a contributed benchmark dataset. Medical datasets are used
to demonstrate the applicability of EPU-CNN for risk-sensitive decisions in
medicine. The experimental results indicate that EPU-CNN models can achieve a
comparable or better classification performance than other CNN architectures
while providing humanly perceivable interpretations.
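To make the architecture described above concrete, the sketch below wires one small CNN sub-network per perceptual-feature representation (hypothetically named "color" and "texture" here) and sums their scalar contributions before the final prediction, returning the contributions as the interpretation. The branch design, the additive fusion, and the omission of region-level interpretations are simplifying assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class PerceptualBranch(nn.Module):
    """Small CNN sub-network mapping one perceptual-feature image
    (e.g., a color or texture representation) to a scalar contribution."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        z = self.features(x).flatten(1)   # (B, 32)
        return self.head(z).squeeze(1)    # (B,) scalar contribution

class EPUCNNSketch(nn.Module):
    """Hypothetical EPU-CNN-style model: one branch per perceptual feature,
    contributions summed for a binary prediction."""
    def __init__(self, branch_channels):
        super().__init__()
        # branch_channels: mapping from feature name to number of input channels
        self.branches = nn.ModuleDict(
            {name: PerceptualBranch(c) for name, c in branch_channels.items()}
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, inputs):
        contributions = {name: self.branches[name](inputs[name]) for name in self.branches}
        logit = torch.stack(list(contributions.values()), dim=0).sum(dim=0) + self.bias
        # Prediction plus per-feature contributions as the interpretation.
        return torch.sigmoid(logit), contributions

# Usage: two hypothetical perceptual representations of the same image batch.
model = EPUCNNSketch({"color": 2, "texture": 1})
batch = {"color": torch.randn(4, 2, 64, 64), "texture": torch.randn(4, 1, 64, 64)}
prob, contrib = model(batch)
```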
Related papers
- GINN-KAN: Interpretability pipelining with applications in Physics Informed Neural Networks [5.2969467015867915]
We introduce the concept of interpretability pipelining, to incorporate multiple interpretability techniques to outperform each individual technique.
We evaluate two recent models selected for their potential to incorporate interpretability into standard neural network architectures.
We introduce a novel interpretable neural network GINN-KAN that synthesizes the advantages of both models.
arXiv Detail & Related papers (2024-08-27T04:57:53Z) - Heterogeneous Federated Learning with Convolutional and Spiking Neural Networks [17.210940028586588]
Federated learning (FL) has emerged as a promising paradigm for training models on decentralized data.
This work benchmarks FL systems containing both convolutional neural networks (CNNs) and biologically more plausible spiking neural networks (SNNs).
Experimental results demonstrate that the CNN-SNN fusion framework exhibits the best performance.
arXiv Detail & Related papers (2024-06-14T03:05:05Z) - A Quantum Convolutional Neural Network Approach for Object Detection and
Classification [0.0]
The time and accuracy of QCNNs are compared with classical CNNs and ANN models under different conditions.
The analysis shows that QCNNs have the potential to outperform both classical CNNs and ANN models in terms of accuracy and efficiency for certain applications.
arXiv Detail & Related papers (2023-07-17T02:38:04Z) - Hybrid CNN-Interpreter: Interpret local and global contexts for
CNN-based Models [9.148791330175191]
Convolutional neural network (CNN) models have seen advanced improvements in performance in various domains.
Lack of interpretability is a major barrier to assurance and regulation during operation for acceptance and deployment of AI-assisted applications.
We propose a novel hybrid CNN-interpreter with two components, sketched below.
An original forward propagation mechanism examines layer-specific prediction results for local interpretability.
A new global interpretability measure indicates feature correlation and filter importance effects.
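One generic way to inspect layer-specific results of this kind is to capture intermediate activations with forward hooks; the sketch below is only a rough approximation of such local inspection, using a placeholder backbone, and is not the propagation mechanism proposed in the paper.

```python
import torch
import torchvision.models as models

# Generic sketch: capture intermediate activations layer by layer so that
# per-layer evidence for a prediction can be inspected (placeholder model).
model = models.resnet18().eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(save_activation(name))

x = torch.randn(1, 3, 224, 224)
logits = model(x)
# Each entry now holds one convolutional layer's feature maps for local inspection.
print({k: tuple(v.shape) for k, v in list(activations.items())[:3]})
```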
arXiv Detail & Related papers (2022-10-31T22:59:33Z) - Universal approximation property of invertible neural networks [76.95927093274392]
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
Thanks to their invertibility and the tractability of their Jacobians, INNs have various machine learning applications such as probabilistic modeling, generative modeling, and representation learning.
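A common way to obtain invertibility by design together with a tractable Jacobian is an affine coupling layer in the RealNVP style; the sketch below illustrates that general construction and is not code from the cited paper.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling: transform half the dimensions conditioned on
    the other half, so both the inverse and the log-determinant are cheap."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                # keep scales bounded
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)           # tractable Jacobian log-determinant
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=1)

layer = AffineCoupling(dim=6)
x = torch.randn(8, 6)
y, log_det = layer(x)
assert torch.allclose(layer.inverse(y), x, atol=1e-5)
```

The Jacobian log-determinant reduces to a sum of the predicted scales, which is what makes likelihood-based uses of such layers tractable.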
arXiv Detail & Related papers (2022-04-15T10:45:26Z) - Combining Discrete Choice Models and Neural Networks through Embeddings:
Formulation, Interpretability and Performance [10.57079240576682]
This study proposes a novel approach that combines theory and data-driven choice models using Artificial Neural Networks (ANNs).
In particular, we use continuous vector representations, called embeddings, for encoding categorical or discrete explanatory variables.
Our models deliver state-of-the-art predictive performance, outperforming existing ANN-based models while drastically reducing the number of required network parameters.
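The core idea of encoding a categorical explanatory variable as a learned dense vector can be sketched as follows; the variable names, dimensions, and the softmax choice head are illustrative assumptions, not the authors' full formulation.

```python
import torch
import torch.nn as nn

class ChoiceUtilitySketch(nn.Module):
    """Toy choice model: a learned embedding encodes a categorical variable
    (e.g., a hypothetical 'trip purpose' with 5 levels) alongside numeric inputs."""
    def __init__(self, n_categories=5, emb_dim=3, n_numeric=2, n_alternatives=3):
        super().__init__()
        self.embedding = nn.Embedding(n_categories, emb_dim)
        self.utility = nn.Linear(emb_dim + n_numeric, n_alternatives)

    def forward(self, category_idx, numeric_feats):
        e = self.embedding(category_idx)                     # (B, emb_dim)
        u = self.utility(torch.cat([e, numeric_feats], 1))   # utilities per alternative
        return torch.softmax(u, dim=1)                       # choice probabilities

model = ChoiceUtilitySketch()
probs = model(torch.tensor([0, 4, 2]), torch.randn(3, 2))
```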
arXiv Detail & Related papers (2021-09-24T15:55:31Z) - On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
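As an example of the kind of perturbation such robustness tests rely on, the sketch below applies a standard FGSM-style attack to an arbitrary classifier; it is a generic illustration, not the framework proposed in the cited paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Fast Gradient Sign Method: one-step perturbation in the direction
    that increases the classification loss (generic robustness probe)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage with any differentiable classifier `model`, inputs `x`, labels `y`:
# x_adv = fgsm_perturb(model, x, y, epsilon=0.01)
# robust_acc = (model(x_adv).argmax(1) == y).float().mean()
```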
arXiv Detail & Related papers (2021-03-27T19:58:06Z) - ECINN: Efficient Counterfactuals from Invertible Neural Networks [80.94500245955591]
We propose a method, ECINN, that utilizes the generative capacities of invertible neural networks for image classification to generate counterfactual examples efficiently.
ECINN has a closed-form expression and generates a counterfactual at the cost of only two network evaluations.
Our experiments demonstrate how ECINN alters class-dependent image regions to change the perceptual and predicted class of the counterfactuals.
arXiv Detail & Related papers (2021-03-25T09:23:24Z) - Explaining and Improving Model Behavior with k Nearest Neighbor
Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
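A minimal version of the kNN-representation idea is to compare a test example's hidden representation against stored training representations; the sketch below uses random placeholder embeddings and only approximates the cited method.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Placeholder: hidden representations of training examples from a finetuned model,
# plus the representation of one test input (shapes are illustrative).
train_reprs = np.random.randn(1000, 256).astype(np.float32)
test_repr = np.random.randn(1, 256).astype(np.float32)

# Retrieve the k training examples "responsible" for the prediction,
# i.e. the nearest neighbors of the test input in representation space.
knn = NearestNeighbors(n_neighbors=5, metric="cosine").fit(train_reprs)
distances, indices = knn.kneighbors(test_repr)
print("Most influential training indices:", indices[0])
```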
arXiv Detail & Related papers (2020-10-18T16:55:25Z) - Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this recurrent generative feedback design on convolutional neural networks (CNNs), yielding the CNN-F model.
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z) - Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
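The NAM structure described above translates almost directly into code: one small network per scalar feature, with contributions summed before the output. The sketch below is a minimal illustration under that reading, not the reference implementation.

```python
import torch
import torch.nn as nn

class NAMSketch(nn.Module):
    """Minimal Neural Additive Model: one MLP per scalar feature,
    prediction = sum of per-feature contributions + bias."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
             for _ in range(n_features)]
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # Each net attends to a single feature; the contributions stay interpretable.
        contribs = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.cat(contribs, dim=1).sum(dim=1, keepdim=True) + self.bias

model = NAMSketch(n_features=4)
pred = model(torch.randn(8, 4))   # (8, 1) additive predictions
```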