Rule-Extraction Methods From Feedforward Neural Networks: A Systematic
Literature Review
- URL: http://arxiv.org/abs/2312.12878v1
- Date: Wed, 20 Dec 2023 09:40:07 GMT
- Title: Rule-Extraction Methods From Feedforward Neural Networks: A Systematic
Literature Review
- Authors: Sara El Mekkaoui, Loubna Benabbou, Abdelaziz Berrado
- Abstract summary: Rules offer a transparent and intuitive means of explaining neural networks.
The study specifically addresses feedforward networks with supervised learning and crisp rules.
Future work can extend to other network types, machine learning methods, and fuzzy rule extraction.
- Score: 1.1510009152620668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motivated by the interpretability question in ML models as a crucial element
for the successful deployment of AI systems, this paper focuses on rule
extraction as a means for neural networks interpretability. Through a
systematic literature review, different approaches for extracting rules from
feedforward neural networks, an important block in deep learning models, are
identified and explored. The findings reveal a range of methods developed for
over two decades, mostly suitable for shallow neural networks, with recent
developments to meet deep learning models' challenges. Rules offer a
transparent and intuitive means of explaining neural networks, making this
study a comprehensive introduction for researchers interested in the field.
While the study specifically addresses feedforward networks with supervised
learning and crisp rules, future work can extend to other network types,
machine learning methods, and fuzzy rule extraction.
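To make the rule-extraction idea concrete, here is a minimal sketch of the pedagogical (black-box) strategy covered by such surveys: train a feedforward network, then fit a shallow decision tree to the network's own predictions and read its branches off as crisp if-then rules. The dataset, model sizes, and use of scikit-learn are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of pedagogical rule extraction: approximate a trained
    # feedforward network with a decision tree and read its branches as rules.
    # Assumes scikit-learn; dataset and hyperparameters are illustrative only.
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)

    # 1) Train the opaque model: a small feedforward (MLP) classifier.
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X, y)

    # 2) Query the network and fit a shallow tree to its predictions,
    #    not to the original labels (pedagogical / black-box extraction).
    y_net = net.predict(X)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X, y_net)

    # 3) The tree's paths are crisp if-then rules approximating the network.
    print(export_text(tree, feature_names=load_iris().feature_names))

    # Fidelity: how often the extracted rules agree with the network.
    print("fidelity:", (tree.predict(X) == y_net).mean())

Fidelity, i.e. agreement between the extracted rules and the network they explain, rather than accuracy on the original labels, is the usual quality criterion for such rule sets.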
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
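A minimal sketch of the activation-frequency statistic described above, using a toy random ReLU layer; the network, data, and threshold are assumptions, not the paper's setup:

    # Sketch: per-neuron activation frequency in a ReLU layer over a dataset,
    # i.e. the fraction of inputs for which each hidden unit is active (> 0).
    # The random network and data below are placeholders, not the paper's setup.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))          # 1000 inputs, 20 features
    W1 = rng.normal(size=(20, 64)) * 0.1     # one hidden layer with 64 units
    b1 = np.zeros(64)

    H = np.maximum(X @ W1 + b1, 0.0)         # ReLU activations, shape (1000, 64)
    activation_freq = (H > 0).mean(axis=0)   # fraction of inputs activating each unit

    # Summary statistics of each unit's activation pattern ("spike train").
    print("mean frequency:", activation_freq.mean())
    print("fraction of near-dead units:", (activation_freq < 0.05).mean())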
arXiv Detail & Related papers (2023-08-01T22:12:30Z)
- When Deep Learning Meets Polyhedral Theory: A Survey [6.899761345257773]
In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks.
Meanwhile, the structure of neural networks converged back to simpler piecewise linear functions.
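To make the piecewise-linear (polyhedral) view concrete, the toy sketch below counts the linear regions of a one-hidden-layer ReLU network on an interval by enumerating its distinct activation patterns; the random weights and grid are illustrative, not taken from the survey.

    # Sketch: a ReLU network with 1-D input is a piecewise linear function;
    # counting distinct hidden activation patterns over a fine grid gives a
    # lower bound on the number of linear regions. Weights are random toys.
    import numpy as np

    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(1, 8)), rng.normal(size=8)   # 8 hidden ReLU units
    w2, b2 = rng.normal(size=8), rng.normal()

    x = np.linspace(-3, 3, 10_000).reshape(-1, 1)
    H = np.maximum(x @ W1 + b1, 0.0)
    y = H @ w2 + b2                          # network output, piecewise linear in x

    patterns = (H > 0).astype(int)           # each region has one activation pattern
    n_regions = len(np.unique(patterns, axis=0))
    print("distinct linear regions on [-3, 3]:", n_regions)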
arXiv Detail & Related papers (2023-04-29T11:46:53Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
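As a hedged sketch of the feedforward case mentioned above, the snippet below identifies a toy nonlinear system by fitting an MLP to predict the next output from lagged inputs and outputs (a NARX-style regressor); the simulated system, lag orders, and model sizes are assumptions for illustration only.

    # Sketch: one-step-ahead prediction of a toy dynamic system with a
    # feedforward network on lagged input/output features (NARX-style).
    # The simulated system, lag orders and model sizes are illustrative.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    u = rng.uniform(-1, 1, size=2000)                 # input signal
    y = np.zeros_like(u)
    for t in range(2, len(u)):                        # a simple nonlinear system
        y[t] = 0.6 * y[t-1] - 0.1 * y[t-2] + np.tanh(u[t-1]) + 0.01 * rng.normal()

    # Regressors: y[t-1], y[t-2], u[t-1], u[t-2]; target: y[t].
    X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
    target = y[2:]

    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
    model.fit(X[:1500], target[:1500])                # identify on the first part
    print("test R^2:", model.score(X[1500:], target[1500:]))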
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Gaussian Process Surrogate Models for Neural Networks [6.8304779077042515]
In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque.
We construct a class of surrogate models for neural networks using Gaussian processes.
We demonstrate our approach captures existing phenomena related to the spectral bias of neural networks, and then show that our surrogate models can be used to solve practical problems.
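In the spirit of the surrogate-modeling idea above, a minimal sketch: query a trained network on sampled inputs and fit a Gaussian process to those input-output pairs, using its predictive mean and standard deviation as a transparent stand-in. This is a generic black-box surrogate, not the specific construction developed in the paper.

    # Sketch: a Gaussian process fitted to input/output queries of a trained
    # network, used as a probabilistic surrogate of that network.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    net.fit(X, y)                                     # the opaque model

    Xq = rng.uniform(-3, 3, size=(100, 1))            # query the network
    yq = net.predict(Xq)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(Xq, yq)                                    # surrogate of the network

    x_test = np.linspace(-3, 3, 5).reshape(-1, 1)
    mean, std = gp.predict(x_test, return_std=True)
    print(np.c_[x_test, mean, std])                   # surrogate mean and uncertainty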
arXiv Detail & Related papers (2022-08-11T20:17:02Z)
- A Comprehensive Survey on Community Detection with Deep Learning [93.40332347374712]
A community reveals the features and connections of its members that are different from those in other communities in a network.
This survey devises and proposes a new taxonomy covering different categories of the state-of-the-art methods.
The main category, i.e., deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks and autoencoders.
arXiv Detail & Related papers (2021-05-26T14:37:07Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Deep Neural Networks and Neuro-Fuzzy Networks for Intellectual Analysis of Economic Systems [0.0]
We consider approaches for time series forecasting based on deep neural networks and neuro-fuzzy nets.
This paper presents also an overview of approaches for incorporating rule-based methodology into deep learning neural networks.
arXiv Detail & Related papers (2020-11-11T06:21:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.