eXpLogic: Explaining Logic Types and Patterns in DiffLogic Networks
- URL: http://arxiv.org/abs/2503.09910v1
- Date: Thu, 13 Mar 2025 00:01:36 GMT
- Title: eXpLogic: Explaining Logic Types and Patterns in DiffLogic Networks
- Authors: Stephen Wormald, David Koblah, Matheus Kunzler Maldaner, Domenic Forte, Damon L. Woodard
- Abstract summary: We present an algorithm (eXpLogic) for producing saliency maps which explain input patterns that activate certain functions. These maps help reduce the network size and inference times by 87% and 8%, respectively, while having a limited impact (-3.8%) on class-specific predictions.
- Score: 2.7542542931778864
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Constraining deep neural networks (DNNs) to learn individual logic types per node, as performed using the DiffLogic network architecture, opens the door to model-specific explanation techniques that quell the complexity inherent to DNNs. Inspired by principles of circuit analysis from computer engineering, this work presents an algorithm (eXpLogic) for producing saliency maps which explain input patterns that activate certain functions. The eXpLogic explanations: (1) show the exact set of inputs responsible for a decision, which helps interpret false negative and false positive predictions, (2) highlight common input patterns that activate certain outputs, and (3) help reduce the network size to improve class-specific inference. To evaluate the eXpLogic saliency map, we introduce a metric that quantifies how much an input changes before switching a model's class prediction (the SwitchDist) and use this metric to compare eXpLogic against the Vanilla Gradients (VG) and Integrated Gradients (IG) methods. Generally, we show that eXpLogic saliency maps are better at predicting which inputs will change the class score. These maps help reduce the network size and inference times by 87% and 8%, respectively, while having a limited impact (-3.8%) on class-specific predictions. The broader value of this work to machine learning is in demonstrating how certain DNN architectures promote explainability, which is relevant to healthcare, defense, and law.
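The SwitchDist metric is only named in the abstract, so a minimal sketch may help: the idea is to perturb inputs in order of decreasing saliency until the predicted class flips, and record how much of the input had to change. Everything below (the `model` callable, the `switch_dist` name, binary inputs, bit-flip perturbations) is a hypothetical reading of the metric, not the paper's exact implementation.

```python
import numpy as np

def switch_dist(model, x, saliency):
    """Flip binary inputs in order of decreasing saliency until the
    model's predicted class changes; return the fraction of inputs
    flipped (lower = the map better identifies decisive inputs)."""
    x = x.copy().ravel()
    baseline = np.argmax(model(x))
    for n_flipped, idx in enumerate(np.argsort(-saliency.ravel()), 1):
        x[idx] = 1 - x[idx]                    # toggle one input bit
        if np.argmax(model(x)) != baseline:
            return n_flipped / x.size          # prediction switched
    return 1.0                                 # prediction never switched
```

Under this reading, a better saliency map yields a smaller SwitchDist, since fewer of its top-ranked inputs must change before the class score flips.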
Related papers
- LASE: Learned Adjacency Spectral Embeddings [7.612218105739107]
We learn nodal Adjacency Spectral Embeddings (ASE) from graph inputs. LASE is interpretable, parameter-efficient, and robust to inputs with unobserved edges. LASE layers combine Graph Convolutional Network (GCN) and fully-connected Graph Attention Network (GAT) modules.
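As a rough illustration of a layer that mixes a GCN-style propagation term with an attention-weighted term, here is a hedged PyTorch sketch; the real LASE layers (which unroll an ASE solver) are more structured than this, and all names below are invented.

```python
import torch
import torch.nn as nn

class HybridLayer(nn.Module):
    """One update combining a GCN-style term (propagate over A) with a
    GAT-style term (attention weights over neighbors), loosely echoing
    the module combination described above."""
    def __init__(self, dim):
        super().__init__()
        self.w_gcn = nn.Linear(dim, dim, bias=False)
        self.w_att = nn.Linear(dim, dim, bias=False)

    def forward(self, X, A):                   # X: (n, dim), A: (n, n)
        gcn_term = A @ self.w_gcn(X)           # neighborhood aggregation
        logits = (X @ X.T).masked_fill(A <= 0, -1e9)
        att_term = torch.softmax(logits, -1) @ self.w_att(X)
        return X + gcn_term + att_term         # residual update
```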
arXiv Detail & Related papers (2024-12-23T17:35:19Z)
- Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations [0.0]
We present an approach to the precise and global verification and explanation of Rectifier Neural Networks.
Key to our approach is the symbolic execution of these networks that allows the construction of semantically equivalent Typed Affine Decision Structures.
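To make the idea concrete, a toy sketch of the underlying case split follows: fixing which ReLU units are active turns the network into a plain affine function on that input region. This brute-force enumeration is illustrative only and is not the authors' (far more efficient) symbolic-execution construction.

```python
import itertools
import numpy as np

def affine_cases(W1, b1, W2, b2):
    """Enumerate activation patterns of a one-hidden-layer ReLU net.
    Each pattern s fixes the active units, so on the region where
    sign(W1 x + b1) matches s the network equals one affine map."""
    for s in itertools.product([0, 1], repeat=W1.shape[0]):
        S = np.diag(s)                            # keep active units only
        yield s, W2 @ S @ W1, W2 @ S @ b1 + b2    # (pattern, W_eff, b_eff)
```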
arXiv Detail & Related papers (2023-01-19T11:35:07Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
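A hedged sketch of the setup as summarized: each circuit becomes a graph, a small message-passing network produces node embeddings, and a regression head is pretrained to predict node voltages. The class name and architecture below are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class VoltageGNN(nn.Module):
    """Message passing over a circuit graph plus a per-node regression
    head; pretraining on node-voltage targets shapes embeddings that
    can be reused for other circuit-level prediction tasks."""
    def __init__(self, in_dim, hid, steps=3):
        super().__init__()
        self.encode = nn.Linear(in_dim, hid)
        self.message = nn.Linear(hid, hid)
        self.head = nn.Linear(hid, 1)
        self.steps = steps

    def forward(self, X, A):                 # X: device features, A: adjacency
        H = torch.relu(self.encode(X))
        for _ in range(self.steps):
            H = torch.relu(H + A @ self.message(H))
        return self.head(H).squeeze(-1), H   # voltages, node embeddings
```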
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
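A minimal sketch of the injection step: project an initial positional signal (e.g. Laplacian eigenvectors) and add it to the projected node features at the input layer, Transformer-style. The module below is an assumption-laden illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PEInput(nn.Module):
    """Inject node positional encodings at the input layer by adding a
    learned projection of an initial PE to the node features."""
    def __init__(self, feat_dim, pe_dim, hid):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, hid)
        self.pe_proj = nn.Linear(pe_dim, hid)

    def forward(self, X, pe):   # pe: e.g. Laplacian eigvecs, (n, pe_dim)
        return self.feat_proj(X) + self.pe_proj(pe)
```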
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- Interpretable Neural Networks based classifiers for categorical inputs [0.0]
We introduce a simple way to interpret the output function of a neural network classifier that takes categorical variables as input.
We show that in these cases each layer of the network, and the logits layer in particular, can be expanded as a sum of terms, each accounting for the contribution of one input pattern to the classification.
The analysis of the contributions of each pattern, after an appropriate gauge transformation, is presented in two cases where the effectiveness of the method can be appreciated.
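The logits-layer expansion is easy to see for one-hot categorical inputs, since a linear layer then decomposes exactly into one additive term per variable. The sketch below shows only that base case; the paper's gauge transformation and deeper-layer analysis are not reproduced here, and the helper name is hypothetical.

```python
import numpy as np

def logit_contributions(W, b, blocks):
    """For one-hot categorical inputs, logits = b + one term per input
    variable: each variable's block of columns dotted with its one-hot.
    blocks maps variable name -> (column slice, one-hot vector)."""
    terms = {name: W[:, cols] @ onehot
             for name, (cols, onehot) in blocks.items()}
    return terms, b + sum(terms.values())   # second value equals W @ x + b
```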
arXiv Detail & Related papers (2021-02-05T14:38:50Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
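A sketch of the masking mechanism as described: a learnable gate per edge scales messages, and gates driven toward zero mark edges that can be dropped without hurting the prediction. The function below assumes a simple message-passing layer and omits the paper's training objective.

```python
import torch

def masked_propagate(H, edge_index, edge_logits, lin):
    """One message-passing step where each edge's message is scaled by
    a differentiable sigmoid gate; near-zero gates flag droppable edges."""
    src, dst = edge_index                    # (2, E) long tensor of endpoints
    gate = torch.sigmoid(edge_logits)        # one learnable logit per edge
    msgs = gate.unsqueeze(-1) * lin(H[src])  # lin: nn.Linear(d, d)
    return torch.zeros_like(H).index_add_(0, dst, msgs)
```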
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient because they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structural features for graph representation learning.
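A small sketch of the feature construction: augment each node with its shortest-path distances to a target node set, giving message passing structural information plain GNNs cannot recover. Implementation choices below (NetworkX, integer node ids) are assumptions, not the paper's code.

```python
import networkx as nx
import numpy as np

def distance_encoding(G, targets):
    """Per-node features: shortest-path distance to each target node
    (inf if unreachable). Assumes nodes are labeled 0..n-1."""
    de = np.full((G.number_of_nodes(), len(targets)), np.inf)
    for j, t in enumerate(targets):
        for node, d in nx.single_source_shortest_path_length(G, t).items():
            de[node, j] = d
    return de
```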
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Deep Networks as Logical Circuits: Generalization and Interpretation [10.223907995092835]
We present a hierarchical decomposition of the discrete classification map of Deep Neural Networks (DNNs) into logical (AND/OR) combinations of intermediate (True/False) classifiers of the input.
We show that the learned, internal, logical computations correspond to semantically meaningful categories that allow DNN descriptions in plain English.
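As a toy illustration of the decomposition, one can read each hidden ReLU unit as a True/False classifier and collect, over data, the activation patterns under which the network outputs True: the decision is then an OR, over patterns, of an AND of per-unit conditions. This empirical sketch is far cruder than the paper's hierarchical construction.

```python
import numpy as np

def or_of_ands(W1, b1, w2, b2, samples):
    """Collect hidden activation patterns (ANDs of unit on/off literals)
    under which a one-hidden-layer net fires; their union (OR) describes
    the observed positive decision region."""
    patterns = set()
    for x in samples:
        pre = W1 @ x + b1
        if w2 @ np.maximum(pre, 0) + b2 > 0:            # network says True
            patterns.add(tuple((pre > 0).astype(int)))  # AND of literals
    return patterns                                     # OR over patterns
```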
arXiv Detail & Related papers (2020-03-25T20:39:53Z)
- Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability for models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)