Space Explanations of Neural Network Classification
- URL: http://arxiv.org/abs/2511.22498v1
- Date: Thu, 27 Nov 2025 14:33:59 GMT
- Title: Space Explanations of Neural Network Classification
- Authors: Faezeh Labbaf, Tomáš Kolárik, Martin Blicha, Grigory Fedyukovich, Michael Wand, Natasha Sharygina
- Abstract summary: We present a novel logic-based concept called Space Explanations for classifying neural networks. To automatically generate space explanations, we leverage a range of flexible Craig interpolation algorithms and unsatisfiable core generation.
- Score: 2.823533769284529
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel logic-based concept called Space Explanations for classifying neural networks that gives provable guarantees of the behavior of the network in continuous areas of the input feature space. To automatically generate space explanations, we leverage a range of flexible Craig interpolation algorithms and unsatisfiable core generation. Based on real-life case studies, ranging from small to medium to large size, we demonstrate that the generated explanations are more meaningful than those computed by state-of-the-art methods.
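Because the guarantee covers a continuous region rather than single points, it can be phrased as a satisfiability query: if no point in the region can flip the class, the region is certified. Below is a minimal sketch of that style of check, assuming the z3-solver Python package and a made-up two-layer ReLU network; it is not the paper's interpolation-based pipeline, only an illustration of a region-level guarantee.

```python
# Minimal sketch (not the paper's pipeline): certify that a tiny ReLU
# network assigns the same class to EVERY point in an input box, the kind
# of continuous-region guarantee a space explanation provides.
# Assumes the z3-solver package; the weights are invented for illustration.
from z3 import Real, Solver, If, And, unsat

x0, x1 = Real('x0'), Real('x1')

def relu(e):
    return If(e > 0, e, 0)

# Hypothetical 2-2-2 network with hand-picked weights.
h0 = relu(1.0 * x0 - 1.0 * x1 + 0.5)
h1 = relu(-0.5 * x0 + 1.0 * x1)
y0 = 1.0 * h0 - 1.0 * h1          # logit of class 0
y1 = -1.0 * h0 + 1.0 * h1 + 0.1   # logit of class 1

s = Solver()
s.add(And(0 <= x0, x0 <= 1, 0 <= x1, x1 <= 0.2))  # candidate region
s.add(y1 >= y0)  # counterexample: some point NOT classified as class 0

if s.check() == unsat:
    print("class 0 is provably constant on the whole box")
else:
    print("counterexample:", s.model())
```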
Related papers
- Dense Neural Networks are not Universal Approximators [53.27010448621372]
We show that dense neural networks are not universal approximators of arbitrary continuous functions. We consider ReLU neural networks subject to natural constraints on weights and on input and output dimensions.
arXiv Detail & Related papers (2026-02-07T16:52:38Z) - The Origins of Representation Manifolds in Large Language Models [52.68554895844062]
We show that cosine similarity in representation space may encode the intrinsic geometry of a feature through shortest, on-manifold paths. The critical assumptions and predictions of the theory are validated on text embeddings and token activations of large language models.
arXiv Detail & Related papers (2025-05-23T13:31:22Z) - On Space Folds of ReLU Neural Networks [6.019268056469171]
Recent findings suggest that ReLU neural networks can be understood geometrically as folding the input space. We present the first quantitative analysis of this space-folding phenomenon in ReLU models; a toy illustration follows.
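One crude way to make folding quantitative, sketched below under the assumption that counting ReLU activation-pattern changes along a straight path is an acceptable proxy (the paper's own measure may differ):

```python
# Hedged sketch: walk a straight segment through input space and count how
# often the hidden activation pattern of a random ReLU net changes, a crude
# proxy for how finely the network partitions (folds) space along the path.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 16)), rng.normal(size=16)

def pattern(x):
    h1 = np.maximum(W1 @ x + b1, 0)
    h2 = np.maximum(W2 @ h1 + b2, 0)
    return np.concatenate([(h1 > 0), (h2 > 0)])

a, b = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
ts = np.linspace(0, 1, 2000)
pats = [pattern(a + t * (b - a)) for t in ts]
changes = sum(not np.array_equal(p, q) for p, q in zip(pats, pats[1:]))
print("activation-pattern changes along the segment:", changes)
```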
arXiv Detail & Related papers (2025-02-14T07:22:24Z) - Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities. We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities. We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
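As a hedged illustration of the recipe (not the authors' exact procedure), one can run a toy RNN over a sequence with a repeated chunk and cluster its hidden states to surface recurring embedding states; the RNN, sequence, and cluster count below are all invented for illustration.

```python
# Hedged sketch of the chunking idea: run an RNN over a sequence with an
# imposed regularity, then cluster hidden states to find recurring
# embedding states. The paper trains its RNNs; this toy net is untrained.
import torch
from sklearn.cluster import KMeans

torch.manual_seed(0)
rnn = torch.nn.RNN(input_size=4, hidden_size=8, batch_first=True)

# Toy sequence in which the pattern [0, 1, 2] recurs (a "chunk").
seq = ([0, 1, 2] + [3]) * 20
x = torch.nn.functional.one_hot(torch.tensor(seq), 4).float().unsqueeze(0)

with torch.no_grad():
    states, _ = rnn(x)                      # (1, T, 8) hidden trajectory
states = states.squeeze(0).numpy()

labels = KMeans(n_clusters=4, n_init=10).fit_predict(states)
print(labels)  # recurring chunk positions should land in recurring clusters
```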
arXiv Detail & Related papers (2025-02-03T20:30:46Z) - Leveraging Activations for Superpixel Explanations [2.8792218859042453]
Saliency methods have become standard in the explanation toolkit of deep neural networks.
In this paper, we aim to avoid relying on segmenters by extracting a segmentation from the activations of a deep neural network image classifier.
Our method, Neuro-Activated Superpixels (NAS), can isolate the regions of the input relevant to the model's prediction.
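A minimal sketch of the general mechanism, assuming a stand-in convolutional network and KMeans clustering rather than NAS's actual procedure: upsample a feature map to input resolution and cluster per-pixel activation vectors into regions.

```python
# Hedged sketch of activation-based superpixels: upsample a convolutional
# feature map to input resolution and cluster per-pixel activation vectors
# into regions. Illustrative only; NAS differs in detail.
import torch
from sklearn.cluster import KMeans

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU(),
)
img = torch.rand(1, 3, 64, 64)  # stand-in for a real image

with torch.no_grad():
    feats = net(img)                                    # (1, 32, 16, 16)
    feats = torch.nn.functional.interpolate(
        feats, size=(64, 64), mode='bilinear', align_corners=False)

vecs = feats.squeeze(0).permute(1, 2, 0).reshape(-1, 32).numpy()
segments = KMeans(n_clusters=8, n_init=10).fit_predict(vecs).reshape(64, 64)
print(segments.shape)  # one superpixel label per pixel
```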
arXiv Detail & Related papers (2024-06-07T13:37:45Z) - Neural reproducing kernel Banach spaces and representer theorems for deep networks [14.902126718612648]
We show that deep neural networks define suitable reproducing kernel Banach spaces. We derive representer theorems that justify the finite architectures commonly employed in applications.
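Schematically, representer theorems of this kind say that a regularized fit over the neural reproducing kernel Banach space is attained by a finite-width network; the statement below is an illustrative template, not the paper's precise theorem or hypotheses.

```latex
% Illustrative template of a neural representer theorem: regularized
% empirical risk minimization over a neural RKBS \mathcal{B} admits a
% finite-width minimizer (see the paper for the exact spaces and norms).
\min_{f \in \mathcal{B}} \; \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big)
  + \lambda \|f\|_{\mathcal{B}}
\quad\Longrightarrow\quad
f^{\star}(x) = \sum_{j=1}^{N} a_j \,\sigma\big(\langle w_j, x\rangle + b_j\big),
\qquad N \le n .
```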
arXiv Detail & Related papers (2024-03-13T17:51:02Z) - Conditional computation in neural networks: principles and research trends [48.14569369912931]
This article summarizes principles and ideas from the emerging area of applying conditional computation methods to the design of neural networks.
In particular, we focus on neural networks that can dynamically activate or de-activate parts of their computational graph conditionally on their input.
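A minimal sketch of one such mechanism, a top-k gate that routes each input through a subset of expert sub-modules; the names and sizes are invented, and the survey covers many other variants (mixtures of experts, early exits, and so on).

```python
# Hedged sketch of input-conditional computation: a gate decides, per
# input, which expert sub-modules of the graph to run; unchosen experts
# are skipped entirely.
import torch

class GatedExperts(torch.nn.Module):
    def __init__(self, dim=16, n_experts=4, k=1):
        super().__init__()
        self.gate = torch.nn.Linear(dim, n_experts)
        self.experts = torch.nn.ModuleList(
            torch.nn.Linear(dim, dim) for _ in range(n_experts))
        self.k = k

    def forward(self, x):               # x: (batch, dim)
        scores = self.gate(x)           # (batch, n_experts)
        top = scores.topk(self.k, dim=-1).indices
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (top == e).any(dim=-1)   # inputs routed to expert e
            if mask.any():                  # de-activated experts never run
                out[mask] = expert(x[mask])
        return out

x = torch.randn(8, 16)
print(GatedExperts()(x).shape)  # torch.Size([8, 16])
```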
arXiv Detail & Related papers (2024-03-12T11:56:38Z) - Efficient compilation of expressive problem space specifications to neural network solvers [0.0]
We describe an algorithm for compiling expressive problem space specifications into queries for neural network solvers.
We explore and overcome complications that arise from targeting neural network solvers as opposed to standard SMT solvers.
arXiv Detail & Related papers (2024-01-24T09:13:09Z) - DepWiGNN: A Depth-wise Graph Neural Network for Multi-hop Spatial Reasoning in Text [52.699307699505646]
We propose a novel Depth-Wise Graph Neural Network (DepWiGNN) to handle multi-hop spatial reasoning.
Specifically, we design a novel node memory scheme and aggregate the information over the depth dimension instead of the breadth dimension of the graph.
Experimental results on two challenging multi-hop spatial reasoning datasets show that DepWiGNN outperforms existing spatial reasoning methods.
arXiv Detail & Related papers (2023-10-19T08:07:22Z) - Finite-time Lyapunov exponents of deep neural networks [0.0]
We compute how small input perturbations affect the output of deep neural networks.
We show that the maximal exponent forms geometrical structures in input space, akin to coherent structures in dynamical systems.
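A common way to define such an exponent, sketched here as an assumption rather than the paper's exact formulation, is the log of the largest singular value of the input-output Jacobian at a point; evaluating it on a grid of inputs exposes the geometrical structures mentioned above.

```python
# Hedged sketch: estimate a finite-time Lyapunov exponent of a network as
# the log of the largest singular value of its input-output Jacobian,
# i.e., the fastest local rate at which nearby inputs separate.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),
)

def ftle(x):
    J = torch.autograd.functional.jacobian(net, x)  # (2, 2) at one point
    return torch.linalg.svdvals(J)[0].log().item()

print(ftle(torch.tensor([0.3, -0.7])))
```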
arXiv Detail & Related papers (2023-06-21T20:21:23Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges to reflect the magnitude of connections, the learning process can be performed in a differentiable manner. This learning process is compatible with existing networks and adapts to larger search spaces and different tasks; a sketch follows.
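A minimal sketch of the idea, with an invented parameterization (sigmoid-squashed edge scalars on a complete DAG) rather than the paper's exact one:

```python
# Hedged sketch of learnable connectivity: a stage is a complete DAG whose
# edges carry learnable scalars; each node aggregates weighted inputs, so
# connection strengths are learned by ordinary gradient descent.
import torch

class LearnableDAG(torch.nn.Module):
    def __init__(self, n_nodes=4, dim=16):
        super().__init__()
        self.n = n_nodes
        self.edge = torch.nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.ops = torch.nn.ModuleList(
            torch.nn.Linear(dim, dim) for _ in range(n_nodes))

    def forward(self, x):
        feats = [x]
        for j in range(self.n):
            # Weighted sum over all earlier nodes; sigmoid keeps each
            # connection strength in (0, 1).
            agg = sum(torch.sigmoid(self.edge[i, j]) * feats[i]
                      for i in range(len(feats)))
            feats.append(torch.relu(self.ops[j](agg)))
        return feats[-1]

print(LearnableDAG()(torch.randn(8, 16)).shape)  # torch.Size([8, 16])
```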
arXiv Detail & Related papers (2020-08-19T04:53:31Z) - Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
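The closed form here is, in the style of SeFa, an eigendecomposition: the latent directions that maximally perturb the generator's first affine layer are the top eigenvectors of A^T A, where A is that layer's weight matrix. A sketch with stand-in weights:

```python
# Hedged sketch of closed-form latent factorization (SeFa-style): take the
# top eigenvectors of A^T A for the generator's first affine weight A as
# interpretable directions; no training or sampling needed. The weight
# matrix below is random, standing in for pretrained weights.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1024, 512))      # stand-in first-layer weight

# Directions n maximizing ||A n||^2 with ||n|| = 1 are eigenvectors of A^T A.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)   # ascending eigenvalues
directions = eigvecs[:, ::-1][:, :5]          # top-5 directions (columns)

z = rng.normal(size=512)              # a latent code
z_edit = z + 3.0 * directions[:, 0]   # move along the strongest direction
print(directions.shape, np.linalg.norm(directions[:, 0]))
```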
arXiv Detail & Related papers (2020-07-13T18:05:36Z)