Query languages for neural networks
- URL: http://arxiv.org/abs/2408.10362v2
- Date: Wed, 21 Aug 2024 12:50:01 GMT
- Title: Query languages for neural networks
- Authors: Martin Grohe, Christoph Standke, Juno Steegmans, Jan Van den Bussche
- Abstract summary: We study different query languages, based on first-order logic, that mainly differ in their access to the neural network model.
First-order logic over the reals naturally yields a language which views the network as a black box.
A white-box language can be obtained by viewing the network as a weighted graph, and extending first-order logic with summation over weight terms.
- Score: 2.189522312470092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We lay the foundations for a database-inspired approach to interpreting and understanding neural network models by querying them using declarative languages. Towards this end we study different query languages, based on first-order logic, that mainly differ in their access to the neural network model. First-order logic over the reals naturally yields a language which views the network as a black box; only the input--output function defined by the network can be queried. This is essentially the approach of constraint query languages. On the other hand, a white-box language can be obtained by viewing the network as a weighted graph, and extending first-order logic with summation over weight terms. The latter approach is essentially an abstraction of SQL. In general, the two approaches are incomparable in expressive power, as we will show. Under natural circumstances, however, the white-box approach can subsume the black-box approach; this is our main result. We prove the result concretely for linear constraint queries over real functions definable by feedforward neural networks with a fixed number of hidden layers and piecewise linear activation functions.
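The black-box/white-box contrast in the abstract can be illustrated with a minimal sketch (all names and the toy network are hypothetical, not from the paper): a black-box query may only evaluate the input--output function defined by the network, while a white-box query sees the network as a weighted graph and may aggregate over its weights, in the style of SQL summation.

```python
# Minimal sketch (hypothetical names): one tiny ReLU network viewed two ways.
# Black-box access: only the input-output function f can be queried.
# White-box access: the network is a weighted graph; queries may sum weights.

def relu(x):
    return max(0.0, x)

# A one-hidden-layer network as a weighted graph: edges[(u, v)] = weight.
edges = {("x", "h1"): 2.0, ("x", "h2"): -1.0,
         ("h1", "y"): 1.0, ("h2", "y"): 3.0}

def f(x):
    """Black-box view: evaluate the input-output function only."""
    h1 = relu(edges[("x", "h1")] * x)
    h2 = relu(edges[("x", "h2")] * x)
    return edges[("h1", "y")] * h1 + edges[("h2", "y")] * h2

# Black-box query (constraint-query style): is f nonnegative on sample points?
black_box_answer = all(f(x) >= 0 for x in [-1.0, 0.0, 1.0])

# White-box query (SQL-style aggregation): total weight entering node "y".
white_box_answer = sum(w for (u, v), w in edges.items() if v == "y")

print(black_box_answer, white_box_answer)  # True 4.0
```

The white-box query corresponds to an SQL aggregate over an edge table (`SELECT SUM(weight) ... GROUP BY target`), which is the abstraction the paper attributes to the summation-extended logic.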
Related papers
- Query Languages for Machine-Learning Models [7.343886246061387]
I discuss two logics for weighted finite structures. I present illustrative examples of queries to neural networks that can be expressed in these logics.
arXiv Detail & Related papers (2026-01-14T11:15:09Z) - Recursive querying of neural networks via weighted structures [5.5784135176547025]
We investigate logics for weighted structures in feedforward neural networks. We adopt a Datalog-like syntax and extend normal forms for fixpoint logics to weighted structures. We show that very simple model-agnostic queries are already NP-complete.
arXiv Detail & Related papers (2026-01-06T17:30:44Z) - On the Limits of Hierarchically Embedded Logic in Classical Neural Networks [0.0]
We show that each layer can encode at most one additional level of logical reasoning. We prove that a neural network of a given depth cannot faithfully represent predicates of a logic one order higher.
arXiv Detail & Related papers (2025-07-28T16:13:41Z) - Concept-Guided Interpretability via Neural Chunking [64.6429903327095]
We show that neural networks exhibit patterns in their raw population activity that mirror regularities in the training data. We propose three methods to extract recurring chunks on a neural population level. Our work points to a new direction for interpretability, one that harnesses both cognitive principles and the structure of naturalistic data.
arXiv Detail & Related papers (2025-05-16T13:49:43Z) - Towards Combinatorial Interpretability of Neural Computation [36.53010994384343]
We introduce combinatorial interpretability, a methodology for understanding neural computation by analyzing the computational structures in the sign-based categorization of a network's weights and biases.
We demonstrate its power through feature channel coding, a theory that explains how neural networks compute Boolean expressions.
arXiv Detail & Related papers (2025-04-10T21:28:16Z) - Training Neural Networks as Recognizers of Formal Languages [87.06906286950438]
Formal language theory pertains specifically to recognizers.
It is common to instead use proxy tasks that are similar in only an informal sense.
We correct this mismatch by training and evaluating neural networks directly as binary classifiers of strings.
arXiv Detail & Related papers (2024-11-11T16:33:25Z) - LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
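The classic Sinkhorn algorithm that this layer extends can be sketched in a few lines (a simplified illustration, not the paper's differentiable layer): alternately normalize the rows and columns of a positive matrix until it is approximately doubly stochastic.

```python
# Sketch of the classic Sinkhorn iteration (simplified; not LinSATNet itself):
# alternately normalize rows and columns of a strictly positive matrix so it
# converges toward a doubly stochastic matrix.

def sinkhorn(m, iters=100):
    """m: list of lists with strictly positive entries."""
    for _ in range(iters):
        # Normalize each row to sum to 1.
        m = [[v / sum(row) for v in row] for row in m]
        # Normalize each column to sum to 1.
        cols = [sum(row[j] for row in m) for j in range(len(m[0]))]
        m = [[row[j] / cols[j] for j in range(len(row))] for row in m]
    return m

m = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
print(m)  # rows and columns each sum to approximately 1
```

The paper's contribution, per the summary above, is extending this iteration to encode several sets of marginal distributions jointly inside a differentiable layer.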
arXiv Detail & Related papers (2024-07-18T22:05:21Z) - Codebook Features: Sparse and Discrete Interpretability for Neural Networks [43.06828312515959]
We explore whether we can train neural networks to have hidden states that are sparse, discrete, and more interpretable.
Codebook features are produced by finetuning neural networks with vector quantization bottlenecks at each layer.
We find that neural networks can operate under this extreme bottleneck with only modest degradation in performance.
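The vector-quantization bottleneck mentioned above can be sketched as nearest-neighbor lookup into a codebook (an assumed, simplified mechanism; the names and codebook are illustrative, not from the paper): each hidden state is replaced by its closest codebook vector, making the features discrete.

```python
# Sketch of a vector-quantization bottleneck (simplified assumption):
# each hidden state is snapped to its nearest codebook vector, so downstream
# layers only ever see one of a small, discrete set of feature vectors.

def quantize(h, codebook):
    """Return the codebook entry nearest to h in Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(codebook, key=lambda c: dist2(h, c))

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(quantize((0.9, 0.2), codebook))  # (1.0, 0.0)
```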
arXiv Detail & Related papers (2023-10-26T08:28:48Z) - Empower Nested Boolean Logic via Self-Supervised Curriculum Learning [67.46052028752327]
We find that pre-trained language models, even large language models, behave like random selectors when faced with multi-nested Boolean logic.
To empower language models with this fundamental capability, this paper proposes a new self-supervised learning method, Curriculum Logical Reasoning (CLR).
arXiv Detail & Related papers (2023-10-09T06:54:02Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parsing framework that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Learning Language Representations with Logical Inductive Bias [19.842271716111153]
We explore a new logical inductive bias for better language representation learning.
We develop a novel neural architecture named FOLNet to encode this new inductive bias.
We find that the self-attention module in transformers can be composed by two of our neural logic operators.
arXiv Detail & Related papers (2023-02-19T02:21:32Z) - Neural Methods for Logical Reasoning Over Knowledge Graphs [14.941769519278745]
We focus on answering multi-hop logical queries on Knowledge Graphs (KGs).
Most previous works have been unable to create models that accept full First-Order Logical (FOL) queries.
We introduce a set of models that use Neural Networks to create one-point vector embeddings to answer the queries.
arXiv Detail & Related papers (2022-09-28T23:10:09Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is end-to-end learned.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferrable to a new task in a sample efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - Rule Extraction from Binary Neural Networks with Convolutional Rules for Model Validation [16.956140135868733]
We introduce the concept of first-order convolutional rules, which are logical rules that can be extracted using a convolutional neural network (CNN).
Our approach is based on rule extraction from binary neural networks with local search.
Our experiments show that the proposed approach is able to model the functionality of the neural network while at the same time producing interpretable logical rules.
arXiv Detail & Related papers (2020-12-15T17:55:53Z) - Learning Syllogism with Euler Neural-Networks [20.47827965932698]
The central vector of a ball inherits the representation power of a traditional neural network.
A novel back-propagation algorithm with six Rectified Spatial Units (ReSU) can optimize an Euler diagram representing logical premises.
In contrast to traditional neural networks, the ENN can precisely represent all 24 different structures of Syllogism.
arXiv Detail & Related papers (2020-07-14T19:35:35Z) - Linguistically Driven Graph Capsule Network for Visual Question Reasoning [153.76012414126643]
We propose a hierarchical compositional reasoning model called the "Linguistically driven Graph Capsule Network".
The compositional process is guided by the linguistic parse tree. Specifically, we bind each capsule in the lowest layer to bridge the linguistic embedding of a single word in the original question with visual evidence.
Experiments on the CLEVR dataset, CLEVR compositional generation test, and FigureQA dataset demonstrate the effectiveness and composition generalization ability of our end-to-end model.
arXiv Detail & Related papers (2020-03-23T03:34:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.