Logic Gate Neural Networks are Good for Verification
- URL: http://arxiv.org/abs/2505.19932v1
- Date: Mon, 26 May 2025 12:59:33 GMT
- Title: Logic Gate Neural Networks are Good for Verification
- Authors: Fabian Kresse, Emily Yu, Christoph H. Lampert, Thomas A. Henzinger
- Abstract summary: We introduce a SAT encoding for verifying global robustness and fairness in learned Logic Gate Networks (LGNs). We evaluate our method on five benchmark datasets, including a newly constructed 5-class variant, and find that LGNs are both verification-friendly and maintain strong predictive performance.
- Score: 20.84137106332268
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Learning-based systems are increasingly deployed across various domains, yet the complexity of traditional neural networks poses significant challenges for formal verification. Unlike conventional neural networks, learned Logic Gate Networks (LGNs) replace multiplications with Boolean logic gates, yielding a sparse, netlist-like architecture that is inherently more amenable to symbolic verification, while still delivering promising performance. In this paper, we introduce a SAT encoding for verifying global robustness and fairness in LGNs. We evaluate our method on five benchmark datasets, including a newly constructed 5-class variant, and find that LGNs are both verification-friendly and maintain strong predictive performance.
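The shape of such a query is easy to see on a toy example. The sketch below is not the paper's encoding: it hand-writes a four-input, three-gate network directly in Z3 (z3-solver) and asks whether flipping any single input bit can change the output. UNSAT would certify global robustness to such flips; a model is a concrete counterexample.

```python
# A minimal sketch, assuming a hand-written toy netlist; not the paper's encoding.
from z3 import Bools, And, Or, Xor, Not, Solver, sat

x1, x2, x3, x4 = Bools("x1 x2 x3 x4")   # original input bits
y1, y2, y3, y4 = Bools("y1 y2 y3 y4")   # perturbed input bits

def network(a, b, c, d):
    # Toy netlist: every "neuron" is a single two-input logic gate.
    return Or(And(a, b), Xor(c, d))

# The two inputs must differ in exactly one bit (Hamming distance 1).
flips = [Xor(x1, y1), Xor(x2, y2), Xor(x3, y3), Xor(x4, y4)]
one_flip = And(Or(flips),
               *[Not(And(flips[i], flips[j]))
                 for i in range(4) for j in range(i + 1, 4)])

s = Solver()
s.add(one_flip)
s.add(network(x1, x2, x3, x4) != network(y1, y2, y3, y4))  # seek counterexample

res = s.check()
print("globally robust" if res != sat else f"not robust, e.g. {s.model()}")
```

For this toy netlist a counterexample exists (flipping x3 or x4 toggles the XOR whenever the AND is false); a real LGN encoding would introduce one propositional variable per gate output, Tseitin-style, instead of inlining the whole formula.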
Related papers
- NN-Former: Rethinking Graph Structure in Neural Architecture Representation [67.3378579108611]
Graph Neural Networks (GNNs) and transformers have shown promising performance in representing neural architectures.
We show that sibling nodes are pivotal yet have been overlooked in previous research.
Our approach consistently achieves promising performance in both accuracy and latency prediction.
arXiv Detail & Related papers (2025-07-01T15:46:18Z)
- Neural Networks as Universal Finite-State Machines: A Constructive Feedforward Simulation Framework for NFAs [0.0]
This work establishes a new bridge between symbolic automata theory and modern neural architectures.
We show that feedforward networks can perform precise, interpretable, and trainable symbolic computation.
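As an illustration of the general idea (my own toy construction, not the paper's framework), an NFA can be simulated by one 0/1 transition matrix per input symbol plus a hard threshold, with the set of active states kept as a binary vector:

```python
# A minimal sketch: an NFA accepting the language a+b(a|b)*, simulated by
# one linear map per symbol followed by a step nonlinearity.
import numpy as np

# States {0, 1, 2}, start state 0, accepting state 2.
# Transitions: 0 -a-> {0, 1}, 1 -b-> {2}, 2 -a-> {2}, 2 -b-> {2}
T = {
    "a": np.array([[1, 1, 0],
                   [0, 0, 0],
                   [0, 0, 1]]),
    "b": np.array([[0, 0, 0],
                   [0, 0, 1],
                   [0, 0, 1]]),
}

def step(states, symbol):
    # The matrix product counts predecessor states; thresholding at 1
    # re-binarizes the state vector, mimicking a hard activation.
    return (states @ T[symbol] >= 1).astype(int)

def accepts(word):
    states = np.array([1, 0, 0])          # one-hot start state
    for sym in word:
        states = step(states, sym)
    return bool(states[2])                # is an accepting state active?

print(accepts("ab"), accepts("aab"), accepts("ba"))  # True True False
```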
arXiv Detail & Related papers (2025-05-30T01:18:35Z)
- Convolutional Differentiable Logic Gate Networks [68.74313756770123]
We propose an approach for learning logic gate networks directly via a differentiable relaxation.
We build on this idea, extending it with deep logic gate tree convolutions and logical OR pooling.
On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
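The core trick, as I understand it from this line of work: each gate computes a softmax-weighted mixture of real-valued relaxations of Boolean operations, so the choice of gate becomes a differentiable parameter. A minimal sketch using only four of the sixteen two-input gates (the names and class structure are illustrative, not the authors' code):

```python
import torch

def relaxed_gates(a, b):
    # Real-valued surrogates of two-input Boolean gates; each one is
    # exact when a and b are in {0, 1}.
    return torch.stack([
        a * b,               # AND
        a + b - a * b,       # OR
        a + b - 2 * a * b,   # XOR
        1 - a * b,           # NAND
    ])

class DiffLogicGate(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(4))  # one logit per gate

    def forward(self, a, b):
        w = torch.softmax(self.logits, dim=0)             # distribution over gates
        return (w[:, None] * relaxed_gates(a, b)).sum(0)  # soft gate mixture

gate = DiffLogicGate()
a = torch.tensor([0.0, 1.0])
b = torch.tensor([1.0, 1.0])
out = gate(a, b)       # differentiable in the gate-choice logits
out.sum().backward()   # so gradient descent can pick the gate
```

After training, each gate is discretized to its highest-weight operation, which yields the sparse netlist that the main paper's verification relies on.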
arXiv Detail & Related papers (2024-11-07T14:12:00Z)
- Towards Narrowing the Generalization Gap in Deep Boolean Networks [3.230778132936486]
This paper explores strategies to enhance deep Boolean networks with the aim of surpassing their traditional counterparts.
We propose novel methods, including logical skip connections and spatiality preserving sampling, and validate them on vision tasks.
Our analysis shows how deep Boolean networks can maintain high performance while minimizing computational costs through 1-bit logic operations.
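The summary does not define a "logical skip connection"; one plausible reading (my assumption) is the 1-bit analogue of a residual connection, merging a Boolean block's output with its input via elementwise OR:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)  # random binary weights

def boolean_layer(x):
    # Binary "matmul": AND plays multiply, OR (np.any) plays accumulate.
    return np.any(np.logical_and(W, x[:, None]), axis=0).astype(np.uint8)

def block_with_logical_skip(x):
    # OR-merge of input and output: the hypothetical 1-bit residual.
    return np.logical_or(boolean_layer(x), x).astype(np.uint8)

x = rng.integers(0, 2, size=8, dtype=np.uint8)
print(block_with_logical_skip(x))
```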
arXiv Detail & Related papers (2024-09-06T09:16:36Z)
- Learning Interpretable Differentiable Logic Networks [3.8064485653035987]
We introduce a novel method for learning interpretable differentiable logic networks (DLNs).
We train these networks by softening and differentiating their discrete components: the binarization of inputs, the binary logic operations, and the connections between neurons.
Experimental results on twenty classification tasks indicate that differentiable logic networks can achieve accuracies comparable to or exceeding those of traditional NNs.
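The summary does not say exactly how the discrete components are softened; a common generic recipe for differentiating through hard input binarization is the straight-through estimator, sketched below (treat it as background, not as this paper's method):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()   # hard 0/1 threshold in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out          # identity gradient: "straight through"

x = torch.randn(4, requires_grad=True)
y = BinarizeSTE.apply(x)
y.sum().backward()
print(y, x.grad)                 # binary outputs, yet usable gradients
```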
arXiv Detail & Related papers (2024-07-04T21:58:26Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Applications of Lattice Gauge Equivariant Neural Networks [0.0]
This work concerns Lattice Gauge Equivariant Convolutional Neural Networks (L-CNNs).
L-CNNs can generalize better to differently sized lattices than traditional neural networks.
We present our progress on possible applications of L-CNNs to Wilson flow or continuous normalizing flow.
arXiv Detail & Related papers (2022-12-01T19:32:42Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
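For context, plain IBP pushes an axis-aligned box through the network layer by layer; the affine case is two lines. The sketch below is the standard floating-point version only, since QA-IBP's quantization handling is what the paper adds:

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate elementwise bounds lo <= x <= hi through x -> W @ x + b."""
    center = (hi + lo) / 2
    radius = (hi - lo) / 2
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius   # worst case over the input box
    return new_center - new_radius, new_center + new_radius

W = np.array([[1.0, -2.0],
              [0.5,  0.5]])
b = np.zeros(2)
lo = np.array([-0.1, -0.1])           # eps-box around the origin
hi = np.array([ 0.1,  0.1])
print(ibp_affine(lo, hi, W, b))       # sound output bounds
```

Training on such worst-case bounds, rather than on point predictions, is what makes the resulting network certifiably robust.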
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- A Lightweight, Efficient and Explainable-by-Design Convolutional Neural Network for Internet Traffic Classification [9.365794791156972]
This paper introduces a new Lightweight, Efficient and eXplainable-by-design convolutional neural network (LEXNet) for Internet traffic classification.
LEXNet relies on a new residual block (for lightness and efficiency) and a prototype layer (for explainability).
Based on a commercial-grade dataset, our evaluation shows that LEXNet maintains the same accuracy as the best-performing state-of-the-art neural network.
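A prototype layer, generically (a standard construction, not necessarily LEXNet's exact design), scores a sample by its similarity to a small set of learned prototype vectors, so each prediction can be explained by pointing at its nearest prototypes:

```python
import torch

class PrototypeLayer(torch.nn.Module):
    def __init__(self, n_protos, dim):
        super().__init__()
        self.protos = torch.nn.Parameter(torch.randn(n_protos, dim))

    def forward(self, z):
        d = torch.cdist(z, self.protos)  # distance to every prototype
        return -d                        # similarity = negative distance

layer = PrototypeLayer(n_protos=4, dim=16)
z = torch.randn(2, 16)                   # embeddings from the CNN backbone
print(layer(z).shape)                    # (2, 4) prototype similarities
```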
arXiv Detail & Related papers (2022-02-11T10:21:34Z)
- Neural Network Branch-and-Bound for Neural Network Verification [26.609606492971967]
We propose a novel machine learning framework that can be used for designing an effective branching strategy.
We learn two graph neural networks (GNNs) that both directly treat the network we want to verify as a graph input.
We show that our GNN models generalize well to harder properties on larger unseen networks.
arXiv Detail & Related papers (2021-07-27T14:42:57Z)
- An SMT-Based Approach for Verifying Binarized Neural Networks [1.4394939014120451]
We propose an SMT-based technique for verifying Binarized Neural Networks.
One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components.
We implement our technique as an extension to the Marabou framework, and use it to evaluate the approach on popular binarized neural network architectures.
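The flavor of such an encoding shows up already on a single binarized neuron. The sketch below is not the Marabou extension itself; it encodes sign(w·x + b) over {-1, +1} inputs exactly, using Z3's integer theory:

```python
from z3 import Ints, Int, Or, If, Sum, Solver, sat

xs = Ints("x0 x1 x2")              # binarized inputs
w, b = [1, -1, 1], 0               # fixed binary weights and bias

s = Solver()
for x in xs:
    s.add(Or(x == -1, x == 1))     # restrict inputs to {-1, +1}

pre = Sum([wi * xi for wi, xi in zip(w, xs)]) + b
y = Int("y")
s.add(y == If(pre >= 0, 1, -1))    # exact encoding of the sign activation

s.add(y == -1)                     # query: can the neuron output -1?
res = s.check()
print(res, s.model() if res == sat else "")
```

Mixing binarized and non-binarized components, as the summary highlights, is where SMT pays off over pure SAT: Boolean structure and integer or real arithmetic coexist in one formula.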
arXiv Detail & Related papers (2020-11-05T16:21:26Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.