WARP Logic Neural Networks
- URL: http://arxiv.org/abs/2602.03527v1
- Date: Tue, 03 Feb 2026 13:46:51 GMT
- Title: WARP Logic Neural Networks
- Authors: Lino Gerlach, Thore Gerlach, Liv VĂ¥ge, Elliott Kauffman, Isobel Ojalvo,
- Abstract summary: We introduce WAlsh Relaxation for Probabilistic (WARP) logic neural networks. WARP is a gradient-based framework that efficiently learns combinations of hardware-native logic blocks. We show that WARP yields the most parameter-efficient representation for exactly learning Boolean functions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fast and efficient AI inference is increasingly important, and recent models that directly learn low-level logic operations have achieved state-of-the-art performance. However, existing logic neural networks incur high training costs, introduce redundancy or rely on approximate gradients, which limits scalability. To overcome these limitations, we introduce WAlsh Relaxation for Probabilistic (WARP) logic neural networks -- a novel gradient-based framework that efficiently learns combinations of hardware-native logic blocks. We show that WARP yields the most parameter-efficient representation for exactly learning Boolean functions and that several prior approaches arise as restricted special cases. Training is improved by introducing learnable thresholding and residual initialization, while we bridge the gap between relaxed training and discrete logic inference through stochastic smoothing. Experiments demonstrate faster convergence than state-of-the-art baselines, while scaling effectively to deeper architectures and logic functions with higher input arity.
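The Walsh relaxation idea in the abstract can be illustrated with a single learnable 2-input logic block: every Boolean function f: {-1,1}^2 -> {-1,1} is a linear combination of the four Walsh monomials 1, x1, x2, x1*x2, so four real coefficients suffice to represent any 2-input gate, and relaxing the inputs to expectations makes those coefficients trainable by gradient descent. The sketch below is a hedged illustration of that general principle, not the paper's exact formulation; the probability-to-expectation mapping, the tanh squashing, the "residual-style" initialization, and the names WalshGate2 / w are assumptions made for illustration.

```python
# Hedged sketch: one 2-input logic block relaxed in the Walsh (Fourier) basis.
# Any Boolean f:{-1,1}^2 -> {-1,1} satisfies f(x1,x2) = w0 + w1*x1 + w2*x2 + w3*x1*x2,
# so learning the four coefficients covers every 2-input gate (e.g. OR is
# [-1/2, 1/2, 1/2, 1/2] in this convention). Details below are illustrative.
import torch
import torch.nn as nn

class WalshGate2(nn.Module):
    def __init__(self):
        super().__init__()
        # Four Walsh coefficients; initializing near the "pass through x1" gate
        # (w = [0, 1, 0, 0]) is one plausible reading of residual initialization,
        # assumed here for illustration.
        self.w = nn.Parameter(torch.tensor([0.0, 1.0, 0.0, 0.0]))

    def forward(self, p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
        # Inputs are P(bit = 1); map bit b to the expectation of (-1)^b in [-1, 1].
        x1, x2 = 1.0 - 2.0 * p1, 1.0 - 2.0 * p2
        y = self.w[0] + self.w[1] * x1 + self.w[2] * x2 + self.w[3] * x1 * x2
        # Squash the relaxed output and convert back to a probability for the
        # next layer (tanh squashing is an assumption, not the paper's recipe).
        return 0.5 * (1.0 - torch.tanh(y))

gate = WalshGate2()
p_out = gate(torch.rand(8), torch.rand(8))  # batch of 8 soft bits per input
p_out.sum().backward()                      # gradients flow to the 4 coefficients
print(gate.w.grad)
```

At inference, a trained block of this kind could be snapped to the nearest discrete gate; how the paper bridges that relaxation gap (stochastic smoothing, learnable thresholding) is described in the abstract but not reproduced in this sketch.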
Related papers
- DeepProofLog: Efficient Proving in Deep Stochastic Logic Programs [30.18197334181211]
This paper introduces DeepProofLog (DPrL), a novel NeSy system based on logic programs. DPrL parameterizes all derivation steps with neural networks, allowing efficient neural guidance over the proving system. Our experiments on standard NeSy benchmarks and knowledge graph reasoning tasks demonstrate that DPrL outperforms existing state-of-the-art NeSy systems.
arXiv Detail & Related papers (2025-11-11T18:58:03Z)
- WARP-LUTs - Walsh-Assisted Relaxation for Probabilistic Look Up Tables [0.0]
We introduce WARP-LUTs (Walsh-Assisted Relaxation for Probabilistic Look-Up Tables), a novel gradient-based method that efficiently learns combinations of logic gates with substantially fewer trainable parameters. We demonstrate that WARP-LUTs achieve significantly faster convergence on CIFAR-10 compared to DLGNs, while maintaining comparable accuracy.
arXiv Detail & Related papers (2025-10-17T13:44:36Z)
- Towards Narrowing the Generalization Gap in Deep Boolean Networks [3.230778132936486]
This paper explores strategies to enhance deep Boolean networks with the aim of surpassing their traditional counterparts.
We propose novel methods, including logical skip connections and spatiality preserving sampling, and validate them on vision tasks.
Our analysis shows how deep Boolean networks can maintain high performance while minimizing computational costs through 1-bit logic operations.
arXiv Detail & Related papers (2024-09-06T09:16:36Z)
- Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require the generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
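A hedged sketch of the cascaded, locally-trained pattern described in the CaFo summary above: each block carries its own classifier head that outputs a label distribution and is optimized with its own loss, so no gradient has to flow back through earlier blocks. The layer sizes, the use of detach(), the per-block Adam optimizers, and the names LocalBlock / blocks are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: a cascade of blocks, each with its own prediction head and
# its own optimizer; detach() stops gradients from reaching earlier blocks,
# so every block is trained independently on the same labels.
import torch
import torch.nn as nn

class LocalBlock(nn.Module):
    def __init__(self, d_in, d_hidden, n_classes):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, n_classes)  # per-block label distribution

    def forward(self, x):
        h = self.body(x)
        return h, self.head(h)

blocks = [LocalBlock(32, 64, 10), LocalBlock(64, 64, 10)]
opts = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))

h = x
for block, opt in zip(blocks, opts):
    h, logits = block(h.detach())          # detach: no backprop into earlier blocks
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because each block's loss depends only on its own parameters, the blocks can in principle be updated in parallel once their inputs are available, which is the deployment property the summary highlights.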
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.