Recurrent Deep Differentiable Logic Gate Networks
- URL: http://arxiv.org/abs/2508.06097v1
- Date: Fri, 08 Aug 2025 07:49:38 GMT
- Title: Recurrent Deep Differentiable Logic Gate Networks
- Authors: Simon Bührer, Andreas Plesner, Till Aczel, Roger Wattenhofer
- Abstract summary: This paper presents the first implementation of Recurrent Deep Differentiable Logic Gate Networks (RDDLGN). It achieves 5.00 BLEU and 30.9% accuracy during training, approaching GRU performance (5.41 BLEU), and degrades gracefully (4.39 BLEU) during inference. This work establishes recurrent logic-based neural computation as viable, opening research directions for FPGA acceleration in sequential modeling.
- Score: 18.95453617434051
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: While differentiable logic gates have shown promise in feedforward networks, their application to sequential modeling remains unexplored. This paper presents the first implementation of Recurrent Deep Differentiable Logic Gate Networks (RDDLGN), combining Boolean operations with recurrent architectures for sequence-to-sequence learning. Evaluated on WMT'14 English-German translation, RDDLGN achieves 5.00 BLEU and 30.9% accuracy during training, approaching GRU performance (5.41 BLEU) and graceful degradation (4.39 BLEU) during inference. This work establishes recurrent logic-based neural computation as viable, opening research directions for FPGA acceleration in sequential modeling and other recursive network architectures.
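The listing does not include reference code, so the following is only a minimal sketch of how such a recurrent cell could be assembled, assuming the standard differentiable logic gate relaxation from prior work: each gate is randomly wired to two inputs and holds a learnable softmax over the 16 two-input Boolean operations. All names (`DiffLogicLayer`, `RDDLGNCell`) and layer sizes are our own illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a recurrent cell built from
# differentiable logic gate layers under the usual probabilistic relaxation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def all_binary_ops(a, b):
    """Real-valued relaxations of the 16 two-input Boolean operations.
    a, b in [0, 1] are treated as probabilities of being True; returns (16, *a.shape)."""
    return torch.stack([
        torch.zeros_like(a), a * b, a - a * b, a,
        b - a * b, b, a + b - 2 * a * b, a + b - a * b,
        1 - (a + b - a * b), 1 - (a + b - 2 * a * b), 1 - b, 1 - b + a * b,
        1 - a, 1 - a + a * b, 1 - a * b, torch.ones_like(a),
    ])


class DiffLogicLayer(nn.Module):
    """out_dim gates, each randomly wired to two inputs, with a learnable
    distribution over the 16 Boolean operations (discretized to argmax at inference)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.register_buffer("idx_a", torch.randint(0, in_dim, (out_dim,)))
        self.register_buffer("idx_b", torch.randint(0, in_dim, (out_dim,)))
        self.logits = nn.Parameter(torch.randn(out_dim, 16))

    def forward(self, x):                            # x: (batch, in_dim) in [0, 1]
        a, b = x[:, self.idx_a], x[:, self.idx_b]    # (batch, out_dim) each
        ops = all_binary_ops(a, b)                   # (16, batch, out_dim)
        w = F.softmax(self.logits, dim=-1)           # (out_dim, 16)
        return torch.einsum("kbd,dk->bd", ops, w)    # soft mixture of gate outputs


class RDDLGNCell(nn.Module):
    """One recurrent step: logic layers map [input bits, hidden bits] -> new hidden bits."""

    def __init__(self, input_dim, hidden_dim, width=1024):
        super().__init__()
        self.l1 = DiffLogicLayer(input_dim + hidden_dim, width)
        self.l2 = DiffLogicLayer(width, hidden_dim)

    def forward(self, x_t, h_t):
        return self.l2(self.l1(torch.cat([x_t, h_t], dim=-1)))


# Unrolling over a toy sequence:
cell = RDDLGNCell(input_dim=64, hidden_dim=128)
h = torch.zeros(8, 128)
for x_t in torch.rand(10, 8, 64):                    # (time, batch, input_dim)
    h = cell(x_t, h)
```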
Related papers
- WARP Logic Neural Networks [0.0]
We introduce WAlsh Relaxation for Probabilistic (WARP) logic neural networks. WARP is a gradient-based framework that efficiently learns combinations of hardware-native logic blocks. We show that WARP yields the most parameter-efficient representation for exactly learning Boolean functions.
arXiv Detail & Related papers (2026-02-03T13:46:51Z)
- Learning Interpretable Differentiable Logic Networks for Tabular Regression [3.8064485653035987]
Differentiable Logic Networks (DLNs) offer interpretable reasoning and substantially lower inference cost. We extend the DLN framework to supervised regression: we redesign the final output layer to support continuous targets and unify the original two-phase training procedure into a single differentiable stage. Our results show that DLNs are a viable, cost-effective alternative for regression tasks, especially where model transparency and computational efficiency are important.
arXiv Detail & Related papers (2025-05-29T16:24:18Z)
- Logic Gate Neural Networks are Good for Verification [20.84137106332268]
We introduce a SAT encoding for verifying global robustness and fairness in learned Logic Gate Networks (LGNs). We evaluate our method on five benchmark datasets, including a newly constructed 5-class variant, and find that LGNs are both verification-friendly and maintain strong predictive performance. (A toy CNF-encoding sketch of the general idea follows this entry.)
arXiv Detail & Related papers (2025-05-26T12:59:33Z)
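The toy sketch referenced above illustrates the general idea of encoding a fixed logic gate network into CNF via Tseitin-style constraints; it is not the paper's actual encoding of global robustness or fairness, and the helper names and DIMACS-style clause conventions are ours.

```python
# Hypothetical sketch: Tseitin-style CNF encoding of a tiny fixed logic-gate network.
# Variables are positive integers; a clause is a list of signed ints (DIMACS style).

def encode_and(out, a, b):
    # out <-> (a AND b)
    return [[-out, a], [-out, b], [out, -a, -b]]

def encode_or(out, a, b):
    # out <-> (a OR b)
    return [[out, -a], [out, -b], [-out, a, b]]

def encode_xor(out, a, b):
    # out <-> (a XOR b)
    return [[-out, a, b], [-out, -a, -b], [out, -a, b], [out, a, -b]]

def encode_network(gates, num_inputs):
    """gates: list of (op, in1, in2); inputs are variables 1..num_inputs,
    the i-th gate gets variable num_inputs + 1 + i. Returns (clauses, gate_vars)."""
    enc = {"and": encode_and, "or": encode_or, "xor": encode_xor}
    clauses, var, outs = [], num_inputs, []
    for op, a, b in gates:
        var += 1
        clauses += enc[op](var, a, b)
        outs.append(var)
    return clauses, outs

# Example: gate 4 = x1 XOR x2, gate 5 = gate4 AND x3
clauses, outs = encode_network([("xor", 1, 2), ("and", 4, 3)], num_inputs=3)
# To check a property (e.g., "the output can be 1"), add the unit clause [outs[-1]]
# and hand `clauses` to any DIMACS-compatible SAT solver.
```

Any off-the-shelf SAT solver that accepts DIMACS clauses could then decide whether an assignment satisfying the added property clauses exists.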
- Convolutional Differentiable Logic Gate Networks [68.74313756770123]
We propose an approach for learning logic gate networks directly via a differentiable relaxation.
We build on this idea, extending it by deep logic gate tree convolutions and logical OR pooling.
On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
arXiv Detail & Related papers (2024-11-07T14:12:00Z)
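The "logical OR pooling" named in the Convolutional Differentiable Logic Gate Networks entry above can be illustrated with a short sketch: under the probabilistic relaxation, the OR of values in a pooling window is 1 - prod(1 - x). The helper name and shapes below are our own assumptions, not the paper's implementation.

```python
# Hypothetical sketch of logical OR pooling under the probabilistic relaxation:
# OR over a window of activations x in [0, 1] is 1 - prod(1 - x).
import torch
import torch.nn.functional as F

def or_pool2d(x, kernel=2, stride=2):
    """x: (batch, channels, H, W) with values in [0, 1]."""
    # Product over each pooling window, computed in log space for stability.
    log_not = torch.log1p(-x.clamp(max=1 - 1e-6))                       # log(1 - x)
    summed = F.avg_pool2d(log_not, kernel, stride) * (kernel * kernel)  # sum of logs
    return 1 - torch.exp(summed)                                        # 1 - prod(1 - x)

x = torch.rand(1, 8, 16, 16)
y = or_pool2d(x)   # (1, 8, 8, 8); a window containing any value near 1 pools to ~1
```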
- DeepSeq2: Enhanced Sequential Circuit Learning with Disentangled Representations [9.79382991471473]
We introduce DeepSeq2, a novel framework that enhances the learning of sequential circuits.
By employing an efficient Directed Acyclic Graph Neural Network (DAG-GNN), DeepSeq2 significantly reduces execution times and improves model scalability.
DeepSeq2 sets a new benchmark in sequential circuit representation learning, outperforming prior works in power estimation and reliability analysis.
arXiv Detail & Related papers (2024-11-01T11:57:42Z)
- Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoints of frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z)
- Deep Differentiable Logic Gate Networks [29.75063301688965]
We explore logic gate networks for machine learning tasks by learning combinations of logic gates.
We propose differentiable logic gate networks that combine real-valued logics and a continuously parameterized relaxation of the network.
The resulting discretized logic gate networks achieve fast inference speeds beyond a million images of MNIST per second on a single CPU core.
arXiv Detail & Related papers (2022-10-15T12:50:04Z)
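The fast inference figure quoted in the Deep Differentiable Logic Gate Networks entry above comes from discretizing each trained gate to its most probable operation and then evaluating with hard Boolean logic. A standalone sketch of that evaluation step is below; the operation ordering, function names, and indices are our assumptions.

```python
# Hypothetical sketch: after training, each gate is frozen to one of the 16
# two-input Boolean operations and evaluated with hard Boolean values.
import numpy as np

# The 16 two-input Boolean operations (ordering is our own convention).
HARD_OPS = [
    lambda a, b: np.zeros_like(a),  lambda a, b: a & b,
    lambda a, b: a & ~b,            lambda a, b: a,
    lambda a, b: ~a & b,            lambda a, b: b,
    lambda a, b: a ^ b,             lambda a, b: a | b,
    lambda a, b: ~(a | b),          lambda a, b: ~(a ^ b),
    lambda a, b: ~b,                lambda a, b: a | ~b,
    lambda a, b: ~a,                lambda a, b: ~a | b,
    lambda a, b: ~(a & b),          lambda a, b: np.ones_like(a),
]

def hard_layer(x, op_ids, idx_a, idx_b):
    """x: (batch, in_dim) bool array; one frozen op and two input wires per gate."""
    return np.stack(
        [HARD_OPS[op](x[:, ia], x[:, ib]) for op, ia, ib in zip(op_ids, idx_a, idx_b)],
        axis=1,
    )

x = np.random.rand(4, 8) > 0.5
y = hard_layer(x, op_ids=[6, 1, 14], idx_a=[0, 2, 5], idx_b=[1, 3, 7])  # (4, 3) bools
```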
- DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search [55.164053971213576]
Convolutional neural networks have achieved great success on computer vision tasks, despite large computation overhead.
Structured (channel) pruning is usually applied to reduce the model redundancy while preserving the network structure.
Existing structured pruning methods require hand-crafted rules which may lead to tremendous pruning space.
arXiv Detail & Related papers (2020-11-04T07:43:01Z)
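As a rough reading of the DAIS entry above, a "differentiable annealing indicator" for channel pruning can be sketched as a per-channel sigmoid gate whose temperature is annealed toward zero, so the soft mask approaches a binary channel selection. The schedule, shapes, and class name below are our assumptions, not the paper's formulation.

```python
# Rough sketch (our assumptions, not the exact DAIS formulation): a sigmoid channel
# gate whose temperature decreases during training so the mask becomes near-binary.
import torch
import torch.nn as nn

class AnnealedChannelGate(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(num_channels))  # one logit per channel

    def forward(self, x, temperature):
        # x: (batch, channels, H, W); lower temperature -> sharper, near-binary mask
        mask = torch.sigmoid(self.alpha / temperature)
        return x * mask.view(1, -1, 1, 1), mask

gate = AnnealedChannelGate(64)
x = torch.rand(2, 64, 8, 8)
for step in range(1000):
    temperature = max(1.0 * 0.995 ** step, 0.05)   # simple exponential annealing
    y, mask = gate(x, temperature)
    # ... compute a task loss on y plus a sparsity penalty on mask, then backprop ...
```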
- Neural Architecture Search For LF-MMI Trained Time Delay Neural Networks [61.76338096980383]
A range of neural architecture search (NAS) techniques are used to automatically learn two types of hyper-parameters of state-of-the-art factored time delay neural networks (TDNNs).
These include the DARTS method integrating architecture selection with lattice-free MMI (LF-MMI) TDNN training.
Experiments conducted on a 300-hour Switchboard corpus suggest the auto-configured systems consistently outperform the baseline LF-MMI TDNN systems.
arXiv Detail & Related papers (2020-07-17T08:32:11Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Refined Gate: A Simple and Effective Gating Mechanism for Recurrent Units [68.30422112784355]
We propose a new gating mechanism within general gated recurrent neural networks to handle this issue.
The proposed gates directly short connect the extracted input features to the outputs of vanilla gates.
We verify the proposed gating mechanism on three popular types of gated RNNs including LSTM, GRU and MGU.
arXiv Detail & Related papers (2020-02-26T07:51:38Z)
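A rough sketch of the refined-gate idea from the entry above: the extracted input features are short-connected to the output of a vanilla gate before a second squashing. The exact combination in the paper may differ; the module below is our illustration on a GRU-style update gate.

```python
# Rough sketch of a "refined" gate (our reading of the mechanism, not the paper's
# exact formulation): the vanilla gate output is short-connected with a projection
# of the input features and squashed again.
import torch
import torch.nn as nn

class RefinedUpdateGate(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.w_x = nn.Linear(input_dim, hidden_dim)
        self.w_h = nn.Linear(hidden_dim, hidden_dim)
        self.shortcut = nn.Linear(input_dim, hidden_dim, bias=False)

    def forward(self, x_t, h_prev):
        vanilla = torch.sigmoid(self.w_x(x_t) + self.w_h(h_prev))   # standard GRU-style gate
        refined = torch.sigmoid(vanilla + self.shortcut(x_t))       # short connection from input
        return refined

gate = RefinedUpdateGate(input_dim=32, hidden_dim=64)
z = gate(torch.rand(5, 32), torch.rand(5, 64))    # (5, 64), values in (0, 1)
```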