LILogic Net: Compact Logic Gate Networks with Learnable Connectivity for Efficient Hardware Deployment
- URL: http://arxiv.org/abs/2511.12340v1
- Date: Sat, 15 Nov 2025 19:44:37 GMT
- Title: LILogic Net: Compact Logic Gate Networks with Learnable Connectivity for Efficient Hardware Deployment
- Authors: Katarzyna Fojcik, Renaldas Zioma, Jogundas Armaitis
- Abstract summary: We show how to train networks of binary logic gates using gradient-based methods, and how to substantially reduce the number of logic gates required to fit a particular dataset. For our largest architecture with 256,000 gates, LILogicNet achieves 60.98% test accuracy on CIFAR-10.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient deployment of machine learning models ultimately requires taking hardware constraints into account. The binary logic gate is the fundamental building block of all digital chips. Designing models that operate directly on these units enables energy-efficient computation. Recent work has demonstrated the feasibility of training randomly connected networks of binary logic gates (such as OR and NAND) using gradient-based methods. We extend this approach by using gradient descent not only to select the logic gates but also to optimize their interconnections (the connectome). Optimizing the connections allows us to substantially reduce the number of logic gates required to fit a particular dataset. Our implementation is efficient at both training and inference: for instance, our LILogicNet model with only 8,000 gates can be trained on MNIST in under 5 minutes and achieves 98.45% test accuracy, matching the performance of state-of-the-art models that require at least two orders of magnitude more gates. Moreover, for our largest architecture with 256,000 gates, LILogicNet achieves 60.98% test accuracy on CIFAR-10, exceeding the performance of prior logic-gate-based models with a comparable gate budget. At inference time, the fully binarized model operates with minimal compute overhead, making it exceptionally efficient and well suited for deployment on low-power digital hardware.
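The abstract describes two coupled relaxations: a categorical distribution over the 16 two-input Boolean functions for each gate, and another over which outputs of the previous layer feed each gate's two inputs. The PyTorch layer below is a minimal sketch of that general recipe, assuming dense softmax relaxations for both choices; all names are illustrative, and this is not the authors' released implementation.

```python
# Minimal sketch of a differentiable logic-gate layer with learnable
# connectivity, in the spirit of the abstract above. NOT the authors'
# code; all names and design choices here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_gates(a, b):
    """All 16 two-input Boolean functions, relaxed to real-valued logic.
    a, b are tensors in [0, 1]; returns a stack of shape (16, ...)."""
    return torch.stack([
        torch.zeros_like(a),        # FALSE
        a * b,                      # AND
        a - a * b,                  # A AND NOT B
        a,                          # A
        b - a * b,                  # NOT A AND B
        b,                          # B
        a + b - 2 * a * b,          # XOR
        a + b - a * b,              # OR
        1 - (a + b - a * b),        # NOR
        1 - (a + b - 2 * a * b),    # XNOR
        1 - b,                      # NOT B
        1 - b + a * b,              # A OR NOT B
        1 - a,                      # NOT A
        1 - a + a * b,              # NOT A OR B
        1 - a * b,                  # NAND
        torch.ones_like(a),         # TRUE
    ])

class LogicLayer(nn.Module):
    def __init__(self, in_dim, out_gates):
        super().__init__()
        # Learnable connectivity: logits over which input feeds each gate leg.
        self.conn_a = nn.Parameter(torch.randn(out_gates, in_dim))
        self.conn_b = nn.Parameter(torch.randn(out_gates, in_dim))
        # Learnable gate type: logits over the 16 Boolean functions per gate.
        self.gate_logits = nn.Parameter(torch.randn(out_gates, 16))

    def forward(self, x):                          # x: (batch, in_dim) in [0, 1]
        a = x @ F.softmax(self.conn_a, dim=-1).T   # soft input selection
        b = x @ F.softmax(self.conn_b, dim=-1).T
        ops = soft_gates(a, b)                     # (16, batch, out_gates)
        w = F.softmax(self.gate_logits, dim=-1)    # (out_gates, 16)
        return torch.einsum('gbo,og->bo', ops, w)  # weighted mix of the 16 ops

layer = LogicLayer(in_dim=784, out_gates=1000)
out = layer(torch.rand(32, 784))                   # (32, 1000), values stay in [0, 1]
```

At inference, both softmaxes would be hardened to their argmax, leaving a fixed wiring of discrete Boolean gates that needs no floating-point arithmetic at all.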
Related papers
- Hardware Co-Design Scaling Laws via Roofline Modelling for On-Device LLMs
We propose a hardware co-design law that captures model accuracy and inference performance. We empirically evaluate 1,942 candidate architectures on NVIDIA Jetson Orin. Our architecture achieves 19.42% lower perplexity on WikiText-2. (A worked example of the roofline bound appears after this list.)
arXiv Detail & Related papers (2026-02-10T23:51:00Z)
- WARP-LUTs - Walsh-Assisted Relaxation for Probabilistic Look Up Tables
We introduce WARP-LUTs (Walsh-Assisted Relaxation for Probabilistic Look-Up Tables), a novel gradient-based method that efficiently learns combinations of logic gates with substantially fewer trainable parameters. We demonstrate that WARP-LUTs achieve significantly faster convergence on CIFAR-10 compared to DLGNs, while maintaining comparable accuracy. (A sketch of the Walsh-basis idea appears after this list.)
arXiv Detail & Related papers (2025-10-17T13:44:36Z)
- A Method for Optimizing Connections in Differentiable Logic Gate Networks
We introduce a novel method for partial optimization of the connections in Deep Differentiable Logic Gate Networks (LGNs). Our training method places a probability distribution over a subset of candidate connections per gate input and selects the connection with the highest merit, after which the gate types are selected. We show that the connection-optimized LGNs outperform standard fixed-connection LGNs on the Yin-Yang, MNIST, and Fashion-MNIST benchmarks, while requiring only a fraction of the number of logic gates. (A minimal sketch of this subset-based connection selection appears after this list.)
arXiv Detail & Related papers (2025-07-08T16:53:39Z)
- Convolutional Differentiable Logic Gate Networks
We propose an approach for learning logic gate networks directly via a differentiable relaxation.
We build on this idea, extending it with deep logic gate tree convolutions and logical OR pooling (a sketch of soft OR pooling appears after this list).
On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
arXiv Detail & Related papers (2024-11-07T14:12:00Z)
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve a 1.45-9.39x speedup compared to baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- Efficient fault-tolerant code switching via one-way transversal CNOT gates
We present a code-switching scheme that respects the constraints of fault-tolerant (FT) circuit design by using only switching gates. We analyze the application of the scheme to low-distance color codes, which are suitable for operation in existing quantum processors. We discuss how the scheme can be implemented with a large degree of parallelization, provided that logical auxiliary qubits can be prepared reliably enough.
arXiv Detail & Related papers (2024-09-20T12:54:47Z)
- Quasar-ViT: Hardware-Oriented Quantization-Aware Architecture Search for Vision Transformers
Vision transformers (ViTs) have demonstrated superior accuracy for computer vision tasks compared to convolutional neural networks (CNNs).
This work proposes Quasar-ViT, a hardware-oriented quantization-aware architecture search framework for ViTs.
arXiv Detail & Related papers (2024-07-25T16:35:46Z)
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- Direct pulse-level compilation of arbitrary quantum logic gates on superconducting qutrits
We demonstrate that any arbitrary qubit and qutrit gate can be realized with high fidelity, which can significantly reduce the length of a gate sequence.
We show that optimal control gates are robust to drift for at least three hours and that the same calibration parameters can be used for all implemented gates.
arXiv Detail & Related papers (2023-03-07T22:15:43Z)
- Deep Differentiable Logic Gate Networks
We explore logic gate networks for machine learning tasks by learning combinations of logic gates.
We propose differentiable logic gate networks that combine real-valued logics and a continuously parameterized relaxation of the network.
The resulting discretized logic gate networks achieve fast inference speeds, beyond a million MNIST images per second on a single CPU core.
arXiv Detail & Related papers (2022-10-15T12:50:04Z)
- Logical blocks for fault-tolerant topological quantum computation
We present a framework for universal fault-tolerant logic motivated by the need for platform-independent logical gate definitions.
We explore novel schemes for universal logic that reduce resource overheads.
Motivated by the favorable logical error rates for boundaryless computation, we introduce a novel computational scheme.
arXiv Detail & Related papers (2021-12-22T19:00:03Z)
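For the roofline entry above: the standard roofline model bounds attainable throughput by the minimum of the compute roof and the memory roof. The worked example below uses made-up device numbers, not Jetson Orin specifications.

```python
# Standard roofline model, as used in hardware co-design analyses like the
# roofline paper above; the numbers below are illustrative examples only.
def roofline_gflops(arith_intensity_flop_per_byte,
                    peak_gflops, mem_bw_gb_per_s):
    """Attainable throughput = min(compute roof, bandwidth * intensity)."""
    return min(peak_gflops, mem_bw_gb_per_s * arith_intensity_flop_per_byte)

# A kernel doing 2 FLOPs per byte on a 100 GFLOP/s, 40 GB/s device is
# memory-bound: min(100, 40 * 2) = 80 GFLOP/s.
print(roofline_gflops(2.0, peak_gflops=100.0, mem_bw_gb_per_s=40.0))
```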
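For the WARP-LUTs entry: the paper's exact formulation is not given here, so the snippet below only illustrates the underlying Walsh (Fourier) basis fact it builds on: any 2-input Boolean function over {-1, +1} is an exact linear combination of the four Walsh monomials 1, a, b, ab, so four real coefficients can parameterize a gate instead of a 16-way categorical.

```python
# Illustration of the Walsh basis for 2-input Boolean functions (the fact
# behind WARP-LUTs' parameter savings); not the paper's training code.
import itertools

def walsh_gate(w, a, b):
    """f(a, b) = w0*1 + w1*a + w2*b + w3*a*b, with a, b in {-1, +1}."""
    return w[0] + w[1] * a + w[2] * b + w[3] * a * b

# Example: AND in +/-1 encoding (True = +1) has Walsh coefficients
# (-1/2, 1/2, 1/2, 1/2): f(a, b) = -0.5 + 0.5a + 0.5b + 0.5ab.
w_and = (-0.5, 0.5, 0.5, 0.5)
for a, b in itertools.product([-1, 1], repeat=2):
    assert walsh_gate(w_and, a, b) == (1 if (a == 1 and b == 1) else -1)
```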
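For the connection-optimization entry: a minimal sketch of a per-gate-input categorical over a small candidate subset, hardened to the highest-merit wire after training. The candidate-pool size k and the random sampling scheme are assumptions for illustration, not the paper's specification.

```python
# Sketch of connection optimization over a candidate subset, loosely after
# "A Method for Optimizing Connections ..."; names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubsetConnection(nn.Module):
    """Each gate input chooses among k candidate wires from the previous
    layer; a learned categorical over the k candidates is trained with
    gradients and hardened to its argmax ("highest merit") afterwards."""
    def __init__(self, in_dim, out_gates, k=8):
        super().__init__()
        # Fixed random candidate pool of size k per gate input (assumption).
        self.register_buffer(
            "cand", torch.randint(0, in_dim, (out_gates, k)))
        self.logits = nn.Parameter(torch.zeros(out_gates, k))

    def forward(self, x):                    # x: (batch, in_dim)
        cand_vals = x[:, self.cand]          # (batch, out_gates, k)
        p = F.softmax(self.logits, dim=-1)   # distribution over candidates
        return (cand_vals * p).sum(-1)       # soft selection during training

    def harden(self):
        """Return the chosen wire index per gate input after training."""
        return self.cand.gather(
            1, self.logits.argmax(dim=-1, keepdim=True)).squeeze(1)
```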
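For the convolutional LGN entry: with activations relaxed to [0, 1], a logical OR over a pooling window relaxes to 1 - prod(1 - x). The helper below is a hypothetical 2x2 variant of that idea, not the paper's implementation.

```python
# Sketch of "logical OR pooling": OR over a window is 1 - prod(1 - x)
# for activations relaxed to [0, 1]. Hypothetical 2x2 pooling over a
# (batch, channels, H, W) tensor.
import torch
import torch.nn.functional as F

def soft_or_pool2x2(x):
    # Product over each 2x2 window, computed stably via logs:
    # avg_pool2d of log(1 - x) gives the mean of 4 logs; times 4 = their sum.
    eps = 1e-6
    log_not = torch.log((1 - x).clamp_min(eps))
    return 1 - torch.exp(4 * F.avg_pool2d(log_not, kernel_size=2))

x = torch.rand(1, 3, 8, 8)
y = soft_or_pool2x2(x)   # near 1 wherever any input in the window is near 1
print(y.shape)           # torch.Size([1, 3, 4, 4])
```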