WARP-LUTs - Walsh-Assisted Relaxation for Probabilistic Look Up Tables
- URL: http://arxiv.org/abs/2510.15655v1
- Date: Fri, 17 Oct 2025 13:44:36 GMT
- Title: WARP-LUTs - Walsh-Assisted Relaxation for Probabilistic Look Up Tables
- Authors: Lino Gerlach, Liv Våge, Thore Gerlach, Elliott Kauffman,
- Abstract summary: We introduce Walsh-Assisted Relaxation for Probabilistic Look-Up Tables (WARP-LUTs), a novel gradient-based method that efficiently learns combinations of logic gates with substantially fewer trainable parameters. We demonstrate that WARP-LUTs achieve significantly faster convergence on CIFAR-10 compared to DLGNs, while maintaining comparable accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fast and efficient machine learning is of growing interest to the scientific community and has spurred significant research into novel model architectures and hardware-aware design. Recent hardware and software co-design approaches have demonstrated impressive results with entirely multiplication-free models. Differentiable Logic Gate Networks (DLGNs), for instance, provide a gradient-based framework for learning optimal combinations of low-level logic gates, setting state-of-the-art trade-offs between accuracy, resource usage, and latency. However, these models suffer from high computational cost during training and do not generalize well to logic blocks with more inputs. In this work, we introduce Walsh-Assisted Relaxation for Probabilistic Look-Up Tables (WARP-LUTs) - a novel gradient-based method that efficiently learns combinations of logic gates with substantially fewer trainable parameters. We demonstrate that WARP-LUTs achieve significantly faster convergence on CIFAR-10 compared to DLGNs, while maintaining comparable accuracy. Furthermore, our approach suggests potential for extension to higher-input logic blocks, motivating future research on extremely efficient deployment on modern FPGAs and their real-time science applications.
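The abstract does not spell out the exact parameterization, but the core idea can be illustrated with a small sketch. Below is a minimal, hypothetical PyTorch example of a k-input "soft LUT" whose truth table is parameterized by its 2^k Walsh coefficients rather than by a categorical distribution over gate choices; the class name `WalshSoftLUT`, the initialization, and the tanh squashing are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption, not the paper's implementation): a k-input
# "soft LUT" parameterized by its 2^k Walsh coefficients. Inputs are
# relaxed from {-1, +1} to the interval [-1, 1], so the parity terms
# chi_S(x) = prod_{i in S} x_i stay differentiable and the coefficients
# can be trained with ordinary gradient descent.
import itertools

import torch
import torch.nn as nn


class WalshSoftLUT(nn.Module):
    def __init__(self, k: int):
        super().__init__()
        self.k = k
        # One trainable coefficient per subset S of the k inputs: 2^k parameters.
        self.coeffs = nn.Parameter(0.1 * torch.randn(2 ** k))
        # Enumerate all subsets of {0, ..., k-1} once, up front.
        self.subsets = [list(s)
                        for r in range(k + 1)
                        for s in itertools.combinations(range(k), r)]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, k) with entries in [-1, 1] (relaxed Boolean values).
        parities = []
        for s in self.subsets:
            if s:
                parities.append(x[:, s].prod(dim=1))
            else:
                parities.append(torch.ones(x.shape[0], device=x.device))
        chi = torch.stack(parities, dim=1)   # (batch, 2^k) parity features
        out = chi @ self.coeffs              # Walsh expansion of the LUT
        return torch.tanh(out)               # squash back into [-1, 1]


# A 2-input soft LUT has 4 trainable coefficients, versus a categorical
# distribution over the 16 two-input gate types in a DLGN-style neuron;
# a 4-input LUT needs 16 coefficients rather than a choice among 2^16
# Boolean functions.
lut = WalshSoftLUT(k=2)
x = torch.tensor([[1.0, -1.0], [-1.0, -1.0]])
print(lut(x).shape)  # torch.Size([2])
```

The rough intuition behind the parameter-count claim: the Walsh basis grows as 2^k with the number of LUT inputs k, whereas the number of k-input Boolean functions grows as 2^(2^k). Whether WARP-LUTs use exactly this parameterization, output squashing, or initialization is not stated in the abstract.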
Related papers
- WARP Logic Neural Networks [0.0]
We introduce WAlsh Relaxation for Probabilistic (WARP) logic neural networks. WARP is a gradient-based framework that efficiently learns combinations of hardware-native logic blocks. We show that WARP yields the most parameter-efficient representation for exactly learning Boolean functions.
arXiv Detail & Related papers (2026-02-03T13:46:51Z) - When Bayesian Tensor Completion Meets Multioutput Gaussian Processes: Functional Universality and Rank Learning [53.17227599983122]
Functional tensor decomposition can analyze multi-dimensional data with real-valued indices. We propose a rank-revealing functional low-rank tensor completion (RR-F) method. We establish the universal approximation property of the model for continuous multi-dimensional signals.
arXiv Detail & Related papers (2025-12-25T03:15:52Z) - LILogic Net: Compact Logic Gate Networks with Learnable Connectivity for Efficient Hardware Deployment [0.0]
We show how to train networks of binary logic gates using gradient-based methods. We show how to substantially reduce the number of logic gates required to fit a particular dataset. For our largest architecture with 256,000 gates, LILogicNet achieves 60.98% test accuracy on CIFAR-10.
arXiv Detail & Related papers (2025-11-15T19:44:37Z) - Lattice Annotated Temporal (LAT) Logic for Non-Markovian Reasoning [0.20878935665163192]
LAT Logic is an extension of Generalized Annotated Logic Programs (GAPs). LAT Logic supports open-world semantics through the use of a lower lattice structure. Our open-source implementation, PyReason, features modular design, machine-level optimizations, and direct integration with reinforcement learning environments.
arXiv Detail & Related papers (2025-09-03T02:45:34Z) - TreeLoRA: Efficient Continual Learning via Layer-Wise LoRAs Guided by a Hierarchical Gradient-Similarity Tree [52.44403214958304]
In this paper, we introduce TreeLoRA, a novel approach that constructs layer-wise adapters by leveraging hierarchical gradient similarity. To reduce the computational burden of task similarity estimation, we employ bandit techniques to develop an algorithm based on lower confidence bounds. Experiments on both vision transformers (ViTs) and large language models (LLMs) demonstrate the effectiveness and efficiency of our approach.
arXiv Detail & Related papers (2025-06-12T05:25:35Z) - Learning in Log-Domain: Subthreshold Analog AI Accelerator Based on Stochastic Gradient Descent [5.429033337081392]
We propose a novel analog accelerator architecture for AI/ML training workloads using stochastic gradient descent with L2 regularization (SGDr). The proposed design achieves significant reductions in transistor area and power consumption compared to digital implementations. This work paves the way for energy-efficient analog AI hardware with on-chip training capabilities.
arXiv Detail & Related papers (2025-01-22T19:26:36Z) - TreeLUT: An Efficient Alternative to Deep Neural Networks for Inference Acceleration Using Gradient Boosted Decision Trees [0.6906005491572401]
We present TreeLUT, an open-source tool for implementing gradient boosted decision trees (GBDTs) on FPGAs. We show the effectiveness of TreeLUT using multiple classification datasets commonly used to evaluate ultra-low area and latency designs. Our results show that TreeLUT significantly improves hardware utilization, latency, and throughput at competitive accuracy compared to previous works.
arXiv Detail & Related papers (2025-01-02T19:38:07Z) - Convolutional Differentiable Logic Gate Networks [68.74313756770123]
We propose an approach for learning logic gate networks directly via a differentiable relaxation (a minimal sketch of this kind of relaxation appears after this list).
We build on this idea, extending it by deep logic gate tree convolutions and logical OR pooling.
On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
arXiv Detail & Related papers (2024-11-07T14:12:00Z) - Enhancing Dropout-based Bayesian Neural Networks with Multi-Exit on FPGA [20.629635991749808]
This paper proposes an algorithm and hardware co-design framework that can generate field-programmable gate array (FPGA)-based accelerators for efficient BayesNNs.
At the algorithm level, we propose novel multi-exit dropout-based BayesNNs with reduced computational and memory overheads.
At the hardware level, this paper introduces a transformation framework that can generate FPGA-based accelerators for the proposed efficient BayesNNs.
arXiv Detail & Related papers (2024-06-20T17:08:42Z) - Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a curriculum-based logical-aware instruction tuning framework, named LACT. Specifically, we augment arbitrary first-order logical queries via binary tree decomposition. Experiments across widely used datasets demonstrate that LACT achieves substantial improvements (an average +5.5% MRR gain) over advanced methods, establishing a new state of the art.
arXiv Detail & Related papers (2024-05-02T18:12:08Z) - MATADOR: Automated System-on-Chip Tsetlin Machine Design Generation for Edge Applications [0.2663045001864042]
This paper presents MATADOR, an automated-to-silicon tool with a GUI interface capable of generating optimized accelerator designs for inference at the edge.
It offers automation of the full development pipeline: model training, system level design generation, design verification and deployment.
MATADOR accelerator designs are shown to be up to 13.4x faster, up to 7x more resource frugal and up to 2x more power efficient when compared to state-of-the-art Quantized and Binary Deep Neural Network implementations.
arXiv Detail & Related papers (2024-03-03T10:31:46Z) - Logical blocks for fault-tolerant topological quantum computation [55.41644538483948]
We present a framework for universal fault-tolerant logic motivated by the need for platform-independent logical gate definitions.
We explore novel schemes for universal logic that improve resource overheads.
Motivated by the favorable logical error rates for boundaryless computation, we introduce a novel computational scheme.
arXiv Detail & Related papers (2021-12-22T19:00:03Z) - Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
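For context on the DLGN-style baseline referenced in the abstract and in the Convolutional Differentiable Logic Gate Networks entry above, here is a minimal sketch of a differentiable 2-input logic gate: a trainable softmax over the 16 two-input Boolean functions, each evaluated through its standard real-valued (probabilistic) relaxation. This is an illustrative assumption of how such a relaxation can look, not the exact published implementation.

```python
# Sketch of a DLGN-style differentiable logic gate (illustrative only):
# each gate keeps 16 trainable logits, one per two-input Boolean function,
# and evaluates a softmax-weighted mixture of their real-valued relaxations
# on inputs a, b in [0, 1].
import torch
import torch.nn as nn


def all_two_input_gates(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Real-valued (probabilistic) relaxations of all 16 two-input gates."""
    return torch.stack([
        torch.zeros_like(a),      # FALSE
        a * b,                    # a AND b
        a - a * b,                # a AND NOT b
        a,                        # a
        b - a * b,                # NOT a AND b
        b,                        # b
        a + b - 2 * a * b,        # a XOR b
        a + b - a * b,            # a OR b
        1 - (a + b - a * b),      # NOR
        1 - (a + b - 2 * a * b),  # XNOR
        1 - b,                    # NOT b
        1 - b + a * b,            # a OR NOT b
        1 - a,                    # NOT a
        1 - a + a * b,            # NOT a OR b
        1 - a * b,                # NAND
        torch.ones_like(a),       # TRUE
    ], dim=-1)                    # shape (..., 16)


class SoftLogicGate(nn.Module):
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(16))  # 16 parameters per gate

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        gates = all_two_input_gates(a, b)             # (..., 16)
        weights = torch.softmax(self.logits, dim=0)   # distribution over gates
        return (gates * weights).sum(dim=-1)          # soft output in [0, 1]


gate = SoftLogicGate()
a = torch.tensor([0.9, 0.1])
b = torch.tensor([0.2, 0.8])
print(gate(a, b))  # differentiable w.r.t. the 16 gate logits
```

After training, each gate is typically discretized to its highest-weight Boolean function, which is what makes mapping to FPGA LUTs straightforward; the 16-parameters-per-gate cost of this parameterization is the point of comparison for the WARP-LUT parameter-count argument sketched earlier.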
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.