A Method for Optimizing Connections in Differentiable Logic Gate Networks
- URL: http://arxiv.org/abs/2507.06173v1
- Date: Tue, 08 Jul 2025 16:53:39 GMT
- Title: A Method for Optimizing Connections in Differentiable Logic Gate Networks
- Authors: Wout Mommen, Lars Keuninckx, Matthias Hartmann, Piet Wambacq
- Abstract summary: We introduce a novel method for partial optimization of the connections in Deep Differentiable Logic Gate Networks (LGNs). Our training method utilizes a probability distribution over a subset of connections per gate input, selecting the connection with the highest merit, after which the gate types are selected. We show that the connection-optimized LGNs outperform standard fixed-connection LGNs on the Yin-Yang, MNIST and Fashion-MNIST benchmarks, while requiring only a fraction of the number of logic gates.
- Score: 0.48212500317840945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel method for partial optimization of the connections in Deep Differentiable Logic Gate Networks (LGNs). Our training method utilizes a probability distribution over a subset of connections per gate input, selecting the connection with the highest merit, after which the gate types are selected. We show that the connection-optimized LGNs outperform standard fixed-connection LGNs on the Yin-Yang, MNIST and Fashion-MNIST benchmarks, while requiring only a fraction of the number of logic gates. When training all connections, we demonstrate that 8000 simple logic gates are sufficient to achieve over 98% accuracy on the MNIST data set. Additionally, we show that our network uses 24 times fewer gates than standard fully connected LGNs while performing better on the MNIST data set. As such, our work shows a pathway towards fully trainable Boolean logic.
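The abstract describes the training procedure only at a high level. As a rough illustration, the following is a minimal PyTorch sketch of one plausible reading, assuming each gate input keeps a learned "merit" distribution over a small fixed set of candidate source wires (relaxed as a softmax mixture during training) on top of the standard 16-way differentiable gate-type mixture of Petersen et al.; all names (ConnOptLogicLayer, merit_a, k) are illustrative and not from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_gates(a, b):
    # Probabilistic relaxations of all 16 two-input Boolean gates, ordered
    # FALSE, AND, A AND NOT B, A, NOT A AND B, B, XOR, OR,
    # NOR, XNOR, NOT B, A OR NOT B, NOT A, NOT A OR B, NAND, TRUE.
    return torch.stack([
        torch.zeros_like(a), a * b, a - a * b, a,
        b - a * b, b, a + b - 2 * a * b, a + b - a * b,
        1 - (a + b - a * b), 1 - (a + b - 2 * a * b), 1 - b, 1 - b + a * b,
        1 - a, 1 - a + a * b, 1 - a * b, torch.ones_like(a),
    ], dim=-1)

class ConnOptLogicLayer(nn.Module):
    """Hypothetical layer: learns which wires feed each gate, then which gate."""

    def __init__(self, in_dim: int, n_gates: int, k: int = 8):
        super().__init__()
        # k fixed candidate source wires per gate input (the "subset").
        self.register_buffer("cand_a", torch.randint(0, in_dim, (n_gates, k)))
        self.register_buffer("cand_b", torch.randint(0, in_dim, (n_gates, k)))
        # Learnable merit per candidate connection ...
        self.merit_a = nn.Parameter(torch.zeros(n_gates, k))
        self.merit_b = nn.Parameter(torch.zeros(n_gates, k))
        # ... and per gate type (16 two-input Boolean functions).
        self.gate_logits = nn.Parameter(torch.zeros(n_gates, 16))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim), values in [0, 1].
        # Soft mixture over candidate connections -> (batch, n_gates).
        a = (x[:, self.cand_a] * F.softmax(self.merit_a, -1)).sum(-1)
        b = (x[:, self.cand_b] * F.softmax(self.merit_b, -1)).sum(-1)
        # Soft mixture over gate types -> (batch, n_gates).
        return (soft_gates(a, b) * F.softmax(self.gate_logits, -1)).sum(-1)
```

After training, each distribution would be hardened by keeping only the argmax connection and argmax gate type, yielding a discrete Boolean network.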
Related papers
- SparseLUT: Sparse Connectivity Optimization for Lookup Table-based Deep Neural Networks [0.0]
This paper introduces SparseLUT, a connectivity-centric training technique tailored for LUT-based deep neural networks (DNNs). Experimental results show consistent accuracy improvements across benchmarks, including up to a 2.13% increase on MNIST. This is done without any hardware overhead and achieves state-of-the-art results for LUT-based DNNs.
arXiv Detail & Related papers (2025-03-17T05:21:54Z)
- Convolutional Differentiable Logic Gate Networks [68.74313756770123]
We propose an approach for learning logic gate networks directly via a differentiable relaxation.
We build on this idea, extending it with deep logic gate tree convolutions and logical OR pooling (a relaxation of the latter is sketched after this entry).
On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
arXiv Detail & Related papers (2024-11-07T14:12:00Z)
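The logical OR pooling mentioned in this entry has a natural probabilistic relaxation, treating activations in [0, 1] as independent probabilities so that OR over a window becomes 1 minus the product of the complements. The snippet below is an illustrative sketch under that assumption, not necessarily the paper's exact formulation; the function name and window handling are ours.

```python
import torch

def or_pool2d(x: torch.Tensor, k: int = 2) -> torch.Tensor:
    # Soft OR over non-overlapping k x k windows of x (values in [0, 1]),
    # using the probabilistic relaxation OR(x1..xn) = 1 - prod(1 - xi).
    b, c, h, w = x.shape
    x = x.reshape(b, c, h // k, k, w // k, k)
    return 1 - (1 - x).prod(dim=5).prod(dim=3)
```

Once the network is discretized to hard {0, 1} values, this reduces to an exact Boolean OR over each window.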
- Efficient fault-tolerant code switching via one-way transversal CNOT gates [0.0]
We present a code switching scheme that respects the constraints of fault-tolerant (FT) circuit design by only making use of transversal gates.
We analyze the application of the scheme to low-distance color codes, which are suitable for operation in existing quantum processors.
We discuss how the scheme can be implemented with a large degree of parallelization, provided that logical auxiliary qubits can be prepared reliably enough.
arXiv Detail & Related papers (2024-09-20T12:54:47Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- COMET: Learning Cardinality Constrained Mixture of Experts with Trees and Local Search [10.003251119927222]
The sparse Mixture-of-Experts (Sparse-MoE) framework efficiently scales up model capacity in various domains.
Existing sparse gates are prone to convergence and performance issues when trained with first-order optimization methods.
We propose a new sparse gate: COMET, which relies on a novel tree-based mechanism.
arXiv Detail & Related papers (2023-06-05T12:21:42Z)
- Direct pulse-level compilation of arbitrary quantum logic gates on superconducting qutrits [36.30869856057226]
We demonstrate that arbitrary qubit and qutrit gates can be realized with high fidelity, which can significantly reduce the length of a gate sequence.
We show that optimal control gates are robust to drift for at least three hours and that the same calibration parameters can be used for all implemented gates.
arXiv Detail & Related papers (2023-03-07T22:15:43Z)
- Deep Differentiable Logic Gate Networks [29.75063301688965]
We explore logic gate networks for machine learning tasks by learning combinations of logic gates.
We propose differentiable logic gate networks that combine real-valued logics and a continuously parameterized relaxation of the network.
The resulting discretized logic gate networks achieve fast inference speeds, beyond a million MNIST images per second on a single CPU core.
arXiv Detail & Related papers (2022-10-15T12:50:04Z)
- Robustness of a universal gate set implementation in transmon systems via Chopped Random Basis optimal control [50.591267188664666]
We numerically study the implementation of a universal two-qubit gate set, composed of CNOT, Hadamard, phase and $\pi/8$ gates, for transmon-based systems.
The control signals to implement such gates are obtained using the Chopped Random Basis optimal control technique, with a target gate infidelity of $10^{-2}$.
arXiv Detail & Related papers (2022-07-27T10:55:15Z)
- JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data [86.8949732640035]
We propose JUMBO, an MBO algorithm that sidesteps limitations by querying additional data.
We show that it achieves a no-regret guarantee under conditions analogous to those of GP-UCB.
Empirically, we demonstrate significant performance improvements over existing approaches on two real-world optimization problems.
arXiv Detail & Related papers (2021-06-02T05:03:38Z)
- Purification and Entanglement Routing on Quantum Networks [55.41644538483948]
A quantum network equipped with imperfect channel fidelities and limited memory storage time can distribute entanglement between users.
We introduce effective heuristics enabling fast path-finding algorithms for maximizing the entanglement shared between two nodes on a quantum network.
arXiv Detail & Related papers (2020-11-23T19:00:01Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)