Deep Differentiable Logic Gate Networks
- URL: http://arxiv.org/abs/2210.08277v1
- Date: Sat, 15 Oct 2022 12:50:04 GMT
- Title: Deep Differentiable Logic Gate Networks
- Authors: Felix Petersen, Christian Borgelt, Hilde Kuehne, Oliver Deussen
- Abstract summary: We explore logic gate networks for machine learning tasks by learning combinations of logic gates.
We propose differentiable logic gate networks that combine real-valued logics and a continuously parameterized relaxation of the network.
The resulting discretized logic gate networks achieve fast inference speeds beyond a million images of MNIST per second on a single CPU core.
- Score: 29.75063301688965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, research has increasingly focused on developing efficient neural
network architectures. In this work, we explore logic gate networks for machine
learning tasks by learning combinations of logic gates. These networks comprise
logic gates such as "AND" and "XOR", which allow for very fast execution. The
difficulty in learning logic gate networks is that they are conventionally
non-differentiable and therefore do not allow training with gradient descent.
Thus, to allow for effective training, we propose differentiable logic gate
networks, an architecture that combines real-valued logics and a continuously
parameterized relaxation of the network. The resulting discretized logic gate
networks achieve fast inference speeds, e.g., beyond a million images of MNIST
per second on a single CPU core.
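For illustration, here is a minimal NumPy sketch of the relaxation described in the abstract: each neuron holds a learnable distribution over the 16 two-input Boolean functions, each replaced by its real-valued (probabilistic) counterpart. The gate ordering, function names, and the absence of a training loop are simplifications for exposition, not the authors' reference implementation.

```python
import numpy as np

# Real-valued relaxations of the 16 two-input Boolean functions.
# With probabilistic inputs a, b in [0, 1], each gate returns the
# probability that its Boolean counterpart outputs 1.
GATES = [
    lambda a, b: np.zeros_like(a),        # FALSE
    lambda a, b: a * b,                   # AND
    lambda a, b: a - a * b,               # A AND NOT B
    lambda a, b: a,                       # A
    lambda a, b: b - a * b,               # NOT A AND B
    lambda a, b: b,                       # B
    lambda a, b: a + b - 2 * a * b,       # XOR
    lambda a, b: a + b - a * b,           # OR
    lambda a, b: 1 - (a + b - a * b),     # NOR
    lambda a, b: 1 - (a + b - 2 * a * b), # XNOR
    lambda a, b: 1 - b,                   # NOT B
    lambda a, b: 1 - b + a * b,           # A OR NOT B
    lambda a, b: 1 - a,                   # NOT A
    lambda a, b: 1 - a + a * b,           # NOT A OR B
    lambda a, b: 1 - a * b,               # NAND
    lambda a, b: np.ones_like(a),         # TRUE
]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def soft_gate(a, b, logits):
    """Continuously parameterized neuron: a softmax over the 16 gate
    choices mixes the relaxed gate outputs, so the gate selection is
    trainable with gradient descent."""
    w = softmax(logits)                       # distribution over gates
    return sum(wi * g(a, b) for wi, g in zip(w, GATES))

# Toy forward pass: two probabilistic inputs, randomly initialized gate choice.
rng = np.random.default_rng(0)
a, b = np.array([0.9]), np.array([0.2])
print(soft_gate(a, b, rng.normal(size=16)))
```

After training, each neuron keeps only its most probable gate, so the discretized network executes with pure Boolean operations, which is what enables the reported inference speeds.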
Related papers
- Convolutional Differentiable Logic Gate Networks [68.74313756770123]
We propose an approach for learning logic gate networks directly via a differentiable relaxation.
We build on this idea, extending it with deep logic gate tree convolutions and logical OR pooling.
On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
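The logical OR pooling mentioned in this summary can be sketched under the standard real-valued relaxation of OR (one minus the product of the negated inputs); the exact formulation used in the paper may differ, so treat this as an assumption-laden illustration.

```python
import numpy as np

def soft_or_pool(x, k=2):
    """Logical OR pooling over non-overlapping k x k windows.
    Under the product relaxation, OR over probabilities p_i is
    1 - prod(1 - p_i); for hard {0, 1} inputs this reduces to an
    exact Boolean OR over the window (max pooling on bits)."""
    n, c, h, w = x.shape
    x = x.reshape(n, c, h // k, k, w // k, k)
    return 1.0 - np.prod(1.0 - x, axis=(3, 5))

# Toy example: one 4x4 feature map of soft bits pooled to 2x2.
x = np.random.default_rng(1).random((1, 1, 4, 4))
print(soft_or_pool(x).shape)  # (1, 1, 2, 2)
```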
arXiv Detail & Related papers (2024-11-07T14:12:00Z) - Towards Narrowing the Generalization Gap in Deep Boolean Networks [3.230778132936486]
This paper explores strategies to enhance deep Boolean networks with the aim of surpassing their traditional counterparts.
We propose novel methods, including logical skip connections and spatiality-preserving sampling, and validate them on vision tasks.
Our analysis shows how deep Boolean networks can maintain high performance while minimizing computational costs through 1-bit logic operations.
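To see why 1-bit logic operations keep inference cheap, consider bit-packing: many Boolean activations fit into one machine word, so a single bitwise instruction evaluates the same gate for all of them. The packing scheme below is a generic illustration, not taken from the paper.

```python
import numpy as np

# 64 Boolean activations are packed into one uint64, so one bitwise
# instruction evaluates the same gate for 64 samples at once.
def pack_bits(bits):                    # bits: length-64 array of 0/1
    return np.uint64(sum(int(b) << i for i, b in enumerate(bits)))

rng = np.random.default_rng(2)
a_bits = rng.integers(0, 2, size=64)
b_bits = rng.integers(0, 2, size=64)
a, b = pack_bits(a_bits), pack_bits(b_bits)

and_word = a & b          # 64 AND gates in one operation
xor_word = a ^ b          # 64 XOR gates in one operation

# Sanity check against the unpacked per-sample computation.
assert all(((int(and_word) >> i) & 1) == (a_bits[i] & b_bits[i]) for i in range(64))
assert all(((int(xor_word) >> i) & 1) == (a_bits[i] ^ b_bits[i]) for i in range(64))
```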
arXiv Detail & Related papers (2024-09-06T09:16:36Z) - Learning Interpretable Differentiable Logic Networks [3.8064485653035987]
We introduce a novel method for learning interpretable differentiable logic networks (DLNs).
We train these networks by softening and differentiating their discrete components, namely the binarization of inputs, the binary logic operations, and the connections between neurons.
Experimental results on twenty classification tasks indicate that differentiable logic networks can achieve accuracies comparable to or exceeding those of traditional NNs.
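As a concrete, hypothetical example of the softening step for input binarization: during training the hard comparison against a threshold can be replaced by a steep sigmoid, which is differentiable; the threshold and temperature names below are illustrative, not the paper's notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_binarize(x, threshold, temperature=0.1):
    """Softened input binarization: a steep sigmoid around a (possibly
    learnable) threshold is a differentiable surrogate for the step
    function x > threshold used at inference time."""
    return sigmoid((x - threshold) / temperature)

x = np.linspace(0.0, 1.0, 5)
print(soft_binarize(x, threshold=0.5))        # soft bits during training
print((x > 0.5).astype(float))                # hard bits at inference
```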
arXiv Detail & Related papers (2024-07-04T21:58:26Z) - Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over knowledge graphs (KGs) built upon large language models (LLMs).
We augment the arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT yields substantial improvements (an average gain of +5.5% MRR) over advanced methods.
arXiv Detail & Related papers (2024-05-02T18:12:08Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
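As a generic sketch of how a logical formula can be grounded as a differentiable training signal (the concrete formulae and fuzzy logic used by LOGICSEG may differ), a hierarchy rule such as `dog(x) -> animal(x)` can be relaxed with product logic and its violation added to the task loss.

```python
import numpy as np

def implication_loss(p_child, p_parent):
    """Fuzzy relaxation of the rule `child(x) -> parent(x)`, e.g.
    `dog -> animal` in a class hierarchy. Under product logic the
    implication holds to degree 1 - p_child * (1 - p_parent); the loss
    is its violation, which is differentiable in both probabilities."""
    return p_child * (1.0 - p_parent)

# Predicted per-pixel (or per-sample) class probabilities from a segmentation head.
p_dog, p_animal = np.array([0.8, 0.1]), np.array([0.3, 0.9])
print(implication_loss(p_dog, p_animal).mean())  # added to the task loss
```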
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - DeepGate2: Functionality-Aware Circuit Representation Learning [10.75166513491573]
Circuit representation learning aims to obtain neural representations of circuit elements.
Existing solutions, such as DeepGate, have the potential to embed both circuit structural information and functional behavior.
We introduce DeepGate2, a novel functionality-aware learning framework.
arXiv Detail & Related papers (2023-05-25T13:51:12Z) - Logical blocks for fault-tolerant topological quantum computation [55.41644538483948]
We present a framework for universal fault-tolerant logic motivated by the need for platform-independent logical gate definitions.
We explore novel schemes for universal logic that improve resource overheads.
Motivated by the favorable logical error rates for boundaryless computation, we introduce a novel computational scheme.
arXiv Detail & Related papers (2021-12-22T19:00:03Z) - Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications [6.9276012494882835]
We present a novel method for designing neural network topologies that directly map to a highly efficient FPGA implementation.
We show that the combination of sparsity and low-bit activation quantization results in high-speed circuits with small logic depth and low LUT cost.
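A toy sketch of why sparsity plus low-bit activations yield low LUT cost: a neuron with small fan-in F and B-bit inputs has only (2^B)^F possible input patterns, so its entire function can be enumerated into a truth table and mapped to FPGA LUTs. The neuron below is a hypothetical placeholder, not the LogicNets architecture.

```python
import itertools
import numpy as np

F, B = 3, 2                                # fan-in 3, 2-bit activations
levels = np.arange(2 ** B) / (2 ** B - 1)  # quantization levels {0, 1/3, 2/3, 1}
rng = np.random.default_rng(3)
w, bias = rng.normal(size=F), 0.1

def neuron(x):
    """Placeholder thresholding neuron over quantized inputs."""
    return int(np.dot(w, x) + bias > 0)

# Enumerate every possible quantized input pattern once: the neuron's
# whole behavior becomes a truth table that fits directly into LUTs.
truth_table = {codes: neuron(np.array([levels[c] for c in codes]))
               for codes in itertools.product(range(2 ** B), repeat=F)}
print(len(truth_table))                     # 64 entries for this neuron
```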
arXiv Detail & Related papers (2020-04-06T22:15:41Z) - Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability for models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.