Practical Boolean Backpropagation
- URL: http://arxiv.org/abs/2505.03791v1
- Date: Thu, 01 May 2025 12:50:02 GMT
- Title: Practical Boolean Backpropagation
- Authors: Simon Golbert
- Abstract summary: We present a practical method for purely Boolean backpropagation for networks built from a single chosen gate type. Initial experiments confirm its feasibility.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Boolean neural networks offer hardware-efficient alternatives to real-valued models. While quantization is common, purely Boolean training remains underexplored. We present a practical method for purely Boolean backpropagation for networks built from a single chosen gate type, operating directly in Boolean algebra and involving no numerics. Initial experiments confirm its feasibility.
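The abstract does not name the chosen gate, so any concrete rendering involves guesswork. The toy Python sketch below assumes a NAND-style neuron with Boolean selection weights and an ad-hoc flip-based update rule (all assumptions of this sketch, not the paper's method); its only point is to show what training that stays entirely within Boolean operations can look like.

```python
# Toy sketch of purely Boolean training, no numerics anywhere.
# ASSUMPTION: a NAND-style neuron whose True weights select which
# inputs feed the gate; the flip-based update rule is illustrative.

def forward(weights, x):
    """NAND over the inputs selected by True weights."""
    selected = [xi for w, xi in zip(weights, x) if w]
    return not all(selected) if selected else True

def backward(weights, x, y, target):
    """Flip weights to move the output toward the target."""
    if y == target:
        return weights
    if y and not target:
        # NAND is False only when every selected input is True,
        # so deselect the weights sitting on False inputs.
        return [w and xi for w, xi in zip(weights, x)]
    # Output should become True: select one False input, if any.
    new = list(weights)
    for i, xi in enumerate(x):
        if not xi:
            new[i] = True
            break
    return new

w = [True, True, False]
x = [True, False, True]
w = backward(w, x, forward(w, x), target=False)
assert forward(w, x) is False
```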
Related papers
- Boolean-aware Attention for Dense Retrieval [0.8192907805418583]
We present a novel attention mechanism that adjusts token focus based on Boolean operators (e.g., and, or, not).
Our model employs specialized Boolean experts, each tailored to amplify or suppress attention for operator-specific contexts.
arXiv Detail & Related papers (2025-03-03T17:23:08Z)
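The summary above gives the mechanism only at a high level, so the following NumPy sketch is a loose reading: each Boolean operator is mapped to an "expert" scale that amplifies or suppresses attention scores for tokens it governs. The scale values, the per-token operator tags, and the function name are all illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical operator-conditioned attention re-weighting.
EXPERT_SCALE = {"and": 1.5, "or": 1.0, "not": 0.25, None: 1.0}

def boolean_aware_attention(scores, operators):
    """scores: (n, n) raw attention logits; operators[i] is the
    Boolean operator tagged on token i, or None."""
    scale = np.array([EXPERT_SCALE[op] for op in operators])
    scaled = scores * scale[None, :]  # re-weight each key column
    e = np.exp(scaled - scaled.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

ops = [None, "and", None, "not", None]
attn = boolean_aware_attention(np.random.randn(5, 5), ops)
assert np.allclose(attn.sum(axis=-1), 1.0)
```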
- Boolean Logic as an Error feedback mechanism [0.5439020425819]
The notion of Boolean logic backpropagation was introduced to build neural networks with weights and activations being Boolean numbers.
Most computations can be done with logic instead of real arithmetic during both the training and inference phases.
arXiv Detail & Related papers (2024-01-29T18:56:21Z)
- Boolean Variation and Boolean Logic BackPropagation [0.0]
The notion of variation is introduced for the Boolean set, based on which a Boolean logic backpropagation principle is developed.
Deep models can be built with weights and activations being Boolean numbers and operated with Boolean logic instead of real arithmetic.
arXiv Detail & Related papers (2023-11-13T16:01:43Z)
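To make "variation for the Boolean set" concrete, here is a plain reading in Python: variation as the signed change of a Boolean, plus gate sensitivities that play the role of partial derivatives. This is an interpretation of the abstract, not the paper's exact formalism.

```python
# Boolean variation and gate sensitivity, the Boolean analogues of
# a change and a partial derivative. Interpretation of the
# abstract; the paper's formal definitions may differ.

def variation(old: bool, new: bool) -> int:
    """Signed change of a Boolean: -1, 0, or +1."""
    return int(new) - int(old)

def and_sensitive_to_x(w: bool) -> bool:
    """AND(x, w) flips when x flips iff w is True."""
    return w

def or_sensitive_to_x(w: bool) -> bool:
    """OR(x, w) flips when x flips iff w is False."""
    return not w

def propagate(dv: int, sensitive: bool) -> int:
    """Chain-rule style: pass the variation only where sensitive."""
    return dv if sensitive else 0

assert propagate(variation(False, True), and_sensitive_to_x(True)) == 1
assert propagate(variation(False, True), and_sensitive_to_x(False)) == 0
```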
- Empower Nested Boolean Logic via Self-Supervised Curriculum Learning [67.46052028752327]
We find that pre-trained language models, including large language models, behave like random selectors when faced with multi-nested Boolean logic.
To empower language models with this fundamental capability, this paper proposes a new self-supervised learning method, Curriculum Logical Reasoning (CLR).
arXiv Detail & Related papers (2023-10-09T06:54:02Z)
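To make the "multi-nested logic" task concrete, here is a small evaluator for arbitrarily nested Boolean expressions; the tuple encoding is illustrative only and is not the benchmark format used in the paper.

```python
# Evaluator for multi-nested Boolean expressions. The expression
# encoding is illustrative, not the paper's benchmark format.

def evaluate(expr) -> bool:
    """expr: a bool, or ('not', e), ('and', e1, e2), ('or', e1, e2)."""
    if isinstance(expr, bool):
        return expr
    op, *args = expr
    if op == "not":
        return not evaluate(args[0])
    if op == "and":
        return evaluate(args[0]) and evaluate(args[1])
    if op == "or":
        return evaluate(args[0]) or evaluate(args[1])
    raise ValueError(f"unknown operator: {op}")

# not((True or False) and (not True)) -> True
assert evaluate(("not", ("and", ("or", True, False), ("not", True))))
```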
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight-decay-regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
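For concreteness, a threshold-activation layer is just a step nonlinearity; the sketch below shows the model class only, not the paper's convex reformulation, which does not fit in a few lines.

```python
import numpy as np

def threshold(z):
    """Unit step activation: 1.0 where z >= 0, else 0.0."""
    return (z >= 0).astype(z.dtype)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))    # 4 samples, 3 features
W = rng.normal(size=(3, 5))    # one hidden layer with 5 units
H = threshold(X @ W)           # binary-valued hidden features
```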
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks that allows for both easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
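A minimal picture of the object under study: a sine-activation layer whose frequency scale acts as the bandwidth knob. The parameter name w0 and its value are illustrative; the paper's exact parametrization may differ.

```python
import numpy as np

def sinusoidal_layer(x, W, b, w0=6.0):
    """sin(w0 * (x @ W + b)); larger w0 widens the kernel bandwidth."""
    return np.sin(w0 * (x @ W + b))

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 8).reshape(-1, 1)
W = rng.uniform(-1.0, 1.0, size=(1, 16))
b = rng.uniform(-1.0, 1.0, size=(16,))
features = sinusoidal_layer(x, W, b)   # shape (8, 16)
```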
- Neural networks trained with SGD learn distributions of increasing complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics, and exploit higher-order statistics only later in training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- Identifying nonclassicality from experimental data using artificial neural networks [52.77024349608834]
We train an artificial neural network to classify classical and nonclassical states from their quadrature-measurement distributions.
We show that the network is able to correctly identify classical and nonclassical features from real experimental quadrature data for different states of light.
arXiv Detail & Related papers (2021-01-18T15:12:47Z)
- Tunable Quantum Neural Networks for Boolean Functions [0.0]
We introduce the idea of a generic quantum circuit whose gates can be tuned to learn any Boolean function.
In order to perform the learning task, we have devised an algorithm that leverages the absence of measurements.
arXiv Detail & Related papers (2020-03-31T11:55:01Z)
- Distance-Based Regularisation of Deep Networks for Fine-Tuning [116.71288796019809]
We develop an algorithm that constrains a hypothesis class to a small sphere centred on the initial pre-trained weights.
Empirical evaluation shows that our algorithm works well, corroborating our theoretical results.
arXiv Detail & Related papers (2020-02-19T16:00:47Z)
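One standard way to realize "constrain the hypothesis class to a small sphere centred on the pre-trained weights" is projection after each update; the sketch below takes that generic reading and is not necessarily the paper's exact algorithm.

```python
import numpy as np

def project_to_ball(w, w0, radius):
    """Project w onto the L2 ball of the given radius centred at w0."""
    delta = w - w0
    norm = np.linalg.norm(delta)
    return w if norm <= radius else w0 + delta * (radius / norm)

w0 = np.array([1.0, -2.0, 0.5])      # pre-trained weights
w = w0 + np.array([3.0, 0.0, 4.0])   # drifted during fine-tuning
w = project_to_ball(w, w0, radius=1.0)
assert np.linalg.norm(w - w0) <= 1.0 + 1e-9
```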