Boolean Variation and Boolean Logic BackPropagation
- URL: http://arxiv.org/abs/2311.07427v2
- Date: Tue, 7 May 2024 08:02:37 GMT
- Title: Boolean Variation and Boolean Logic BackPropagation
- Authors: Van Minh Nguyen
- Abstract summary: The notion of variation is introduced for the Boolean set, and based on it a Boolean logic backpropagation principle is developed.
Deep models can be built with weights and activations being Boolean numbers and operated with Boolean logic instead of real arithmetic.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The notion of variation is introduced for the Boolean set, and based on it a Boolean logic backpropagation principle is developed. Using this concept, deep models can be built with weights and activations being Boolean numbers and operated with Boolean logic instead of real arithmetic. In particular, Boolean deep models can be trained directly in the Boolean domain without latent weights. Instead of a gradient, logic is synthesized and backpropagated through layers.
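The abstract does not spell out the exact forward or backward rules, so the following is an illustrative sketch only: it assumes a Boolean neuron built from XNOR and a majority vote, and a hypothetical "variation" signal that plays the role a gradient would, per the idea of backpropagating logic rather than real numbers. The names `forward` and `weight_variation` are invented for illustration.

```python
# Illustrative sketch only: the paper's exact Boolean-variation rules are not
# given in this summary, so the update signal below is an assumption.

def xnor(a: bool, b: bool) -> bool:
    """Boolean XNOR: True when both inputs agree."""
    return a == b

def forward(weights, inputs):
    """Boolean neuron: XNOR each weight with its input, fire on strict majority."""
    votes = sum(xnor(w, x) for w, x in zip(weights, inputs))
    return votes * 2 > len(weights)

def weight_variation(x: bool, out_var: bool) -> bool:
    """Hypothetical 'Boolean variation' of the output w.r.t. a weight.

    Flipping w flips xnor(w, x); whether that moves the output in the desired
    direction depends on the input x and the backpropagated signal out_var.
    """
    return xnor(x, out_var)

weights = [True, False, True]
inputs = [True, True, False]
y = forward(weights, inputs)  # 1 of 3 votes agree -> False
```

Everything here stays in the Boolean domain: no real-valued latent weights are kept, matching the abstract's claim that training can happen with logic instead of arithmetic.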
Related papers
- Practical Boolean Backpropagation [0.0]
We present a practical method for purely Boolean backpropagation for networks built from a single chosen gate type. Initial experiments confirm its feasibility.
arXiv Detail & Related papers (2025-05-01T12:50:02Z) - BoolQuestions: Does Dense Retrieval Understand Boolean Logic in Language? [88.29075896295357]
We first investigate whether current retrieval systems can comprehend the Boolean logic implied in language.
Through extensive experimental results, we draw the conclusion that current dense retrieval systems do not fully understand Boolean logic in language.
We propose a contrastive continual training method that serves as a strong baseline for the research community.
arXiv Detail & Related papers (2024-11-19T05:19:53Z) - Boolean-aware Boolean Circuit Classification: A Comprehensive Study on Graph Neural Network [2.1080766959962625]
Graph structure-based Boolean circuit classification can be framed as a graph classification task.
We first define the proposed matching-equivalent class based on its "Boolean-aware" property.
We present a common study framework based on graph neural networks (GNNs) to analyze the key factors that affect Boolean-aware circuit classification.
arXiv Detail & Related papers (2024-11-13T08:38:21Z) - Boolean Logic as an Error feedback mechanism [0.5439020425819]
The notion of Boolean logic backpropagation was introduced to build neural networks with weights and activations being Boolean numbers.
Most computations can be done with logic instead of real arithmetic during both the training and inference phases.
arXiv Detail & Related papers (2024-01-29T18:56:21Z) - Empower Nested Boolean Logic via Self-Supervised Curriculum Learning [67.46052028752327]
We find that pre-trained language models, including large language models, behave like random selectors in the face of multi-nested logic.
To empower language models with this fundamental capability, this paper proposes a new self-supervised learning method, Curriculum Logical Reasoning (CLR).
arXiv Detail & Related papers (2023-10-09T06:54:02Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - RobustLR: Evaluating Robustness to Logical Perturbation in Deductive
Reasoning [25.319674132967553]
Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in English natural language.
We propose RobustLR to evaluate the robustness of these models to minimal logical edits in rulebases.
We find that the models trained in prior works do not perform consistently on the different perturbations in RobustLR.
arXiv Detail & Related papers (2022-05-25T09:23:50Z) - Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to others, LNNs offer strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z) - On Quantifying Literals in Boolean Logic and Its Applications to Explainable AI [33.08556125025698]
We study the interplay between variable/literal and existential/universal quantification.
We identify some classes of Boolean formulas and circuits on which quantification can be done efficiently.
arXiv Detail & Related papers (2021-08-23T00:42:22Z) - Foundations of Reasoning with Uncertainty via Real-valued Logics [70.43924776071616]
We give a sound and strongly complete axiomatization that can be parametrized to cover essentially every real-valued logic.
Our class of sentences is very rich; each describes a set of possible real values for a collection of formulas of the real-valued logic.
arXiv Detail & Related papers (2020-08-06T02:13:11Z) - Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
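The Logical Neural Networks entry above describes neurons as components of formulas in a weighted real-valued logic. A minimal sketch of that idea, assuming a weighted Lukasiewicz-style AND (the paper's exact parametrization may differ; `beta` and the weights here are illustrative assumptions):

```python
# Hedged sketch of a weighted real-valued AND neuron in the spirit of LNN.
# Truth values live in [0, 1]; weights scale how much a false input can
# drag the conjunction down, and beta is an assumed bias/threshold term.

def weighted_and(inputs, weights, beta=1.0):
    """Weighted Lukasiewicz-style AND, clamped to [0, 1]."""
    s = beta - sum(w * (1.0 - x) for x, w in zip(inputs, weights))
    return max(0.0, min(1.0, s))

# Fully true inputs yield 1.0; one fully false input with unit weight yields 0.0.
print(weighted_and([1.0, 1.0], [1.0, 1.0]))  # 1.0
print(weighted_and([1.0, 0.0], [1.0, 1.0]))  # 0.0
```

Because each neuron corresponds to a logical connective with real-valued truth, its parameters stay interpretable as formula weights, which is the disentanglement property the summary highlights.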
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.