Injecting Logical Constraints into Neural Networks via Straight-Through
Estimators
- URL: http://arxiv.org/abs/2307.04347v1
- Date: Mon, 10 Jul 2023 05:12:05 GMT
- Title: Injecting Logical Constraints into Neural Networks via Straight-Through
Estimators
- Authors: Zhun Yang, Joohyung Lee, Chiyoun Park
- Abstract summary: Injecting discrete logical constraints into neural network learning is one of the main challenges in neuro-symbolic AI.
We find that a straight-through estimator, a method introduced to train binary neural networks, could effectively be applied to incorporate logical constraints into neural network learning.
- Score: 5.6613898352023515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Injecting discrete logical constraints into neural network learning is one of
the main challenges in neuro-symbolic AI. We find that a
straight-through estimator, a method introduced to train binary neural
networks, could effectively be applied to incorporate logical constraints into
neural network learning. More specifically, we design a systematic way to
represent discrete logical constraints as a loss function; minimizing this loss
using gradient descent via a straight-through estimator updates the neural
network's weights in a direction such that the binarized outputs satisfy the
logical constraints. The experimental results show that by leveraging GPUs and
batch training, this method scales significantly better than existing
neuro-symbolic methods that require heavy symbolic computation for computing
gradients. We also demonstrate that our method applies to different types of
neural networks, such as MLPs, CNNs, and GNNs, enabling them to learn with
fewer labeled examples, or none at all, by learning directly from known constraints.
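As a concrete illustration of the mechanism the abstract describes, below is a minimal PyTorch sketch (our reconstruction, not the authors' released code): a hard threshold binarizes the network's outputs in the forward pass, a penalty term is zero exactly when the binarized outputs satisfy a constraint, and the straight-through estimator passes gradients through the non-differentiable threshold. The "exactly one output is true" constraint and all names here are hypothetical stand-ins for an actual logical formula.

```python
import torch

class STEBinarize(torch.autograd.Function):
    """Hard threshold in the forward pass; identity gradient in the
    backward pass (the straight-through estimator)."""
    @staticmethod
    def forward(ctx, probs):
        return (probs > 0.5).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # pass gradients straight through the threshold

def constraint_loss(b):
    # Hypothetical constraint "exactly one of the outputs is true",
    # encoded as a penalty that is zero iff the binarized vector satisfies it.
    return (b.sum(dim=-1) - 1.0).pow(2).mean()

net = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.randn(32, 8)            # unlabeled inputs: no labels are needed
for _ in range(200):
    b = STEBinarize.apply(net(x)) # discrete forward, differentiable backward
    loss = constraint_loss(b)
    opt.zero_grad()
    loss.backward()               # gradients reach the weights via the STE
    opt.step()
```

Note that the loss is defined on the binarized outputs, so minimizing it pushes the weights toward configurations whose discrete outputs satisfy the constraint; no labeled data enters the loop.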
Related papers
- Randomized Forward Mode Gradient for Spiking Neural Networks in Scientific Machine Learning [4.178826560825283]
Spiking neural networks (SNNs) represent a promising approach in machine learning, combining the hierarchical learning capabilities of deep neural networks with the energy efficiency of spike-based computations.
Traditional end-to-end training of SNNs is often based on back-propagation, where weight updates are derived from gradients computed through the chain rule.
This method encounters challenges due to its limited biological plausibility and inefficiencies on neuromorphic hardware.
In this study, we introduce an alternative training approach for SNNs: instead of using back-propagation, we leverage weight perturbation methods within a forward-mode gradient framework.
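For reference, here is a minimal sketch of a randomized forward-mode (forward-gradient) step of the kind the summary alludes to, assuming the standard recipe of sampling a random tangent v and using (∇L·v)v as an unbiased gradient estimate; the spiking dynamics are replaced by a toy two-layer network for brevity.

```python
import torch
from torch.func import jvp

# Toy regression problem standing in for SNN training; only the
# forward-gradient step itself is illustrated here.
X, y = torch.randn(64, 10), torch.randn(64, 1)

def loss_fn(w1, w2):
    hidden = torch.tanh(X @ w1)
    return torch.nn.functional.mse_loss(hidden @ w2, y)

w1 = 0.3 * torch.randn(10, 16)
w2 = 0.3 * torch.randn(16, 1)
lr = 1e-2
for _ in range(200):
    v1, v2 = torch.randn_like(w1), torch.randn_like(w2)  # random tangent
    # A single forward-mode pass yields the directional derivative grad . v.
    loss, dirderiv = jvp(loss_fn, (w1, w2), (v1, v2))
    # Forward gradient: (grad . v) v is an unbiased estimate of the gradient.
    w1 = w1 - lr * dirderiv * v1
    w2 = w2 - lr * dirderiv * v2
```

Because each update needs only forward passes, no backward graph has to be stored, which is the property that makes this style of training attractive for neuromorphic hardware.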
arXiv Detail & Related papers (2024-11-11T15:20:54Z) - Neuro-symbolic Learning Yielding Logical Constraints [22.649543443988712]
End-to-end learning of neuro-symbolic systems is still an unsolved challenge.
We propose a framework that fuses network training, symbol grounding, and logical constraint synthesis into an end-to-end learning process.
arXiv Detail & Related papers (2024-10-28T12:18:25Z) - Differentiable Logic Programming for Distant Supervision [4.820391833117535]
We introduce a new method for integrating neural networks with logic programming in Neural-Symbolic AI (NeSy).
Unlike prior methods, our approach does not depend on symbolic solvers for reasoning about missing labels.
This method facilitates more efficient learning under distant supervision.
arXiv Detail & Related papers (2024-08-22T17:55:52Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
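One plausible reading of this representation, sketched below (our illustration with a hypothetical helper mlp_to_graph, not the paper's code): every neuron becomes a graph node and every weight a directed, attributed edge, so a single GNN can consume networks of varying width and depth.

```python
import torch

def mlp_to_graph(mlp):
    """Flatten an MLP into (edge_index, edge_weight): one node per neuron,
    one directed edge per weight. Hypothetical illustration only."""
    edges, weights, offset = [], [], 0
    for layer in mlp:
        if not isinstance(layer, torch.nn.Linear):
            continue
        n_in, n_out = layer.in_features, layer.out_features
        for j in range(n_out):
            for i in range(n_in):
                edges.append((offset + i, offset + n_in + j))
                weights.append(layer.weight[j, i].item())
        offset += n_in
    edge_index = torch.tensor(edges, dtype=torch.long).t()  # shape (2, E)
    return edge_index, torch.tensor(weights)

mlp = torch.nn.Sequential(torch.nn.Linear(3, 4), torch.nn.ReLU(),
                          torch.nn.Linear(4, 2))
edge_index, edge_weight = mlp_to_graph(mlp)
print(edge_index.shape, edge_weight.shape)  # (2, 20) and (20,)
```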
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine the interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
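The general recipe behind equilibrium-based training is implicit differentiation: if the forward dynamics converge to a fixed point z* = f(z*, x), gradients can be computed at z* without storing the trajectory, and the exact gradient involves (I - ∂f/∂z*)^{-1}. The sketch below uses the common one-step (Jacobian-free) simplification of that recipe, not the paper's SNN-specific derivation.

```python
import torch

W = torch.nn.Parameter(0.1 * torch.randn(8, 8))
U = torch.nn.Parameter(torch.randn(8, 8))
readout = torch.nn.Linear(8, 1)
opt = torch.optim.Adam([W, U, *readout.parameters()], lr=1e-3)

def step(z, x):
    # Contractive update whose fixed point is the "equilibrium state".
    return torch.tanh(z @ W.t() + x @ U.t())

x, y = torch.randn(32, 8), torch.randn(32, 1)
for _ in range(100):
    with torch.no_grad():          # forward relaxation: no graph is stored
        z = torch.zeros(32, 8)
        for _ in range(50):
            z = step(z, x)
    z = step(z.detach(), x)        # one differentiable step at the fixed point
    loss = torch.nn.functional.mse_loss(readout(z), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```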
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Reinforcement Learning with External Knowledge by using Logical Neural
Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Brain-Inspired Learning on Neuromorphic Substrates [5.279475826661643]
This article provides a mathematical framework for the design of practical online learning algorithms for neuromorphic substrates.
Specifically, we show a direct connection between Real-Time Recurrent Learning (RTRL) and biologically plausible learning rules for training Spiking Neural Networks (SNNs).
We motivate a sparse approximation based on block-diagonal Jacobians, which reduces the algorithm's computational complexity.
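To make that concrete: full RTRL carries an influence tensor P_t[i, j, k] = dh_t[i]/dW[j, k] of size O(n^3); keeping only the per-neuron (block-diagonal) entries reduces storage to O(n^2). Below is a vanilla-RNN caricature of that idea (not the paper's SNN formulation; input weights are folded in for brevity).

```python
import torch

n, T = 16, 20
W = 0.1 * torch.randn(n, n)
x_seq = torch.randn(T, n)
target = torch.randn(n)

h = torch.zeros(n)
trace = torch.zeros(n, n)  # trace[i, k] ~ d h[i] / d W[i, k]
for t in range(T):
    h_prev = h
    h = torch.tanh(W @ h_prev + x_seq[t])
    d = 1.0 - h ** 2  # derivative of tanh at the pre-activation
    # Block-diagonal RTRL: neuron i tracks only its own row of W, so the
    # full Jacobian product collapses to the diagonal entry W[i, i].
    trace = d[:, None] * (W.diag()[:, None] * trace + h_prev[None, :])

err = h - target                  # dL/dh for L = 0.5 * ||h - target||^2
grad_W = err[:, None] * trace     # grad_W[i, k] = err[i] * trace[i, k]
W -= 0.05 * grad_W                # online-style weight update
```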
arXiv Detail & Related papers (2020-10-22T17:56:59Z)