Efficient Rectification of Neuro-Symbolic Reasoning Inconsistencies by Abductive Reflection
- URL: http://arxiv.org/abs/2412.08457v2
- Date: Sat, 08 Feb 2025 01:31:44 GMT
- Title: Efficient Rectification of Neuro-Symbolic Reasoning Inconsistencies by Abductive Reflection
- Authors: Wen-Chao Hu, Wang-Zhou Dai, Yuan Jiang, Zhi-Hua Zhou
- Abstract summary: Neuro-Symbolic (NeSy) AI could be regarded as an analogy to human dual-process cognition.
We propose to improve NeSy systems by introducing Abductive Reflection (ABL-Refl) based on the Abductive Learning (ABL) framework.
- Score: 53.82376573677766
- License:
- Abstract: Neuro-Symbolic (NeSy) AI could be regarded as an analogy to human dual-process cognition, modeling the intuitive System 1 with neural networks and the algorithmic System 2 with symbolic reasoning. However, for complex learning targets, NeSy systems often generate outputs inconsistent with domain knowledge, and rectifying them is challenging. Inspired by human Cognitive Reflection, which promptly detects errors in our intuitive responses and revises them by invoking System 2 reasoning, we propose to improve NeSy systems by introducing Abductive Reflection (ABL-Refl), based on the Abductive Learning (ABL) framework. ABL-Refl leverages domain knowledge to abduce a reflection vector during training; during inference, this vector flags potential errors in the neural network outputs and invokes abduction to rectify them and generate consistent outputs. ABL-Refl is highly efficient in contrast to previous ABL implementations. Experiments show that ABL-Refl outperforms state-of-the-art NeSy methods, achieving excellent accuracy with fewer training resources and enhanced efficiency.
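To make the reflect-then-rectify mechanism described in the abstract concrete, here is a minimal sketch of how such an inference step could look. All names, the thresholding rule, the toy knowledge base, and the brute-force abduction search are illustrative assumptions, not the authors' implementation: the paper trains a network that, besides its output, emits a "reflection vector" flagging positions suspected to violate domain knowledge, and abduction then revises only the flagged positions.

```python
# Hedged sketch (not the paper's API): a reflection vector marks suspect
# positions in a symbolic output; abduction revises only those positions.
from itertools import product

def abduce(output, flagged, knowledge_base, symbols=range(10)):
    """Brute-force abduction: search revisions of the flagged positions,
    keeping the trusted positions fixed, until the knowledge base holds."""
    for candidate in product(symbols, repeat=len(flagged)):
        revised = list(output)
        for pos, sym in zip(flagged, candidate):
            revised[pos] = sym
        if knowledge_base(revised):
            return revised
    return list(output)  # no consistent revision found

def abl_refl_inference(output, reflection, knowledge_base, threshold=0.5):
    """Rectify a neural (System 1) output using its reflection vector.

    output         : list of predicted symbols
    reflection     : floats in [0, 1]; high values flag likely errors
    knowledge_base : callable returning True iff an output is consistent
    """
    flagged = [i for i, r in enumerate(reflection) if r > threshold]
    if not flagged and knowledge_base(output):
        return list(output)  # intuitive answer is already consistent
    return abduce(output, flagged, knowledge_base)
```

In a realistic setting the brute-force search would be replaced by a symbolic solver over the domain's constraints; the point of the sketch is only the division of labor: the reflection vector limits abduction to the suspect positions, which is where the claimed efficiency gain over rectifying the whole output would come from.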
Related papers
- Efficient Diffusion as Low Light Enhancer [63.789138528062225]
Reflectance-Aware Trajectory Refinement (RATR) is a simple yet effective module to refine the teacher trajectory using the reflectance component of images.
Reflectance-aware Diffusion with Distilled Trajectory (ReDDiT) is an efficient and flexible distillation framework tailored for Low-Light Image Enhancement (LLIE).
arXiv Detail & Related papers (2024-10-16T08:07:18Z) - Sparse Multitask Learning for Efficient Neural Representation of Motor Imagery and Execution [30.186917337606477]
We introduce a sparse multitask learning framework for motor imagery (MI) and motor execution (ME) tasks.
Given a dual-task CNN model for MI-ME classification, we apply a saliency-based sparsification approach to prune superfluous connections.
Our results indicate that this tailored sparsity can mitigate the overfitting problem and improve test performance with a small amount of data.
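As a hedged illustration of the saliency-based sparsification idea summarized above, the sketch below ranks connections by a saliency score and zeroes out the lowest-ranked fraction. Weight magnitude is used here as a stand-in saliency proxy; the paper's exact criterion (e.g. a gradient-based score) may differ.

```python
# Illustrative saliency-based pruning sketch; |w| stands in for the
# paper's saliency score, which is an assumption for this example.
import numpy as np

def prune_by_saliency(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with lowest saliency."""
    saliency = np.abs(weights)
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest saliency value acts as the pruning threshold.
    threshold = np.partition(saliency.ravel(), k - 1)[k - 1]
    mask = saliency > threshold  # keep only strictly larger saliencies
    return weights * mask
```

One-shot pruning like this is only the simplest variant; iterative prune-and-retrain schedules typically preserve accuracy better at high sparsity levels.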
arXiv Detail & Related papers (2023-12-10T09:06:16Z) - Neuro-symbolic model for cantilever beams damage detection [0.0]
We propose a neuro-symbolic model for the detection of damages in cantilever beams based on a novel cognitive architecture.
The hybrid discriminative model is introduced under the name Logic Convolutional Neural Regressor.
arXiv Detail & Related papers (2023-05-04T13:12:39Z) - Efficient Fraud Detection Using Deep Boosting Decision Trees [8.941773715949697]
Fraud detection is to identify, monitor, and prevent potentially fraudulent activities from complex data.
Recent development and success in AI, especially machine learning, provides a new data-driven way to deal with fraud.
Deep boosting decision trees (DBDT) is a novel approach for fraud detection based on gradient boosting and neural networks.
arXiv Detail & Related papers (2023-02-12T14:02:58Z) - Neuro-symbolic Explainable Artificial Intelligence Twin for Zero-touch IoE in Wireless Network [61.90504487270785]
Explainable artificial intelligence (XAI) twin systems will be a fundamental enabler of zero-touch network and service management (ZSM).
A reliable XAI twin system for ZSM requires two composites: an extreme analytical ability for discretizing the physical behavior of the Internet of Everything (IoE) and rigorous methods for characterizing the reasoning of such behavior.
A novel neuro-symbolic explainable artificial intelligence twin framework is proposed to enable trustworthy ZSM for a wireless IoE.
arXiv Detail & Related papers (2022-10-13T01:08:06Z) - Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z) - Neural Born Iteration Method For Solving Inverse Scattering Problems: 2D Cases [3.795881624409311]
We propose the neural Born iterative method (Neural BIM) for solving 2D inverse scattering problems (ISPs).
Neural BIM employs independent convolutional neural networks (CNNs) to learn the alternate update rules of two different candidate solutions regarding the residuals.
Two different schemes are presented in this paper, including the supervised and unsupervised learning schemes.
arXiv Detail & Related papers (2021-12-18T03:22:41Z) - Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z) - Feature Purification: How Adversarial Training Performs Robust Deep Learning [66.05472746340142]
We present a principle we call Feature Purification: one cause of the existence of adversarial examples is the accumulation of certain small dense mixtures in the hidden weights during the training of a neural network.
We present both experiments on the CIFAR-10 dataset illustrating this principle, and a theoretical result proving that, for certain natural classification tasks, training a two-layer neural network with ReLU activation by gradient descent from random initialization indeed exhibits this principle.
arXiv Detail & Related papers (2020-05-20T16:56:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.