Generating by Understanding: Neural Visual Generation with Logical Symbol Groundings
- URL: http://arxiv.org/abs/2310.17451v4
- Date: Thu, 05 Jun 2025 03:24:20 GMT
- Title: Generating by Understanding: Neural Visual Generation with Logical Symbol Groundings
- Authors: Yifei Peng, Zijie Zha, Yu Jin, Zhexu Luo, Wang-Zhou Dai, Zhong Ren, Yao-Xiang Ding, Kun Zhou
- Abstract summary: We propose the Abductive visual Generation (AbdGen) approach to build such logic-integrated models. We experimentally show that our approach can be utilized to integrate various neural generative models with logical reasoning systems.
- Score: 23.85885099230917
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Making neural visual generative models controllable by logical reasoning systems is promising for improving faithfulness, transparency, and generalizability. We propose the Abductive visual Generation (AbdGen) approach to build such logic-integrated models. A vector-quantized symbol grounding mechanism and the corresponding disentanglement training method are introduced to enhance the controllability of logical symbols over generation. Furthermore, we propose two logical abduction methods that allow our approach to require little labeled training data and to support the induction of latent logical generative rules from data. We experimentally show that our approach can be utilized to integrate various neural generative models with logical reasoning systems, both by learning from scratch and by utilizing pre-trained models directly. The code is released at https://github.com/future-item/AbdGen.
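The vector-quantized grounding idea can be illustrated with a minimal sketch (the symbol names, dimensions, and nearest-code rule below are illustrative assumptions, not the authors' implementation): each logical symbol is tied to a codebook vector, and a continuous encoder feature is snapped to its nearest code, which grounds the feature to a discrete symbol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: one vector per logical symbol (learned in practice).
symbols = ["red", "green", "blue"]
codebook = rng.normal(size=(len(symbols), 8))  # 3 symbols, 8-dim codes

def ground(feature):
    """Snap a continuous encoder feature to its nearest codebook entry,
    returning the grounded symbol and the quantized vector."""
    dists = np.linalg.norm(codebook - feature, axis=1)
    idx = int(np.argmin(dists))
    return symbols[idx], codebook[idx]

# A feature lying near the "green" code grounds to the symbol "green".
feature = codebook[1] + 0.01 * rng.normal(size=8)
symbol, quantized = ground(feature)
print(symbol)  # -> green
```

The quantized vector, rather than the raw feature, is what a downstream generator would consume, so the symbol assignment directly controls generation.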
Related papers
- To Neuro-Symbolic Classification and Beyond by Compiling Description Logic Ontologies to Probabilistic Circuits [13.179785809195955]
We develop a neuro-symbolic method that reliably outputs predictions consistent with a Description Logic ontology. We show that our neuro-symbolic classifiers reliably produce consistent predictions when compared to neural network baselines.
arXiv Detail & Related papers (2026-01-21T11:30:14Z) - Meta-Representational Predictive Coding: Biomimetic Self-Supervised Learning [51.22185316175418]
We present a new form of predictive coding that we call meta-representational predictive coding (MPC). MPC sidesteps the need for learning a generative model of sensory input by learning to predict representations of sensory input across parallel streams.
arXiv Detail & Related papers (2025-03-22T22:13:14Z) - Pre-Training Meta-Rule Selection Policy for Visual Generative Abductive Learning [24.92602845948049]
We propose a pre-training method for obtaining a meta-rule selection policy for the visual generative learning approach AbdGen.
The pre-training process is done on pure symbol data, not involving symbol grounding learning of raw visual inputs.
Our method is able to effectively address the meta-rule selection problem for visual abduction, boosting the efficiency of visual generative abductive learning.
arXiv Detail & Related papers (2025-03-09T03:41:11Z) - Distilling Symbolic Priors for Concept Learning into Neural Networks [9.915299875869046]
We show that inductive biases can be instantiated in artificial neural networks by distilling a prior distribution from a symbolic Bayesian model via meta-learning.
We use this approach to create a neural network with an inductive bias towards concepts expressed as short logical formulas.
arXiv Detail & Related papers (2024-02-10T20:06:26Z) - Symbol-LLM: Leverage Language Models for Symbolic System in Visual Human Activity Reasoning [58.5857133154749]
We propose a new symbolic system with broad-coverage symbols and rational rules.
We leverage the recent advancement of LLMs as an approximation of the two ideal properties.
Our method shows superiority in extensive activity understanding tasks.
arXiv Detail & Related papers (2023-11-29T05:27:14Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
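The fuzzy relaxation mentioned above can be sketched in a few lines (a generic product t-norm relaxation for illustration, not LOGICSEG's exact formulation; the rule and probabilities are hypothetical): Boolean connectives are replaced by differentiable operations on predicted probabilities, so a logical formula becomes a loss term that trains the network.

```python
import torch

# Product t-norm relaxation of Boolean connectives over probabilities in [0, 1].
def f_and(a, b):
    return a * b

def f_or(a, b):
    return a + b - a * b

def f_implies(a, b):  # a -> b  ==  (not a) or b
    return f_or(1.0 - a, b)

# Hypothetical rule: "cat(x) -> animal(x)". With predicted probabilities,
# the rule's truth value is differentiable, so (1 - truth) works as a loss.
p_cat = torch.tensor(0.9, requires_grad=True)
p_animal = torch.tensor(0.4, requires_grad=True)

truth = f_implies(p_cat, p_animal)
loss = 1.0 - truth
loss.backward()  # gradient descent raises p_animal and lowers p_cat
```

Here loss = p_cat * (1 - p_animal), so minimizing it drives the predictions toward satisfying the rule, which is the sense in which logical formulae grounded onto the computational graph induce network training.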
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z) - Injecting Logical Constraints into Neural Networks via Straight-Through Estimators [5.6613898352023515]
Injecting discrete logical constraints into neural network learning is one of the main challenges in neuro-symbolic AI.
We find that a straight-through-estimator, a method introduced to train binary neural networks, could effectively be applied to incorporate logical constraints into neural network learning.
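The straight-through trick can be illustrated with a short sketch (a generic version, not the paper's code; the "exactly one bit on" constraint is a hypothetical example): the forward pass applies a hard 0/1 threshold so the logical constraint sees discrete values, while the backward pass pretends the threshold was the identity, letting gradients flow.

```python
import torch

class Binarize(torch.autograd.Function):
    """Straight-through estimator: hard threshold forward, identity backward."""
    @staticmethod
    def forward(ctx, x):
        return (x > 0.5).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # pass gradients straight through the threshold

logits = torch.tensor([0.2, 0.7, 0.9], requires_grad=True)
bits = Binarize.apply(logits)      # forward pass yields [0., 1., 1.]

# Hypothetical logical constraint "exactly one bit is on", enforced as a loss.
loss = (bits.sum() - 1.0) ** 2     # violated here, so the loss is nonzero
loss.backward()                    # gradients reach `logits` despite the hard step
```

Without the straight-through backward pass, the thresholding would have zero gradient almost everywhere and the constraint could not influence training.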
arXiv Detail & Related papers (2023-07-10T05:12:05Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - LogiGAN: Learning Logical Reasoning via Adversarial Pre-training [58.11043285534766]
We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.
Inspired by the facilitation effect of reflective thinking in human learning, we simulate the learning-thinking process with an adversarial Generator-Verifier architecture.
Both base and large size language models pre-trained with LogiGAN demonstrate obvious performance improvement on 12 datasets.
arXiv Detail & Related papers (2022-05-18T08:46:49Z) - Enhancing Neural Mathematical Reasoning by Abductive Combination with Symbolic Library [5.339286921277565]
This paper demonstrates that certain mathematical reasoning abilities can be achieved through abductive combination with discrete systems that have been programmed with human knowledge.
On a mathematical reasoning dataset, we adopt the recently proposed abductive learning framework, and propose the ABL-Sym algorithm that combines the Transformer models with a symbolic mathematics library.
arXiv Detail & Related papers (2022-03-28T04:19:39Z) - Neural-Symbolic Integration for Interactive Learning and Conceptual Grounding [1.14219428942199]
We propose neural-symbolic integration for abstract concept explanation and interactive learning.
Interaction with the user confirms or rejects a revision of the neural model.
The approach is illustrated using the Logic Tensor Network framework alongside Concept Activation Vectors and applied to a Convolutional Neural Network.
arXiv Detail & Related papers (2021-12-22T11:24:48Z) - Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z) - pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z) - Abductive Knowledge Induction From Raw Data [12.868722327487752]
We present Abductive Meta-Interpretive Learning (Meta_Abd), which unites abduction and induction to jointly learn neural networks and induce logic theories from raw data.
Experimental results demonstrate that Meta_Abd outperforms the compared systems in predictive accuracy and data efficiency.
arXiv Detail & Related papers (2020-10-07T16:33:28Z) - Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.