Generating by Understanding: Neural Visual Generation with Logical
Symbol Groundings
- URL: http://arxiv.org/abs/2310.17451v2
- Date: Mon, 5 Feb 2024 02:50:49 GMT
- Title: Generating by Understanding: Neural Visual Generation with Logical
Symbol Groundings
- Authors: Yifei Peng, Yu Jin, Zhexu Luo, Yao-Xiang Ding, Wang-Zhou Dai, Zhong
Ren, Kun Zhou
- Abstract summary: We propose a neurosymbolic learning approach, Abductive visual Generation (AbdGen), for integrating logic programming systems with neural visual generative models.
Results show that compared to the baseline approaches, AbdGen requires significantly less labeled data for symbol assignment.
AbdGen can effectively learn underlying logical generative rules from data, which is beyond the capability of existing approaches.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the great success of neural visual generative models in recent years,
integrating them with strong symbolic reasoning systems remains a challenging
task. There are two levels of symbol grounding problems among the core
challenges: the first is symbol assignment, i.e. mapping latent factors of
neural visual generators to semantically meaningful symbolic factors from the
reasoning systems by learning from limited labeled data. The second is rule
learning, i.e. learning new rules that govern the generative process to enhance
the symbolic reasoning systems. To deal with these two problems, we propose a
neurosymbolic learning approach, Abductive visual Generation (AbdGen), for
integrating logic programming systems with neural visual generative models
based on the abductive learning framework. To achieve reliable and efficient
symbol grounding, the quantized abduction method is introduced for generating
abduction proposals by the nearest-neighbor lookup within semantic codebooks.
To achieve precise rule learning, the contrastive meta-abduction method is
proposed to eliminate wrong rules with positive cases and avoid less
informative rules with negative cases simultaneously. Experimental results show
that compared to the baseline approaches, AbdGen requires significantly less
labeled data for symbol assignment. Furthermore, AbdGen can effectively learn
underlying logical generative rules from data, which is beyond the capability
of existing approaches. The code is released at this link:
https://github.com/candytalking/AbdGen.
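The quantized abduction step described above, generating abduction proposals via nearest-neighbor lookup within semantic codebooks, can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a VQ-VAE-style learned codebook, and all names and values are illustrative.

```python
import numpy as np

def quantize_to_codebook(latents, codebook):
    """Map each continuous latent vector to its nearest codebook entry.

    latents:  (n, d) array of encoder outputs.
    codebook: (k, d) array of learned semantic code vectors.
    Returns the (n,) indices of the nearest codes and the (n, d)
    quantized vectors, i.e. candidate symbol groundings.
    """
    # Squared Euclidean distance between every latent and every code,
    # computed via broadcasting: (n, 1, d) - (1, k, d) -> (n, k, d).
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)  # nearest-neighbor lookup per latent
    return idx, codebook[idx]

# Toy usage: three latents against a four-entry codebook in 2-D.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
latents = np.array([[0.1, 0.1], [0.9, 0.2], [0.4, 0.9]])
idx, quantized = quantize_to_codebook(latents, codebook)
print(idx.tolist())  # [0, 1, 2]: each latent snapped to its closest code
```

In the paper's framework, the resulting discrete indices would serve as symbol-assignment proposals to be checked by the logic programming system, rather than as final groundings.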
Related papers
- Symbol-LLM: Leverage Language Models for Symbolic System in Visual Human
Activity Reasoning [58.5857133154749]
We propose a new symbolic system with broad-coverage symbols and rational rules.
We leverage recent advances in LLMs as an approximation of the two ideal properties.
Our method shows superiority in extensive activity understanding tasks.
arXiv Detail & Related papers (2023-11-29T05:27:14Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- Enhancing Neural Mathematical Reasoning by Abductive Combination with
Symbolic Library [5.339286921277565]
This paper demonstrates that some abilities can be achieved through abductive combination with discrete systems that have been programmed with human knowledge.
On a mathematical reasoning dataset, we adopt the recently proposed abductive learning framework, and propose the ABL-Sym algorithm that combines the Transformer models with a symbolic mathematics library.
arXiv Detail & Related papers (2022-03-28T04:19:39Z)
- Improving Coherence and Consistency in Neural Sequence Models with
Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Abductive Knowledge Induction From Raw Data [12.868722327487752]
We present Abductive Meta-Interpretive Learning ($Meta_Abd$) that unites abduction and induction to learn neural networks and induce logic theories jointly from raw data.
Experimental results demonstrate that $Meta_Abd$ outperforms the compared systems in both predictive accuracy and data efficiency.
arXiv Detail & Related papers (2020-10-07T16:33:28Z)
- Closed Loop Neural-Symbolic Learning via Integrating Neural Perception,
Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.