Pre-Training Meta-Rule Selection Policy for Visual Generative Abductive Learning
- URL: http://arxiv.org/abs/2503.06427v1
- Date: Sun, 09 Mar 2025 03:41:11 GMT
- Title: Pre-Training Meta-Rule Selection Policy for Visual Generative Abductive Learning
- Authors: Yu Jin, Jingming Liu, Zhexu Luo, Yifei Peng, Ziang Qin, Wang-Zhou Dai, Yao-Xiang Ding, Kun Zhou
- Abstract summary: We propose a pre-training method for obtaining a meta-rule selection policy for the visual generative learning approach AbdGen. The pre-training process is done on pure symbol data and does not involve symbol-grounding learning from raw visual inputs. Our method effectively addresses the meta-rule selection problem for visual abduction, boosting the efficiency of visual generative abductive learning.
- Score: 24.92602845948049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual generative abductive learning studies jointly training a symbol-grounded neural visual generator and inducing logic rules from data, such that after learning, the visual generation process is guided by the induced logic rules. A major challenge for this task is reducing the time cost of logic abduction during learning, an essential step when the logic symbol set is large and the logic rule to induce is complicated. To address this challenge, we propose a pre-training method for obtaining a meta-rule selection policy for the recently proposed visual generative learning approach AbdGen [Peng et al., 2023], aiming to significantly reduce the candidate meta-rule set and prune the search space. The selection model is built on embedding representations of both the symbol groundings of cases and the meta-rules, and can be effectively integrated with both the neural model and the logic reasoning system. The pre-training process is done on pure symbol data, without symbol-grounding learning from raw visual inputs, making the entire learning process low-cost. An additional interesting observation is that the selection policy can rectify symbol grounding errors unseen during pre-training, which results from the memorization ability of the attention mechanism and the relative stability of symbolic patterns. Experimental results show that our method effectively addresses the meta-rule selection problem for visual abduction, boosting the efficiency of visual generative abductive learning. Code is available at https://github.com/future-item/metarule-select.
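As a rough illustration of the selection model described in the abstract (embeddings of case symbol groundings and of meta-rules, combined through attention and pre-trained on pure symbol data), here is a minimal PyTorch sketch. The module layout, dimensions, and training target are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Minimal sketch: score candidate meta-rules from symbol-grounded cases.
# All names, sizes, and the attention-based scorer are illustrative assumptions.
import torch
import torch.nn as nn

class MetaRuleSelector(nn.Module):
    def __init__(self, num_symbols, num_meta_rules, dim=64):
        super().__init__()
        self.symbol_emb = nn.Embedding(num_symbols, dim)    # grounded-symbol embeddings
        self.rule_emb = nn.Embedding(num_meta_rules, dim)   # meta-rule embeddings
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, symbol_ids):
        # symbol_ids: (batch, seq_len) integer symbol groundings of a case
        x = self.symbol_emb(symbol_ids)                                         # (B, L, D)
        q = self.rule_emb.weight.unsqueeze(0).expand(symbol_ids.size(0), -1, -1)  # (B, R, D)
        ctx, _ = self.attn(q, x, x)              # each meta-rule attends over the case
        return (ctx * q).sum(-1)                 # (B, R) rule-case compatibility scores

# Pre-training on pure symbol data: supervise with the meta-rule that abduction
# actually used for each symbolic case, then keep only the top-k rules at test time.
model = MetaRuleSelector(num_symbols=20, num_meta_rules=8)
cases = torch.randint(0, 20, (32, 10))          # toy symbolic cases
labels = torch.randint(0, 8, (32,))             # toy "correct meta-rule" targets
loss = nn.functional.cross_entropy(model(cases), labels)
loss.backward()
topk = model(cases).topk(k=3, dim=-1).indices   # pruned candidate meta-rule set
```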
Related papers
- Differentiable Logic Programming for Distant Supervision [4.820391833117535]
We introduce a new method for integrating neural networks with logic programming in Neural-Symbolic AI (NeSy).
Unlike prior methods, our approach does not depend on symbolic solvers for reasoning about missing labels.
This method facilitates more efficient learning under distant supervision.
arXiv Detail & Related papers (2024-08-22T17:55:52Z) - Generating by Understanding: Neural Visual Generation with Logical Symbol Groundings [26.134405924834525]
We propose a neurosymbolic learning approach, Abductive visual Generation (AbdGen), for integrating logic programming systems with neural visual generative models.
Results show that compared to the baseline approaches, AbdGen requires significantly less labeled data for symbol assignment.
AbdGen can effectively learn underlying logical generative rules from data, which is out of the capability of existing approaches.
arXiv Detail & Related papers (2023-10-26T15:00:21Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
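As a generic illustration of the neural-to-symbolic interface mentioned above (a network mapping raw data to latent symbolic concepts that a symbolic learner then consumes), here is a small sketch. The toy network, the symbol set, and the stub learner are assumptions; NeuralFastLAS's actual rule learner and joint training objective are not reproduced here.

```python
# Generic neural-to-symbolic interface sketch (assumptions throughout).
import torch
import torch.nn as nn

perception = nn.Sequential(              # raw pixels -> logits over symbolic concepts
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),                  # e.g. 10 candidate symbols
)

def symbolic_learner(examples):
    # Stand-in for a rule learner that consumes symbolic examples (e.g. ILP/ASP facts).
    return [f"digit({s})." for s in examples]

images = torch.randn(4, 1, 28, 28)       # toy raw data
probs = perception(images).softmax(-1)   # latent symbolic concepts, differentiable
symbols = probs.argmax(-1).tolist()      # discrete symbols handed to the symbolic learner
print(symbolic_learner(symbols))
```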
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
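To make the fuzzy relaxation idea concrete, here is a minimal sketch of turning a hard rule such as car(x) -> vehicle(x) into a differentiable penalty on predicted probabilities using a product t-norm. This is a common relaxation choice and may differ from LOGICSEG's exact formulation.

```python
# Hard rule "car(x) -> vehicle(x)" relaxed into a differentiable loss term (sketch).
import torch

def implication_loss(p_antecedent, p_consequent):
    # Product-fuzzy truth of (a -> b) is 1 - a*(1 - b); penalize its violation.
    return (p_antecedent * (1.0 - p_consequent)).mean()

# Toy per-pixel probabilities for two classes in a label hierarchy.
p_car = torch.rand(8, 32, 32, requires_grad=True)
p_vehicle = torch.rand(8, 32, 32, requires_grad=True)

loss = implication_loss(p_car, p_vehicle)  # low when car-pixels are also vehicle-pixels
loss.backward()                            # gradients flow into the network producing p_*
```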
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search [63.3745291252038]
We propose DiffSES, a novel symbolic learning approach that discovers discrete symbolic policies.
By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions.
Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods.
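For intuition, this is what a discrete symbolic policy over object-level abstractions can look like once discovered. The expression and feature names below are hand-written assumptions for illustration; DiffSES's automated, differentiable expression search is not reproduced here.

```python
# Hand-written example of a discovered symbolic policy (illustration only).
def symbolic_policy(objects):
    # objects: dict of object-level features extracted from the frame (an assumption).
    dx = objects["ball_x"] - objects["paddle_x"]
    return "RIGHT" if dx > 0 else "LEFT"    # interpretable, inspectable decision rule

print(symbolic_policy({"ball_x": 42.0, "paddle_x": 30.0}))  # -> RIGHT
```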
arXiv Detail & Related papers (2022-12-30T17:50:54Z) - MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning [63.50909998372667]
We propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text.
Two novel strategies serve as indispensable components of our method.
arXiv Detail & Related papers (2022-03-01T11:13:00Z) - Improving exploration in policy gradient search: Application to symbolic
optimization [6.344988093245026]
Many machine learning strategies leverage neural networks to search large spaces of mathematical symbols.
In contrast to traditional evolutionary approaches, using a neural network at the core of the search allows learning higher-level symbolic patterns.
We show that these techniques can improve the performance, increase sample efficiency, and lower the complexity of solutions for the task of symbolic regression.
arXiv Detail & Related papers (2021-07-19T21:11:07Z) - pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
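As a hedged sketch of a differentiable layer from which rules can be extracted, the following soft-conjunction module uses signed weights over input atoms and reads a rule off by thresholding them. It is one generic construction of such a layer, not pix2rule's exact semi-symbolic layer.

```python
# Differentiable conjunction with extractable rule (generic sketch, assumptions only).
import torch
import torch.nn as nn

class SoftConjunction(nn.Module):
    def __init__(self, num_atoms):
        super().__init__()
        self.w = nn.Parameter(torch.randn(num_atoms) * 0.1)   # signed atom weights

    def forward(self, atoms):                 # atoms in [0, 1], shape (batch, num_atoms)
        signed = torch.tanh(self.w)
        pos = signed.clamp(min=0) * atoms             # positive literals
        neg = (-signed).clamp(min=0) * (1.0 - atoms)  # negated literals
        lits = 1.0 - signed.abs() + pos + neg         # ~1 when satisfied or unused
        return lits.prod(dim=-1)                      # soft AND over all atoms

    def extract_rule(self, names, thresh=0.5):
        signed = torch.tanh(self.w)
        return [("not " if s < 0 else "") + n
                for n, s in zip(names, signed.tolist()) if abs(s) > thresh]

layer = SoftConjunction(num_atoms=3)
out = layer(torch.tensor([[1.0, 0.0, 1.0]]))          # differentiable truth value in [0, 1]
print(layer.extract_rule(["red(x)", "big(x)", "round(x)"]))
```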
arXiv Detail & Related papers (2021-06-14T15:19:06Z) - Abductive Knowledge Induction From Raw Data [12.868722327487752]
We present Abductive Meta-Interpretive Learning ($Meta_{Abd}$), which unites abduction and induction to learn neural networks and induce logic theories jointly from raw data.
Experimental results demonstrate that $Meta_{Abd}$ outperforms the compared systems in predictive accuracy and data efficiency.
arXiv Detail & Related papers (2020-10-07T16:33:28Z) - Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm that mimics the top-down, human-like learning procedure to propagate the error; a toy sketch of this idea follows the entry.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
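To illustrate the back-search idea on a toy case: when symbolic reasoning over the perceived symbols contradicts the supervision, search for a small correction of the symbols that satisfies it, and reuse the corrected symbols as pseudo-labels for the perception network. The one-symbol arithmetic search below is an assumption for illustration only and is far simpler than the paper's algorithm.

```python
# Toy one-step back-search over perceived arithmetic symbols (illustration only).
def back_search(perceived, target, vocab="0123456789+-"):
    # perceived: list of symbol predictions, e.g. ["3", "+", "5"]; target: desired value.
    for i in range(len(perceived)):
        for sym in vocab:
            candidate = perceived[:i] + [sym] + perceived[i + 1:]
            try:
                if eval("".join(candidate)) == target:   # symbolic reasoning step
                    return candidate                     # pseudo-label for perception
            except SyntaxError:
                pass
    return None

print(back_search(["3", "+", "5"], target=9))   # -> ['4', '+', '5']
```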