Abstracting Concept-Changing Rules for Solving Raven's Progressive
Matrix Problems
- URL: http://arxiv.org/abs/2307.07734v1
- Date: Sat, 15 Jul 2023 07:16:38 GMT
- Title: Abstracting Concept-Changing Rules for Solving Raven's Progressive
Matrix Problems
- Authors: Fan Shi, Bin Li, Xiangyang Xue
- Abstract summary: Raven's Progressive Matrix (RPM) is a classic test of abstract reasoning ability in machine intelligence, posed as selecting the correct answer from a set of candidates.
Recent studies suggest that solving RPM by generating answers fosters a deeper understanding of rules.
We propose a deep latent variable model for Concept-changing Rule ABstraction (CRAB) by learning interpretable concepts and parsing concept-changing rules in the latent space.
- Score: 54.26307134687171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The abstract visual reasoning ability of human intelligence aids in
discovering underlying rules in novel environments. Raven's Progressive
Matrix (RPM) is a classic test of such ability in machine intelligence,
posed as selecting the correct answer from a set of candidates. Recent studies
suggest that solving RPM by generating answers fosters a deeper
understanding of rules. However,
existing generative solvers cannot discover the global concept-changing rules
without auxiliary supervision (e.g., rule annotations and distractors in
candidate sets). To this end, we propose a deep latent variable model for
Concept-changing Rule ABstraction (CRAB) by learning interpretable concepts and
parsing concept-changing rules in the latent space. Through an iterative
learning process, CRAB automatically abstracts the global rules shared across the
dataset for each concept and forms learnable prior knowledge of global rules. CRAB
outperforms the baselines trained without auxiliary supervision in the
arbitrary-position answer generation task and achieves comparable or even
higher accuracy than models trained with auxiliary supervision.
Finally, we conduct experiments to illustrate the interpretability of CRAB in
concept learning, answer selection, and global rule abstraction.
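As a toy illustration of the general idea of abstracting concept-changing rules (not CRAB's actual latent-variable architecture), per-concept rules such as constancy and arithmetic progression can be abstracted from the complete rows of an RPM-like matrix and then applied to generate the missing panel. The rule names, encoding, and helper functions below are hypothetical:

```python
# Toy sketch: panels are tuples of per-concept attribute values; we abstract
# the rule that holds on each concept and generate the missing ninth panel.
RULES = {
    "constant":    lambda a, b: b,          # value stays the same
    "progression": lambda a, b: 2 * b - a,  # constant step a -> b -> c
}

def abstract_rule(row):
    """Return the name of the rule a complete row (a, b, c) obeys."""
    a, b, c = row
    for name, step in RULES.items():
        if step(a, b) == c:
            return name
    return None

def generate_answer(matrix):
    """matrix: 3 rows x 3 panels x n concepts; the last panel is missing."""
    answer = []
    n_concepts = len(matrix[0][0])
    for k in range(n_concepts):
        # Abstract the rule for concept k from the two complete rows.
        rules = {abstract_rule([row[i][k] for i in range(3)]) for row in matrix[:2]}
        assert len(rules) == 1, "rows disagree on the rule for this concept"
        rule = RULES[rules.pop()]
        a, b = matrix[2][0][k], matrix[2][1][k]
        answer.append(rule(a, b))
    return answer

# Concept 0 (e.g. size) follows progression; concept 1 (e.g. color) is constant.
matrix = [
    [(1, 5), (2, 5), (3, 5)],
    [(2, 7), (3, 7), (4, 7)],
    [(4, 9), (5, 9)],          # third row, answer panel missing
]
print(generate_answer(matrix))  # -> [6, 9]
```

A generative solver must do this without rule annotations; the sketch only shows what "abstracting a global rule per concept" means at the symbolic level.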
Related papers
- Incorporating Expert Rules into Neural Networks in the Framework of
Concept-Based Learning [2.9370710299422598]
It is proposed to combine logical rules with neural networks that predict concept probabilities.
We provide several approaches for solving the stated problem and for training neural networks.
The code of proposed algorithms is publicly available.
arXiv Detail & Related papers (2024-02-22T17:33:49Z)
- Towards Generative Abstract Reasoning: Completing Raven's Progressive Matrix via Rule Abstraction and Selection [52.107043437362556]
Raven's Progressive Matrix (RPM) is widely used to probe abstract visual reasoning in machine intelligence.
Participants in RPM tests can demonstrate powerful reasoning ability by inferring and combining attribute-changing rules.
We propose a deep latent variable model for answer generation problems through Rule AbstractIon and SElection.
arXiv Detail & Related papers (2024-01-18T13:28:44Z)
- Interpretable Neural-Symbolic Concept Reasoning [7.1904050674791185]
Concept-based models aim to address this issue by learning tasks based on a set of human-understandable concepts.
We propose the Deep Concept Reasoner (DCR), the first interpretable concept-based model that builds upon concept embeddings.
arXiv Detail & Related papers (2023-04-27T09:58:15Z)
- RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called RulE (Rule Embedding) to leverage logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations, and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
arXiv Detail & Related papers (2022-10-24T06:47:13Z)
- elBERto: Self-supervised Commonsense Learning for Question Answering [131.51059870970616]
We propose a Self-supervised Bidirectional Representation Learning of Commonsense framework, which is compatible with off-the-shelf QA model architectures.
The framework comprises five self-supervised tasks to force the model to fully exploit the additional training signals from contexts containing rich commonsense.
elBERto achieves substantial improvements on out-of-paragraph and no-effect questions where simple lexical similarity comparison does not help.
arXiv Detail & Related papers (2022-03-17T16:23:45Z)
- DAReN: A Collaborative Approach Towards Reasoning And Disentangling [27.50150027974947]
We propose an end-to-end joint representation-reasoning learning framework, which leverages a weak form of inductive bias to improve both tasks together.
We accomplish this using a novel learning framework Disentangling based Abstract Reasoning Network (DAReN) based on the principles of GM-RPM.
arXiv Detail & Related papers (2021-09-27T16:10:30Z)
- Unsupervised Discriminative Embedding for Sub-Action Learning in Complex Activities [54.615003524001686]
This paper proposes a novel approach for unsupervised sub-action learning in complex activities.
The proposed method maps both visual and temporal representations to a latent space where the sub-actions are learnt discriminatively.
We show that the proposed combination of visual-temporal embedding and discriminative latent concepts allows learning robust action representations in an unsupervised setting.
arXiv Detail & Related papers (2021-04-30T20:07:27Z)
- Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind: Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z)
- Raven's Progressive Matrices Completion with Latent Gaussian Process Priors [42.310737373877714]
Raven's Progressive Matrices (RPM) are widely used in human IQ tests.
We propose a deep latent variable model, in which multiple Gaussian processes are employed as priors of latent variables.
We evaluate the proposed model on RPM-like datasets with multiple continuously-changing visual concepts.
arXiv Detail & Related papers (2021-03-22T17:48:44Z)
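A minimal sketch of the kind of prior that last paper employs, shown with an assumed RBF kernel over nine panel positions rather than the paper's actual model: a Gaussian process prior makes latent draws vary smoothly, matching continuously-changing visual concepts.

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0):
    """Squared-exponential covariance between all pairs of inputs."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(0.0, 2.0, 9)           # one input per RPM panel position
K = rbf_kernel(x) + 1e-8 * np.eye(9)   # jitter keeps the draw numerically stable
rng = np.random.default_rng(0)
sample = rng.multivariate_normal(np.zeros(9), K)  # one smooth draw from the GP prior
print(sample.shape)  # -> (9,)
```

Shorter lengthscales make neighboring panels less correlated; a deep latent variable model would place such a prior over latent concept trajectories rather than over pixels.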
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.