Abstract Visual Reasoning: An Algebraic Approach for Solving Raven's
Progressive Matrices
- URL: http://arxiv.org/abs/2303.11730v1
- Date: Tue, 21 Mar 2023 10:34:39 GMT
- Title: Abstract Visual Reasoning: An Algebraic Approach for Solving Raven's
Progressive Matrices
- Authors: Jingyi Xu, Tushar Vaidya, Yufei Wu, Saket Chandra, Zhangsheng Lai, Kai
Fong Ernest Chong
- Abstract summary: We introduce algebraic machine reasoning, a new reasoning framework that is well-suited for abstract reasoning.
Our framework is able to select the correct answer from a given answer set, and also able to generate the correct answer with only the question matrix given.
Experiments on the I-RAVEN dataset yield an overall $93.2\%$ accuracy, which significantly outperforms the current state-of-the-art accuracy of $77.0\%$ and exceeds human performance at $84.4\%$ accuracy.
- Score: 9.092255360641163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce algebraic machine reasoning, a new reasoning framework that is
well-suited for abstract reasoning. Effectively, algebraic machine reasoning
reduces the difficult process of novel problem-solving to routine algebraic
computation. The fundamental algebraic objects of interest are the ideals of
some suitably initialized polynomial ring. We shall explain how solving Raven's
Progressive Matrices (RPMs) can be realized as computational problems in
algebra, which combine various well-known algebraic subroutines that include:
Computing the Gr\"obner basis of an ideal, checking for ideal containment, etc.
Crucially, the additional algebraic structure satisfied by ideals allows for
more operations on ideals beyond set-theoretic operations.
Our algebraic machine reasoning framework is not only able to select the
correct answer from a given answer set, but also able to generate the correct
answer with only the question matrix given. Experiments on the I-RAVEN dataset
yield an overall $93.2\%$ accuracy, which significantly outperforms the current
state-of-the-art accuracy of $77.0\%$ and exceeds human performance at $84.4\%$
accuracy.
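The algebraic subroutines named in the abstract are standard and can be tried in any computer algebra system. Below is a minimal sketch (not the paper's implementation, and the ideal is a toy chosen purely for illustration) of Gröbner basis computation and ideal membership using SymPy:

```python
from sympy import symbols, groebner

x, y = symbols("x y")

# A toy ideal J = <x**2 + y, x*y> in Q[x, y], given by its generators.
G = groebner([x**2 + y, x*y], x, y, order="lex")

# Ideal membership: f lies in J iff f reduces to zero modulo a
# Groebner basis of J; SymPy exposes this as G.contains(f).
print(G.contains(x**3))  # True: x**3 = x*(x**2 + y) - x*y
print(G.contains(x))     # False: x does not reduce to zero
```

Checking ideal containment $J \subseteq I$, the other subroutine mentioned, reduces to testing that every generator of $J$ is contained in $I$ by the same membership test.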
Related papers
- Alpay Algebra: A Universal Structural Foundation [0.0]
Alpay Algebra is introduced as a universal, category-theoretic framework.
It unifies classical algebraic structures with modern needs in symbolic recursion and explainable AI.
arXiv Detail & Related papers (2025-05-21T10:18:49Z) - Thinkless: LLM Learns When to Think [57.857534644932194]
Reasoning Language Models, capable of extended chain-of-thought reasoning, have demonstrated remarkable performance on tasks requiring complex logical inference.
We propose Thinkless, a learnable framework that empowers an LLM to adaptively select between short-form and long-form reasoning.
On several benchmarks such as Minerva Algebra, MATH-500, and GSM8K, Thinkless is able to reduce the usage of long-chain thinking by 50%-90%.
arXiv Detail & Related papers (2025-05-19T17:24:16Z) - Irreducible matrix representations for the walled Brauer algebra [0.9374652839580183]
This paper investigates the representation theory of the algebra of partially transposed permutation operators, $\mathcal{A}_d^{p,p}$.
It provides a matrix representation for the abstract walled Brauer algebra.
This algebra has recently gained significant attention due to its relevance in quantum information theory.
arXiv Detail & Related papers (2025-01-22T18:22:20Z) - MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time [51.5039731721706]
MindStar is a purely inference-based searching method for large language models.
It formulates reasoning tasks as searching problems and proposes two search ideas to identify the optimal reasoning paths.
It significantly enhances the reasoning abilities of open-source models, such as Llama-2-13B and Mistral-7B, and achieves comparable performance to GPT-3.5 and Grok-1.
arXiv Detail & Related papers (2024-05-25T15:07:33Z) - CoLA: Exploiting Compositional Structure for Automatic and Efficient
Numerical Linear Algebra [62.37017125812101]
We propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA.
By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory and runtime efficient numerical algorithms.
We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
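The operator-abstraction-plus-dispatch idea can be illustrated with a rough sketch (this is not CoLA's actual API; the class and function names here are hypothetical): a lazy Kronecker-product operator whose solve routine exploits its structure instead of materializing the dense matrix.

```python
import numpy as np

class Kronecker:
    """Lazy Kronecker product A (x) B: stores only the square factors."""
    def __init__(self, A, B):
        self.A, self.B = A, B

    def matvec(self, x):
        # (A (x) B) x via (A (x) B) vec(X) = vec(A X B^T) with row-major
        # vec, avoiding the dense (pq x pq) matrix entirely.
        p, q = self.A.shape[0], self.B.shape[0]
        X = x.reshape(p, q)
        return (self.A @ X @ self.B.T).reshape(-1)

def solve(op, b):
    # Compositional dispatch: Kronecker structure is solved with two
    # small solves, since (A (x) B)^(-1) = A^(-1) (x) B^(-1);
    # anything else falls back to a dense solve.
    if isinstance(op, Kronecker):
        p, q = op.A.shape[0], op.B.shape[0]
        Y = np.linalg.solve(op.B, b.reshape(p, q).T).T
        return np.linalg.solve(op.A, Y).reshape(-1)
    return np.linalg.solve(op, b)
```

The dispatch rule replaces one $O((pq)^3)$ dense solve with two solves of size $p$ and $q$, which is the kind of structure-aware saving the paper describes.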
arXiv Detail & Related papers (2023-09-06T14:59:38Z) - Improved Algorithms for Allen's Interval Algebra by Dynamic Programming
with Sublinear Partitioning [9.594432031144716]
Allen's interval algebra is one of the most well-known calculi in qualitative temporal reasoning.
We propose a novel framework for solving NP-hard qualitative reasoning problems.
We obtain a major improvement of $O^*((\frac{cn}{\log n})^n)$ for Allen's interval algebra.
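For readers unfamiliar with the calculus, the thirteen basic Allen relations between two proper intervals are determined purely by endpoint comparisons. A minimal self-contained sketch (illustrative only, not the paper's algorithm):

```python
def allen_relation(a, b):
    """Basic Allen relation of proper interval a=(a0,a1) to b=(b0,b1)."""
    (a0, a1), (b0, b1) = a, b
    if a1 < b0: return "before"
    if b1 < a0: return "after"
    if a1 == b0: return "meets"
    if b1 == a0: return "met-by"
    if a0 == b0 and a1 == b1: return "equals"
    if a0 == b0: return "starts" if a1 < b1 else "started-by"
    if a1 == b1: return "finishes" if a0 > b0 else "finished-by"
    if b0 < a0 and a1 < b1: return "during"
    if a0 < b0 and b1 < a1: return "contains"
    # Remaining cases: the intervals properly overlap.
    return "overlaps" if a0 < b0 else "overlapped-by"
```

The hard part addressed by the paper is not this pairwise classification but deciding consistency of a network of (disjunctions of) such relations, which is NP-hard.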
arXiv Detail & Related papers (2023-05-25T11:45:12Z) - Matrix diagonalization and singular value decomposition: Static SageMath
and dynamic ChatGPT juxtaposed [0.0]
In particular, we focused on (orthogonal) diagonalization and singular value decomposition (SVD).
We also offered the possibility of exploring these topics using SageMath, a free and open-source, Python-based computer algebra system (CAS).
By consolidating essential concepts in linear algebra and improving computational skills through effective practice, mastering these topics would become easier and mistakes could be minimized.
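The computations the paper explores in SageMath can be sketched equivalently in NumPy (the syntax differs from SageMath's; this is only an illustration):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Singular value decomposition: A = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(A)
assert np.allclose(U @ np.diag(s) @ Vt, A)

# Orthogonal diagonalization of the symmetric matrix A.T @ A:
# its eigenvalues are exactly the squared singular values of A.
w, Q = np.linalg.eigh(A.T @ A)
assert np.allclose(np.sort(w), np.sort(s**2))
```

The second check makes the link between the two topics explicit: diagonalizing $A^T A$ orthogonally recovers the singular values of $A$.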
arXiv Detail & Related papers (2023-03-30T05:51:27Z) - Learning Algebraic Representation for Systematic Generalization in
Abstract Reasoning [109.21780441933164]
We propose a hybrid approach to improve systematic generalization in reasoning.
We showcase a prototype with algebraic representation for the abstract spatial-temporal task of Raven's Progressive Matrices (RPM).
We show that the algebraic representation learned can be decoded by isomorphism to generate an answer.
arXiv Detail & Related papers (2021-11-25T09:56:30Z) - Efficient and Optimal Algorithms for Contextual Dueling Bandits under
Realizability [59.81339109121384]
We study the $K$-armed contextual dueling bandit problem, a sequential decision-making setting in which the learner uses contextual information to make two decisions, but only observes preference-based feedback suggesting that one decision was better than the other.
We provide a new algorithm that achieves the optimal regret rate for a new notion of best response regret, which is a strictly stronger performance measure than those considered in prior works.
arXiv Detail & Related papers (2021-11-24T07:14:57Z) - Learning Algebraic Recombination for Compositional Generalization [71.78771157219428]
We propose LeAR, an end-to-end neural model to learn algebraic recombination for compositional generalization.
Key insight is to model the semantic parsing task as a homomorphism between a latent syntactic algebra and a semantic algebra.
Experiments on two realistic and comprehensive compositional generalization benchmarks demonstrate the effectiveness of our model.
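The homomorphism idea can be illustrated with a toy example (this is not LeAR itself; the operation names are invented for illustration): interpreting syntactic terms in a semantic algebra so that the meaning of a composite term is composed from the meanings of its parts.

```python
# Semantic algebra: each syntactic operation maps to a function
# on values, so interpretation is a homomorphism from terms to values.
SEMANTICS = {"plus": lambda a, b: a + b,
             "times": lambda a, b: a * b}

def interpret(term):
    if isinstance(term, int):   # leaf: a constant denotes itself
        return term
    op, left, right = term      # node: (operation, subterm, subterm)
    # Homomorphism property: interpret(op(l, r)) = op(interpret(l),
    # interpret(r)) -- compositionality by construction.
    return SEMANTICS[op](interpret(left), interpret(right))
```

For example, `interpret(("plus", 2, ("times", 3, 4)))` evaluates the subterms first and then combines them, mirroring the latent-algebra-to-semantics mapping the paper learns.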
arXiv Detail & Related papers (2021-07-14T07:23:46Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress, having no limit cycles, and to converge after an appropriate number of gradient ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Algebras of Sets and Coherent Sets of Gambles [1.697342683039794]
We show how to construct an information algebra of coherent sets of gambles defined on general possibility spaces.
This paper also details how propositional logic is naturally embedded into the theory of imprecise probabilities.
arXiv Detail & Related papers (2021-05-27T08:14:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.