A new perspective of paramodulation complexity by solving massive 8 puzzles
- URL: http://arxiv.org/abs/2012.08231v1
- Date: Tue, 15 Dec 2020 11:47:47 GMT
- Title: A new perspective of paramodulation complexity by solving massive 8 puzzles
- Authors: Ruo Ando, Yoshiyasu Takefuji
- Abstract summary: A sliding puzzle is a combination puzzle in which a player slides pieces along certain routes on a board to reach a certain end-configuration.
It turns out that by counting the number of clauses yielded by paramodulation, we can evaluate the difficulty of each puzzle.
- Score: 0.4514386953429769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A sliding puzzle is a combination puzzle in which a player slides pieces along certain routes on a board to reach a certain end-configuration. In this paper, we propose a novel measurement of the complexity of massive sliding puzzles using paramodulation, an inference method of automated reasoning. It turns out that by counting the number of clauses yielded by paramodulation, we can evaluate the difficulty of each puzzle. In our experiment, we generated 100 8-puzzles that passed a solvability check based on counting inversions. By doing this, we can distinguish the complexity of 8-puzzles by the number of clauses generated with paramodulation. For example, board [2,3,6,1,7,8,5,4, hole] is the easiest with score 3008, and board [6,5,8,7,4,3,2,1, hole] is the most difficult with score 48653. Moreover, we observed several layers of complexity (in the number of clauses generated) across the 100 puzzles. We conclude that the proposed method provides a new perspective on paramodulation complexity for sliding block puzzles.
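The solvability check mentioned in the abstract rests on a standard parity argument: for a 3x3 board, a configuration is reachable from the goal state exactly when the tile permutation (ignoring the hole) has an even number of inversions. The sketch below illustrates that check in Python; it is not code from the paper, the function names and the board encoding (None standing for the hole) are illustrative assumptions, and it covers only the inversion-based filtering used to generate the 100 solvable instances, not the paramodulation step itself (paramodulation is an equality-handling inference rule that rewrites subterms of one clause using equations from another).

```python
# Minimal sketch (not the authors' code) of the inversion-count solvability
# check for the 8-puzzle: a 3x3 board is reachable from the standard goal
# state iff the tile permutation (ignoring the hole) has an even number of
# inversions. Boards are lists such as [2, 3, 6, 1, 7, 8, 5, 4, None],
# mirroring the paper's "[..., hole]" notation with None as the hole.

from itertools import combinations
import random


def count_inversions(board):
    """Count pairs of tiles that appear in the wrong order, ignoring the hole."""
    tiles = [t for t in board if t is not None]
    return sum(1 for a, b in combinations(tiles, 2) if a > b)


def is_solvable(board):
    """For a 3x3 (odd-width) board, solvable iff the inversion count is even."""
    return count_inversions(board) % 2 == 0


def random_solvable_boards(n, seed=0):
    """Draw random 8-puzzle boards until n of them pass the solvability check."""
    rng = random.Random(seed)
    boards = []
    while len(boards) < n:
        board = list(range(1, 9)) + [None]
        rng.shuffle(board)
        if is_solvable(board):
            boards.append(board)
    return boards


if __name__ == "__main__":
    easiest = [2, 3, 6, 1, 7, 8, 5, 4, None]   # paper's easiest board (score 3008)
    hardest = [6, 5, 8, 7, 4, 3, 2, 1, None]   # paper's hardest board (score 48653)
    print(count_inversions(easiest), is_solvable(easiest))   # 10 True
    print(count_inversions(hardest), is_solvable(hardest))   # 24 True
    print(len(random_solvable_boards(100)))                  # 100, as in the experiment
```

Both example boards from the abstract pass this check (10 and 24 inversions, both even), consistent with their inclusion among the 100 generated puzzles; the paramodulation-based scoring itself (3008 vs. 48653 clauses) would require running the encoded puzzle through an automated reasoner and is outside this sketch.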
Related papers
- On Memorization of Large Language Models in Logical Reasoning [70.94164038947078]
Large language models (LLMs) achieve good performance on challenging reasoning benchmarks, yet could also make basic reasoning mistakes.
One hypothesis is that the increasingly high and nearly saturated performance could be due to the memorization of similar problems.
We show that fine-tuning leads to heavy memorization, but it also consistently improves generalization performance.
arXiv Detail & Related papers (2024-10-30T15:31:54Z)
- Are Language Models Puzzle Prodigies? Algorithmic Puzzles Unveil Serious Challenges in Multimodal Reasoning [24.386388107656334]
This paper introduces the novel task of multimodal puzzle solving, framed within the context of visual question-answering.
We present a new dataset, AlgoVQA, designed to challenge and evaluate the capabilities of multimodal language models in solving algorithmic puzzles.
arXiv Detail & Related papers (2024-03-06T17:15:04Z)
- Solving Witness-type Triangle Puzzles Faster with an Automatically Learned Human-Explainable Predicate [0.29005223064604074]
We develop a search-based artificial intelligence puzzle solver for The Witness game.
We learn a human-explainable predicate that predicts whether a partial path to a Witness-type puzzle is not completable to a solution path.
We prove a key property of the learned predicate which allows us to use it for pruning successor states in search.
arXiv Detail & Related papers (2023-08-04T18:52:18Z)
- Multi-Phase Relaxation Labeling for Square Jigsaw Puzzle Solving [73.58829980121767]
We present a novel method for solving square jigsaw puzzles based on global optimization.
The method is fully automatic, assumes no prior information, and can handle puzzles with known or unknown piece orientation.
arXiv Detail & Related papers (2023-03-26T18:53:51Z)
- Automated Graph Genetic Algorithm based Puzzle Validation for Faster Game Design [69.02688684221265]
This paper presents an evolutionary algorithm, informed by expert knowledge, for solving logical puzzles in video games efficiently.
We discuss multiple variations of hybrid genetic approaches for constraint satisfaction problems that allow us to find a diverse set of near-optimal solutions for puzzles.
arXiv Detail & Related papers (2023-02-17T18:15:33Z)
- Complexity-Based Prompting for Multi-Step Reasoning [72.0057198610614]
We study the task of prompting large-scale language models to perform multi-step reasoning.
A central question is which reasoning examples make the most effective prompts.
We propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning.
arXiv Detail & Related papers (2022-10-03T05:33:27Z)
- Using Small MUSes to Explain How to Solve Pen and Paper Puzzles [4.535832029902474]
We present Demystify, a tool which allows puzzles to be expressed in a high-level constraint programming language.
We give several improvements to the existing techniques for solving puzzles with MUSes.
We demonstrate the effectiveness and generality of Demystify by comparing its results to documented strategies for solving a range of pen and paper puzzles by hand.
arXiv Detail & Related papers (2021-04-30T15:07:51Z)
- Non-Rigid Puzzles [50.213265511586535]
We present a non-rigid multi-part shape matching algorithm.
We assume to be given a reference shape and its multiple parts undergoing a non-rigid deformation.
Experimental results on synthetic as well as real scans demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-11-26T00:32:30Z)
- Pictorial and apictorial polygonal jigsaw puzzles: The lazy caterer model, properties, and solvers [14.08706290287121]
We formalize a new type of jigsaw puzzle where the pieces are general convex polygons generated by cutting through a global polygonal shape/image with an arbitrary number of straight cuts.
We analyze the theoretical properties of such puzzles, including the inherent challenges in solving them once pieces are contaminated with geometrical noise.
arXiv Detail & Related papers (2020-08-17T22:07:40Z)
- PuzzLing Machines: A Challenge on Learning From Small Data [64.513459448362]
We introduce a challenge on learning from small data, PuzzLing Machines, which consists of Rosetta Stone puzzles from Linguistic Olympiads for high school students.
Our challenge contains around 100 puzzles covering a wide range of linguistic phenomena from 81 languages.
We show that both simple statistical algorithms and state-of-the-art deep neural models perform inadequately on this challenge, as expected.
arXiv Detail & Related papers (2020-04-27T20:34:26Z)