Neuro-Symbolic Sudoku Solver
- URL: http://arxiv.org/abs/2307.00653v1
- Date: Sun, 2 Jul 2023 20:04:01 GMT
- Title: Neuro-Symbolic Sudoku Solver
- Authors: Ashutosh Hathidara, Lalit Pandey
- Abstract summary: We extend the functionality of the Neuro Logic Machine (NLM) to solve a 9x9 game of Sudoku.
In our study, we showcase an NLM that achieves 100% accuracy in solving Sudoku puzzles with 3 to 10 empty cells.
We also compare the NLM against a backtracking algorithm by plotting convergence time on the same problems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks have achieved great success on some complex tasks
that humans perform with ease, including image recognition and classification,
natural language processing, and game playing. However, modern neural networks
fail or perform poorly when trained on tasks that can be solved easily by
backtracking and other traditional algorithms. We therefore take the
architecture of the Neuro Logic Machine (NLM) and extend its functionality to
solve a 9x9 game of Sudoku. To expand the application of NLMs, we generate a
random grid from a dataset of solved games and blank out up to 10 cells. The
goal of the game is then to fill each empty cell with a value from 1 to 9
while maintaining a valid configuration. In our study, we showcase an NLM that
achieves 100% accuracy in solving Sudoku puzzles with 3 to 10 empty cells. The
purpose of this study is to demonstrate that NLMs can also be used to solve
complex problems and games such as Sudoku. We also compare the NLM against a
backtracking algorithm by plotting convergence time on the same problems. With
this study we show that NLMs can be trained, using Reinforcement Learning, on
tasks where traditional Deep Learning architectures fail. We also aim to
highlight the importance of symbolic learning in explaining the systematicity
of the hybrid NLM model.
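The backtracking baseline the abstract compares against is the classical exhaustive search for Sudoku. As a point of reference, here is a minimal sketch of such a solver (illustrative only; this is not the authors' implementation):

```python
# Minimal backtracking Sudoku solver: the classical baseline an NLM
# would be compared against. `grid` is a 9x9 list of lists of ints,
# with 0 marking an empty cell. The grid is solved in place.

def valid(grid, r, c, v):
    """Check whether placing v at (r, c) keeps the grid consistent."""
    if v in grid[r]:                               # row constraint
        return False
    if any(grid[i][c] == v for i in range(9)):     # column constraint
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)            # 3x3 box constraint
    return all(grid[br + i][bc + j] != v
               for i in range(3) for j in range(3))

def solve(grid):
    """Fill empty cells in place; return True if a solution exists."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and backtrack
                return False            # no value fits this cell
    return True                         # no empty cells remain
```

With only a handful of empty cells (the 3-10 range studied in the paper), this search terminates almost immediately; its cost grows quickly as more cells are blanked, which is the regime where the convergence-time comparison becomes interesting.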
Related papers
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
One potential solution is Neuro-Symbolic Integration (NeSy), in which neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Bridging Logic and Learning: A Neural-Symbolic Approach for Enhanced
Reasoning in Neural Models (ASPER) [0.13053649021965597]
This paper introduces an approach designed to improve the performance of neural models in learning reasoning tasks.
It achieves this by integrating Answer Set Programming solvers and domain-specific expertise.
The model shows a significant improvement in solving Sudoku puzzles using only 12 puzzles for training and testing.
arXiv Detail & Related papers (2023-12-18T19:06:00Z) - Dynamic Analysis and an Eigen Initializer for Recurrent Neural Networks [0.0]
We study the dynamics of the hidden state in recurrent neural networks.
We propose a new perspective to analyze the hidden state space based on an eigen decomposition of the weight matrix.
We provide an explanation for long-term dependency based on the eigen analysis.
arXiv Detail & Related papers (2023-07-28T17:14:58Z) - The Clock and the Pizza: Two Stories in Mechanistic Explanation of
Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z) - Pathfinding Neural Cellular Automata [23.831530224401575]
Pathfinding is an important sub-component of a broad range of complex AI tasks, such as robot path planning, transport routing, and game playing.
We hand-code and learn models for Breadth-First Search (BFS), i.e. shortest path finding.
We present a neural implementation of Depth-First Search (DFS), and outline how it can be combined with neural BFS to produce an NCA for computing diameter of a graph.
We experiment with architectural modifications inspired by these hand-coded NCAs, training networks from scratch to solve the diameter problem on grid mazes while exhibiting strong generalization ability.
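For context on what the hand-coded NCA emulates, a plain Breadth-First Search over a grid maze looks like the following (a generic sketch of BFS shortest-path finding, not the paper's NCA formulation):

```python
from collections import deque

def bfs_shortest_path(maze, start, goal):
    """Breadth-First Search on a grid maze (0 = free, 1 = wall).

    Returns the shortest path as a list of (row, col) tuples,
    or None if the goal is unreachable.
    """
    rows, cols = len(maze), len(maze[0])
    prev = {start: None}     # doubles as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:   # reconstruct the path by walking back
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

Because BFS expands cells in order of distance from the start, the first time it reaches the goal it has found a shortest path; the NCA analogue propagates this frontier as a wave of local cell updates.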
arXiv Detail & Related papers (2023-01-17T11:45:51Z) - Are Deep Neural Networks SMARTer than Second Graders? [85.60342335636341]
We evaluate the abstraction, deduction, and generalization abilities of neural networks in solving visuo-linguistic puzzles designed for children in the 6--8 age group.
Our dataset consists of 101 unique puzzles; each puzzle comprises a picture and a question, and its solution requires a mix of several elementary skills, including arithmetic, algebra, and spatial reasoning.
Experiments reveal that while powerful deep models perform reasonably well on puzzles in a supervised setting, they are no better than random accuracy when analyzed for generalization.
arXiv Detail & Related papers (2022-12-20T04:33:32Z) - Visualizing Deep Neural Networks with Topographic Activation Maps [1.1470070927586014]
We introduce and compare methods to obtain a topographic layout of neurons in a Deep Neural Network layer.
We demonstrate how to use topographic activation maps to identify errors or encoded biases and to visualize training processes.
arXiv Detail & Related papers (2022-04-07T15:56:44Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.