A Machine Learning Approach That Beats Large Rubik's Cubes
- URL: http://arxiv.org/abs/2502.13266v1
- Date: Tue, 18 Feb 2025 20:22:38 GMT
- Title: A Machine Learning Approach That Beats Large Rubik's Cubes
- Authors: Alexander Chervov, Kirill Khoruzhii, Nikita Bukhal, Jalal Naghiyev, Vladislav Zamkovoy, Ivan Koltsov, Lyudmila Cheldieva, Arsenii Sychev, Arsenii Lenin, Mark Obozov, Egor Urvanov, Alexey Romanov
- Abstract summary: The paper proposes a novel machine learning-based approach to the pathfinding problem on extremely large graphs.
We demonstrate its efficiency by finding solutions for 4x4x4 and 5x5x5 Rubik's cubes with unprecedentedly short solution lengths.
- Score: 32.8176720435354
- License:
- Abstract: The paper proposes a novel machine learning-based approach to the pathfinding problem on extremely large graphs. This method leverages diffusion distance estimation via a neural network and uses beam search for pathfinding. We demonstrate its efficiency by finding solutions for 4x4x4 and 5x5x5 Rubik's cubes with unprecedentedly short solution lengths, outperforming all available solvers and introducing the first machine learning solver beyond the 3x3x3 case. In particular, it surpasses every single case of the combined best results in the Kaggle Santa 2023 challenge, which involved over 1,000 teams. For the 3x3x3 Rubik's cube, our approach achieves an optimality rate exceeding 98%, matching the performance of task-specific solvers and significantly outperforming prior solutions such as DeepCubeA (60.3%) and EfficientCube (69.6%). Additionally, our solution is more than 26 times faster in solving 3x3x3 Rubik's cubes while requiring up to 18.5 times less model training time than the most efficient state-of-the-art competitor.
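The core recipe in the abstract, a learned distance-to-goal estimator steering a beam search, can be illustrated with a minimal sketch. This is not the paper's implementation: the toy permutation puzzle stands in for the cube's state graph, and a Hamming-distance heuristic stands in for the neural diffusion-distance estimator; the names `beam_search`, `neighbors`, and `dist_estimate` are illustrative, not from the paper.

```python
def beam_search(start, goal, neighbors, dist_estimate, beam_width=64, max_depth=50):
    """Keep only the beam_width states with the smallest estimated
    distance-to-goal at each depth (stand-in for a neural estimator)."""
    beam = {start: ()}  # state -> move sequence that reaches it
    for _ in range(max_depth):
        candidates = {}
        for state, path in beam.items():
            for move, nxt in neighbors(state):
                if nxt == goal:
                    return path + (move,)
                candidates.setdefault(nxt, path + (move,))
        if not candidates:
            return None
        # Prune: rank all frontier states by the (learned) distance estimate.
        best = sorted(candidates.items(), key=lambda kv: dist_estimate(kv[0]))[:beam_width]
        beam = dict(best)
    return None

# Toy puzzle in place of the cube: permutations of 0..4, moves swap adjacent slots.
GOAL = (0, 1, 2, 3, 4)

def neighbors(state):
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield f"swap{i}", tuple(s)

def hamming(state):
    # Crude hand-written heuristic standing in for the trained network.
    return sum(a != b for a, b in zip(state, GOAL))

path = beam_search((2, 0, 1, 4, 3), GOAL, neighbors, hamming)
```

In the paper's setting the heuristic is a neural network trained on diffusion-distance estimates over the cube's Cayley graph, and the beam width is far larger; the control flow, however, is the same: expand the frontier, score it, keep the top slice.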
Related papers
- Node Classification and Search on the Rubik's Cube Graph with GNNs [55.2480439325792]
This study focuses on the application of deep geometric models to solve the 3x3x3 Rubik's Cube.
We begin by discussing the cube's graph representation and defining distance as the model's optimization objective.
The distance approximation task is reformulated as a node classification problem, effectively addressed using Graph Neural Networks (GNNs)
arXiv Detail & Related papers (2025-01-30T18:52:43Z) - Solving Rubik's Cube Without Tricky Sampling [0.6445605125467574]
The Rubik's Cube, with its vast state space and sparse reward structure, presents a significant challenge for reinforcement learning.
Previous research addressed this by propagating cost-to-go estimates from the solved state and incorporating search techniques.
We introduce a novel RL algorithm using policy gradient methods to solve the Rubik's Cube without relying on near solved-state sampling.
arXiv Detail & Related papers (2024-11-29T09:56:40Z) - Solving a Rubik's Cube Using its Local Graph Structure [13.219469732742354]
A Rubik's Cube has six faces and twelve possible actions, leading to a small and unconstrained action space.
A Rubik's Cube can be represented as a graph, where states of the cube are nodes and actions are edges.
Drawing on graph convolutional networks, we design a new search algorithm to find the solution to a scrambled Rubik's Cube.
arXiv Detail & Related papers (2024-08-15T05:39:52Z) - On Solving the Rubik's Cube with Domain-Independent Planners Using Standard Representations [7.470087627607195]
We present the first Rubik's Cube representation in the popular PDDL language.
We find that in one comparable experiment, DeepCubeA solves all problems with varying complexities, albeit only 78.5% are optimal plans.
Our study provides valuable insights into the trade-offs between representational choice and plan optimality.
arXiv Detail & Related papers (2023-07-25T14:52:23Z) - Learning To Dive In Branch And Bound [95.13209326119153]
We propose L2Dive to learn specific diving heuristics with graph neural networks.
We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions.
arXiv Detail & Related papers (2023-01-24T12:01:45Z) - Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z) - The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights [59.93939636422896]
The ML4CO competition aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components with machine-learned models.
The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate routing configuration.
arXiv Detail & Related papers (2022-03-04T17:06:00Z) - Yordle: An Efficient Imitation Learning for Branch and Bound [1.6758573326215689]
This work presents our solution and insights gained by team qqy in the 2021 NeurIPS Machine Learning for Combinatorial Optimization (ML4CO) competition.
Our solution is a highly efficient imitation learning framework for performance improvement of Branch and Bound (B&B), named Yordle.
In our experiments, Yordle greatly outperforms the baseline algorithm adopted by the competition while requiring significantly less time and amounts of data to train the decision model.
arXiv Detail & Related papers (2022-02-02T14:46:30Z) - Self-Supervision is All You Need for Solving Rubik's Cube [0.0]
This work introduces a simple and efficient deep learning method for solving problems with a predefined goal, exemplified by the Rubik's Cube.
We demonstrate that, for such problems, training a deep neural network on random scrambles branching from the goal state is sufficient to achieve near-optimal solutions.
arXiv Detail & Related papers (2021-06-06T15:38:50Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.