Multi-Phase Relaxation Labeling for Square Jigsaw Puzzle Solving
- URL: http://arxiv.org/abs/2303.14793v1
- Date: Sun, 26 Mar 2023 18:53:51 GMT
- Title: Multi-Phase Relaxation Labeling for Square Jigsaw Puzzle Solving
- Authors: Ben Vardi, Alessandro Torcinovich, Marina Khoroshiltseva, Marcello
Pelillo, Ohad Ben-Shahar
- Abstract summary: We present a novel method for solving square jigsaw puzzles based on global optimization.
The method is fully automatic, assumes no prior information, and can handle puzzles with known or unknown piece orientation.
- Score: 73.58829980121767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel method for solving square jigsaw puzzles based on global
optimization. The method is fully automatic, assumes no prior information, and
can handle puzzles with known or unknown piece orientation. At the core of the
optimization process is nonlinear relaxation labeling, a well-founded approach
for deducing global solutions from local constraints, but unlike the classical
scheme here we propose a multi-phase approach that guarantees convergence to
feasible puzzle solutions. Next to the algorithmic novelty, we also present a
new compatibility function for the quantification of the affinity between
adjacent puzzle pieces. Competitive results and the advantage of the
multi-phase approach are demonstrated on standard datasets.
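For intuition, the classical nonlinear relaxation labeling update that the method builds on (Rosenfeld-Hummel-Zucker) can be sketched in a few lines of NumPy. This is only a minimal sketch of the classical scheme: the paper's multi-phase procedure and its new compatibility function are not reproduced, and the random compatibility tensor below is a placeholder assumption standing in for a real piece-affinity measure.

```python
import numpy as np

def relaxation_labeling(R, p0, iterations=100, eps=1e-12):
    """Classical nonlinear relaxation labeling (Rosenfeld-Hummel-Zucker update).

    R  : compatibility tensor, R[i, lam, j, mu] is the support that object j
         taking label mu lends to object i taking label lam.
    p0 : (n_objects, n_labels) label-probability matrix, rows summing to 1.
    """
    p = p0.copy()
    for _ in range(iterations):
        # support q[i, lam] = sum_{j, mu} R[i, lam, j, mu] * p[j, mu]
        q = np.einsum('iljm,jm->il', R, p)
        p = p * q                                  # reinforce well-supported labels
        p /= p.sum(axis=1, keepdims=True) + eps    # keep each row on the simplex
    return p

if __name__ == "__main__":
    # Toy stand-in: in the puzzle setting, "objects" would be pieces, "labels"
    # candidate placements, and R would come from a compatibility function on
    # adjacent pieces; here R is random noise just to exercise the update.
    rng = np.random.default_rng(0)
    n_objects, n_labels = 4, 3
    R = rng.random((n_objects, n_labels, n_objects, n_labels))
    p0 = np.full((n_objects, n_labels), 1.0 / n_labels)
    print(np.round(relaxation_labeling(R, p0), 3))
```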
Related papers
- Actively Learning Combinatorial Optimization Using a Membership Oracle [10.834947136134463]
We consider solving an optimization problem with an unknown linear constraint using a membership oracle.
The goal of the decision maker is to find the best possible solution subject to a budget on the number of oracle calls.
We adapt a classical framework in order to solve the problem by learning and exploiting a surrogate linear constraint.
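As a rough illustration of the surrogate-constraint idea (not the paper's framework), the toy sketch below spends a membership-oracle budget on a few candidate points, fits a surrogate linear constraint with a simple perceptron, and then optimizes only over points the surrogate deems feasible. All names, the candidate set, and the perceptron choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: maximize c^T x over a finite candidate set subject to
# an unknown linear constraint a^T x <= b, accessible only through a membership
# oracle with a limited call budget.
a_true, b_true = np.array([1.0, 2.0]), 3.0
c = np.array([1.0, 1.0])
membership_oracle = lambda x: a_true @ x <= b_true   # "is x feasible?"

candidates = rng.uniform(-2.0, 4.0, size=(200, 2))

# Spend the oracle budget, labeling a subset of candidates.
budget = 40
queried = candidates[:budget]
labels = np.where([membership_oracle(x) for x in queried], 1.0, -1.0)

# Learn a surrogate halfspace w^T [x, 1] > 0  <=>  "feasible" with a perceptron.
X = np.hstack([queried, np.ones((budget, 1))])
w = np.zeros(3)
for _ in range(500):
    for xi, yi in zip(X, labels):
        if yi * (w @ xi) <= 0:                       # misclassified -> update
            w += yi * xi

# Exploit the surrogate: optimize only over points it predicts feasible.
aug = np.hstack([candidates, np.ones((len(candidates), 1))])
feasible = candidates[aug @ w > 0]
if len(feasible):
    best = feasible[np.argmax(feasible @ c)]
    print("surrogate pick:", best, "| truly feasible:", bool(membership_oracle(best)))
```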
arXiv Detail & Related papers (2024-05-23T01:34:21Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- SIGMA: Scale-Invariant Global Sparse Shape Matching [50.385414715675076]
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for non-rigid shapes.
We show state-of-the-art results for sparse non-rigid matching on several challenging 3D datasets.
arXiv Detail & Related papers (2023-08-16T14:25:30Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
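A minimal sketch of the "linearize the smooth part over a compact set" step, using a plain conditional-gradient (Frank-Wolfe) loop on the probability simplex. This is not the paper's algorithm: the fully composite setting also carries a non-differentiable term, which this toy example omits, and the quadratic objective is an assumption.

```python
import numpy as np

# Conditional-gradient loop: linearize the smooth objective at the current
# point, minimize the linear model over the compact set (here, the simplex,
# whose linear minimizer is a vertex), and take a convex-combination step.
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
Q = A.T @ A + np.eye(n)                 # positive-definite quadratic
b = rng.standard_normal(n)

f  = lambda x: 0.5 * x @ Q @ x - b @ x
df = lambda x: Q @ x - b

x = np.full(n, 1.0 / n)                 # start at the simplex barycenter
for t in range(200):
    g = df(x)                           # gradient of the smooth part
    s = np.zeros(n)
    s[np.argmin(g)] = 1.0               # linear minimizer over the simplex
    gamma = 2.0 / (t + 2.0)             # standard step-size schedule
    x = (1 - gamma) * x + gamma * s     # stays feasible by convexity
print("final objective:", f(x))
```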
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Reinforcement Learning Methods for Wordle: A POMDP/Adaptive Control Approach [0.3093890460224435]
We address the solution of the popular Wordle puzzle using new reinforcement learning methods.
For the Wordle puzzle, these methods yield on-line solution strategies that are very close to optimal at relatively modest computational cost.
arXiv Detail & Related papers (2022-11-15T03:46:41Z)
- Optimization of Robot Trajectory Planning with Nature-Inspired and Hybrid Quantum Algorithms [0.0]
We solve robot trajectory planning problems at industry-relevant scales.
Our end-to-end solution integrates highly versatile random-key algorithms with model stacking and ensemble techniques.
We show how the latter can be integrated into our larger pipeline, providing a quantum-ready hybrid solution to the problem.
arXiv Detail & Related papers (2022-06-08T02:38:32Z)
- An Inexact Augmented Lagrangian Algorithm for Training Leaky ReLU Neural Network with Group Sparsity [13.27709100571336]
A leaky ReLU network with a group regularization term has been widely used in recent years.
We show that there is a lack of approaches for computing a stationary point deterministically.
We propose an inexact augmented Lagrangian algorithm for solving the new model.
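The augmented Lagrangian template itself is standard; a minimal sketch on a toy equality-constrained problem is shown below, with the "inexact" inner solve realized as a fixed number of gradient steps. The toy objective and constraint are assumptions for illustration only, not the paper's leaky ReLU model with group sparsity.

```python
import numpy as np

# Generic (inexact) augmented Lagrangian loop on a toy problem:
#   minimize ||x - x0||^2   subject to   c(x) = sum(x) - 1 = 0
x0 = np.array([2.0, -1.0, 0.5])
grad_f = lambda x: 2.0 * (x - x0)
c      = lambda x: np.sum(x) - 1.0

x, lam, rho = np.zeros(3), 0.0, 10.0
step = 1.0 / (2.0 + rho * x.size)        # safe step size for this quadratic inner problem
for outer in range(30):
    # "Inexact" inner solve: a fixed number of gradient steps on the
    # augmented Lagrangian L(x) = f(x) + lam*c(x) + (rho/2)*c(x)^2
    for _ in range(100):
        x -= step * (grad_f(x) + (lam + rho * c(x)) * np.ones_like(x))
    lam += rho * c(x)                    # first-order multiplier update
print("x =", np.round(x, 4), " constraint violation =", round(c(x), 6))
```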
arXiv Detail & Related papers (2022-05-11T11:53:15Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
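For readers unfamiliar with the object being learned, the sketch below spells out the definition of a proximal operator and checks the textbook closed form for the absolute value (soft-thresholding) against a brute-force grid search. The paper's contribution, learning this map across a problem family, is not reproduced; the example function and values are assumptions.

```python
import numpy as np

# The proximal operator of g with step t maps v to
#     prox_{t g}(v) = argmin_x  g(x) + (1 / (2 t)) * (x - v)^2 .
def prox_l1(v, t):
    return np.sign(v) * max(abs(v) - t, 0.0)        # soft-thresholding

v, t = 1.7, 0.5
xs = np.linspace(-4.0, 4.0, 400001)                  # brute-force check on a grid
numeric = xs[np.argmin(np.abs(xs) + (xs - v) ** 2 / (2.0 * t))]
print(prox_l1(v, t), float(numeric))                 # both approximately 1.2
```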
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Stochastic gradient algorithms from ODE splitting perspective [0.0]
We present a different view on optimization, which goes back to splitting schemes for the approximate solution of ODEs.
In this work, we provide a connection between the stochastic gradient descent approach and the first-order splitting scheme for ODEs.
We consider a special case of splitting, which is inspired by machine learning applications, and derive a new upper bound on the global splitting error for it.
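A small numerical illustration of this viewpoint (assumed quadratic pieces, not the paper's error bound): one epoch of incremental gradient descent coincides with a first-order Lie-Trotter splitting of the gradient flow in which each sub-flow is integrated by one explicit Euler step, and its gap to a full-batch Euler step is the per-step splitting error.

```python
import numpy as np

# For f(x) = sum_i f_i(x), the gradient flow is dx/dt = -sum_i grad f_i(x).
# Sequential (Lie-Trotter) splitting with one explicit Euler sub-step of size h
# per sub-flow is exactly one epoch of incremental (SGD-style) gradient descent.
rng = np.random.default_rng(0)
n, d, h = 5, 3, 0.05
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
# gradient of f_i(x) = 0.5 * (a_i . x - b_i)^2
grads = [lambda x, a=A[i], bi=b[i]: (a @ x - bi) * a for i in range(n)]

x0 = rng.standard_normal(d)

# One epoch of incremental gradient descent == Lie-Trotter splitting step.
x_split = x0.copy()
for g in grads:
    x_split = x_split - h * g(x_split)

# One explicit Euler step on the unsplit flow (full-batch gradient step).
x_full = x0 - h * sum(g(x0) for g in grads)

print("splitting step :", np.round(x_split, 5))
print("full-batch step:", np.round(x_full, 5))
print("gap (per-step splitting error, O(h^2)):", np.linalg.norm(x_split - x_full))
```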
arXiv Detail & Related papers (2020-04-19T22:45:32Z)