An automated threshold Edge Drawing algorithm
- URL: http://arxiv.org/abs/2110.05119v1
- Date: Mon, 11 Oct 2021 09:53:18 GMT
- Title: An automated threshold Edge Drawing algorithm
- Authors: Ciprian Orhei, Muguras Mocofan, Silviu Vert, Radu Vasiu
- Abstract summary: We present a version of the Edge Drawing algorithm in which we include an automated threshold selection step.
Results are presented to support the benefits of the proposed automated threshold scheme.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Parameter selection in classical edge detection algorithms can be a
costly and complex task. Choosing the correct parameters can considerably
improve the resulting edge map. In this paper we present a version of the Edge
Drawing algorithm that includes an automated threshold selection step. To
better highlight the effect of this additional step, we use different
first-order operators in the algorithm. Visual and statistical results are
presented to support the benefits of the proposed automated threshold scheme.
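For context, Edge Drawing builds its edge map in stages: compute a gradient map with a first-order operator, suppress pixels below a gradient threshold, select anchor pixels, and link anchors into continuous edge segments. The sketch below illustrates only the thresholding stage, with Sobel as the first-order operator and Otsu's method standing in for the automated threshold; the abstract does not specify the paper's actual scheme, so the Otsu choice (and every name in the code) is an illustrative assumption, not the authors' method.

```python
import numpy as np

def sobel_gradient(img):
    """Gradient magnitude from a first-order operator (Sobel here; the paper
    also evaluates other first-order operators)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

def otsu_threshold(values, bins=256):
    """One plausible automated threshold: Otsu's method, maximizing the
    between-class variance of the gradient-magnitude histogram."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 probability mass
    w1 = 1.0 - w0                          # class-1 probability mass
    cum_mean = np.cumsum(p * centers)
    m0 = cum_mean / np.where(w0 > 0, w0, 1.0)
    m1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1.0)
    return centers[np.argmax(w0 * w1 * (m0 - m1) ** 2)]

def edge_candidates(img):
    """Replace the manually chosen gradient threshold with the automated one.
    Anchor extraction and edge linking (the rest of Edge Drawing) are omitted."""
    g = sobel_gradient(img)
    return g >= otsu_threshold(g)
```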
Related papers
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
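As a point of reference, the sketch below shows the classic CountSketch compression step, where each input row is hashed to one bucket with a random sign; the learned variant proposed above optimizes those positions instead of drawing them at random. All names here are illustrative.

```python
import numpy as np

def countsketch(A, m, seed=0):
    """Compress the n rows of A (n x d) into m buckets: S @ A, where each
    column of the sketch matrix S (m x n) has a single random +/-1 entry.
    Learned CountSketch would optimize the bucket positions; here they are
    random, as in the classic construction."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    buckets = rng.integers(0, m, size=n)        # position of each non-zero entry
    signs = rng.choice([-1.0, 1.0], size=n)     # its random sign
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, buckets, signs[:, None] * A)  # accumulate hashed rows
    return SA
```

The compressed m x d matrix then replaces A in the downstream optimization problem, which is what makes the choice of non-zero positions worth learning.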
arXiv Detail & Related papers (2023-06-11T07:28:35Z)
- Stochastic Ratios Tracking Algorithm for Large Scale Machine Learning Problems [0.7614628596146599]
We propose a novel algorithm for adaptive step length selection in the classical SGD framework.
Under reasonable conditions, the algorithm produces step lengths in line with well-established theoretical requirements.
We show that the algorithm can generate step lengths comparable to the best step length obtained from manual tuning.
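The summary does not spell out the ratio-tracking rule itself, so the sketch below uses a plain Armijo backtracking condition on the minibatch loss as a generic stand-in for adaptive step-length selection in SGD; it is not the paper's algorithm.

```python
import numpy as np

def sgd_armijo(w, loss_fn, grad_fn, steps=100, alpha0=1.0, c=1e-4, shrink=0.5):
    """SGD whose step length is chosen per iteration by Armijo backtracking.
    loss_fn and grad_fn should be evaluated on the same minibatch within one
    iteration; this rule is a stand-in, not the paper's ratio-tracking scheme."""
    for _ in range(steps):
        g = grad_fn(w)
        f0 = loss_fn(w)
        alpha = alpha0
        # shrink until the sufficient-decrease condition holds
        while loss_fn(w - alpha * g) > f0 - c * alpha * np.dot(g, g) and alpha > 1e-8:
            alpha *= shrink
        w = w - alpha * g
    return w
```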
arXiv Detail & Related papers (2023-05-17T06:22:11Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the search spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Dynamical softassign and adaptive parameter tuning for graph matching [0.7456521449098222]
We study a unified framework for graph matching problems called the constrained gradient algorithms.
Our contributed adaptive step size parameter can guarantee the underlying algorithms' convergence.
We propose a novel graph matching algorithm: the softassign constrained gradient method.
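Softassign itself is a standard building block: exponentiate a score matrix at some temperature and alternately normalize rows and columns (Sinkhorn iterations) toward a doubly stochastic matrix. A fixed-temperature sketch follows; the paper's dynamical variant additionally adapts the temperature and the step-size parameter, which is not reproduced here.

```python
import numpy as np

def softassign(score, beta=5.0, iters=50):
    """Map a node-correspondence score matrix to an (approximately) doubly
    stochastic matching matrix. beta is the inverse temperature; larger beta
    pushes the result toward a hard permutation."""
    P = np.exp(beta * (score - score.max()))   # subtract max for stability
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)      # row normalization
        P /= P.sum(axis=0, keepdims=True)      # column normalization
    return P
```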
arXiv Detail & Related papers (2022-08-17T11:25:03Z)
- Per-run Algorithm Selection with Warm-starting using Trajectory-based Features [5.073358743426584]
Per-instance algorithm selection seeks to recommend, for a given problem instance, one or several suitable algorithms.
We propose an online algorithm selection scheme which we coin per-run algorithm selection.
We show that our approach outperforms static per-instance algorithm selection.
arXiv Detail & Related papers (2022-04-20T14:30:42Z)
- Trajectory-based Algorithm Selection with Warm-starting [2.3823600586675724]
We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting different algorithm performances.
We show promising performance of the trajectory-based per-run algorithm selection with warm-starting.
arXiv Detail & Related papers (2022-04-13T14:00:55Z)
- Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection [0.9281671380673306]
A key challenge in the application of evolutionary algorithms is the selection of an algorithm instance that best suits the problem at hand.
We analyze in this work how such prior performance data can be used to infer informed dynamic algorithm selection schemes for the solution of pseudo-Boolean optimization problems.
arXiv Detail & Related papers (2021-02-12T12:27:02Z)
- Generalization in portfolio-based algorithm selection [97.74604695303285]
We provide the first provable guarantees for portfolio-based algorithm selection.
We show that if the portfolio is large, overfitting is inevitable, even with an extremely simple algorithm selector.
arXiv Detail & Related papers (2020-12-24T16:33:17Z)
- Stochastic Adaptive Line Search for Differentially Private Optimization [6.281099620056346]
The performance of private gradient-based optimization algorithms is highly dependent on the choice of step size (or learning rate).
We introduce a variant of the classic backtracking line search algorithm that adjusts the privacy budget according to the reliability of the noisy gradient.
We show that the adaptively chosen step sizes allow the proposed algorithm to use the privacy budget efficiently and to perform competitively against existing private gradient-based optimizers.
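A toy illustration of the reliability idea follows: privatize the gradient with the Gaussian mechanism, then shrink the step when the noise plausibly dominates the signal. The scaling heuristic and the omitted privacy accounting are assumptions, not the paper's rule.

```python
import numpy as np

def privatize(grad, clip, sigma, rng):
    """Gaussian mechanism: clip the gradient's norm, then add calibrated noise."""
    grad = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    return grad + rng.normal(0.0, sigma * clip, size=grad.shape)

def reliability_scaled_lr(g_noisy, clip, sigma, base_lr):
    """Heuristic stand-in: estimate a signal-to-noise ratio for the noisy
    gradient and scale the step size down when the noise dominates."""
    noise_norm = sigma * clip * np.sqrt(g_noisy.size)  # expected noise magnitude
    snr = np.linalg.norm(g_noisy) / (noise_norm + 1e-12)
    return base_lr * min(1.0, snr)
```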
arXiv Detail & Related papers (2020-08-18T15:18:47Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.