Convergence and Running Time of Time-dependent Ant Colony Algorithms
- URL: http://arxiv.org/abs/2501.10810v1
- Date: Sat, 18 Jan 2025 16:20:39 GMT
- Title: Convergence and Running Time of Time-dependent Ant Colony Algorithms
- Authors: Bodo Manthey, Jesse van Rhijn, Ashkan Safari, Tjark Vredeveld
- Abstract summary: Ant Colony Optimization (ACO) is a well-known method inspired by the foraging behavior of ants. We consider two time-dependent adaptations of Attiratanasunthron and Fakcharoenphol's $n$-ANT algorithm. Our results show that $n$-ANT/tdev has a super-polynomial time lower bound on the shortest path problem.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ant Colony Optimization (ACO) is a well-known method inspired by the foraging behavior of ants and is extensively used to solve combinatorial optimization problems. In this paper, we first consider a general framework based on the concept of a construction graph - a graph associated with an instance of the optimization problem under study, where feasible solutions are represented by walks. We analyze the running time of this ACO variant, known as the Graph-based Ant System with time-dependent evaporation rate (GBAS/tdev), and prove that the algorithm's solution converges to the optimal solution of the problem with probability 1 for a slightly stronger evaporation rate function than was previously known. We then consider two time-dependent adaptations of Attiratanasunthron and Fakcharoenphol's $n$-ANT algorithm: $n$-ANT with time-dependent evaporation rate ($n$-ANT/tdev) and $n$-ANT with time-dependent lower pheromone bound ($n$-ANT/tdlb). We analyze both variants on the single destination shortest path problem (SDSP). Our results show that $n$-ANT/tdev has a super-polynomial time lower bound on the SDSP. In contrast, we show that $n$-ANT/tdlb achieves a polynomial time upper bound on this problem.
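The pheromone dynamics described in the abstract can be illustrated with a minimal sketch. This is not the GBAS/tdev algorithm from the paper: the `evaporation_rate` schedule, the function names, and the best-walk reinforcement rule below are illustrative assumptions; the paper's convergence result depends on a specific evaporation rate function not reproduced here.

```python
def evaporation_rate(t, c=0.5):
    """Hypothetical time-dependent evaporation schedule with rho_t -> 0.
    The exact schedule required for convergence is specified in the paper;
    this c/(t+1) form is only an illustrative placeholder."""
    return c / (t + 1)

def update_pheromones(pheromone, best_walk, t):
    """One GBAS-style pheromone update on a construction graph:
    evaporate every edge by a factor (1 - rho_t), then reinforce the
    edges of the best walk found so far by rho_t."""
    rho = evaporation_rate(t)
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)
    for edge in best_walk:
        pheromone[edge] += rho
    return pheromone

# Example: two edges with equal pheromone; reinforcing one edge at t = 0
# (rho = 0.5) shifts probability mass toward the reinforced edge.
p = update_pheromones({("a", "b"): 0.5, ("b", "c"): 0.5}, [("a", "b")], 0)
```

As `t` grows, the evaporation rate shrinks, so late updates perturb the pheromone values less and less; intuitively, this is what allows the constructed solutions to converge.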
Related papers
- Pathwise optimization for bridge-type estimators and its applications [49.1574468325115]
Pathwise methods allow the full path for penalized estimators to be computed efficiently. We apply these algorithms to the penalized estimation of processes observed at discrete times.
arXiv Detail & Related papers (2024-12-05T10:38:29Z) - Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance-reduced ZO estimators for complex gradient problems.
We improve the state-of-the-art function complexities from $\mathcal{O}\left(\min\left\{\frac{dn^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - Analyzing the Runtime of the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) on the Concatenated Trap Function [2.038038953957366]
GOMEA is an evolutionary algorithm that leverages linkage learning to efficiently exploit problem structure.
We show that GOMEA can solve the problem in $O(m^3 2^k)$ with high probability, where $m$ is the number of subfunctions and $k$ is the subfunction length.
This is a significant speedup compared to the (1+1) Evolutionary Algorithm, which requires $O(\ln(m)(mk)^k)$ expected evaluations.
arXiv Detail & Related papers (2024-07-11T09:37:21Z) - Markov Chain-based Optimization Time Analysis of Bivalent Ant Colony Optimization for Sorting and LeadingOnes [0.0]
We show that the ratio of the two pheromone values significantly governs the runtime behavior of Bivalent ACO (BACO).
We show that, although the ant algorithm is drastically simplified with respect to the influence of the pheromones on the solving process, known bounds on the expected optimization time for the problems OneMax ($O(n \log n)$) and LeadingOnes ($O(n^2)$) can be reproduced as a by-product of our approach.
arXiv Detail & Related papers (2024-05-06T11:02:50Z) - DASA: Delay-Adaptive Multi-Agent Stochastic Approximation [64.32538247395627]
We consider a setting in which $N$ agents aim to speed up a common Stochastic Approximation problem by acting in parallel and communicating with a central server.
To mitigate the effect of delays and stragglers, we propose DASA, a Delay-Adaptive algorithm for multi-agent Stochastic Approximation.
arXiv Detail & Related papers (2024-03-25T22:49:56Z) - Fast and Regret Optimal Best Arm Identification: Fundamental Limits and Low-Complexity Algorithms [15.038210624870656]
We study the Multi-Armed Bandit (MAB) problem with dual objectives: (i) quick identification and commitment to the optimal arm, and (ii) reward maximization throughout a sequence of $T$ consecutive rounds.
This paper introduces Regret Optimal Best Arm Identification (ROBAI), which aims to achieve these dual objectives.
arXiv Detail & Related papers (2023-09-01T17:12:43Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Non-stationary Online Convex Optimization with Arbitrary Delays [50.46856739179311]
This paper investigates the delayed online convex optimization (OCO) in non-stationary environments.
We first propose a simple algorithm, namely DOGD, which performs a gradient descent step for each delayed gradient according to their arrival order.
We develop an improved algorithm, which reduces those dynamic regret bounds achieved by DOGD to $O(\sqrt{\bar{d}T(P_T+1)})$.
arXiv Detail & Related papers (2023-05-20T07:54:07Z) - Fast Learning for Renewal Optimization in Online Task Scheduling [18.935043109084543]
This paper considers online optimization of a renewal-reward system.
The probability distribution for the task type vector is unknown.
The proposed algorithm uses an auxiliary variable that is updated according to a classic Robbins-Monro iteration.
arXiv Detail & Related papers (2020-07-18T22:44:13Z) - A new regret analysis for Adam-type algorithms [78.825194932103]
In theory, regret guarantees for online convex optimization require a rapidly decaying $\beta_1 \to 0$ schedule.
We propose a novel framework that allows us to derive optimal, data-dependent regret bounds with a constant $\beta_1$.
arXiv Detail & Related papers (2020-03-21T19:19:51Z) - Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers [31.309253646602933]
We consider the setting where a master wants to run a distributed stochastic gradient descent (SGD) algorithm on $n$ workers, each having a subset of the data.
Distributed SGD may suffer from the effect of stragglers, i.e., slow or unresponsive workers who cause delays.
We propose an algorithm for adaptive distributed SGD that is based on a statistical notion.
arXiv Detail & Related papers (2020-02-25T16:25:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.