Approximate FW Algorithm with a novel DMO method over Graph-structured Support Set
- URL: http://arxiv.org/abs/2411.04389v2
- Date: Mon, 25 Nov 2024 07:20:43 GMT
- Title: Approximate FW Algorithm with a novel DMO method over Graph-structured Support Set
- Authors: Yijian Pan, Hongjiao Qiang
- Abstract summary: We analyzed and implemented the original algorithm and introduced several extensions to it.
Our backtracking line-search method effectively reduced the number of iterations, while our new DMO method did not yield a satisfactory improvement.
- Score: 0.2302001830524133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this project, we reviewed a paper that addresses the graph-structured convex optimization (GSCO) problem with the approximate Frank-Wolfe (FW) algorithm. We analyzed and implemented the original algorithm and introduced several extensions to it. We then conducted experiments to compare the results and concluded that our backtracking line-search method effectively reduced the number of iterations, while our new DMO method (Top-g+ optimal visiting) did not yield a satisfactory improvement.
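To make the moving parts concrete, here is a minimal sketch of an approximate FW loop pairing an inexact DMO with a backtracking line search. The `top_g_dmo` below is a generic top-g stand-in over a sparsity-plus-ball constraint, not the paper's graph-structured oracle or its Top-g+ optimal-visiting variant; all names and the toy problem are illustrative.

```python
import numpy as np

def top_g_dmo(grad, g, radius):
    """Inexact DMO stand-in: exact over {v : ||v||_0 <= g, ||v||_2 <= radius},
    ignoring the graph constraints of the actual support set."""
    v = np.zeros_like(grad)
    idx = np.argsort(-np.abs(grad))[:g]            # g largest-|gradient| coordinates
    v[idx] = -radius * grad[idx] / (np.linalg.norm(grad[idx]) + 1e-12)
    return v

def backtracking_fw(f, grad_f, x0, g, radius, max_iter=200, beta=0.5, tol=1e-6):
    """Approximate Frank-Wolfe with an Armijo-style backtracking line search."""
    x = x0.copy()
    for _ in range(max_iter):
        grad = grad_f(x)
        v = top_g_dmo(grad, g, radius)
        d = v - x                                  # FW direction
        gap = -grad @ d                            # (approximate) FW gap
        if gap < tol:
            break
        eta = 1.0                                  # shrink until sufficient decrease
        while f(x + eta * d) > f(x) - 0.5 * eta * gap and eta > 1e-12:
            eta *= beta
        x = x + eta * d
    return x

# Toy usage: sparse least squares
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 20)), rng.normal(size=50)
x_hat = backtracking_fw(lambda x: 0.5 * np.sum((A @ x - b) ** 2),
                        lambda x: A.T @ (A @ x - b),
                        np.zeros(20), g=5, radius=2.0)
```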
Related papers
- Indexed Minimum Empirical Divergence-Based Algorithms for Linear Bandits [55.938644481736446]
Indexed Minimum Empirical Divergence (IMED) is a highly effective approach to the multi-armed bandit problem.
It has been observed to empirically outperform UCB-based algorithms and Thompson Sampling.
We present novel linear versions of the IMED algorithm, which we call the family of LinIMED algorithms.
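For intuition, the scalar (non-linear-bandit) IMED rule plays the arm minimizing N_a * KL(mu_a, mu_max) + log N_a. A minimal sketch for Bernoulli rewards, i.e. the textbook version rather than the LinIMED family introduced in the paper:

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def imed_choose(counts, means):
    """IMED: play argmin_a N_a * KL(mu_a, mu_max) + log N_a.
    Assumes every arm has been pulled at least once (counts >= 1)."""
    index = counts * bernoulli_kl(means, means.max()) + np.log(counts)
    return int(np.argmin(index))
```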
arXiv Detail & Related papers (2024-05-24T04:11:58Z)
- Projected Forward Gradient-Guided Frank-Wolfe Algorithm via Variance Reduction [0.0]
This paper aims to enhance the use of the Frank-Wolfe (FW) algorithm for training deep neural networks.
Like any gradient-based algorithm, FW suffers from high computational and memory costs when computing gradients for DNNs.
arXiv Detail & Related papers (2024-03-19T07:25:36Z)
- Improving sample efficiency of high dimensional Bayesian optimization with MCMC [7.241485121318798]
We propose a new method based on Markov Chain Monte Carlo to efficiently sample from an approximated posterior.
We show experimentally that both the Metropolis-Hastings and the Langevin Dynamics version of our algorithm outperform state-of-the-art methods in high-dimensional sequential optimization and reinforcement learning benchmarks.
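As a rough illustration of the sampling primitive involved, a textbook random-walk Metropolis-Hastings sampler targeting an unnormalized log-density `log_post`; the paper's BO-specific posterior approximation and its Langevin variant are not reproduced here.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples=1000, step=0.1, seed=0):
    """Random-walk MH targeting exp(log_post), up to normalization."""
    rng = np.random.default_rng(seed)
    x, lp = np.asarray(x0, dtype=float), log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.normal(size=x.shape)   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)
```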
arXiv Detail & Related papers (2024-01-05T05:56:42Z)
- Moreau Envelope ADMM for Decentralized Weakly Convex Optimization [55.2289666758254]
This paper proposes a proximal variant of the alternating direction method of multipliers (ADMM) for distributed optimization.
The results of our numerical experiments indicate that our method is faster and more robust than widely-used approaches.
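A centralized sketch of the general pattern: an ADMM iteration for min f(x) + g(z) s.t. x = z whose x-update is damped by an extra proximal term (gamma/2) * ||x - x_k||^2. This is a standard proximal ADMM template, not the paper's decentralized Moreau-envelope method; `prox_f` and `prox_g` are assumed given.

```python
import numpy as np

def proximal_admm(prox_f, prox_g, dim, rho=1.0, gamma=1.0, n_iter=100):
    """min f(x) + g(z) s.t. x = z, with (gamma/2)*||x - x_k||^2 damping the
    x-update. Convention: prox_f(v, t) = argmin_x f(x) + ||x - v||^2 / (2 t)."""
    x = z = u = np.zeros(dim)
    for _ in range(n_iter):
        # x-update: prox of f at a weighted average of the ADMM target and x_k
        v = (rho * (z - u) + gamma * x) / (rho + gamma)
        x = prox_f(v, 1.0 / (rho + gamma))
        z = prox_g(x + u, 1.0 / rho)      # z-update: standard prox step
        u = u + x - z                     # scaled dual ascent
    return x
```

With soft-thresholding as `prox_g`, for instance, this template recovers a lasso-type solver.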
arXiv Detail & Related papers (2023-08-31T14:16:30Z)
- Performance Evaluation and Comparison of a New Regression Algorithm [4.125187280299247]
We compare the performance of a newly proposed regression algorithm against four conventional machine learning algorithms.
The reader is free to replicate our results since we have provided the source code in a GitHub repository.
arXiv Detail & Related papers (2023-06-15T13:01:16Z)
- A Novel Framework for Improving the Breakdown Point of Robust Regression Algorithms [1.9594639581421422]
We present an effective framework for improving the breakdown point of robust regression algorithms.
We derive a consistent robust regression algorithm with iterative local search (CORALS).
arXiv Detail & Related papers (2023-05-20T15:59:33Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts.
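Concretely, one linearized step solves min_{y in C} <grad f(x), y> + g(y), keeping the nonsmooth g exact. A minimal sketch under the assumption C = [lo, hi]^n and g = lam * ||.||_1, where the subproblem separates per coordinate and each 1-D piece is piecewise linear (minimum at an endpoint or at 0):

```python
import numpy as np

def composite_linearized_step(x, grad, lam, lo, hi):
    """Solve min_{y in [lo,hi]^n} <grad, y> + lam * ||y||_1 coordinate-wise.

    Each 1-D problem c*y + lam*|y| is piecewise linear on [lo, hi], so the
    minimum is attained at lo, hi, or 0 (when 0 is feasible)."""
    candidates = [np.full_like(x, lo), np.full_like(x, hi)]
    if lo <= 0.0 <= hi:
        candidates.append(np.zeros_like(x))
    vals = [grad * c + lam * np.abs(c) for c in candidates]
    best = np.argmin(np.stack(vals), axis=0)
    return np.stack(candidates)[best, np.arange(x.size)]

def generalized_conditional_gradient(f_grad, x0, lam, lo=-1.0, hi=1.0, n_iter=100):
    x = x0.copy()
    for k in range(n_iter):
        y = composite_linearized_step(x, f_grad(x), lam, lo, hi)
        x = x + 2.0 / (k + 2.0) * (y - x)   # standard FW step size
    return x
```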
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
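The warm-start pattern itself is simple: re-solve the inner problem starting from the previous inner solution, so only a few inner steps are needed per outer iteration. A toy sketch that illustrates the amortization only; the implicit-differentiation terms of the hypergradient are dropped, so `outer_grad_x` here takes y as fixed.

```python
import numpy as np

def warm_start_bilevel(outer_grad_x, inner_grad_y, x0, y0,
                       n_outer=50, n_inner=5, lr_x=0.05, lr_y=0.1):
    """Warm-start pattern: the inner solve for y*(x) restarts from the
    previous y, so a handful of inner steps suffice per outer iteration."""
    x, y = x0.copy(), y0.copy()
    for _ in range(n_outer):
        for _ in range(n_inner):          # inexact inner solve, warm-started
            y -= lr_y * inner_grad_y(x, y)
        x -= lr_x * outer_grad_x(x, y)    # outer step at the approximate y*(x)
    return x, y
```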
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Converting ADMM to a Proximal Gradient for Convex Optimization Problems [4.56877715768796]
In sparse estimation, such as fused lasso and convex clustering, we apply either the proximal gradient method or the alternating direction method of multipliers (ADMM) to solve the problem.
This paper proposes a general method for converting the ADMM solution to the proximal gradient method, assuming that the constraints and objectives are strongly convex.
We show by numerical experiments that we can obtain a significant improvement in terms of efficiency.
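For reference, the proximal gradient baseline on the lasso, 0.5 * ||Ax - b||^2 + lam * ||x||_1, where the prox step is soft-thresholding; this is the textbook ISTA method the conversion targets, not the paper's conversion procedure itself.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient for the lasso: 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```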
arXiv Detail & Related papers (2021-04-22T07:41:12Z)
- Enhancing Balanced Graph Edge Partition with Effective Local Search [41.4257628524592]
Graph partition is a key component to achieve workload balance and reduce job completion time in parallel graph processing systems.
In this paper, we study local search algorithms for this problem to further improve the partition results from existing methods.
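In the same spirit, a deliberately simplified greedy local search: repeatedly move a single edge to the part that most reduces vertex replication, subject to a per-part load cap. The cost model and all names here are illustrative, not those of the paper.

```python
from collections import defaultdict

def local_search_edge_partition(edges, parts, k, cap):
    """Greedy local search over an edge partition: move one edge at a time
    to the part that most reduces vertex replication, keeping every part
    at most `cap` edges. `parts` maps edge index -> part id (in place)."""
    occ = defaultdict(lambda: defaultdict(int))    # vertex -> part -> #incident edges
    load = defaultdict(int)                        # part -> #edges
    for i, (u, v) in enumerate(edges):
        occ[u][parts[i]] += 1
        occ[v][parts[i]] += 1
        load[parts[i]] += 1

    def gain(i, p):                                # replication drop if edge i -> part p
        u, v = edges[i]
        old = parts[i]
        return sum((occ[w][old] == 1) - (occ[w][p] == 0) for w in (u, v))

    improved = True
    while improved:                                # each move strictly lowers replication
        improved = False
        for i, (u, v) in enumerate(edges):
            old = parts[i]
            moves = [p for p in range(k) if p != old and load[p] < cap]
            if not moves:
                continue
            best = max(moves, key=lambda p: gain(i, p))
            if gain(i, best) > 0:
                for w in (u, v):
                    occ[w][old] -= 1
                    occ[w][best] += 1
                load[old] -= 1
                load[best] += 1
                parts[i] = best
                improved = True
    return parts
```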
arXiv Detail & Related papers (2020-12-17T08:58:06Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
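The core primitive is a randomized two-point gradient estimate, g = d * (f(x + h*u) - f(x - h*u)) / (2h) * u for a random unit direction u, plugged into projected gradient descent. A minimal sketch with projection onto a Euclidean ball; the higher-order-smoothness kernel weighting from the paper is omitted.

```python
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto the ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def zero_order_pgd(f, x0, radius, n_iter=500, h=1e-3, lr=0.05, seed=0):
    """Projected gradient descent with a two-point zero-order gradient estimate."""
    rng = np.random.default_rng(seed)
    x, d = x0.copy(), x0.size
    for _ in range(n_iter):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                     # random unit direction
        g = d * (f(x + h * u) - f(x - h * u)) / (2 * h) * u
        x = project_ball(x - lr * g, radius)
    return x
```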
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.