A Double Inertial Forward-Backward Splitting Algorithm With Applications to Regression and Classification Problems
- URL: http://arxiv.org/abs/2505.03794v1
- Date: Thu, 01 May 2025 16:54:41 GMT
- Title: A Double Inertial Forward-Backward Splitting Algorithm With Applications to Regression and Classification Problems
- Authors: İrfan Işik, Ibrahim Karahan, Okan Erkaymaz
- Abstract summary: This paper presents an improved forward-backward splitting algorithm with two inertial parameters. It aims to find a point in a real Hilbert space at which the sum of a co-coercive operator and a maximal monotone operator vanishes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents an improved forward-backward splitting algorithm with two inertial parameters. It aims to find a point in a real Hilbert space at which the sum of a co-coercive operator and a maximal monotone operator vanishes. Under standard assumptions, the proposed algorithm is shown to converge weakly. We present extensive experimental results illustrating the behavior of the algorithm in comparison with existing algorithms from the literature on regression and data classification problems. These experiments suggest that the proposed algorithm yields superior outcomes when benchmarked against other relevant algorithms in the existing literature.
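The abstract does not spell out the update rule, so the following is only a minimal sketch of a forward-backward iteration with two inertial (extrapolation) parameters, applied to an l1-regularized regression stand-in. The operator roles match the abstract (the gradient of the smooth term is co-coercive; soft-thresholding is the resolvent of the maximal monotone subdifferential), but the specific scheme, the parameter names `theta` and `delta`, and the step-size choice are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Resolvent (proximal map) of t * the l1-norm subdifferential."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def double_inertial_fbs(M, b, mu, theta=0.3, delta=0.3, n_iter=500):
    """Illustrative double-inertial forward-backward splitting for
    min_x 0.5*||Mx - b||^2 + mu*||x||_1, where the forward operator is the
    gradient of the smooth term (co-coercive) and the backward step is
    soft-thresholding. Update rule and parameters are assumptions, not
    the authors' exact scheme."""
    L = np.linalg.norm(M, 2) ** 2            # Lipschitz constant of the gradient
    lam = 1.0 / L                            # step size in (0, 2/L)
    x_prev = x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        w = x + theta * (x - x_prev)         # first inertial extrapolation
        z = x + delta * (x - x_prev)         # second inertial extrapolation
        grad = M.T @ (M @ z - b)             # forward (explicit) step on the co-coercive operator
        x_prev, x = x, soft_threshold(w - lam * grad, lam * mu)  # backward (resolvent) step
    return x

rng = np.random.default_rng(0)
M, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
print(double_inertial_fbs(M, b, mu=0.1))
```

With `theta = delta` the two extrapolations coincide and the scheme reduces to the usual single-inertia iteration; keeping them separate lets the forward and backward steps be extrapolated differently, which is the usual motivation for double inertia.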
Related papers
- Geometry-Aware Approaches for Balancing Performance and Theoretical Guarantees in Linear Bandits [5.847084649531299]
Thompson sampling and Greedy demonstrate promising empirical performance, yet this contrasts with their pessimistic theoretical regret bounds.
We propose a new data-driven technique that tracks the geometric properties of the uncertainty ellipsoid around the main problem parameter.
This methodology enables us to formulate a data-driven frequentist regret bound, which incorporates the geometric information, for a broad class of base algorithms, including Greedy, OFUL, and Thompson sampling.
arXiv Detail & Related papers (2023-06-26T17:38:45Z) - Bounded Projection Matrix Approximation with Applications to Community Detection [1.8876415010297891]
We introduce a new differentiable convex penalty and derive an alternating direction method of multipliers (ADMM) algorithm.
Numerical experiments demonstrate the superiority of our algorithm over its competitors.
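The entry does not give the penalty or the splitting, so below is only the textbook ADMM template, instantiated on an l1-regularized least-squares stand-in; the bounded projection-matrix formulation of the cited paper would replace the objective and the proximal steps.

```python
import numpy as np

def admm_lasso(M, b, mu, rho=1.0, n_iter=200):
    """Textbook ADMM on min 0.5*||Mx - b||^2 + mu*||z||_1 s.t. x = z.
    A generic stand-in, not the cited paper's formulation."""
    n = M.shape[1]
    z, u = np.zeros(n), np.zeros(n)               # splitting variable and scaled dual
    Q = np.linalg.inv(M.T @ M + rho * np.eye(n))  # cached factor for the x-update
    for _ in range(n_iter):
        x = Q @ (M.T @ b + rho * (z - u))         # x-update: ridge-type solve
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - mu / rho, 0.0)  # z-update: l1 prox
        u = u + x - z                             # dual update on the consensus constraint
    return z

rng = np.random.default_rng(0)
M, b = rng.standard_normal((40, 15)), rng.standard_normal(40)
print(admm_lasso(M, b, mu=0.1))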
arXiv Detail & Related papers (2023-05-21T06:55:10Z) - A Novel Framework for Improving the Breakdown Point of Robust Regression Algorithms [1.9594639581421422]
We present an effective framework for improving the breakdown point of robust regression algorithms.
We derive a consistent robust regression algorithm with iterative local search (CORALS)
arXiv Detail & Related papers (2023-05-20T15:59:33Z) - Multivariate Systemic Risk Measures and Computation by Deep Learning Algorithms [63.03966552670014]
We discuss the key related theoretical aspects, with a particular focus on the fairness properties of primal optima and associated risk allocations.
The algorithms we provide allow for learning primals, optima for the dual representation and corresponding fair risk allocations.
arXiv Detail & Related papers (2023-02-02T22:16:49Z) - GBOSE: Generalized Bandit Orthogonalized Semiparametric Estimation [3.441021278275805]
We propose a new algorithm with a semi-parametric reward model and a state-of-the-art upper bound on regret.
Our work expands the scope of another representative algorithm with a similar reward model and state-of-the-art regret complexity by proposing an algorithm built upon the same action filtering procedures.
We derive the stated upper bound on regret and present simulation results that affirm our method's superiority over all prevalent semi-parametric bandit algorithms in cases involving more than two arms.
arXiv Detail & Related papers (2023-01-20T19:39:10Z) - First-Order Algorithms for Nonlinear Generalized Nash Equilibrium Problems [88.58409977434269]
We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs).
Our contribution is to provide two simple first-order algorithmic frameworks based on the quadratic penalty method and the augmented Lagrangian method.
We provide nonasymptotic theoretical guarantees for these algorithms.
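The quadratic penalty method the entry refers to is classical; a minimal sketch on a toy equality-constrained problem follows. The inner solver, step size, and penalty schedule are illustrative assumptions with nothing game-theoretic about them; the cited paper adapts this template to equilibrium problems.

```python
import numpy as np

def quadratic_penalty(f_grad, c, c_jac, x0, rho0=1.0, rho_growth=2.0,
                      outer=8, inner=300, lr=3e-3):
    """Classical quadratic-penalty loop: approximately minimize
    f(x) + (rho/2)*||c(x)||^2 by gradient descent, then increase rho.
    Illustrative only, not the cited paper's scheme."""
    x, rho = np.asarray(x0, dtype=float), rho0
    for _ in range(outer):
        for _ in range(inner):
            g = f_grad(x) + rho * c_jac(x).T @ c(x)  # gradient of the penalized objective
            x = x - lr * g
        rho *= rho_growth                            # tighten the penalty
    return x

# Toy usage: min ||x||^2 subject to x0 + x1 = 1 (solution (0.5, 0.5)).
sol = quadratic_penalty(f_grad=lambda x: 2.0 * x,
                        c=lambda x: np.array([x[0] + x[1] - 1.0]),
                        c_jac=lambda x: np.array([[1.0, 1.0]]),
                        x0=np.zeros(2))
print(sol)  # approaches (0.5, 0.5) as rho grows
```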
arXiv Detail & Related papers (2022-04-07T00:11:05Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
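As an illustration of the warm-start idea, here is a minimal amortized implicit-differentiation loop on a toy bilevel problem with a quadratic lower level; the inner iterate `y` is carried across outer steps instead of being re-solved from scratch. The problem, names, and step sizes are assumptions, not the paper's setting.

```python
import numpy as np

def hypergrad_warm_start(W, y_target, x, y, inner_steps=10, inner_lr=0.5):
    """One outer step of an illustrative amortized implicit-differentiation scheme.
    Lower level: y*(x) = argmin_y 0.5*||y - Wx||^2, solved by a few warm-started
    gradient steps; the hypergradient then uses dy*/dx = W, exact for this toy
    quadratic. All names and step sizes are assumptions."""
    for _ in range(inner_steps):
        y = y - inner_lr * (y - W @ x)   # inner gradient step, warm-started at the previous y
    return W.T @ (y - y_target), y       # hypergradient estimate, updated inner state

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 3))
y_target = rng.standard_normal(5)
x, y = np.zeros(3), np.zeros(5)          # y persists across outer steps (the amortization)
outer_lr = 1.0 / np.linalg.norm(W, 2) ** 2
for _ in range(200):
    g, y = hypergrad_warm_start(W, y_target, x, y)
    x = x - outer_lr * g                 # outer step on F(x) = 0.5*||Wx - y_target||^2
print(np.linalg.norm(W @ x - y_target))  # settles at the least-squares residual
```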
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - Sharp global convergence guarantees for iterative nonconvex optimization: A Gaussian process perspective [30.524043513721168]
We develop a general recipe for analyzing the convergence of iterative algorithms for a class of regression models.
We accurately capture both the convergence rate of the algorithm and the eventual error floor in the finite-sample regime.
We show sharp convergence rates for both higher-order algorithms based on alternating updates and first-order algorithms based on subgradient descent.
arXiv Detail & Related papers (2021-09-20T21:48:19Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which improves on all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
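For reference, the classical accelerated-gradient recurrence those techniques build on is sketched below for a generic L-smooth objective; the paper's randomized message-passing variants are not reproduced here.

```python
import numpy as np

def nesterov(grad, x0, L, n_iter=100):
    """Textbook Nesterov/FISTA recurrence for an L-smooth convex objective;
    background for the cited acceleration techniques, not the paper's scheme."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iter):
        x_next = y - grad(y) / L                             # gradient step from the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)     # momentum extrapolation
        x, t = x_next, t_next
    return x

x_min = nesterov(grad=lambda x: 2.0 * (x - 3.0), x0=np.zeros(4), L=2.0)
print(x_min)  # converges to the minimizer (3, 3, 3, 3)
```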
arXiv Detail & Related papers (2020-07-01T18:43:32Z) - SONIA: A Symmetric Blockwise Truncated Optimization Algorithm [2.9923891863939938]
This work presents a new algorithm for empirical risk minimization.
The algorithm bridges the gap between first- and second-order search methods by computing a second-order search-type update in one subspace, coupled with a scaled steepest descent step in its orthogonal complement.
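A minimal sketch of that blockwise idea on a quadratic objective: a Newton-type step inside a low-dimensional subspace plus a scaled steepest-descent step in its orthogonal complement. The subspace choice (leading eigenvectors) and the complement scaling are assumptions for illustration, not SONIA's actual construction.

```python
import numpy as np

def blockwise_truncated_step(H, b, x, k=3, alpha=None):
    """One illustrative SONIA-style step on f(x) = 0.5*x'Hx - b'x: a Newton-type
    update inside a k-dimensional subspace, plus scaled steepest descent in its
    orthogonal complement. Subspace and scaling are assumptions for this sketch."""
    g = H @ x - b                                  # full gradient
    eigvals, eigvecs = np.linalg.eigh(H)
    U = eigvecs[:, -k:]                            # subspace: k leading curvature directions
    newton = U @ np.linalg.solve(U.T @ H @ U, U.T @ g)   # second-order step within span(U)
    g_perp = g - U @ (U.T @ g)                     # gradient component in the complement
    if alpha is None:
        alpha = 1.0 / eigvals[-k - 1] if H.shape[0] > k else 0.0  # inverse of largest remaining curvature
    return x - newton - alpha * g_perp

rng = np.random.default_rng(2)
Q = np.linalg.qr(rng.standard_normal((6, 6)))[0]
H = Q @ np.diag([100.0, 50.0, 20.0, 3.0, 2.0, 1.0]) @ Q.T   # SPD Hessian
b = rng.standard_normal(6)
x = np.zeros(6)
for _ in range(50):
    x = blockwise_truncated_step(H, b, x)
print(np.linalg.norm(H @ x - b))                   # residual shrinks toward zero
```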
arXiv Detail & Related papers (2020-06-06T19:28:14Z)