Using a New Nonlinear Gradient Method for Solving Large Scale Convex
Optimization Problems with an Application on Arabic Medical Text
- URL: http://arxiv.org/abs/2106.04383v2
- Date: Wed, 9 Jun 2021 12:27:53 GMT
- Title: Using a New Nonlinear Gradient Method for Solving Large Scale Convex
Optimization Problems with an Application on Arabic Medical Text
- Authors: Jaafar Hammoud and Ali Eisa and Natalia Dobrenko and Natalia Gusarova
- Abstract summary: We present a nonlinear gradient method for minimizing convex supra-quadratic functions.
Also presented is an application to the named entity recognition problem in Arabic medical text.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gradient methods have applications in multiple fields, including signal
processing, image processing, and dynamic systems. In this paper, we present a
nonlinear gradient method for minimizing convex supra-quadratic functions; the
search direction is developed by hybridizing the two conjugate coefficients
HRM [2] and NHS [1]. Numerical results demonstrate the effectiveness of the
presented method on standard test problems, where it reaches the exact solution
whenever the objective function is a convex quadratic. The article also presents
an application to the named entity recognition problem in Arabic medical text,
which demonstrates the stability of the proposed method and its efficiency in
terms of execution time.
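The abstract does not state the HRM [2] and NHS [1] formulas, so the sketch below only illustrates the overall structure of a hybrid nonlinear conjugate gradient method: the coefficient beta is a convex combination of two classical formulas (Hestenes-Stiefel and Polak-Ribiere) through a hypothetical weight `theta`, standing in for the paper's actual hybridization.

```python
import numpy as np

def hybrid_cg(f, grad, x0, tol=1e-8, max_iter=1000, theta=0.5):
    """Nonlinear conjugate gradient with a hybrid beta coefficient.

    The paper's HRM/NHS coefficients are not given in the abstract, so beta
    here blends the classical Hestenes-Stiefel and Polak-Ribiere formulas
    through the hypothetical weight `theta` (a stand-in, not the authors' rule).
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search with an Armijo sufficient-decrease test.
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d) and alpha > 1e-12:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta_hs = (g_new @ y) / (d @ y + 1e-16)             # Hestenes-Stiefel
        beta_pr = max((g_new @ y) / (g @ g + 1e-16), 0.0)   # Polak-Ribiere (clipped)
        beta = theta * beta_hs + (1.0 - theta) * beta_pr    # hybrid coefficient
        d = -g_new + beta * d                               # new search direction
        if g_new @ d >= 0:                                  # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: a strictly convex quadratic, for which the method should recover
# the exact minimizer (the solution of A x = b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
obj = lambda x: 0.5 * x @ A @ x - b @ x
gradient = lambda x: A @ x - b
print(hybrid_cg(obj, gradient, np.zeros(2)), np.linalg.solve(A, b))
```

On a strictly convex quadratic, as in this toy example, conjugate-gradient-type directions recover the exact minimizer, which is the behavior the abstract reports for the quadratic convex case.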
Related papers
- Neural Implicit Solution Formula for Efficiently Solving Hamilton-Jacobi Equations [0.0]
An implicit solution formula is presented for the Hamilton-Jacobi partial differential equation (HJ PDE).
A deep learning-based methodology is proposed to learn this implicit solution formula.
An algorithm is developed that approximates the characteristic curves for state-dependent Hamiltonians.
arXiv Detail & Related papers (2025-01-31T17:56:09Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
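As context for the entry above, the deep equilibrium forward pass amounts to solving a fixed-point equation z = f(z, x). The snippet below is only a generic illustration of that computation, with a hand-made contractive map in place of the paper's learnable regularizer network; the function name `fixed_point_solve` and the toy map are assumptions for illustration.

```python
import numpy as np

def fixed_point_solve(f, x, z0, tol=1e-6, max_iter=200):
    """Solve z = f(z, x) by plain fixed-point iteration (a generic DEQ-style
    forward pass; the paper's learned regularizer is replaced by `f`)."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:   # stop once the iterate stabilizes
            return z_next
        z = z_next
    return z

# Toy usage: a contractive affine map, so the iteration provably converges.
W = 0.3 * np.eye(4)
layer = lambda z, x: W @ z + x
x = np.ones(4)
z_star = fixed_point_solve(layer, x, np.zeros(4))
print(np.allclose(z_star, layer(z_star, x)))   # True: z_star is (numerically) a fixed point
```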
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - An Accelerated Doubly Stochastic Gradient Method with Faster Explicit
Model Identification [97.28167655721766]
We propose a novel doubly accelerated gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
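For orientation, the entry above targets sparsity-regularized loss minimization. The sketch below is a plain proximal-gradient (ISTA) baseline for that problem class, not the accelerated doubly stochastic ADSGD method itself; the problem sizes and the regularization weight `lam` are arbitrary.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, n_iter=500):
    """Plain proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Baseline only; not the accelerated doubly stochastic ADSGD method."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                         # gradient of the smooth part
        x = soft_threshold(x - step * g, step * lam)  # proximal (shrinkage) step
    return x

# Usage: noiseless measurements of a sparse vector, step size below 1/L.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
L = np.linalg.norm(A, 2) ** 2                         # Lipschitz constant of the gradient
x_hat = ista(A, b, lam=0.1, step=1.0 / L)
print(np.round(x_hat[:5], 2))                         # sparse estimate close to x_true
```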
arXiv Detail & Related papers (2022-08-11T22:27:22Z) - A conditional gradient homotopy method with applications to Semidefinite Programming [1.3332839594069592]
A homotopy-based conditional gradient method is presented for solving convex optimization problems with a large number of simple conic constraints.
Our theoretical complexity is competitive with that of state-of-the-art SDP solvers, with the decisive advantage of cheap projection-free steps.
arXiv Detail & Related papers (2022-07-07T05:48:27Z) - An Operator-Splitting Method for the Gaussian Curvature Regularization
Model with Applications in Surface Smoothing and Imaging [6.860238280163609]
We propose an operator-splitting method for a general Gaussian curvature model.
The proposed method is not sensitive to the choice of parameters, and its efficiency and performance are demonstrated.
arXiv Detail & Related papers (2021-08-04T08:59:41Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
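The entry above characterizes the error of temporal-difference methods for policy evaluation with linear function approximation. Below is a minimal textbook TD(0) sketch of that setting; the function `td0_linear`, the feature map, and the toy chain are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def td0_linear(transitions, phi, gamma=0.9, alpha=0.05, n_sweeps=200):
    """TD(0) policy evaluation with a linear value function V(s) = phi(s) @ w.

    `transitions` holds observed (s, r, s') tuples under a fixed policy."""
    w = np.zeros(len(phi(transitions[0][0])))
    for _ in range(n_sweeps):
        for s, r, s_next in transitions:
            td_error = r + gamma * (phi(s_next) @ w) - phi(s) @ w
            w += alpha * td_error * phi(s)   # stochastic update toward the projected fixed point
    return w

# Toy 2-state chain with one-hot features: reward 1 on the 0 -> 1 transition.
phi = lambda s: np.eye(2)[s]
transitions = [(0, 1.0, 1), (1, 0.0, 0)]
print(td0_linear(transitions, phi))          # approximate state values
```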
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
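For reference, the entry above builds on the basic conditional gradient (Frank-Wolfe) step, sketched below over the probability simplex; the variance reduction, smoothing, and constraint subsampling described in the entry are not reproduced, and the step-size rule shown is the standard open-loop choice rather than anything specific to the paper.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=200):
    """Vanilla conditional gradient (Frank-Wolfe) over the probability simplex.

    Each iteration calls the linear minimization oracle, which over the simplex
    just selects the coordinate with the smallest partial derivative."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0              # linear minimization oracle (a vertex)
        gamma = 2.0 / (k + 2.0)            # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s  # convex combination keeps x feasible
    return x

# Usage: project a point y onto the simplex by minimizing 0.5*||x - y||^2.
y = np.array([0.2, 1.5, -0.3, 0.6])
x_star = frank_wolfe_simplex(lambda x: x - y, np.full(4, 0.25))
print(np.round(x_star, 3), x_star.sum())   # feasible point whose entries sum to 1
```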
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Conditional Gradient Methods for Convex Optimization with General Affine
and Nonlinear Constraints [8.643249539674612]
This paper presents new conditional gradient methods for solving convex optimization problems with general affine and nonlinear constraints.
We first present a new constraint extrapolated conditional gradient (CoexCG) method that can achieve an $\mathcal{O}(1/\epsilon^2)$ iteration complexity for both smooth and structured nonsmooth function constrained convex optimization.
We further develop novel variants of CoexCG, namely constraint extrapolated and dual regularized conditional gradient (CoexDurCG) methods, that can achieve similar iteration complexity to CoexCG but allow adaptive selection for algorithmic parameters.
arXiv Detail & Related papers (2020-06-30T23:49:38Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to problems in which one of the variables is subject to a sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
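The CoGD entry above concerns bilinear problems with a sparsity constraint on one variable. The sketch below is a plain simultaneous gradient descent baseline on such a problem, with a soft-thresholding step enforcing sparsity; it does not implement the authors' coupled (cogradient) update rule, and the rank, step size, and penalty weight are arbitrary.

```python
import numpy as np

def bilinear_descent(M, rank, lam=0.1, step=0.01, n_iter=500, seed=0):
    """Simultaneous gradient descent on 0.5*||M - U V^T||_F^2 + lam*||V||_1.

    Both factors are updated in the same step, and a soft-thresholding
    (proximal) step keeps V sparse. Baseline only; not the CoGD rule."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(n_iter):
        R = U @ V.T - M                    # shared residual
        U_new = U - step * (R @ V)         # gradient step in U
        V_new = V - step * (R.T @ U)       # gradient step in V (same residual)
        V_new = np.sign(V_new) * np.maximum(np.abs(V_new) - step * lam, 0.0)
        U, V = U_new, V_new
    return U, V

# Toy usage: a rank-2 matrix whose right factor is sparse.
rng = np.random.default_rng(1)
right = np.vstack([np.eye(2), np.zeros((18, 2))])          # 20 x 2, mostly zero
M = rng.standard_normal((30, 2)) @ right.T
U, V = bilinear_descent(M, rank=2)
print(np.linalg.norm(M - U @ V.T) / np.linalg.norm(M))     # relative reconstruction error
```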