Optimizing Evaluation Metrics for Multi-Task Learning via the
Alternating Direction Method of Multipliers
- URL: http://arxiv.org/abs/2210.05935v1
- Date: Wed, 12 Oct 2022 05:46:00 GMT
- Title: Optimizing Evaluation Metrics for Multi-Task Learning via the
Alternating Direction Method of Multipliers
- Authors: Ge-Yang Ke, Yan Pan, Jian Yin, Chang-Qin Huang
- Abstract summary: Multi-task learning (MTL) aims to improve the generalization performance of multiple tasks by exploiting the shared factors among them.
Most existing MTL methods try to minimize either the misclassification error for classification or the mean squared error for regression.
We propose a method to directly optimize the evaluation metrics for a large family of MTL problems.
- Score: 12.227732834969336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-task learning (MTL) aims to improve the generalization performance of
multiple tasks by exploiting the shared factors among them. Various metrics
(e.g., F-score, Area Under the ROC Curve) are used to evaluate the performance
of MTL methods. Most existing MTL methods try to minimize either the
misclassification error for classification or the mean squared error for
regression. In this paper, we propose a method to directly optimize the
evaluation metrics for a large family of MTL problems. The formulation of MTL
that directly optimizes evaluation metrics is the combination of two parts: (1)
a regularizer defined on the weight matrix over all tasks, in order to capture
the relatedness of these tasks; (2) a sum of multiple structured hinge losses,
each corresponding to a surrogate of some evaluation metric on one task. This
formulation is challenging to optimize because both of its parts are
non-smooth. To tackle this issue, we propose a novel optimization procedure
based on the alternating direction method of multipliers (ADMM), where we decompose
the whole optimization problem into a sub-problem corresponding to the
regularizer and another sub-problem corresponding to the structured hinge
losses. For a large family of MTL problems, the first sub-problem has
closed-form solutions. To solve the second sub-problem, we propose an efficient
primal-dual algorithm via coordinate ascent. Extensive evaluation results
demonstrate that, in a large family of MTL problems, the proposed MTL method of
directly optimizing evaluation metrics achieves superior performance gains
over the corresponding baseline methods.
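In schematic form, the objective above is min_W lambda * Omega(W) + sum_t loss_t(w_t),
with both the regularizer Omega and the metric surrogates loss_t non-smooth. As a
hedged illustration of the two-sub-problem ADMM splitting (not the authors'
implementation), the Python sketch below assumes a trace-norm regularizer, one
choice whose proximal step has a closed form; the hinge-loss sub-problem, which
the paper solves with a primal-dual coordinate-ascent routine, is left as a
caller-supplied placeholder. All names and parameter values are illustrative
assumptions.

import numpy as np

def svt(M, tau):
    # Singular value thresholding: the closed-form proximal step for the
    # trace-norm regularizer tau * ||M||_*, one regularizer for which the
    # first ADMM sub-problem is solvable in closed form.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def admm_mtl(loss_prox, d, T, lam=1.0, rho=1.0, iters=100):
    # Generic scaled ADMM for  min_W  lam * ||W||_* + sum_t loss_t(w_t),
    # splitting via an auxiliary copy Z of the weight matrix (W = Z).
    # loss_prox(V, rho) is the caller-supplied proximal map of the structured
    # hinge losses; in the paper this sub-problem is handled by a primal-dual
    # coordinate-ascent algorithm, so it is only a placeholder here.
    W = np.zeros((d, T))   # shared weight matrix, one column per task
    Z = np.zeros((d, T))   # auxiliary copy handled by the regularizer
    U = np.zeros((d, T))   # scaled dual variable for the constraint W = Z
    for _ in range(iters):
        W = loss_prox(Z - U, rho)    # loss sub-problem (non-smooth, per task)
        Z = svt(W + U, lam / rho)    # regularizer sub-problem (closed form)
        U = U + W - Z                # dual ascent on the consensus constraint
    return Z

# Hypothetical usage with a squared-loss stand-in for the hinge sub-problem;
# the prox of (1/2) * ||W - W0||_F^2 is a simple averaging step.
W0 = np.random.randn(5, 3)
shrink_to_W0 = lambda V, rho: (W0 + rho * V) / (1.0 + rho)
W_hat = admm_mtl(shrink_to_W0, d=5, T=3, lam=0.5, rho=1.0)

The point of the split is visible in the loop: the non-smoothness of the
regularizer is isolated in the Z-step and the non-smoothness of the metric
surrogates in the W-step, so neither sub-problem has to handle both at once.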
Related papers
- A First-Order Multi-Gradient Algorithm for Multi-Objective Bi-Level Optimization [7.097069899573992]
We study the Multi-Objective Bi-Level Optimization (MOBLO) problem.
Existing gradient-based MOBLO algorithms need to compute the Hessian matrix.
We propose an efficient first-order multi-gradient method for MOBLO, called FORUM.
arXiv Detail & Related papers (2024-01-17T15:03:37Z)
- Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs [65.42104819071444]
Multitask learning (MTL) leverages task-relatedness to enhance performance.
We employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices.
We propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least squares support vector machines (LSSVMs).
arXiv Detail & Related papers (2023-08-30T14:28:26Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Independent Component Alignment for Multi-Task Learning [2.5234156040689237]
In a multi-task learning (MTL) setting, a single model is trained to tackle a diverse set of tasks jointly.
We propose using the condition number of a linear system of per-task gradients as a stability criterion for MTL optimization (a minimal illustration of this criterion appears in the first sketch after this list).
We present Aligned-MTL, a novel MTL optimization approach based on the proposed criterion.
arXiv Detail & Related papers (2023-05-30T12:56:36Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Meta-learning based Alternating Minimization Algorithm for Non-convex Optimization [9.774392581946108]
We propose a novel solution for challenging non-convex problems of multiple variables.
Our proposed approach is able to achieve effective iterations in cases where other methods would typically fail.
arXiv Detail & Related papers (2020-09-09T10:45:00Z)
- Follow the bisector: a simple method for multi-objective optimization [65.83318707752385]
We consider optimization problems where multiple differentiable losses have to be minimized.
The presented method computes a descent direction in every iteration that guarantees an equal relative decrease of the objective functions (a minimal realization of this property appears in the second sketch after this list).
arXiv Detail & Related papers (2020-07-14T09:50:33Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
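Two of the entries above describe concrete numerical criteria; the sketches
below illustrate them. First, for the Aligned-MTL entry, a minimal sketch of a
gradient condition number as a stability measure. This only illustrates the
criterion as summarized above, not the Aligned-MTL algorithm itself; the
function name and example gradients are assumptions.

import numpy as np

def gradient_condition_number(grads):
    # Condition number of the stacked task-gradient matrix: values near 1
    # mean the per-task gradients are well-aligned and similarly scaled,
    # while large values signal an unstable joint update.
    G = np.stack(grads, axis=0)               # shape (num_tasks, num_params)
    s = np.linalg.svd(G, compute_uv=False)    # singular values, descending
    return s[0] / s[-1] if s[-1] > 0 else np.inf

# Hypothetical usage with two nearly aligned task gradients:
g1 = np.array([1.0, 0.5, -0.2])
g2 = np.array([0.9, 0.6, -0.1])
print(gradient_condition_number([g1, g2]))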
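Second, for the "Follow the bisector" entry: one simple way to realize an
equal-relative-decrease direction is to solve for the least-norm d with
<grad f_i, d> = -f_i for every objective, so each loss shrinks in proportion
to its current value to first order. This is a hedged sketch of the stated
property, not necessarily the paper's exact construction.

import numpy as np

def equal_relative_decrease_direction(grads, losses):
    # Least-norm d solving G d = -f: the directional derivative of each
    # objective equals minus its current value, so all losses decrease at
    # the same relative rate (to first order).
    G = np.stack(grads, axis=0)          # shape (num_objectives, num_params)
    f = np.asarray(losses, dtype=float)  # current (positive) loss values
    coeffs = np.linalg.lstsq(G @ G.T, f, rcond=None)[0]
    return -G.T @ coeffs

# Hypothetical usage: both printed relative rates come out as -1.0.
g1, g2 = np.array([1.0, 0.0]), np.array([0.6, 0.8])
d = equal_relative_decrease_direction([g1, g2], [2.0, 1.0])
print(g1 @ d / 2.0, g2 @ d / 1.0)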