Efficient Continuous Pareto Exploration in Multi-Task Learning
- URL: http://arxiv.org/abs/2006.16434v2
- Date: Wed, 26 Aug 2020 20:48:16 GMT
- Title: Efficient Continuous Pareto Exploration in Multi-Task Learning
- Authors: Pingchuan Ma, Tao Du, Wojciech Matusik
- Abstract summary: We present a novel, efficient method for continuous analysis of Pareto optimal solutions in machine learning problems.
We scale up theoretical results in multi-objective optimization to modern machine learning problems by proposing a sample-based sparse linear system.
- Score: 34.41682709915956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tasks in multi-task learning often correlate, conflict, or even compete with
each other. As a result, a single solution that is optimal for all tasks rarely
exists. Recent papers introduced the concept of Pareto optimality to this field
and directly cast multi-task learning as multi-objective optimization problems,
but solutions returned by existing methods are typically finite, sparse, and
discrete. We present a novel, efficient method that generates locally
continuous Pareto sets and Pareto fronts, which opens up the possibility of
continuous analysis of Pareto optimal solutions in machine learning problems.
We scale up theoretical results in multi-objective optimization to modern
machine learning problems by proposing a sample-based sparse linear system, for
which standard Hessian-free solvers in machine learning can be applied. We
compare our method to state-of-the-art algorithms and demonstrate its use in
analyzing local Pareto sets on various multi-task classification and
regression problems. The experimental results confirm that our algorithm
reveals the primary directions in local Pareto sets for trade-off balancing,
finds more solutions with different trade-offs efficiently, and scales well to
tasks with millions of parameters.
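As a rough illustration of the Hessian-free ingredient named in the abstract, the sketch below solves a linear system whose matrix is available only through Hessian-vector products, on a hypothetical two-objective toy problem. The objectives, the right-hand side, and all names are assumptions for illustration, not the authors' implementation.
```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

a, b_pt = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def grad_f1(x):  # gradient of f1(x) = ||x - a||^2 / 2
    return x - a

def grad_f2(x):  # gradient of f2(x) = ||x - b_pt||^2 / 2
    return x - b_pt

def hvp(x, v, alpha=0.5, eps=1e-6):
    """Hessian-vector product of alpha*f1 + (1-alpha)*f2 via central
    finite differences of gradients; no Hessian is ever formed."""
    g = lambda y: alpha * grad_f1(y) + (1.0 - alpha) * grad_f2(y)
    return (g(x + eps * v) - g(x - eps * v)) / (2.0 * eps)

x = np.array([0.5, 0.5])                   # a point on the toy Pareto set
H = LinearOperator((2, 2), matvec=lambda v: hvp(x, v), dtype=float)
rhs = grad_f2(x) - grad_f1(x)              # an illustrative right-hand side
v, info = minres(H, rhs)                   # Krylov solve using only HVPs
print(v, info)                             # info == 0 means it converged
```
Because the solver touches the Hessian only through matrix-vector products, the same pattern scales to models with millions of parameters, where forming the Hessian explicitly would be infeasible.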
Related papers
- Efficient Pareto Manifold Learning with Low-Rank Structure [31.082432589391953]
Multi-task learning is inherently a multi-objective optimization problem.
We propose a novel approach that integrates a main network with several low-rank matrices.
It significantly reduces the number of parameters and facilitates the extraction of shared features.
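A minimal sketch of one plausible reading of "a main network with several low-rank matrices", in the style of LoRA-like low-rank adapters; the dimensions and all names are hypothetical, not taken from the paper.
```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 32, 4

W0 = rng.standard_normal((d_out, d_in))        # shared main weights
A = 0.01 * rng.standard_normal((d_out, rank))  # low-rank factor
B = 0.01 * rng.standard_normal((rank, d_in))   # second low-rank factor

def low_rank_layer(x):
    # Effective weight W0 + A @ B: each extra matrix pair costs
    # (d_out + d_in) * rank parameters instead of d_out * d_in.
    return (W0 + A @ B) @ x

x = rng.standard_normal(d_in)
print(low_rank_layer(x).shape)                 # (64,)
```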
arXiv Detail & Related papers (2024-07-30T11:09:27Z) - UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
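A minimal sketch of UCB1 selection over a discrete pool of candidate weight vectors; the reward model and the candidate set are stand-in assumptions, not the paper's setup.
```python
import math, random

candidates = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]  # candidate weight vectors
counts = [0] * len(candidates)
totals = [0.0] * len(candidates)

def evaluate(w):
    # Stand-in for training with utility weights w and measuring return.
    return 1.0 - abs(w[0] - 0.3) + random.gauss(0, 0.05)

for t in range(1, 201):
    # UCB1: mean reward plus an exploration bonus; untried arms go first.
    ucb = [
        float("inf") if counts[i] == 0
        else totals[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
        for i in range(len(candidates))
    ]
    i = max(range(len(candidates)), key=lambda k: ucb[k])
    counts[i] += 1
    totals[i] += evaluate(candidates[i])

print(max(zip(counts, candidates)))  # most-pulled weight vector
```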
arXiv Detail & Related papers (2024-05-01T09:34:42Z) - Multi-Task Learning with Multi-Task Optimization [31.518330903602095]
We show that a set of optimized yet well-distributed models embodies different trade-offs in one algorithmic pass.
We investigate the proposed multi-task learning with multi-task optimization across various problem settings.
arXiv Detail & Related papers (2024-03-24T14:04:40Z) - Pareto Manifold Learning: Tackling multiple tasks via ensembles of
single-task models [50.33956216274694]
In Multi-Task Learning (MTL), tasks may compete and limit the performance achieved on each other, rather than guiding the optimization to a solution.
We propose Pareto Manifold Learning, an ensembling method in weight space.
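A minimal sketch of ensembling in weight space: interpolating two single-task solutions along a line segment, so each trade-off value picks out one model. The linear parameterization of the trade-off is an assumption for illustration.
```python
import numpy as np

theta_task1 = np.array([1.0, 0.0, 2.0])   # weights trained on task 1
theta_task2 = np.array([0.0, 1.0, -1.0])  # weights trained on task 2

def blended_weights(lam):
    """One point on a line segment in weight space, lam in [0, 1]."""
    return lam * theta_task1 + (1.0 - lam) * theta_task2

for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(lam, blended_weights(lam))
```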
arXiv Detail & Related papers (2022-10-18T11:20:54Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
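For reference, a proximal operator evaluated numerically on a simple objective; learned-proximal methods amortize this inner minimization with a network. The L1 objective and all names here are illustrative.
```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.abs(x).sum()  # f(x) = ||x||_1

def prox(v, lam=1.0):
    """prox_{lam * f}(v) = argmin_x f(x) + ||x - v||^2 / (2 * lam)."""
    obj = lambda x: f(x) + np.sum((x - v) ** 2) / (2.0 * lam)
    return minimize(obj, v).x  # numerical inner solve; result is approximate

v = np.array([2.0, -0.3, 0.5])
print(prox(v))                                        # ~[1.0, 0.0, 0.0]
print(np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0))  # closed-form check
```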
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - In Defense of the Unitary Scalarization for Deep Multi-Task Learning [121.76421174107463]
We present a theoretical analysis suggesting that many specialized multi-task optimizers can be interpreted as forms of regularization.
We show that, when coupled with standard regularization and stabilization techniques, unitary scalarization matches or improves upon the performance of complex multi-task optimizers.
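Unitary scalarization itself is just the unweighted sum of the task losses; the two quadratic task losses below are stand-ins for real task objectives.
```python
import numpy as np

a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # per-task optima

def summed_loss_grad(theta):
    # d/dtheta [ ||theta - a||^2 + ||theta - b||^2 ]: no task weights,
    # no gradient surgery, just the plain sum of task gradients.
    return 2.0 * (theta - a) + 2.0 * (theta - b)

theta = np.zeros(2)
for _ in range(100):                 # plain gradient descent on the sum
    theta -= 0.1 * summed_loss_grad(theta)
print(theta)                         # -> [0.5, 0.5], minimizer of the sum
```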
arXiv Detail & Related papers (2022-01-11T18:44:17Z) - Scalable Uni-directional Pareto Optimality for Multi-Task Learning with
Constraints [4.4044968357361745]
We propose a scalable solver for Multi-Objective Optimization (MOO) problems, including support for optimization under constraints.
An important application is estimating high-dimensional runtime and accuracy trade-offs for neural classification tasks.
arXiv Detail & Related papers (2021-10-28T21:35:59Z) - Small Towers Make Big Differences [59.243296878666285]
Multi-task learning aims at solving multiple machine learning tasks at the same time.
A good solution to a multi-task learning problem should be generalizable in addition to being Pareto optimal.
We propose a method of under-parameterized self-auxiliaries for multi-task models to achieve the best of both worlds.
arXiv Detail & Related papers (2020-08-13T10:45:31Z) - Pareto Multi-Task Learning [53.90732663046125]
Multi-task learning is a powerful method for solving multiple correlated tasks simultaneously.
It is often impossible to find a single solution that optimizes all tasks, since different tasks might conflict with each other.
Recently, a novel method was proposed to find a single Pareto optimal solution with a good trade-off among different tasks by casting multi-task learning as multi-objective optimization.
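A minimal sketch of the standard two-task min-norm (MGDA-style) common descent direction that underlies this multi-objective view; the closed form below is the well-known solution for two gradients, not code from the paper.
```python
import numpy as np

def min_norm_direction(g1, g2):
    """Return the minimum-norm point in the convex hull of {g1, g2}."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        return g1  # gradients coincide; either one is the direction
    alpha = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

g1 = np.array([1.0, 0.0])  # task 1 gradient
g2 = np.array([0.0, 1.0])  # task 2 gradient
d = min_norm_direction(g1, g2)
print(d)  # [0.5, 0.5]: stepping along -d decreases both task losses
```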
arXiv Detail & Related papers (2019-12-30T08:58:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.