Generalization In Multi-Objective Machine Learning
- URL: http://arxiv.org/abs/2208.13499v1
- Date: Mon, 29 Aug 2022 11:06:39 GMT
- Title: Generalization In Multi-Objective Machine Learning
- Authors: Peter Súkeník and Christoph H. Lampert
- Abstract summary: Multi-objective learning offers a natural framework for handling such problems without having to commit to early trade-offs.
Surprisingly, statistical learning theory so far offers almost no insight into the generalization properties of multi-objective learning.
- Score: 27.806085423595334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern machine learning tasks often require considering not just one but
multiple objectives. For example, besides the prediction quality, this could be
the efficiency, robustness or fairness of the learned models, or any of their
combinations. Multi-objective learning offers a natural framework for handling
such problems without having to commit to early trade-offs. Surprisingly,
statistical learning theory so far offers almost no insight into the
generalization properties of multi-objective learning. In this work, we make
first steps to fill this gap: we establish foundational generalization bounds
for the multi-objective setting as well as generalization and excess bounds for
learning with scalarizations. We also provide the first theoretical analysis of
the relation between the Pareto-optimal sets of the true objectives and the
Pareto-optimal sets of their empirical approximations from training data. In
particular, we show a surprising asymmetry: all Pareto-optimal solutions can be
approximated by empirically Pareto-optimal ones, but not vice versa.
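To fix notation for the concepts the abstract refers to, the block below sketches the standard multi-objective setup: vector-valued risk, Pareto dominance, linear scalarization, and the generic shape of a uniform-convergence statement. The symbols and the complexity term are illustrative placeholders, not the paper's actual bounds.

```latex
% Vector-valued risk of a hypothesis h under m loss functions:
\[
  R(h) = \bigl(R_1(h), \dots, R_m(h)\bigr), \qquad
  R_i(h) = \mathbb{E}_{z \sim D}\,[\,\ell_i(h, z)\,],
\]
% and its empirical counterpart on a training sample z_1, ..., z_n:
\[
  \hat{R}_i(h) = \frac{1}{n} \sum_{j=1}^{n} \ell_i(h, z_j).
\]
% h Pareto-dominates h' iff R_i(h) <= R_i(h') for all i, strictly for some i.
% Linear scalarization collapses the vector risk with weights from the simplex:
\[
  R_w(h) = \sum_{i=1}^{m} w_i\, R_i(h), \qquad w \in \Delta_m .
\]
% A uniform-convergence statement for this setting has the generic shape
\[
  \sup_{h \in \mathcal{H}} \; \max_{1 \le i \le m}
  \bigl| R_i(h) - \hat{R}_i(h) \bigr| \;\le\; \varepsilon(n, \mathcal{H}, \delta)
  \quad \text{with probability at least } 1 - \delta,
\]
% where the complexity term epsilon is what the paper's bounds quantify.
```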
Related papers
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
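The weight-space fusion idea above can be pictured with a minimal sketch. The following hypothetical PyTorch snippet convex-combines the parameters of single-task expert models according to a preference vector; the paper's actual MoE gating is more sophisticated, so treat this only as the basic building block it routes over.

```python
import torch
import torch.nn as nn

def fuse_weights(models: list[nn.Module], prefs: torch.Tensor) -> nn.Module:
    """Convex-combine the parameters of single-task expert models.

    prefs: non-negative weights summing to 1, one per expert.
    Returns a new model whose parameters are the weighted average,
    a simplified stand-in for MoE-based model fusion.
    """
    fused = type(models[0])()  # assumes a no-arg constructor (hypothetical)
    fused_state = fused.state_dict()
    expert_states = [m.state_dict() for m in models]
    for name in fused_state:
        fused_state[name] = sum(w * s[name] for w, s in zip(prefs, expert_states))
    fused.load_state_dict(fused_state)
    return fused

# Usage: sweep preferences to trace an approximate trade-off curve.
# experts = [model_task_a, model_task_b]
# for alpha in torch.linspace(0, 1, 11):
#     candidate = fuse_weights(experts, torch.tensor([alpha, 1 - alpha]))
```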
- Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
arXiv Detail & Related papers (2022-11-26T21:02:09Z)
- Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models [50.33956216274694]
In Multi-Task Learning (MTL), tasks may compete and limit the performance achieved on each other, rather than guiding the optimization to a solution.
We propose Pareto Manifold Learning, an ensembling method in weight space.
arXiv Detail & Related papers (2022-10-18T11:20:54Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
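The column-wise iterative scheme that frameworks like HyperImpute generalize can be sketched compactly. The snippet below (plain numpy/scikit-learn, not the paper's implementation; the per-column model is fixed here rather than automatically selected) fits one regressor per column on observed rows and refreshes the missing entries over several rounds.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def iterative_impute(X: np.ndarray, n_rounds: int = 5) -> np.ndarray:
    """Column-wise iterative imputation: a simplified sketch of the
    general scheme (real systems also auto-select the per-column model)."""
    X = X.copy()
    missing = np.isnan(X)
    # Initialize missing entries with column means.
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[missing[:, j], j] = col_means[j]
    for _ in range(n_rounds):
        for j in range(X.shape[1]):
            rows = missing[:, j]
            if not rows.any() or rows.all():
                continue  # nothing to impute, or no observed rows to fit on
            other = np.delete(np.arange(X.shape[1]), j)
            model = RandomForestRegressor(n_estimators=50, random_state=0)
            model.fit(X[~rows][:, other], X[~rows, j])    # train on observed rows
            X[rows, j] = model.predict(X[rows][:, other])  # refresh missing rows
    return X
```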
- Machine Learning for Combinatorial Optimisation of Partially-Specified Problems: Regret Minimisation as a Unifying Lens [34.87439325210671]
It is increasingly common to solve optimisation problems that are partially-specified.
The challenge is to learn the unspecified components from available data, while taking into account a set of hard constraints.
This paper overviews four seemingly unrelated approaches that can each be viewed as learning the objective function of a hard optimisation problem.
arXiv Detail & Related papers (2022-05-20T13:06:29Z)
- Masked prediction tasks: a parameter identifiability view [49.533046139235466]
We focus on the widely used self-supervised learning method of predicting masked tokens.
We show that there is a rich landscape of possibilities, out of which some prediction tasks yield identifiability, while others do not.
arXiv Detail & Related papers (2022-02-18T17:09:32Z)
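As a concrete instance of the masked-prediction objective being analyzed, here is a self-contained toy sketch (hypothetical setup, plain numpy): one coordinate of each sample is hidden, and a linear model is fit to reconstruct it from the rest, the simple regime where identifiability questions are sharpest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: rows of X are d-dimensional feature vectors standing in for tokens.
n, d = 512, 8
X = rng.normal(size=(n, d))
X[:, -1] = X[:, :-1] @ rng.normal(size=d - 1)  # last feature depends on the rest

# Masked prediction task: hide one coordinate per row, predict it from the others.
mask_idx = d - 1
inputs, targets = np.delete(X, mask_idx, axis=1), X[:, mask_idx]

# Linear predictor fit by least squares; since the target is exactly linear
# in the visible features, the parameter is recoverable and the MSE is ~0.
W, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
print("reconstruction MSE:", np.mean((inputs @ W - targets) ** 2))
```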
- Pareto Navigation Gradient Descent: a First-Order Algorithm for Optimization in Pareto Set [17.617944390196286]
Modern machine learning applications, such as multi-task learning, require finding optimal model parameters to trade-off multiple objective functions.
We propose a first-order algorithm that approximately solves OPT-in-Pareto (optimizing an extra criterion within the Pareto set) using only gradient information.
arXiv Detail & Related papers (2021-10-17T04:07:04Z)
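The flavor of first-order methods that move toward or within the Pareto set can be illustrated with the classic min-norm gradient combination (MGDA-style). The sketch below is that generic common-descent step for two objectives, not the paper's Pareto Navigation rule itself.

```python
import numpy as np

def min_norm_combination(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Min-norm point in the convex hull of two gradients (MGDA-style).

    Stepping along the negative of this vector decreases both objectives
    whenever it is nonzero; a (near-)zero result signals Pareto stationarity.
    A generic building block, not the paper's specific navigation rule.
    """
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        return g1  # identical gradients: any convex combination works
    # Minimize ||a*g1 + (1-a)*g2||^2 over a in [0, 1]: closed form, then clip.
    a = float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return a * g1 + (1.0 - a) * g2

# Usage inside a training loop (theta, grad_loss1, grad_loss2 are hypothetical):
# d = min_norm_combination(grad_loss1(theta), grad_loss2(theta))
# theta -= lr * d
```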
- Multi-Objective Learning to Predict Pareto Fronts Using Hypervolume Maximization [0.0]
Real-world problems are often multi-objective with decision-makers unable to specify a priori which trade-off between the conflicting objectives is preferable.
We propose a novel learning approach to estimate the Pareto front by maximizing the dominated hypervolume (HV) of the average loss vectors corresponding to a set of learners.
In our approach, the set of learners are trained multi-objectively with a dynamic loss function, wherein each learner's losses are weighted by their HV maximizing gradients.
Experiments on three different multi-objective tasks show that the outputs of the set of learners are indeed well-spread on the Pareto front.
arXiv Detail & Related papers (2021-02-08T20:41:21Z)
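Dominated hypervolume, the quantity maximized above, has a simple exact form for two objectives: sort the loss vectors by one objective and sum the rectangles they carve out below a reference point. A minimal numpy sketch (hypothetical helper, minimization convention) follows.

```python
import numpy as np

def hypervolume_2d(points: np.ndarray, ref: np.ndarray) -> float:
    """Dominated hypervolume of 2-D loss vectors under minimization.

    `points` has shape (k, 2); `ref` is a reference point that every
    counted point must dominate (be component-wise below). A sketch for
    two objectives only; exact HV in higher dimensions is harder.
    """
    pts = points[np.all(points < ref, axis=1)]  # drop points outside the box
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]            # sort by first objective
    hv, best_y = 0.0, ref[1]
    for x, y in pts:
        if y < best_y:                          # non-dominated so far
            hv += (ref[0] - x) * (best_y - y)   # rectangle up to the reference
            best_y = y
    return hv

# Example: two trade-off points under reference (1, 1) -> 0.47.
# hypervolume_2d(np.array([[0.2, 0.6], [0.5, 0.3]]), np.array([1.0, 1.0]))
```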
- Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single-policy MORL, which learns an optimal policy given a preference over the objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm called model-based envelop value iteration (EVI), which generalizes the enveloped multi-objective Q-learning algorithm.
arXiv Detail & Related papers (2020-11-19T22:35:31Z)
- Efficient Continuous Pareto Exploration in Multi-Task Learning [34.41682709915956]
We present a novel, efficient method for continuous analysis of Pareto-optimal solutions in machine learning problems.
We scale up theoretical results in multi-objective optimization to modern machine learning problems by proposing a sample-based sparse linear system.
arXiv Detail & Related papers (2020-06-29T23:36:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.