Exact Pareto Optimal Search for Multi-Task Learning and Multi-Criteria
Decision-Making
- URL: http://arxiv.org/abs/2108.00597v2
- Date: Sun, 17 Sep 2023 13:29:55 GMT
- Title: Exact Pareto Optimal Search for Multi-Task Learning and Multi-Criteria
Decision-Making
- Authors: Debabrata Mahapatra, Vaibhav Rajan
- Abstract summary: We show that EPO Search converges to an EPO solution at a linear rate of convergence.
We develop new algorithms: PESA-EPO for approximating the PF in a posteriori MCDM, and GP-EPO for preference elicitation in interactive MCDM.
EPO Search scales linearly with the number of decision variables, enabling its use for training deep networks (e.g., for e-commerce applications).
- Score: 10.914300987810128
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given multiple non-convex objective functions and objective-specific weights,
Chebyshev scalarization (CS) is a well-known approach to obtain an Exact Pareto
Optimal (EPO), i.e., a solution on the Pareto front (PF) that intersects the
ray defined by the inverse of the weights. First-order optimizers that use the
CS formulation to find EPO solutions encounter practical problems of
oscillations and stagnation that affect convergence. Moreover, when initialized
with a PO solution, they do not guarantee a controlled trajectory that lies
completely on the PF. These shortcomings lead to modeling limitations and
computational inefficiency in multi-task learning (MTL) and multi-criteria
decision-making (MCDM) methods that utilize CS for their underlying non-convex
multi-objective optimization (MOO). To address these shortcomings, we design a
new MOO method, EPO Search. We prove that EPO Search converges to an EPO
solution and empirically illustrate its computational efficiency and robustness
to initialization. When initialized on the PF, EPO Search can trace the PF and
converge to the required EPO solution at a linear rate of convergence. Using
EPO Search we develop new algorithms: PESA-EPO for approximating the PF in a
posteriori MCDM, and GP-EPO for preference elicitation in interactive MCDM;
experiments on benchmark datasets confirm their advantages over competing
alternatives. EPO Search scales linearly with the number of decision variables,
which enables its use for training deep networks. Empirical results on real
data from personalized medicine, e-commerce and hydrometeorology demonstrate
the efficacy of EPO Search for deep MTL.
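For intuition, the sketch below (assuming PyTorch; it is not the authors' EPO Search code, and the function names are illustrative) runs a plain first-order optimizer on the Chebyshev scalarization max_j r_j f_j(x) for a toy two-objective problem. At the EPO solution for preference weights r, the objective vector lies on the ray defined by the inverse weights, so the products r_j f_j are (approximately) equal across objectives.
```python
# Minimal, illustrative sketch (not the paper's EPO Search implementation),
# assuming PyTorch. Names such as `objectives` and `chebyshev_scalarization`
# are hypothetical helpers for this toy example.
import torch

def objectives(x):
    # Two toy objectives sharing the decision variable x.
    f1 = ((x - 1.0) ** 2).sum()
    f2 = ((x + 1.0) ** 2).sum()
    return torch.stack([f1, f2])

def chebyshev_scalarization(f_vals, r):
    # CS objective: the worst weighted objective, max_j r_j * f_j(x).
    # The non-smooth max (the "active" objective can switch between steps)
    # is one commonly cited source of the oscillation/stagnation noted above.
    return torch.max(r * f_vals)

r = torch.tensor([0.3, 0.7])            # objective-specific preference weights
x = torch.zeros(2, requires_grad=True)  # decision variables
opt = torch.optim.SGD([x], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    loss = chebyshev_scalarization(objectives(x), r)
    loss.backward()
    opt.step()

# At an exact Pareto optimal point for preference r, the objective vector lies
# on the ray defined by the inverse weights, i.e. r_1*f_1 ≈ r_2*f_2 here.
print(objectives(x).detach(), (r * objectives(x)).detach())
```
This plain (sub)gradient descent on the non-smooth max is exactly the kind of first-order CS optimizer whose oscillation and stagnation issues motivate EPO Search.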
Related papers
- Enhancing LLM Reasoning with Iterative DPO: A Comprehensive Empirical Investigation [29.579349371114702]
Direct Preference Optimization (DPO) is a cost-effective alternative to reinforcement learning (RL) for large language models (LLMs).
We show that a single round of DPO with coarse filtering significantly enhances mathematical reasoning performance.
With simple verifiable rewards, our model achieves RL-level performance with significantly lower computational overhead.
arXiv Detail & Related papers (2025-03-17T06:28:25Z)
- A Survey of Direct Preference Optimization [103.59317151002693]
Large Language Models (LLMs) have demonstrated unprecedented generative capabilities.
Their alignment with human values remains critical for ensuring helpful and harmless deployments.
Direct Preference Optimization (DPO) has recently gained prominence as a streamlined alternative to RLHF-based alignment.
arXiv Detail & Related papers (2025-03-12T08:45:15Z)
- A Simple and Effective Reinforcement Learning Method for Text-to-Image Diffusion Fine-tuning [61.403275660120606]
Reinforcement learning (RL)-based fine-tuning has emerged as a powerful approach for aligning diffusion models with black-box objectives.
We propose leave-one-out PPO (LOOP), a novel RL method for diffusion fine-tuning.
Our results demonstrate that LOOP effectively improves diffusion models on various black-box objectives, and achieves a better balance between computational efficiency and performance.
arXiv Detail & Related papers (2025-03-02T13:43:53Z)
- Efficient and Scalable Deep Reinforcement Learning for Mean Field Control Games [16.62770187749295]
Mean Field Control Games (MFCGs) provide a powerful theoretical framework for analyzing systems of infinitely many interacting agents.
This paper presents a scalable deep Reinforcement Learning (RL) approach to approximate equilibrium solutions of MFCGs.
arXiv Detail & Related papers (2024-12-28T02:04:53Z)
- MPPO: Multi Pair-wise Preference Optimization for LLMs with Arbitrary Negative Samples [22.521746860874305]
This study introduces the MPPO algorithm, which leverages the average likelihood of model responses to fit the reward function.
Through a comparison of Point-wise, Pair-wise, and List-wise implementations, we found that the Pair-wise approach achieves the best performance.
Experimental results demonstrate MPPO's outstanding performance across various benchmarks.
arXiv Detail & Related papers (2024-12-13T14:18:58Z)
- Hierarchical Preference Optimization: Learning to achieve goals via feasible subgoals prediction [71.81851971324187]
This work introduces Hierarchical Preference Optimization (HPO), a novel approach to hierarchical reinforcement learning (HRL).
HPO addresses non-stationarity and infeasible subgoal generation issues when solving complex robotic control tasks.
Experiments on challenging robotic navigation and manipulation tasks demonstrate impressive performance of HPO, where it shows an improvement of up to 35% over the baselines.
arXiv Detail & Related papers (2024-11-01T04:58:40Z)
- LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning [56.273799410256075]
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z)
- Efficient Learning of POMDPs with Known Observation Model in Average-Reward Setting [56.92178753201331]
We propose the Observation-Aware Spectral (OAS) estimation technique, which enables the POMDP parameters to be learned from samples collected using a belief-based policy.
We show the consistency of the OAS procedure, and we prove a regret guarantee of order $\mathcal{O}(\sqrt{T \log(T)})$ for the proposed OAS-UCRL algorithm.
arXiv Detail & Related papers (2024-10-02T08:46:34Z)
- The Hitchhiker's Guide to Human Alignment with *PO [43.4130314879284]
We focus on identifying the algorithm that, while being performant, is simultaneously more robust to varying hyperparameters.
Our analysis reveals that the widely adopted DPO method consistently produces lengthy responses of inferior quality.
Motivated by these findings, we propose an embarrassingly simple extension to the DPO algorithm, LN-DPO, resulting in more concise responses without sacrificing quality.
arXiv Detail & Related papers (2024-07-21T17:35:20Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- An Efficient Approach for Solving Expensive Constrained Multiobjective Optimization Problems [0.0]
An efficient probabilistic selection based constrained multi-objective EA is proposed, referred to as PSCMOEA.
It comprises novel elements such as (a) an adaptive search bound identification scheme based on the feasibility and convergence status of evaluated solutions.
Numerical experiments are conducted on an extensive range of challenging constrained problems using low evaluation budgets to simulate ECMOPs.
arXiv Detail & Related papers (2024-05-22T02:32:58Z)
- Surpassing legacy approaches to PWR core reload optimization with single-objective Reinforcement learning [0.0]
We have developed methods based on Deep Reinforcement Learning (DRL) for both single- and multi-objective optimization.
In this paper, we demonstrate the advantage of our RL-based approach, specifically using Proximal Policy Optimization (PPO).
PPO adapts its search capability via a policy with learnable weights, allowing it to function as both a global and local search method.
arXiv Detail & Related papers (2024-02-16T19:35:58Z)
- Secrets of RLHF in Large Language Models Part I: PPO [81.01936993929127]
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence.
Reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit.
In this report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training.
arXiv Detail & Related papers (2023-07-11T01:55:24Z)
- Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution [13.037647287689438]
We study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots.
We propose a modified HPO method which efficiently searches for optimum global predictive performance.
Our method returns more reliable explanations of the underlying black-box without a loss of optimization performance.
arXiv Detail & Related papers (2022-06-11T07:12:04Z)
- Amortized Proximal Optimization [11.441395750267052]
Amortized Proximal Optimization (APO) is a framework for online meta-optimization of parameters that govern optimization.
We show how APO can be used to adapt a learning rate or a structured preconditioning matrix.
We empirically test APO for online adaptation of learning rates and structured preconditioning for regression, image reconstruction, image classification, and natural language translation tasks.
arXiv Detail & Related papers (2022-02-28T20:50:48Z)
- Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
arXiv Detail & Related papers (2021-05-18T04:35:41Z)