Multi-Objective Bilevel Learning
- URL: http://arxiv.org/abs/2511.07824v1
- Date: Wed, 12 Nov 2025 01:22:05 GMT
- Title: Multi-Objective Bilevel Learning
- Authors: Zhiyao Zhang, Zhuqing Liu, Xin Zhang, Wen-Yen Chen, Jiyan Yang, Jia Liu
- Abstract summary: We investigate the theoretical and algorithmic foundations of multi-objective bilevel learning (MOBL). Our goal is to develop efficient MOBL optimization algorithms. We propose a unifying algorithmic framework called weighted-Chebyshev multi-hyper-gradient-descent (WC-MHGD).
- Score: 12.198330173886587
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning (ML) applications have grown increasingly complex in recent years, modern ML frameworks often need to address multiple potentially conflicting objectives with coupled decision variables across different layers. This creates a compelling need for multi-objective bilevel learning (MOBL). So far, however, the field of MOBL is still in its infancy, and many important problems remain under-explored. This motivates us to fill this gap and systematically investigate the theoretical and algorithmic foundations of MOBL. Specifically, we consider MOBL problems with multiple conflicting objectives guided by preferences at the upper-level subproblem, where part of the inputs depend on the optimal solution of the lower-level subproblem. Our goal is to develop efficient MOBL optimization algorithms to (1) identify a preference-guided Pareto-stationary solution with low oracle complexity; and (2) enable systematic Pareto front exploration. To this end, we propose a unifying algorithmic framework called weighted-Chebyshev multi-hyper-gradient-descent (WC-MHGD) for both deterministic and stochastic settings, with finite-time Pareto-stationarity convergence rate guarantees that not only imply low oracle complexity but also induce systematic Pareto front exploration. We further conduct extensive experiments to confirm our theoretical results.
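The weighted-Chebyshev scalarization at the core of WC-MHGD can be illustrated on a toy problem. The sketch below is not the authors' WC-MHGD (which descends hypergradients computed through the lower-level solution); it is a minimal single-level sketch showing, under simplifying assumptions, how a preference vector combined with a Chebyshev (max) scalarization steers gradient descent toward a preference-selected Pareto point. All function names and the toy objectives are illustrative.

```python
def weighted_chebyshev(losses, prefs):
    # Weighted-Chebyshev scalarization: max_i prefs[i] * losses[i].
    return max(p * l for p, l in zip(prefs, losses))

def wc_step(x, losses, grads, prefs, lr):
    # Subgradient step on the scalarization: descend the currently
    # active (max-weighted) objective.
    i = max(range(len(losses)), key=lambda j: prefs[j] * losses[j])
    return x - lr * prefs[i] * grads[i]

# Toy bi-objective problem: f1(x) = (x-1)^2, f2(x) = (x+1)^2.
f = lambda x: [(x - 1.0) ** 2, (x + 1.0) ** 2]
g = lambda x: [2.0 * (x - 1.0), 2.0 * (x + 1.0)]

prefs = [0.8, 0.2]  # preference vector favoring objective f1
x = 0.0
for _ in range(300):
    x = wc_step(x, f(x), g(x), prefs, lr=0.05)
# x settles near the preference-selected Pareto point (x = 1/3 here,
# where 0.8*f1(x) = 0.2*f2(x)), illustrating how varying the preference
# vector traces out different points on the Pareto front.
```

Sweeping `prefs` over the simplex is what enables the "systematic Pareto front exploration" the abstract refers to: each preference vector selects a different point on the front.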
Related papers
- MO-MIX: Multi-Objective Multi-Agent Cooperative Decision-Making With Deep Reinforcement Learning [68.91090643731987]
Deep reinforcement learning (RL) has been applied extensively to solve complex decision-making problems. Existing approaches are limited to separate fields and can only handle multi-agent decision-making with a single objective. We propose MO-MIX to solve the multi-objective multi-agent reinforcement learning (MOMARL) problem.
arXiv Detail & Related papers (2026-02-28T16:25:22Z) - Solving the Granularity Mismatch: Hierarchical Preference Learning for Long-Horizon LLM Agents [56.625878022978945]
Large Language Models (LLMs) as autonomous agents are increasingly tasked with solving complex, long-horizon problems. Direct Preference Optimization (DPO) provides a signal that is too coarse for precise credit assignment, while step-level DPO is often too myopic to capture the value of multi-step behaviors. We introduce Hierarchical Preference Learning (HPL), a hierarchical framework that optimizes LLM agents by leveraging preference signals at multiple, synergistic granularities.
arXiv Detail & Related papers (2025-09-26T08:43:39Z) - LLM4CMO: Large Language Model-aided Algorithm Design for Constrained Multiobjective Optimization [54.35609820607923]
Large language models (LLMs) offer new opportunities for assisting with algorithm design. We propose LLM4CMO, a novel constrained multiobjective evolutionary algorithm (CMOEA) based on a dual-population, two-stage framework. LLMs can serve as efficient co-designers in the development of complex evolutionary optimization algorithms.
arXiv Detail & Related papers (2025-08-16T02:00:57Z) - Why Do Multi-Agent LLM Systems Fail? [87.90075668488434]
We introduce MAST-Data, a comprehensive dataset of 1600+ annotated traces collected across 7 popular MAS frameworks. We build the first Multi-Agent System Failure Taxonomy (MAST). We leverage MAST and MAST-Data to analyze failure patterns across models (GPT-4, Claude 3, Qwen2.5, CodeLlama) and tasks (coding, math, general agent tasks).
arXiv Detail & Related papers (2025-03-17T19:04:38Z) - On Generalization Across Environments In Multi-Objective Reinforcement Learning [6.686583184622338]
We formalize the concept of generalization in Multi-Objective Reinforcement Learning (MORL) and how it can be evaluated. We contribute a novel benchmark featuring diverse multi-objective domains with parameterized environment configurations. Our baseline evaluations of state-of-the-art MORL algorithms on this benchmark reveal limited generalization capabilities, suggesting significant room for improvement.
arXiv Detail & Related papers (2025-03-02T08:50:14Z) - Common pitfalls to avoid while using multiobjective optimization in machine learning [1.1650821883155187]
There has been increasing interest in the application of multiobjective optimization (MOO) in machine learning (ML). Despite its potential, there is a noticeable lack of satisfactory literature serving as an entry-level guide for ML practitioners aiming to apply MOO effectively. We critically review existing studies across various ML fields where MOO has been applied and identify challenges that can lead to incorrect interpretations.
arXiv Detail & Related papers (2024-05-02T17:12:25Z) - Federated Multi-Objective Learning [22.875284692358683]
We propose a new federated multi-objective learning (FMOL) framework with multiple clients.
Our FMOL framework allows a different set of objective functions across different clients to support a wide range of applications.
For this FMOL framework, we propose two new federated multi-objective optimization (FMOO) algorithms: federated multi-gradient descent averaging (FMGDA) and federated stochastic multi-gradient descent averaging (FSMGDA).
arXiv Detail & Related papers (2023-10-15T15:45:51Z) - Bi-level Multi-objective Evolutionary Learning: A Case Study on Multi-task Graph Neural Topology Search [47.59828447981408]
This paper proposes a bi-level multi-objective learning framework (BLMOL).
It couples the decision-making process with the optimization process of the upper-level multi-objective optimization problem (UL-MOP).
The preference surrogate model is constructed to replace the expensive evaluation process of the UL-MOP.
arXiv Detail & Related papers (2023-02-06T04:59:51Z) - Investigating Bi-Level Optimization for Learning and Vision from a Unified Perspective: A Survey and Beyond [114.39616146985001]
In machine learning and computer vision, despite different motivations and mechanisms, many complex problems contain a series of closely related subproblems.
In this paper, we first uniformly express these complex learning and vision problems from the perspective of Bi-Level Optimization (BLO).
Then we construct a value-function-based single-level reformulation and establish a unified algorithmic framework to understand and formulate mainstream gradient-based BLO methodologies.
arXiv Detail & Related papers (2021-01-27T16:20:23Z) - Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single policy MORL, which learns an optimal policy given the preference of objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm called model-based envelope value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm.
arXiv Detail & Related papers (2020-11-19T22:35:31Z) - Theoretical Convergence of Multi-Step Model-Agnostic Meta-Learning [63.64636047748605]
We develop a new theoretical framework to provide convergence guarantees for the general multi-step MAML algorithm.
In particular, our results suggest that the inner-stage step size needs to be chosen inversely proportional to the number $N$ of inner-stage steps in order for $N$-step MAML to have guaranteed convergence.
arXiv Detail & Related papers (2020-02-18T19:17:54Z)
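The step-size scaling in the last result above can be sketched on a toy task. The snippet below is a hypothetical illustration, not the paper's analysis: it runs an $N$-step inner adaptation loop with learning rate alpha/N, so that the cumulative inner update stays bounded as $N$ grows. The function names and the quadratic task are purely illustrative.

```python
def inner_adapt(theta, task_grad, n_steps, alpha=0.5):
    # Inner-loop adaptation with step size inversely proportional to
    # the number of inner steps, per the convergence condition above.
    lr = alpha / n_steps
    for _ in range(n_steps):
        theta = theta - lr * task_grad(theta)
    return theta

# Toy task: loss (theta - target)^2, so the gradient is 2*(theta - target).
target = 3.0
task_grad = lambda th: 2.0 * (th - target)

adapted = inner_adapt(theta=0.0, task_grad=task_grad, n_steps=10)
# Because lr = alpha/N, doubling n_steps halves each step, keeping the
# total inner update (and hence the meta-gradient) well-behaved.
```

Each inner update here contracts the error by a factor of (1 - 2*alpha/N), so the total contraction after $N$ steps is roughly (1 - 2*alpha/N)^N, which stays bounded away from divergence for any $N$; this is the intuition behind the inverse-proportional scaling.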
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.