Multi-objective Portfolio Optimization Via Gradient Descent
- URL: http://arxiv.org/abs/2507.16717v1
- Date: Tue, 22 Jul 2025 15:55:00 GMT
- Title: Multi-objective Portfolio Optimization Via Gradient Descent
- Authors: Christian Oliva, Pedro R. Ventura, Luis F. Lago-Fernández
- Abstract summary: We introduce a benchmark framework for multi-objective portfolio optimization (MPO) using gradient descent with automatic differentiation. We have evaluated our framework across six experimental scenarios, from single-objective setups to complex multi-objective cases. Our results show that our method achieves competitive performance while offering enhanced flexibility for modeling multiple objectives and constraints.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional approaches to portfolio optimization, often rooted in Modern Portfolio Theory and solved via quadratic programming or evolutionary algorithms, struggle with scalability or flexibility, especially in scenarios involving complex constraints, large datasets and/or multiple conflicting objectives. To address these challenges, we introduce a benchmark framework for multi-objective portfolio optimization (MPO) using gradient descent with automatic differentiation. Our method supports any optimization objective, such as minimizing risk measures (e.g., CVaR) or maximizing Sharpe ratio, along with realistic constraints, such as tracking error limits, UCITS regulations, or asset group restrictions. We have evaluated our framework across six experimental scenarios, from single-objective setups to complex multi-objective cases, and have compared its performance against standard solvers like CVXPY and SKFOLIO. Our results show that our method achieves competitive performance while offering enhanced flexibility for modeling multiple objectives and constraints. We aim to provide a practical and extensible tool for researchers and practitioners exploring advanced portfolio optimization problems in real-world conditions.
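The abstract describes optimizing portfolio weights directly by gradient descent on an objective such as the Sharpe ratio. As a minimal, hypothetical sketch of that idea (not the paper's actual implementation): weights are kept on the simplex via a softmax re-parameterization, and a finite-difference gradient stands in here for the automatic differentiation the paper relies on. The returns matrix and all names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily returns for 4 assets (T x N); real data would come from prices.
R = rng.normal(0.0005, 0.01, size=(250, 4))

def sharpe(theta):
    # Softmax re-parameterization keeps weights long-only and summing to 1,
    # turning the constrained problem into an unconstrained one.
    w = np.exp(theta) / np.exp(theta).sum()
    port = R @ w
    return port.mean() / (port.std() + 1e-12)

def num_grad(f, theta, eps=1e-6):
    # Central finite differences stand in for autodiff (e.g., PyTorch/JAX).
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

theta = np.zeros(4)
for _ in range(500):
    theta += 0.1 * num_grad(sharpe, theta)  # ascend the Sharpe ratio

w = np.exp(theta) / np.exp(theta).sum()  # final portfolio weights
```

Additional objectives (e.g., a CVaR penalty) or soft constraint terms could in principle be added to the loss in the same way, which is the flexibility the abstract highlights.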
Related papers
- Preference-based Multi-Objective Reinforcement Learning [5.031225669460861]
This paper introduces preference-based MORL (Pb-MORL), which formalizes the integration of preferences into the MORL framework. To guide policy optimization using preferences, our method constructs a multi-objective reward model that aligns with the given preferences. Experiments in benchmark multi-objective tasks, a multi-energy management task, and an autonomous driving task on a multi-line highway show that our method performs competitively.
arXiv Detail & Related papers (2025-07-18T16:43:04Z) - Continual Optimization with Symmetry Teleportation for Multi-Task Learning [73.28772872740744]
Multi-task learning (MTL) enables the simultaneous learning of multiple tasks using a single model. We propose a novel approach based on Continual Optimization with Symmetry Teleportation (COST). COST seeks an alternative loss-equivalent point on the loss landscape to reduce conflicting gradients.
arXiv Detail & Related papers (2025-03-06T02:58:09Z) - Few-shot Steerable Alignment: Adapting Rewards and LLM Policies with Neural Processes [50.544186914115045]
Large language models (LLMs) are increasingly embedded in everyday applications. Ensuring their alignment with the diverse preferences of individual users has become a critical challenge. We present a novel framework for few-shot steerable alignment.
arXiv Detail & Related papers (2024-12-18T16:14:59Z) - Constrained Multi-objective Bayesian Optimization through Optimistic Constraints Estimation [10.77641869521259]
We propose a novel constrained multi-objective Bayesian optimization algorithm, COMBOO, that balances active learning of the level set defined on multiple unknowns with multi-objective optimization within the feasible region. We provide both theoretical analysis and empirical evidence demonstrating the efficacy of our approach on various synthetic benchmarks and real-world applications.
arXiv Detail & Related papers (2024-11-06T03:38:00Z) - Optimization-Driven Adaptive Experimentation [7.948144726705323]
Real-world experiments involve batched and delayed feedback, non-stationarity, multiple objectives and constraints, and often some personalization.
Tailoring adaptive methods to address these challenges on a per-problem basis is infeasible, and static designs remain the de facto standard.
We present a mathematical programming formulation that can flexibly incorporate a wide range of objectives, constraints, and statistical procedures.
arXiv Detail & Related papers (2024-08-08T16:29:09Z) - Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
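The paper's MoE-based fusion is more elaborate, but the core weight-ensembling idea can be sketched as follows: a preference vector on the simplex interpolates the flattened parameters of task-specialized expert models. All names and values here are invented for illustration.

```python
import numpy as np

# Hypothetical flattened weights of two single-task "expert" models.
w_task_a = np.array([1.0, 0.0, 2.0])
w_task_b = np.array([0.0, 1.0, -2.0])

def fuse(preference):
    # The preference vector (nonnegative, summing to 1) gates the experts;
    # the fused weights interpolate between the task-specialized solutions,
    # tracing out candidate trade-off points between the two objectives.
    return preference[0] * w_task_a + preference[1] * w_task_b

w_balanced = fuse(np.array([0.5, 0.5]))  # equal weight on both tasks
```

Varying the preference vector sweeps out an approximation of the Pareto set without retraining, which is the efficiency argument the summary makes.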
arXiv Detail & Related papers (2024-06-14T07:16:18Z) - Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment [46.44464839353993]
We introduce Rewards-in-Context (RiC), which conditions the response of a foundation model on multiple rewards in its prompt context.
RiC only requires supervised fine-tuning of a single foundation model and supports dynamic adjustment for user preferences during inference time.
arXiv Detail & Related papers (2024-02-15T18:58:31Z) - When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive settings such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z) - Slimmable Domain Adaptation [112.19652651687402]
We introduce a simple framework, Slimmable Domain Adaptation, to improve cross-domain generalization with a weight-sharing model bank.
Our framework surpasses other competing approaches by a very large margin on multiple benchmarks.
arXiv Detail & Related papers (2022-06-14T06:28:04Z) - An Instance Space Analysis of Constrained Multi-Objective Optimization Problems [1.314903445595385]
We explore the relationship between the performance of constrained multi-objective evolutionary algorithms (CMOEAs) and the characteristics of CMOP instances using Instance Space Analysis (ISA).
A detailed evaluation of problem-algorithm footprints spanning six CMOP benchmark suites and fifteen CMOEAs is presented.
We conclude that two key characteristics, the isolation of the non-dominated set and the correlation between constraint and objective evolvability, have the greatest impact on algorithm performance.
arXiv Detail & Related papers (2022-03-02T04:28:11Z) - Multi-Objective Constrained Optimization for Energy Applications via Tree Ensembles [55.23285485923913]
Energy systems optimization problems are complex due to strongly non-linear system behavior and multiple competing objectives.
In some cases, proposed optimal solutions need to obey explicit input constraints related to physical properties or safety-critical operating conditions.
This paper proposes a novel data-driven strategy using tree ensembles for constrained multi-objective optimization of black-box problems.
arXiv Detail & Related papers (2021-11-04T20:18:55Z) - Empirical Study on the Benefits of Multiobjectivization for Solving Single-Objective Problems [0.0]
Local optima often prevent algorithms from making progress and thus pose a severe threat.
Using a sophisticated visualization technique based on the multi-objective gradients, we illustrate and examine the properties of the arising multi-objective landscapes.
We empirically show that the multi-objective COCO MOGSA is able to exploit these properties to overcome local traps.
arXiv Detail & Related papers (2020-06-25T14:04:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.