Data-Dependent Bounds for Online Portfolio Selection Without
Lipschitzness and Smoothness
- URL: http://arxiv.org/abs/2305.13946v2
- Date: Sat, 4 Nov 2023 11:00:59 GMT
- Title: Data-Dependent Bounds for Online Portfolio Selection Without
Lipschitzness and Smoothness
- Authors: Chung-En Tsai and Ying-Ting Lin and Yen-Huan Li
- Abstract summary: We introduce the first instances of data-dependent bounds for online convex optimization with non-Lipschitz, non-smooth losses.
The algorithms we propose exhibit sublinear regret rates in the worst cases and achieve logarithmic regrets when the data is "easy."
- Score: 2.315156126698557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work introduces the first small-loss and gradual-variation regret bounds
for online portfolio selection, marking the first instances of data-dependent
bounds for online convex optimization with non-Lipschitz, non-smooth losses.
The algorithms we propose exhibit sublinear regret rates in the worst cases and
achieve logarithmic regrets when the data is "easy," with per-iteration time
almost linear in the number of investment alternatives. The regret bounds are
derived using novel smoothness characterizations of the logarithmic loss, a
local norm-based analysis of following the regularized leader (FTRL) with
self-concordant regularizers, which are not necessarily barriers, and an
implicit variant of optimistic FTRL with the log-barrier.
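For concreteness, the following is a minimal sketch of plain FTRL with a log-barrier regularizer for online portfolio selection, where the round-t loss is f_t(x) = -log(<r_t, x>) for price relatives r_t. It is not the authors' algorithm (no optimism, no self-concordant-regularizer analysis, and nowhere near almost-linear per-iteration time); the barrier weight `lam` and the naive SLSQP solver are illustrative assumptions.

```python
# Hypothetical sketch: FTRL with a log-barrier regularizer on the simplex.
# x_{t+1} = argmin_{x in simplex} sum_{s<=t} -log(<r_s, x>) - lam * sum_i log(x_i)
import numpy as np
from scipy.optimize import minimize

def ftrl_log_barrier(returns, lam=1.0):
    """returns: (T, d) array of positive price relatives; lam: barrier weight (assumed)."""
    T, d = returns.shape
    x = np.full(d, 1.0 / d)              # start from the uniform portfolio
    log_wealth = 0.0
    for t in range(T):
        r_t = returns[t]
        log_wealth += np.log(r_t @ x)    # play x, then observe r_t (log loss = -log<r_t, x>)

        past = returns[: t + 1]
        def objective(z):
            # cumulative log loss plus log-barrier regularization
            return -np.sum(np.log(past @ z)) - lam * np.sum(np.log(z))

        res = minimize(objective, x, method="SLSQP",
                       bounds=[(1e-9, 1.0)] * d,
                       constraints=({"type": "eq", "fun": lambda z: z.sum() - 1.0},))
        x = res.x                        # next round's portfolio
    return log_wealth

# Toy run: 100 rounds, 5 assets, price relatives close to 1.
rng = np.random.default_rng(0)
print(ftrl_log_barrier(1.0 + 0.01 * rng.standard_normal((100, 5))))
```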
Related papers
- Beyond Worst-Case Online Classification: VC-Based Regret Bounds for Relaxed Benchmarks [19.642496463491053]
We revisit online binary classification by shifting the focus from competing with the best-in-class binary loss to competing against relaxed benchmarks.
We consider comparing predictors that are robust to small input perturbations, perform well under Gaussian smoothing, or maintain a prescribed output margin.
Our algorithms achieve regret guarantees that depend only on the VC dimension and the complexity of the instance space.
arXiv Detail & Related papers (2025-04-14T18:00:23Z)
- Refined Risk Bounds for Unbounded Losses via Transductive Priors [58.967816314671296]
We revisit the sequential variants of linear regression with the squared loss, classification problems with hinge loss, and logistic regression.
Our key tools are based on the exponential weights algorithm with carefully chosen transductive priors.
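As a reference point for the key tool mentioned above, here is a minimal exponential-weights sketch over a finite expert set; the paper's transductive prior construction is not reproduced, so the uniform default prior and the fixed learning rate `eta` are placeholder assumptions.

```python
# Hypothetical sketch: exponential weights aggregation of N experts over T rounds.
import numpy as np

def exponential_weights(expert_losses, prior=None, eta=0.5):
    """expert_losses: (T, N) array of per-round expert losses."""
    T, N = expert_losses.shape
    # Log-weights start from the prior; a carefully chosen (e.g. transductive)
    # prior would be plugged in here instead of the uniform default.
    log_w = np.log(np.full(N, 1.0 / N) if prior is None else np.asarray(prior))
    cumulative = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                           # posterior over experts
        cumulative += p @ expert_losses[t]     # learner's mixture loss this round
        log_w -= eta * expert_losses[t]        # multiplicative-weights update
    return cumulative
```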
arXiv Detail & Related papers (2024-10-29T00:01:04Z)
- LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
We present a robust online optimization framework in which an adversary can introduce outliers by corrupting the loss functions in an arbitrary number k of rounds, with k unknown to the learner.
arXiv Detail & Related papers (2024-08-12T17:08:31Z)
- Two-Stage ML-Guided Decision Rules for Sequential Decision Making under Uncertainty [55.06411438416805]
Sequential Decision Making under Uncertainty (SDMU) is ubiquitous in many domains such as energy, finance, and supply chains.
Some SDMU problems are naturally modeled as Multistage Stochastic Problems (MSPs), but the resulting optimizations are notoriously challenging from a computational standpoint.
This paper introduces a novel approach Two-Stage General Decision Rules (TS-GDR) to generalize the policy space beyond linear functions.
The effectiveness of TS-GDR is demonstrated through an instantiation using Deep Recurrent Neural Networks, named Two-Stage Deep Decision Rules (TS-DDR).
arXiv Detail & Related papers (2024-05-23T18:19:47Z)
- Improving Adaptive Online Learning Using Refined Discretization [44.646191058243645]
We study unconstrained Online Linear Optimization with Lipschitz losses.
Motivated by the pursuit of instance optimality, we propose a new algorithm.
Central to these results is a continuous time approach to online learning.
arXiv Detail & Related papers (2023-09-27T21:54:52Z)
- Exact identification of nonlinear dynamical systems by Trimmed Lasso [0.0]
Identification of nonlinear dynamical systems has been popularized by the sparse identification of nonlinear dynamics (SINDy) algorithm.
E-SINDy was proposed for model identification, handling finite, highly noisy data.
In this paper, we demonstrate that the Trimmed Lasso for robust identification of models (TRIM) can provide exact recovery under more severe noise, finite data, and multicollinearity as opposed to E-SINDy.
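To illustrate the SINDy-style pipeline these methods build on, here is a minimal sketch that fits a sparse model of dx/dt against a polynomial library; an ordinary scikit-learn Lasso is used as a stand-in, since the Trimmed Lasso penalty of TRIM is not implemented here.

```python
# Hypothetical sketch: sparse regression of estimated derivatives onto a
# candidate-function library (ordinary Lasso standing in for the Trimmed Lasso).
import numpy as np
from sklearn.linear_model import Lasso

def sparse_dynamics_fit(x, t, alpha=0.01):
    """x: (T,) state samples at times t; returns sparse library coefficients."""
    dxdt = np.gradient(x, t)                                      # crude derivative estimate
    library = np.column_stack([np.ones_like(x), x, x**2, x**3])   # [1, x, x^2, x^3]
    return Lasso(alpha=alpha, fit_intercept=False).fit(library, dxdt).coef_

# Toy example: logistic growth dx/dt = x - x^2, recovered from noisy samples.
t = np.linspace(0.0, 5.0, 500)
x = 1.0 / (1.0 + 9.0 * np.exp(-t))                                # solution with x(0) = 0.1
noisy = x + 1e-3 * np.random.default_rng(1).standard_normal(t.size)
print(sparse_dynamics_fit(noisy, t))                              # ideally close to [0, 1, -1, 0]
```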
arXiv Detail & Related papers (2023-08-03T17:37:18Z)
- Distributed Online Non-convex Optimization with Composite Regret [31.53784277195043]
We propose a novel composite regret, with a new network-regret-based metric, for distributed online optimization with general non-convex losses.
To our knowledge, this is the first regret bound for distributed online non-convex learning.
arXiv Detail & Related papers (2022-09-21T04:16:33Z)
- Implicit Parameter-free Online Learning with Truncated Linear Models [51.71216912089413]
Parameter-free algorithms are online learning algorithms that do not require setting learning rates.
We propose new parameter-free algorithms that can take advantage of truncated linear models through a new update that has an "implicit" flavor.
Based on a novel decomposition of the regret, the new update is efficient, requires only one gradient at each step, never overshoots the minimum of the truncated model, and retains the favorable parameter-free properties.
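A minimal sketch of the truncated-linear-model idea (the parameter-free machinery and any constraints are not reproduced; the fixed step size `eta` is an assumption): an implicit (proximal) step on the surrogate max(f_t(x_t) + <g_t, x - x_t>, 0) caps the step so it never passes the point where the surrogate is already zero.

```python
# Hypothetical sketch: one implicit (proximal) step on a truncated linear model
# of a nonnegative loss, which cannot overshoot the surrogate's minimum.
import numpy as np

def truncated_linear_step(x, loss_value, grad, eta):
    """Prox step on h(z) = max(loss_value + <grad, z - x>, 0) with step size eta."""
    g = np.asarray(grad, dtype=float)
    # Moving along -g, the surrogate hits zero after loss_value / ||g||^2,
    # so the effective step is clipped there.
    step = min(eta, loss_value / (g @ g + 1e-12))
    return x - step * g

# Toy example with a hinge-type loss max(0, 1 - y * <w, z>) at one sample.
w, z, y = np.zeros(3), np.array([1.0, -2.0, 0.5]), 1.0
loss = max(0.0, 1.0 - y * (w @ z))
grad = -y * z if loss > 0 else np.zeros_like(z)
w = truncated_linear_step(w, loss, grad, eta=0.1)
print(w, max(0.0, 1.0 - y * (w @ z)))
```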
arXiv Detail & Related papers (2022-03-19T13:39:49Z)
- Improving Generalization via Uncertainty Driven Perturbations [107.45752065285821]
We consider uncertainty-driven perturbations of the training data points.
Unlike loss-driven perturbations, uncertainty-guided perturbations do not cross the decision boundary.
We show that UDP is guaranteed to achieve the maximum-margin decision boundary on linear models.
arXiv Detail & Related papers (2022-02-11T16:22:08Z)
- Dynamic Regret for Strongly Adaptive Methods and Optimality of Online KRR [13.165557713537389]
We show that Strongly Adaptive (SA) algorithms can be viewed as a principled way of controlling dynamic regret.
We derive a new lower bound on a certain penalized regret which establishes the near minimax optimality of online Kernel Ridge Regression (KRR).
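For reference, here is a minimal (and naively cubic-per-round) online kernel ridge regression sketch; the Gaussian kernel bandwidth and the ridge parameter are illustrative assumptions, and none of the strongly adaptive restarting machinery is included.

```python
# Hypothetical sketch: online KRR that predicts each label from all past data.
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def online_krr(X, y, ridge=1.0):
    """Return the per-round squared prediction errors of online KRR."""
    errors = []
    for t in range(len(X)):
        if t == 0:
            pred = 0.0                          # no data yet: predict zero
        else:
            K = gaussian_kernel(X[:t], X[:t])
            alpha = np.linalg.solve(K + ridge * np.eye(t), y[:t])   # KRR weights
            pred = (gaussian_kernel(X[t : t + 1], X[:t]) @ alpha)[0]
        errors.append((pred - y[t]) ** 2)
    return errors

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
print(sum(online_krr(X, y)))
```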
arXiv Detail & Related papers (2021-11-22T21:52:47Z)
- Solving Multistage Stochastic Linear Programming via Regularized Linear Decision Rules: An Application to Hydrothermal Dispatch Planning [77.34726150561087]
We propose a novel regularization scheme for linear decision rules (LDR) based on the adaptive least absolute shrinkage and selection operator (AdaLASSO).
Experiments show that the risk of overfitting is non-negligible when using the classical non-regularized LDR to solve multistage stochastic linear programs (MSLP).
For the long-term hydrothermal dispatch planning (LHDP) problem, our analysis highlights several benefits of the proposed framework in comparison to the non-regularized benchmark.
arXiv Detail & Related papers (2021-10-07T02:36:14Z)
- LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
arXiv Detail & Related papers (2020-12-21T06:40:20Z)