Online Learning Approach for Survival Analysis
- URL: http://arxiv.org/abs/2402.05145v1
- Date: Wed, 7 Feb 2024 08:15:30 GMT
- Title: Online Learning Approach for Survival Analysis
- Authors: Camila Fernandez (LPSM), Pierre Gaillard (Thoth), Joseph de Vilmarest,
Olivier Wintenberger (LPSM (UMR 8001))
- Abstract summary: We introduce an online mathematical framework for survival analysis, allowing real-time adaptation to dynamic environments and censored data.
This framework enables the estimation of event time distributions through an optimal second-order online convex optimization algorithm, Online Newton Step (ONS).
- Score: 1.0499611180329806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an online mathematical framework for survival analysis, allowing
real-time adaptation to dynamic environments and censored data. This framework
enables the estimation of event time distributions through an optimal second-order
online convex optimization algorithm, Online Newton Step (ONS). This
approach, previously unexplored, presents substantial advantages, including
explicit algorithms with non-asymptotic convergence guarantees. Moreover, we
analyze the selection of ONS hyperparameters, which depends on the
exp-concavity property and has a significant influence on the regret bound. We
propose a stochastic approach that guarantees logarithmic stochastic regret for
ONS. Additionally, we introduce an adaptive aggregation method that ensures
robustness in hyperparameter selection while maintaining fast regret bounds.
The findings of this paper can extend beyond the survival analysis field, and
are relevant for any case characterized by poor exp-concavity and unstable ONS.
Finally, these assertions are illustrated by simulation experiments.
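- Illustrative sketch: the abstract describes the ONS update and the hyperparameter aggregation only at a high level, so the Python sketch below is meant to fix ideas under explicit assumptions that are not taken from the paper: an exponential event-time model with rate exp(theta·x), the standard right-censored negative log-likelihood, a plain Euclidean projection, and an arbitrary grid of gamma values combined by exponentially weighted averaging.

```python
# Illustrative sketch only: Online Newton Step (ONS) for right-censored data
# under an exponential event-time model, plus a simple exponentially weighted
# aggregation over a grid of step-size parameters. The model choice, class
# names, and hyperparameter values are assumptions made for this example.
import numpy as np


class OnlineNewtonStep:
    """ONS learner for the censored exponential negative log-likelihood."""

    def __init__(self, dim, gamma=0.1, eps=1.0, radius=10.0):
        self.theta = np.zeros(dim)        # model parameters
        self.A = eps * np.eye(dim)        # regularized second-order matrix
        self.gamma = gamma                # step-size parameter (exp-concavity constant)
        self.radius = radius              # radius of the projection ball

    @staticmethod
    def loss(theta, x, time, event):
        # Negative log-likelihood of a right-censored exponential model with
        # rate lambda = exp(theta @ x); event = 1 if the event was observed.
        lam = np.exp(theta @ x)
        return lam * time - event * np.log(lam)

    def gradient(self, x, time, event):
        lam = np.exp(self.theta @ x)
        return (lam * time - event) * x

    def update(self, x, time, event):
        g = self.gradient(x, time, event)
        self.A += np.outer(g, g)                     # second-order statistics
        step = np.linalg.solve(self.A, g)            # A^{-1} g
        theta = self.theta - step / self.gamma
        norm = np.linalg.norm(theta)                 # Euclidean projection
        if norm > self.radius:
            theta *= self.radius / norm
        self.theta = theta


def aggregate(learners, data, eta=0.5):
    """Exponentially weighted average of ONS learners run on a data stream.

    `data` is an iterable of (x, time, event) triples; returns the final
    weighted parameter vector.
    """
    log_w = np.zeros(len(learners))
    for x, time, event in data:
        for i, learner in enumerate(learners):
            log_w[i] -= eta * learner.loss(learner.theta, x, time, event)
            learner.update(x, time, event)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return sum(wi * l.theta for wi, l in zip(w, learners))


# Example: three ONS learners over a grid of gamma values on synthetic data.
rng = np.random.default_rng(0)
stream = [(rng.normal(size=3), rng.exponential(), rng.integers(0, 2))
          for _ in range(200)]
learners = [OnlineNewtonStep(dim=3, gamma=g) for g in (0.01, 0.1, 1.0)]
theta_hat = aggregate(learners, stream)
```

In this sketch, gamma plays the role of the exp-concavity-dependent step-size parameter whose selection the paper analyzes, and the weighted combination over the gamma grid mirrors, only schematically, the adaptive aggregation method mentioned in the abstract.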
Related papers
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
Temporal Difference (TD) learning, arguably the most widely used algorithm for policy evaluation, serves as a natural framework for this purpose.
In this paper, we study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation, and obtain three significant improvements over existing results.
arXiv Detail & Related papers (2024-10-21T15:34:44Z) - Asymptotic and Non-Asymptotic Convergence Analysis of AdaGrad for Non-Convex Optimization via Novel Stopping Time-based Analysis [17.34603953600226]
Adaptive gradient methods have emerged as powerful tools in deep learning, dynamically adjusting the learning rate based on past gradients.
These methods have achieved significant success in various deep learning tasks; AdaGrad, the cornerstone among them, is the focus of this work.
arXiv Detail & Related papers (2024-09-08T08:29:51Z) - Nonparametric Instrumental Variable Regression through Stochastic Approximate Gradients [0.3277163122167434]
We show how to formulate a functional gradient descent algorithm to tackle NPIV regression by directly minimizing the population risk.
We provide theoretical support in the form of bounds on the excess risk, and conduct numerical experiments showcasing our method's superior stability and competitive performance.
This algorithm enables flexible estimator choices, such as neural networks or kernel-based methods, as well as non-quadratic loss functions.
arXiv Detail & Related papers (2024-02-08T12:50:38Z) - Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z) - Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast the inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses the apparent intractability.
The proposed light-SD algorithm demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z) - Optimal Rates for Random Order Online Optimization [60.011653053877126]
We study the random-order online optimization setting of Garber et al. (2020), where the loss functions may be chosen by an adversary but are then presented online in a uniformly random order.
We show that the algorithms of Garber et al. (2020) achieve the optimal bounds and significantly improve their stability.
arXiv Detail & Related papers (2021-06-29T09:48:46Z) - Finite Sample Analysis of Minimax Offline Reinforcement Learning:
Completeness, Fast Rates and First-Order Efficiency [83.02999769628593]
We offer a theoretical characterization of off-policy evaluation (OPE) in reinforcement learning.
We show that the minimax approach enables us to achieve a fast rate of convergence for weights and quality functions.
We present the first finite-sample result with first-order efficiency in non-tabular environments.
arXiv Detail & Related papers (2021-02-05T03:20:39Z) - On The Verification of Neural ODEs with Stochastic Guarantees [14.490826225393096]
We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems.
We introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight reachtube.
arXiv Detail & Related papers (2020-12-16T11:04:34Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave
Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to applications in deep learning (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z) - Convergence rates and approximation results for SGD and its
continuous-time counterpart [16.70533901524849]
This paper proposes a thorough theoretical analysis of Stochastic Gradient Descent (SGD) with non-increasing step sizes.
First, we show that SGD can be provably approximated by solutions of an inhomogeneous Stochastic Differential Equation (SDE) using coupling.
Following recent analyses of deterministic and stochastic optimization methods through their continuous-time counterparts, we study the long-time behavior of the continuous processes at hand and derive non-asymptotic bounds.
arXiv Detail & Related papers (2020-04-08T18:31:34Z)