Survival Analysis with Adversarial Regularization
- URL: http://arxiv.org/abs/2312.16019v5
- Date: Fri, 05 Sep 2025 13:07:46 GMT
- Title: Survival Analysis with Adversarial Regularization
- Authors: Michael Potter, Stefano Maxenti, Michael Everett,
- Abstract summary: Survival Analysis (SA) models the time until an event occurs, with applications in fields like medicine, defense, finance, and aerospace. Recent research indicates that Neural Networks (NNs) can effectively capture complex data patterns in SA. We leverage advances in NN verification to develop training objectives for robust, fully-parametric SA models.
- Score: 3.9686445409447617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Survival Analysis (SA) models the time until an event occurs, with applications in fields like medicine, defense, finance, and aerospace. Recent research indicates that Neural Networks (NNs) can effectively capture complex data patterns in SA, whereas simple generalized linear models often fall short in this regard. However, dataset uncertainties (e.g., noisy measurements, human error) can degrade NN model performance. To address this, we leverage advances in NN verification to develop training objectives for robust, fully-parametric SA models. Specifically, we propose an adversarially robust loss function based on a Min-Max optimization problem. We employ CROWN-Interval Bound Propagation (CROWN-IBP) to tackle the computational challenges inherent in solving this Min-Max problem. Evaluated over 10 SurvSet datasets, our method, Survival Analysis with Adversarial Regularization (SAWAR), consistently outperforms baseline adversarial training methods and state-of-the-art (SOTA) deep SA models across various covariate perturbations with respect to Negative Log Likelihood (NegLL), Integrated Brier Score (IBS), and Concordance Index (CI) metrics. Thus, we demonstrate that adversarial robustness enhances SA predictive performance and calibration, mitigating data uncertainty and improving generalization across diverse datasets by up to 150% compared to baselines.
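The paper's inner maximization is bounded with CROWN-IBP, which is not reproduced here. As a rough, hypothetical illustration of the underlying Min-Max idea, the sketch below perturbs covariates with a single FGSM step against the negative log-likelihood of a simple exponential (fully parametric) survival model; the model, step size, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def exponential_nll(w, x, t, delta):
    """NLL of an exponential survival model with hazard exp(w . x).

    t is the observed time; delta is 1 if the event occurred, 0 if censored.
    """
    lam = np.exp(w @ x)
    return lam * t - delta * np.log(lam)

def fgsm_adversarial_nll(w, x, t, delta, eps=0.1):
    """Approximate the inner max of the Min-Max loss with one FGSM step.

    The covariates are pushed toward the worst case inside an L-infinity
    ball of radius eps; the outer minimization would then update w on
    this adversarial loss.
    """
    lam = np.exp(w @ x)
    grad_x = (t * lam - delta) * w        # analytic d(NLL)/dx for this model
    x_adv = x + eps * np.sign(grad_x)     # worst-case covariate perturbation
    return exponential_nll(w, x_adv, t, delta)
```

Training on `fgsm_adversarial_nll` instead of the clean NLL is the "adversarial regularization" pattern in miniature; CROWN-IBP replaces the gradient step with certified interval bounds on the loss.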
Related papers
- Subject-Adaptive Sparse Linear Models for Interpretable Personalized Health Prediction from Multimodal Lifelog Data [18.017666750186336]
SASL is an interpretable modeling approach explicitly designed for personalized health prediction. We develop a regression-then-thresholding approach specifically designed to maximize macro-averaged F1 scores for ordinal targets. For intrinsically challenging predictions, SASL selectively incorporates outputs from compact LightGBM models through confidence-based gating.
arXiv Detail & Related papers (2025-10-03T09:17:57Z) - RoHOI: Robustness Benchmark for Human-Object Interaction Detection [84.78366452133514]
Human-Object Interaction (HOI) detection is crucial for robot-human assistance, enabling context-aware support. We introduce the first robustness benchmark for HOI detection, evaluating model resilience under diverse challenges. Our benchmark, RoHOI, includes 20 corruption types based on the HICO-DET and V-COCO datasets and a new robustness-focused metric.
arXiv Detail & Related papers (2025-07-12T01:58:04Z) - MIRRAMS: Towards Training Models Robust to Missingness Distribution Shifts [2.5357049657770516]
In real-world data analysis, missingness distribution shifts between training and test input datasets frequently occur. We propose a novel deep learning framework designed to address such shifts in missingness distributions. Our approach achieves state-of-the-art performance even without missing data and can be naturally extended to address semi-supervised learning tasks.
arXiv Detail & Related papers (2025-07-11T03:03:30Z) - Taming Polysemanticity in LLMs: Provable Feature Recovery via Sparse Autoencoders [50.52694757593443]
Existing SAE training algorithms often lack rigorous mathematical guarantees and suffer from practical limitations. We first propose a novel statistical framework for the feature recovery problem, which includes a new notion of feature identifiability. We introduce a new SAE training algorithm based on "bias adaptation", a technique that adaptively adjusts neural network bias parameters to ensure appropriate activation sparsity.
arXiv Detail & Related papers (2025-06-16T20:58:05Z) - Model-agnostic Mitigation Strategies of Data Imbalance for Regression [0.0]
Data imbalance persists as a pervasive challenge in regression tasks, introducing bias in model performance and undermining predictive reliability. We present advanced mitigation techniques, which build upon and improve existing sampling methods. We demonstrate that constructing an ensemble of models -- one trained with imbalance mitigation and another without -- can significantly reduce these negative effects.
arXiv Detail & Related papers (2025-06-02T09:46:08Z) - Interpretable Deep Regression Models with Interval-Censored Failure Time Data [1.2993568435938014]
Deep learning methods for interval-censored data remain underexplored and limited to specific data types or models. This work proposes a general regression framework for interval-censored data with a broad class of partially linear transformation models. Applying our method to the Alzheimer's Disease Neuroimaging Initiative dataset yields novel insights and improved predictive performance compared to traditional approaches.
arXiv Detail & Related papers (2025-03-25T15:27:32Z) - AdaPRL: Adaptive Pairwise Regression Learning with Uncertainty Estimation for Universal Regression Tasks [0.0]
We propose a novel adaptive pairwise learning framework for regression tasks (AdaPRL).
AdaPRL leverages the relative differences between data points, together with deep probabilistic models, to quantify the uncertainty associated with predictions.
Experiments show that AdaPRL can be seamlessly integrated into recently proposed regression frameworks to gain performance improvement.
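The pairwise idea above can be sketched with a minimal auxiliary loss: penalize mismatch between predicted and true differences for every pair of samples, on top of the usual pointwise error. This is a hypothetical reading of the framework, not AdaPRL's actual objective; the function name and `alpha` weight are illustrative.

```python
import numpy as np

def pairwise_regression_loss(preds, targets, alpha=0.5):
    """Pointwise MSE plus a pairwise term on prediction differences.

    The pairwise term compares (pred_i - pred_j) against
    (target_i - target_j) over all sample pairs via broadcasting.
    """
    preds = np.asarray(preds, dtype=float)
    targets = np.asarray(targets, dtype=float)
    pointwise = np.mean((preds - targets) ** 2)
    dp = preds[:, None] - preds[None, :]      # all predicted differences
    dt = targets[:, None] - targets[None, :]  # all true differences
    pairwise = np.mean((dp - dt) ** 2)
    return pointwise + alpha * pairwise
```

Because only relative differences enter the pairwise term, it is invariant to a constant shift of all predictions, which is one reason pairwise objectives can complement pointwise ones.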
arXiv Detail & Related papers (2025-01-10T09:19:10Z) - MIBP-Cert: Certified Training against Data Perturbations with Mixed-Integer Bilinear Programs [50.41998220099097]
Data errors, corruptions, and poisoning attacks during training pose a major threat to the reliability of modern AI systems. We introduce MIBP-Cert, a novel certification method based on mixed-integer bilinear programming (MIBP). By computing the set of parameters reachable through perturbed or manipulated data, we can predict all possible outcomes and guarantee robustness.
arXiv Detail & Related papers (2024-12-13T14:56:39Z) - On the KL-Divergence-based Robust Satisficing Model [2.425685918104288]
The robust satisficing framework has attracted increasing attention from academia.
We present analytical interpretations, diverse performance guarantees, efficient and stable numerical methods, convergence analysis, and an extension tailored for hierarchical data structures.
We demonstrate the superior performance of our model compared to state-of-the-art benchmarks.
arXiv Detail & Related papers (2024-08-17T10:05:05Z) - Risk and cross validation in ridge regression with correlated samples [72.59731158970894]
We analyze the in- and out-of-sample risks of ridge regression when the data points have arbitrary correlations.
We further extend our analysis to the case where the test point has non-trivial correlations with the training set, a setting often encountered in time series forecasting.
We validate our theory across a variety of high dimensional data.
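Out-of-sample risk for ridge regression is commonly estimated with cross validation; for independent samples, leave-one-out residuals admit an exact closed form via the hat matrix, which the sketch below implements. This is a generic textbook identity for illustration, not the correlated-sample theory of the paper above (correlations are precisely what break naive cross validation).

```python
import numpy as np

def ridge_loocv_errors(X, y, lam):
    """Exact leave-one-out residuals for ridge regression.

    Uses the standard identity e_loo_i = (y_i - yhat_i) / (1 - H_ii),
    where H = X (X^T X + lam I)^{-1} X^T is the ridge hat matrix,
    avoiding n separate refits.
    """
    n, d = X.shape
    A = X.T @ X + lam * np.eye(d)
    H = X @ np.linalg.solve(A, X.T)
    resid = y - H @ y
    return resid / (1.0 - np.diag(H))
```

Averaging the squared leave-one-out residuals gives the usual LOOCV risk estimate; the paper's point is that this estimate needs correction when samples are correlated.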
arXiv Detail & Related papers (2024-08-08T17:27:29Z) - Autoencoder based approach for the mitigation of spurious correlations [2.7624021966289605]
Spurious correlations refer to erroneous associations in data that do not reflect true underlying relationships.
These correlations can lead deep neural networks (DNNs) to learn patterns that are not robust across diverse datasets or real-world scenarios.
We propose an autoencoder-based approach to analyze the nature of spurious correlations that exist in the Global Wheat Head Detection (GWHD) 2021 dataset.
arXiv Detail & Related papers (2024-06-27T05:28:44Z) - Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z) - The Risk of Federated Learning to Skew Fine-Tuning Features and
Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z) - Uncertainty-Aware Deep Attention Recurrent Neural Network for
Heterogeneous Time Series Imputation [0.25112747242081457]
Missingness is ubiquitous in multivariate time series and poses an obstacle to reliable downstream analysis.
We propose DEep Attention Recurrent Imputation (DEARI), which jointly estimates missing values and their associated uncertainty.
Experiments show that DEARI surpasses the SOTA in diverse imputation tasks using real-world datasets.
arXiv Detail & Related papers (2024-01-04T13:21:11Z) - Composite Survival Analysis: Learning with Auxiliary Aggregated
Baselines and Survival Scores [0.0]
Survival Analysis (SA) constitutes the default method for time-to-event modeling.
We show how to improve the training and inference of SA models by decoupling their full expression into (1) an aggregated baseline hazard, which captures the overall behavior of a given population, and (2) independently distributed survival scores, which model idiosyncratic probabilistic dynamics of its given members, in a fully parametric setting.
arXiv Detail & Related papers (2023-12-10T11:13:22Z) - Measuring and Mitigating Local Instability in Deep Neural Networks [23.342675028217762]
We study how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process.
For Natural Language Understanding (NLU) tasks, we find instability in predictions for a significant fraction of queries.
We propose new data-centric methods that exploit our local stability estimates.
arXiv Detail & Related papers (2023-05-18T00:34:15Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client
Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z) - A Statistics and Deep Learning Hybrid Method for Multivariate Time
Series Forecasting and Mortality Modeling [0.0]
Exponential Smoothing Recurrent Neural Network (ES-RNN) is a hybrid between a statistical forecasting model and a recurrent neural network variant.
ES-RNN achieves a 9.4% improvement in absolute error in the Makridakis-4 Forecasting Competition.
arXiv Detail & Related papers (2021-12-16T04:44:19Z) - Generalization of Neural Combinatorial Solvers Through the Lens of
Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.