Deep Neural Network Based Accelerated Failure Time Models using Rank Loss
- URL: http://arxiv.org/abs/2206.05974v1
- Date: Mon, 13 Jun 2022 08:38:18 GMT
- Title: Deep Neural Network Based Accelerated Failure Time Models using Rank Loss
- Authors: Gwangsu Kim and Sangwook Kang
- Abstract summary: An accelerated failure time (AFT) model assumes a log-linear relationship between failure times and a set of covariates.
Deep neural networks (DNNs) have received considerable attention over the past decades and have achieved remarkable success in a variety of fields.
We propose to apply DNNs in fitting AFT models using a Gehan-type loss, combined with a sub-sampling technique.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An accelerated failure time (AFT) model assumes a log-linear relationship
between failure times and a set of covariates. In contrast to other popular
survival models that work on hazard functions, the effects of covariates are
directly on failure times, whose interpretation is intuitive. The
semiparametric AFT model that does not specify the error distribution is
flexible and robust to departures from the distributional assumption. Owing to
these desirable features, this class of models has been considered a promising
alternative to the popular Cox model in the analysis of censored failure time
data. However, in these AFT models, a linear predictor for the mean is
typically assumed. Little research has addressed the nonlinearity of predictors
when modeling the mean. Deep neural networks (DNNs) have received considerable
attention over the past decades and have achieved remarkable success in a
variety of fields. DNNs have a number of notable advantages and have been shown
to be particularly useful in addressing the nonlinearity. By taking advantage
of this, we propose to apply DNNs in fitting AFT models using a Gehan-type
loss, combined with a sub-sampling technique. Finite sample properties of the
proposed DNN and rank based AFT model (DeepR-AFT) are investigated via an
extensive simulation study. DeepR-AFT shows superior performance over its
parametric or semiparametric counterparts when the predictor is nonlinear. For
linear predictors, DeepR-AFT performs better when the dimensions of covariates
are large. The proposed DeepR-AFT is illustrated using two real datasets, which
demonstrates its superiority.
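The Gehan-type rank loss with pair sub-sampling described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function name and the uniform-random pair sub-sampling scheme are assumptions.

```python
import numpy as np

def gehan_loss(log_t, pred, delta, n_pairs=None, rng=None):
    """Gehan-type rank loss on residuals e_i = log(t_i) - f(x_i).

    Averages delta_i * max(e_j - e_i, 0) over pairs (i, j), where
    delta_i = 1 marks an observed (uncensored) failure time.
    With n_pairs set, pairs are sub-sampled instead of enumerating
    all n^2 comparisons, which keeps the cost linear in n_pairs.
    """
    e = np.asarray(log_t, dtype=float) - np.asarray(pred, dtype=float)
    delta = np.asarray(delta, dtype=float)
    n = e.size
    if n_pairs is None:                      # full pairwise loss
        diff = e[None, :] - e[:, None]       # diff[i, j] = e_j - e_i
        return float(np.mean(delta[:, None] * np.maximum(diff, 0.0)))
    rng = np.random.default_rng() if rng is None else rng
    i = rng.integers(0, n, n_pairs)          # sub-sampled pair indices
    j = rng.integers(0, n, n_pairs)
    return float(np.mean(delta[i] * np.maximum(e[j] - e[i], 0.0)))

# Toy usage: two uncensored observations with zero predictions,
# so the residuals are the log failure times themselves.
loss = gehan_loss([1.0, 2.0], [0.0, 0.0], [1, 1])  # → 0.25
```

In a DNN fit, `pred` would be the network output f(x) and this loss would be minimized by gradient descent over mini-batches of sub-sampled pairs.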
Related papers
- Deep Limit Model-free Prediction in Regression [0.0]
We provide a model-free approach based on deep neural networks (DNNs) to accomplish point prediction and prediction intervals under a general regression setting.
Our method is more stable and accurate compared to other DNN-based counterparts, especially for optimal point predictions.
arXiv Detail & Related papers (2024-08-18T16:37:53Z)
- Towards Flexible Time-to-event Modeling: Optimizing Neural Networks via Rank Regression [17.684526928033065]
We introduce the Deep AFT Rank-regression model for Time-to-event prediction (DART)
This model uses an objective function based on Gehan's rank statistic, which is efficient and reliable for representation learning.
The proposed method is a semiparametric approach to AFT modeling that does not impose any distributional assumptions on the survival time distribution.
arXiv Detail & Related papers (2023-07-16T13:58:28Z)
- A Momentum-Incorporated Non-Negative Latent Factorization of Tensors Model for Dynamic Network Representation [0.0]
A large-scale dynamic network (LDN) is a source of data in many big data-related applications.
A Latent factorization of tensors (LFT) model efficiently extracts this time pattern.
LFT models based on stochastic gradient descent (SGD) solvers are often limited by training schemes and have poor tail convergence.
This paper proposes a novel nonlinear LFT model (MNNL) based on momentum-incorporated SGD to make training unconstrained and compatible with general training schemes.
arXiv Detail & Related papers (2023-05-04T12:30:53Z)
- Adaptive deep learning for nonlinear time series models [0.0]
We develop a theory for adaptive nonparametric estimation of the mean function of a non-stationary and nonlinear time series model using deep neural networks (DNNs).
We derive minimax lower bounds for estimating mean functions belonging to a wide class of nonlinear autoregressive (AR) models.
arXiv Detail & Related papers (2022-07-06T09:58:58Z)
- Probabilistic model-error assessment of deep learning proxies: an application to real-time inversion of borehole electromagnetic measurements [0.0]
We study the effects of the approximate nature of the deep learned models and associated model errors during the inversion of extra-deep borehole electromagnetic (EM) measurements.
Using a deep neural network (DNN) as a forward model allows us to perform thousands of model evaluations within seconds.
We present numerical results highlighting the challenges associated with the inversion of EM measurements while neglecting model error.
arXiv Detail & Related papers (2022-05-25T11:44:48Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features that guides the model prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Optimization Variance: Exploring Generalization Properties of DNNs [83.78477167211315]
The test error of a deep neural network (DNN) often demonstrates double descent.
We propose a novel metric, optimization variance (OV), to measure the diversity of model updates.
arXiv Detail & Related papers (2021-06-03T09:34:17Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
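A loss in this spirit — a per-time-stamp Gaussian likelihood plus a term discouraging abrupt changes in the fitted mean — can be sketched as below. This is an illustrative stand-in, not SISVAE's actual objective (the paper's smoothness prior acts on the variational posterior, and the squared-difference penalty here is an assumption).

```python
import numpy as np

def smooth_gaussian_loss(x, mu, log_var, lam=0.1):
    """Per-time-stamp Gaussian NLL plus a smoothness penalty on the mean.

    x, mu, log_var: arrays of shape (T,) — one mean and (log) variance
    per time step. lam weights the penalty on jumps between adjacent means.
    """
    x, mu, log_var = (np.asarray(a, dtype=float) for a in (x, mu, log_var))
    # Negative log-likelihood of x under N(mu_t, exp(log_var_t)), averaged over t.
    nll = 0.5 * np.mean(log_var + (x - mu) ** 2 / np.exp(log_var)
                        + np.log(2.0 * np.pi))
    smooth = np.mean(np.diff(mu) ** 2)  # penalize non-smooth mean paths
    return float(nll + lam * smooth)

# Toy usage: a perfectly fitted, constant mean with unit variance
# incurs only the irreducible 0.5*log(2*pi) likelihood cost.
base = smooth_gaussian_loss(np.zeros(4), np.zeros(4), np.zeros(4))
```

A wiggly mean path raises the loss even at identical fit quality, which is the smoothness-inducing idea in miniature.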
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
arXiv Detail & Related papers (2020-12-21T06:40:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.