β-Divergence-Based Latent Factorization of Tensors model for QoS prediction
- URL: http://arxiv.org/abs/2208.06778v1
- Date: Sun, 14 Aug 2022 05:14:50 GMT
- Title: β-Divergence-Based Latent Factorization of Tensors model for QoS prediction
- Authors: Zemiao Peng, Hao Wu
- Abstract summary: A nonnegative latent factorization of tensors (NLFT) model can effectively model the temporal patterns hidden in nonnegative quality-of-service (QoS) data.
Existing NLFT models' objective functions are based on Euclidean distance, which is only a special case of β-divergence.
This paper proposes a β-divergence-based NLFT model (β-NLFT) to achieve a gain in prediction accuracy.
- Score: 2.744577504320494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A nonnegative latent factorization of tensors (NLFT) model can
effectively model the temporal patterns hidden in nonnegative
quality-of-service (QoS) data and thereby predict the unobserved entries with
high accuracy. However, existing NLFT models' objective functions are based on
Euclidean distance, which is only a special case of β-divergence. Hence, can
we build a generalized NLFT model by adopting β-divergence to achieve a gain
in prediction accuracy? To tackle this issue, this paper proposes a
β-divergence-based NLFT model (β-NLFT). Its ideas are two-fold: 1) building a
learning objective with β-divergence to achieve higher prediction accuracy,
and 2) implementing self-adaptation of hyper-parameters to improve
practicability. Empirical studies on two dynamic QoS datasets demonstrate
that, compared with state-of-the-art models, the proposed β-NLFT model
achieves higher prediction accuracy for unobserved QoS data.
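For context, the β-divergence between a nonnegative observation x and its reconstruction y is the standard family

$$
d_{\beta}(x \parallel y) =
\begin{cases}
\dfrac{x^{\beta} + (\beta-1)\,y^{\beta} - \beta\, x\, y^{\beta-1}}{\beta(\beta-1)}, & \beta \neq 0, 1, \\[1ex]
x \log\dfrac{x}{y} - x + y, & \beta = 1 \ (\text{generalized KL}), \\[1ex]
\dfrac{x}{y} - \log\dfrac{x}{y} - 1, & \beta = 0 \ (\text{Itakura-Saito}),
\end{cases}
$$

so β = 2 recovers the squared Euclidean distance ½(x − y)² used by existing NLFT models. Below is a minimal Python sketch of this family; it illustrates only the divergence itself, not the paper's β-NLFT update rules or its hyper-parameter self-adaptation, and the function name and epsilon clamp are illustrative choices.

```python
import numpy as np

def beta_divergence(x, y, beta, eps=1e-12):
    """Elementwise beta-divergence d_beta(x || y) for nonnegative arrays.

    Special cases: beta = 2 -> half the squared Euclidean distance,
    beta = 1 -> generalized Kullback-Leibler, beta = 0 -> Itakura-Saito.
    """
    x = np.maximum(np.asarray(x, dtype=float), eps)  # clamp to avoid log(0)
    y = np.maximum(np.asarray(y, dtype=float), eps)
    if beta == 1:
        return x * np.log(x / y) - x + y
    if beta == 0:
        return x / y - np.log(x / y) - 1.0
    return (x ** beta + (beta - 1) * y ** beta
            - beta * x * y ** (beta - 1)) / (beta * (beta - 1))

# beta = 2 reduces to half the squared error between x and y.
x, y = np.array([1.0, 2.0]), np.array([1.5, 1.0])
assert np.allclose(beta_divergence(x, y, 2.0), 0.5 * (x - y) ** 2)
```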
Related papers
- Variational Graph Convolutional Neural Networks [72.67088029389764]
Uncertainty can help improve the explainability of Graph Convolutional Networks.
Uncertainty can also be used in critical applications to verify the results of the model.
arXiv Detail & Related papers (2025-07-02T13:28:37Z) - Multi-Head Self-Attending Neural Tucker Factorization [5.734615417239977]
We introduce a neural network-based tensor factorization approach tailored for learning representations of high-dimensional and incomplete (HDI) tensors.
The proposed MSNTucF model demonstrates superior performance compared to state-of-the-art benchmark models in estimating missing observations.
arXiv Detail & Related papers (2025-01-16T13:04:15Z) - Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z) - Sub-graph Based Diffusion Model for Link Prediction [43.15741675617231]
Denoising Diffusion Probabilistic Models (DDPMs) represent a contemporary class of generative models with exceptional qualities.
We build a novel generative model for link prediction using a dedicated design to decompose the likelihood estimation process via the Bayesian formula.
Our proposed method presents numerous advantages: (1) transferability across datasets without retraining, (2) promising generalization on limited training data, and (3) robustness against graph adversarial attacks.
arXiv Detail & Related papers (2024-09-13T02:23:55Z) - F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting with Proxy Data [65.6499834212641]
We formulate demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm.
By considering domain similarities through task-specific metadata, our model improves generalization, with the excess risk decreasing as the number of training tasks increases.
Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset.
arXiv Detail & Related papers (2024-06-23T21:28:50Z) - Correcting Model Bias with Sparse Implicit Processes [0.9187159782788579]
We show that Sparse Implicit Processes (SIP) is capable of correcting model bias when the data generating mechanism differs strongly from the one implied by the model.
We use synthetic datasets to show that SIP is capable of providing predictive distributions that reflect the data better than the exact predictions of the initial, but wrongly assumed model.
arXiv Detail & Related papers (2022-07-21T18:00:01Z) - An Interpretable Probabilistic Model for Short-Term Solar Power
Forecasting Using Natural Gradient Boosting [0.0]
We propose a two-stage probabilistic forecasting framework able to generate highly accurate, reliable, and sharp forecasts.
The framework offers full transparency on both the point forecasts and the prediction intervals (PIs).
To highlight the performance and the applicability of the proposed framework, real data from two PV parks located in Southern Germany are employed.
arXiv Detail & Related papers (2021-08-05T12:59:38Z) - When in Doubt: Neural Non-Parametric Uncertainty Quantification for
Epidemic Forecasting [70.54920804222031]
Most existing forecasting models disregard uncertainty quantification, resulting in miscalibrated predictions.
Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations.
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP.
arXiv Detail & Related papers (2021-06-07T18:31:47Z) - Post-mortem on a deep learning contest: a Simpson's paradox and the
complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox, where "scale" metrics perform well overall but poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z) - Latent Gaussian Model Boosting [0.0]
Tree-boosting shows excellent predictive accuracy on many data sets.
We obtain increased predictive accuracy compared to existing approaches in both simulated and real-world data experiments.
arXiv Detail & Related papers (2021-05-19T07:36:30Z) - Anomaly Detection of Time Series with Smoothness-Inducing Sequential
Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z) - Learning Consistent Deep Generative Models from Sparse Data via
Prediction Constraints [16.48824312904122]
We develop a new framework for learning variational autoencoders and other deep generative models.
We show that these two contributions -- prediction constraints and consistency constraints -- lead to promising image classification performance.
arXiv Detail & Related papers (2020-12-12T04:18:50Z) - A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z) - Explaining and Improving Model Behavior with k Nearest Neighbor
Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
arXiv Detail & Related papers (2020-10-18T16:55:25Z) - Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
Seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions (a generic sketch of this quantity appears after this list).
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
arXiv Detail & Related papers (2020-10-15T16:57:27Z)
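As a generic illustration of the token-level entropy studied in the last entry above, the following sketch computes the Shannon entropy of each position's predictive distribution from raw logits. The array shapes and function name are assumptions for illustration, not the paper's code.

```python
import numpy as np

def token_entropy(logits):
    """Shannon entropy (in nats) of each token's predictive distribution.

    logits: array of shape (seq_len, vocab_size) of unnormalized scores.
    Returns an array of shape (seq_len,), one entropy value per position.
    """
    z = logits - logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# A peaked distribution yields low entropy (a confident token);
# a uniform one yields high entropy (an uncertain token).
print(token_entropy(np.array([[10.0, 0.0, 0.0],
                              [0.0, 0.0, 0.0]])))
```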