An ADMM-Incorporated Latent Factorization of Tensors Method for QoS
Prediction
- URL: http://arxiv.org/abs/2212.01606v1
- Date: Sat, 3 Dec 2022 12:35:48 GMT
- Title: An ADMM-Incorporated Latent Factorization of Tensors Method for QoS
Prediction
- Authors: Jiajia Mi, Hao Wu
- Abstract summary: Quality of service (QoS) describes the performance of a web service dynamically with respect to the service requested by the service consumer.
Latent factorization of tensors (LFT) is very effective for discovering temporal patterns in high dimensional and sparse (HiDS) tensors.
Current LFT models suffer from a low convergence rate and rarely account for the effects of outliers.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As the Internet develops rapidly, it is important to choose suitable web
services from a wide range of candidates. Quality of service (QoS) describes
the performance of a web service dynamically with respect to the service
requested by the service consumer. Moreover, the latent factorization of tensors
(LFT) is very effective for discovering temporal patterns in high dimensional
and sparse (HiDS) tensors. However, current LFT models suffer from a low
convergence rate and rarely account for the effects of outliers. To address the
above problems, this paper proposes an alternating direction method of
multipliers (ADMM)-based outlier-resilient nonnegative latent factorization of
Tensors model. We maintain the non-negativity of the model by constructing an
augmented Lagrangian function with the ADMM optimization framework. In
addition, the Cauchy function is taken as the metric function to reduce the
impact of outliers on model training. Empirical studies on two dynamic QoS
datasets show that the proposed method converges faster and achieves better
prediction accuracy.
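To make the two ingredients named in the abstract concrete, here is a minimal, hypothetical sketch: a Cauchy loss whose bounded gradient suppresses outliers, and an ADMM splitting that keeps the latent factors nonnegative. The CP-style factorization, variable names, learning rate, and Cauchy scale c are all assumptions, not the authors' exact algorithm.

```python
import numpy as np

def cauchy_loss_grad(e, c=1.0):
    """Gradient of the Cauchy loss rho(e) = log(1 + (e/c)^2) w.r.t. e.
    It is bounded in |e|, so outlying residuals contribute little."""
    return 2.0 * e / (c * c + e * e)

def train_admm_lft(obs, shape, rank=4, lr=0.01, penalty=1.0, epochs=50):
    """obs: iterable of (i, j, k, qos_value) for the observed entries of a
    user x service x time QoS tensor; shape = (I, J, K)."""
    I, J, K = shape
    rng = np.random.default_rng(0)
    # Unconstrained factors (the "x" block of the ADMM splitting) ...
    U, S, T = (rng.uniform(0, 0.1, (n, rank)) for n in (I, J, K))
    # ... nonnegative copies (the "z" block) and scaled dual variables.
    Uz, Sz, Tz = U.copy(), S.copy(), T.copy()
    Lu, Ls, Lt = (np.zeros_like(m) for m in (U, S, T))
    for _ in range(epochs):
        # x-update: stochastic gradient on the Cauchy data term plus the
        # proximal term (penalty/2)*||x - z + l||^2 from the augmented Lagrangian.
        for i, j, k, y in obs:
            g = cauchy_loss_grad(y - np.sum(U[i] * S[j] * T[k]))
            U[i] -= lr * (-g * S[j] * T[k] + penalty * (U[i] - Uz[i] + Lu[i]))
            S[j] -= lr * (-g * U[i] * T[k] + penalty * (S[j] - Sz[j] + Ls[j]))
            T[k] -= lr * (-g * U[i] * S[j] + penalty * (T[k] - Tz[k] + Lt[k]))
        # z-update: projection onto the nonnegative orthant enforces
        # nonnegativity without constraining the gradient step itself.
        Uz, Sz, Tz = (np.maximum(x + l, 0.0) for x, l in ((U, Lu), (S, Ls), (T, Lt)))
        # Dual ascent on the scaled multipliers.
        Lu += U - Uz; Ls += S - Sz; Lt += T - Tz
    return Uz, Sz, Tz
```

The bounded Cauchy gradient means a single corrupted QoS record cannot dominate an update, while the z-step keeps the recovered factors nonnegative, matching the two claims in the abstract.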
Related papers
- DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs [70.91804882618243]
This paper proposes DSMoE, a novel approach that achieves sparsification by partitioning pre-trained FFN layers into computational blocks.
We implement adaptive expert routing using sigmoid activation and straight-through estimators, enabling tokens to flexibly access different aspects of model knowledge.
Experiments on LLaMA models demonstrate that under equivalent computational constraints, DSMoE achieves superior performance compared to existing pruning and MoE approaches.
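The summary names two concrete mechanisms: sigmoid (rather than softmax) gating and a straight-through estimator for the discrete expert selection. A hypothetical PyTorch sketch of such a router follows; the 0.5 threshold and the module's interface are assumptions, not DSMoE's actual design.

```python
import torch
import torch.nn as nn

class SigmoidSTERouter(nn.Module):
    """Per-token sigmoid gate over expert blocks: the forward pass uses
    hard 0/1 gates, while gradients flow through the soft sigmoid."""
    def __init__(self, d_model: int, n_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        soft = torch.sigmoid(self.gate(x))   # (..., n_experts), each in (0, 1)
        hard = (soft > 0.5).float()          # discrete block selection
        # Straight-through estimator: value of `hard`, gradient of `soft`.
        return soft + (hard - soft).detach()
```

Unlike top-k softmax routing, independent sigmoid gates let a token activate any subset of blocks, which matches the claim that tokens "flexibly access different aspects of model knowledge."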
arXiv Detail & Related papers (2025-02-18T02:37:26Z)
- Using Causality for Enhanced Prediction of Web Traffic Time Series [36.39678202395453]
We propose an effective neural network module, CCMPlus, designed to extract causal relationship features across services.
Our method surpasses state-of-the-art approaches in Mean Squared Error (MSE) and Mean Absolute Error (MAE) for predicting service traffic time series.
arXiv Detail & Related papers (2025-02-02T00:36:40Z)
- Optimizing Sequential Recommendation Models with Scaling Laws and Approximate Entropy [104.48511402784763]
The Performance Law for SR models aims to theoretically investigate and model the relationship between model performance and data quality.
We propose Approximate Entropy (ApEn) to assess data quality, presenting a more nuanced approach compared to traditional data quantity metrics.
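Approximate entropy itself is a standard regularity statistic; how this paper maps it to recommendation data quality is its own contribution. A self-contained sketch of the classical computation:

```python
import numpy as np

def approx_entropy(u, m=2, r=None):
    """ApEn(m, r) of a 1-D series: lower values mean more regularity,
    i.e. less new information per additional sample."""
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.2 * u.std()  # a common tolerance choice

    def phi(m):
        n = len(u) - m + 1
        x = np.array([u[i:i + m] for i in range(n)])         # length-m windows
        d = np.max(np.abs(x[:, None] - x[None, :]), axis=2)  # Chebyshev distances
        c = (d <= r).mean(axis=1)  # fraction of windows within tolerance r
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```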
arXiv Detail & Related papers (2024-11-30T10:56:30Z)
- Zero-Shot Embeddings Inform Learning and Forgetting with Vision-Language Encoders [6.7181844004432385]
The Inter-Intra Modal Measure (IIMM) functions as a strong predictor of performance changes with fine-tuning.
Fine-tuning on tasks with higher IIMM scores produces greater in-domain performance gains but also induces more severe out-of-domain performance degradation.
With only a single forward pass of the target data, practitioners can leverage this key insight to evaluate the degree to which a model can be expected to improve following fine-tuning.
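The summary does not give IIMM's formula, so the following is only a hypothetical proxy illustrating the kind of single-forward-pass statistic involved: it combines inter-modal similarity between paired image/text embeddings with intra-modal similarity among the images.

```python
import numpy as np

def iimm_style_score(img_emb, txt_emb):
    """img_emb, txt_emb: (n, d) L2-normalized embeddings of paired images
    and captions from one forward pass of the target data.
    Hypothetical stand-in for IIMM, not the paper's exact formula."""
    inter = np.mean(np.sum(img_emb * txt_emb, axis=1))   # paired cosine sims
    sim = img_emb @ img_emb.T                            # image-image cosines
    n = len(img_emb)
    intra = (sim.sum() - np.trace(sim)) / (n * (n - 1))  # off-diagonal mean
    return inter + intra
```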
arXiv Detail & Related papers (2024-07-22T15:35:09Z)
- Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method for efficiently fine-tuning pretrained weights and enabling enhanced robustness and generalization.
A self-regularization strategy is further exploited to maintain stability in terms of the zero-shot generalization of VLMs; the method is dubbed OrthSR.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the model in few-shot image classification scenarios.
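The blurb does not specify how orthogonality is parameterized in this method; one standard construction from the orthogonal fine-tuning literature is the Cayley transform, sketched here as an assumed stand-in:

```python
import torch

def cayley_orthogonal(p: torch.Tensor) -> torch.Tensor:
    """Map an unconstrained square parameter to an orthogonal matrix
    R = (I + S)^{-1} (I - S) with S = p - p^T skew-symmetric. Multiplying
    a frozen pretrained weight by R rotates it without changing the
    pairwise angles among its rows."""
    S = p - p.T
    I = torch.eye(S.shape[0], dtype=S.dtype, device=S.device)
    return torch.linalg.solve(I + S, I - S)  # (I + S)^{-1} (I - S)

# Hypothetical usage: W_adapted = cayley_orthogonal(p) @ W_pretrained
```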
arXiv Detail & Related papers (2024-07-11T10:35:53Z)
- Augmenting Unsupervised Reinforcement Learning with Self-Reference [63.68018737038331]
Humans possess the ability to draw on past experiences explicitly when learning new tasks.
We propose the Self-Reference (SR) approach, an add-on module explicitly designed to leverage historical information.
Our approach achieves state-of-the-art results in terms of Interquartile Mean (IQM) performance and Optimality Gap reduction on the Unsupervised Reinforcement Learning Benchmark.
arXiv Detail & Related papers (2023-11-16T09:07:34Z)
- CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration [59.48235003469116]
We show that data augmentation consistently enhances OOD performance.
We also show that CF-augmented models, which are easier to calibrate, exhibit much lower entropy when assigning importance.
arXiv Detail & Related papers (2023-09-14T16:16:40Z)
- Switching Autoregressive Low-rank Tensor Models [12.461139675114818]
We propose switching autoregressive low-rank tensor (SALT) models.
SALT parameterizes the tensor of an ARHMM with a low-rank factorization to control the number of parameters.
We prove theoretical connections, and discuss practical ones, between SALT, linear dynamical systems, and switching linear dynamical systems (SLDSs).
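The low-rank idea can be made concrete with a small sketch: store an autoregressive tensor A (output dim x lags x input dim) as a CP factorization, so the parameter count scales with the rank rather than with D*L*D. The rank-3 CP form and the names below are assumptions; the paper's exact parameterization may differ (e.g. Tucker, per-state tensors).

```python
import numpy as np

rng = np.random.default_rng(0)
D, L, r = 10, 5, 3           # observation dim, AR lags, tensor rank
U = rng.normal(size=(D, r))  # output-dimension factor
V = rng.normal(size=(L, r))  # lag factor
W = rng.normal(size=(D, r))  # input-dimension factor

def ar_predict(history):
    """Predict x_t from the last L observations (history: L x D rows,
    most recent last) under A[d, l, j] = sum_k U[d,k] V[l,k] W[j,k]."""
    lagged = history[::-1]    # row 0 = lag 1 (most recent)
    s = (lagged @ W) * V      # (L, r): contract input dim, weight by lag
    return U @ s.sum(axis=0)  # (D,) prediction

x_hat = ar_predict(rng.normal(size=(L, D)))
```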
arXiv Detail & Related papers (2023-06-05T22:25:28Z)
- TPMCF: Temporal QoS Prediction using Multi-Source Collaborative Features [0.5161531917413706]
Temporal QoS prediction is essential to identify a suitable service over time.
Recent methods have hardly achieved the desired accuracy due to various limitations.
This paper proposes a scalable strategy for temporal QoS prediction using multi-source collaborative features.
arXiv Detail & Related papers (2023-03-30T06:49:53Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Multi-Agent Reinforcement Learning via Adaptive Kalman Temporal Difference and Successor Representation [32.80370188601152]
The paper proposes the Multi-Agent Adaptive Kalman Temporal Difference (MAK-TD) framework and its Successor Representation-based variant, referred to as the MAK-SR.
The proposed MAK-TD/SR frameworks consider the continuous nature of the action-space that is associated with high dimensional multi-agent environments.
arXiv Detail & Related papers (2021-12-30T18:21:53Z)