Contrastive Learning-Based Dependency Modeling for Anomaly Detection in Cloud Services
- URL: http://arxiv.org/abs/2510.13368v1
- Date: Wed, 15 Oct 2025 09:59:16 GMT
- Title: Contrastive Learning-Based Dependency Modeling for Anomaly Detection in Cloud Services
- Authors: Yue Xing, Yingnan Deng, Heyao Liu, Ming Wang, Yun Zi, Xiaoxuan Sun
- Abstract summary: This paper proposes a dependency modeling and anomaly detection method that integrates contrastive learning. A contrastive learning framework is introduced, constructing positive and negative sample pairs to enhance the separability of normal and abnormal patterns. The proposed approach significantly outperforms existing methods on key metrics such as Precision, Recall, F1-Score, and AUC.
- Score: 6.382793463325052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the challenges of complex dependencies and diverse anomaly patterns in cloud service environments by proposing a dependency modeling and anomaly detection method that integrates contrastive learning. The method abstracts service interactions into a dependency graph, extracts temporal and structural features through embedding functions, and employs a graph convolution mechanism to aggregate neighborhood information for context-aware service representations. A contrastive learning framework is then introduced, constructing positive and negative sample pairs to enhance the separability of normal and abnormal patterns in the representation space. Furthermore, a temporal consistency constraint is designed to maintain representation stability across time steps and reduce the impact of short-term fluctuations and noise. The overall optimization combines contrastive loss and temporal consistency loss to ensure stable and reliable detection across multi-dimensional features. Experiments on public datasets systematically evaluate the method from hyperparameter, environmental, and data sensitivity perspectives. Results show that the proposed approach significantly outperforms existing methods on key metrics such as Precision, Recall, F1-Score, and AUC, while maintaining robustness under conditions of sparse labeling, monitoring noise, and traffic fluctuations. This study verifies the effectiveness of integrating dependency modeling with contrastive learning, provides a complete technical solution for cloud service anomaly detection, and demonstrates strong adaptability and stability in complex environments.
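The pipeline described in the abstract (graph convolution over a service dependency graph, a contrastive objective over positive/negative pairs, and a temporal consistency penalty) can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the toy graph, the InfoNCE-style contrastive form, and the weights `tau` and `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dependency graph: 5 services, A[i, j] = 1 if service i calls service j.
A = np.array([[0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1],
              [1, 0, 0, 0, 0]], dtype=float)
X = rng.normal(size=(5, 8))  # per-service temporal/structural embeddings (toy)

def graph_conv(A, X):
    """One mean-aggregation step: each service averages itself and its neighbours."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalise
    return D_inv @ A_hat @ X                  # context-aware representations

def contrastive_loss(z, z_pos, z_neg, tau=0.5):
    """InfoNCE-style loss: pull positive pairs together, push negatives apart."""
    def sim(a, b):
        return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))
    pos = np.exp(sim(z, z_pos) / tau)
    neg = np.exp(sim(z, z_neg) / tau)
    return float(-np.log(pos / (pos + neg)).mean())

def temporal_consistency_loss(z_t, z_prev):
    """Penalise representation drift between consecutive time steps."""
    return float(((z_t - z_prev) ** 2).mean())

H_t = graph_conv(A, X)                                        # step t
H_prev = graph_conv(A, X + 0.01 * rng.normal(size=X.shape))   # step t-1 (toy)
H_pos = H_t + 0.05 * rng.normal(size=H_t.shape)               # positive view
H_neg = rng.normal(size=H_t.shape)                            # negative samples

lam = 0.1  # weight of the temporal term (hypothetical)
loss = contrastive_loss(H_t, H_pos, H_neg) + lam * temporal_consistency_loss(H_t, H_prev)
print(round(loss, 4))
```

In the full method these losses would be minimised jointly by gradient descent over the embedding and graph convolution parameters; the sketch only shows how the two terms combine into one objective.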
Related papers
- Adaptive Dual-Weighting Framework for Federated Learning via Out-of-Distribution Detection [53.45696787935487]
Federated Learning (FL) enables collaborative model training across large-scale distributed service nodes. In real-world service-oriented deployments, data generated by heterogeneous users, devices, and application scenarios are inherently non-IID. We propose FLood, a novel FL framework inspired by out-of-distribution (OOD) detection.
arXiv Detail & Related papers (2026-02-01T05:54:59Z)
- Shared Representation Learning for High-Dimensional Multi-Task Forecasting under Resource Contention in Cloud-Native Backends [12.826922983028082]
This study proposes a unified forecasting framework for high-dimensional multi-task time series to meet the prediction demands of cloud-native backend systems. The method builds a shared encoding structure to represent diverse monitoring indicators in a unified manner. A cross-task structural propagation module is introduced to model potential dependencies among nodes.
arXiv Detail & Related papers (2025-12-24T11:02:03Z)
- Artificial Intelligence-Based Multiscale Temporal Modeling for Anomaly Detection in Cloud Services [10.421371572062595]
This study proposes an anomaly detection method based on the Transformer architecture with integrated multiscale feature perception. The proposed method outperforms mainstream baseline models in key metrics, including precision, recall, AUC, and F1-score.
arXiv Detail & Related papers (2025-08-20T07:52:36Z)
- Federated Anomaly Detection for Multi-Tenant Cloud Platforms with Personalized Modeling [6.028943403943345]
This paper proposes an anomaly detection method based on federated learning to address key challenges in multi-tenant cloud environments. A global model is optimized, enabling cross-tenant collaborative anomaly detection while preserving data privacy. Experiments use real telemetry data from a cloud platform to construct a simulated multi-tenant environment.
arXiv Detail & Related papers (2025-08-14T00:46:24Z)
- Graph Neural Network and Transformer Integration for Unsupervised System Anomaly Discovery [14.982273490507986]
This study proposes an unsupervised anomaly detection method for distributed backend service systems. It addresses practical challenges such as complex structural dependencies, diverse behavioral evolution, and the absence of labeled data. Results show that the proposed method outperforms existing models on several key metrics.
arXiv Detail & Related papers (2025-08-13T00:35:58Z)
- Multiresolution Analysis and Statistical Thresholding on Dynamic Networks [49.09073800467438]
ANIE (Adaptive Network Intensity Estimation) is a multi-resolution framework designed to automatically identify the time scales at which network structure evolves. We show that ANIE adapts to the appropriate time resolution and is able to capture sharp structural changes while remaining robust to noise.
arXiv Detail & Related papers (2025-06-01T22:55:55Z)
- A Hybrid Framework for Statistical Feature Selection and Image-Based Noise-Defect Detection [55.2480439325792]
This paper presents a hybrid framework that integrates both statistical feature selection and classification techniques to improve defect detection accuracy. We present around 55 distinguished features extracted from industrial images, which are then analyzed using statistical methods. By integrating these methods with flexible machine learning applications, the proposed framework improves detection accuracy and reduces false positives and misclassifications.
arXiv Detail & Related papers (2024-12-11T22:12:21Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Learning Prompt-Enhanced Context Features for Weakly-Supervised Video Anomaly Detection [37.99031842449251]
Video anomaly detection under weak supervision presents significant challenges.
We present a weakly supervised anomaly detection framework that focuses on efficient context modeling and enhanced semantic discriminability.
Our approach significantly improves the detection accuracy of certain anomaly sub-classes, underscoring its practical value and efficacy.
arXiv Detail & Related papers (2023-06-26T06:45:16Z)
- Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
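The Dynamic Federated Learning entry above describes iterations in which a random subset of agents performs local updates that the server then aggregates. A minimal numpy sketch of that scheme on a toy quadratic objective, assuming FedAvg-style averaging and hypothetical values for the learning rate, participation fraction, and round count:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: K agents each hold noisy samples around a shared minimizer w_true.
K, dim = 10, 3
w_true = np.ones(dim)
data = [w_true + 0.1 * rng.normal(size=(20, dim)) for _ in range(K)]

w = np.zeros(dim)                 # global model
lr, rounds, frac = 0.5, 50, 0.3   # hypothetical hyperparameters

for _ in range(rounds):
    # A random subset of the available agents participates this iteration.
    active = rng.choice(K, size=max(1, int(frac * K)), replace=False)
    local_models = []
    for k in active:
        grad = w - data[k].mean(axis=0)     # gradient of 0.5 * ||w - mean_k||^2
        local_models.append(w - lr * grad)  # one local update step
    w = np.mean(local_models, axis=0)       # server averages the local models

err = np.linalg.norm(w - w_true)
print(round(err, 3))
```

The paper's analysis additionally lets the true minimizer drift as a random walk; this sketch holds it fixed and only illustrates the random-participation averaging loop.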
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.