CloudShield: Real-time Anomaly Detection in the Cloud
- URL: http://arxiv.org/abs/2108.08977v1
- Date: Fri, 20 Aug 2021 03:14:18 GMT
- Title: CloudShield: Real-time Anomaly Detection in the Cloud
- Authors: Zecheng He, Ruby B. Lee
- Abstract summary: CloudShield is a real-time anomaly and attack detection system for cloud computing.
It distinguishes between benign programs, known attacks, and zero-day attacks.
It significantly reduces false alarms by up to 99.0%.
- Score: 8.406912571507569
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In cloud computing, it is desirable for suspicious activities to be detected
automatically by anomaly detection systems. Although anomaly detection has been
investigated in the past, it remains an open problem in cloud computing. The challenges
are: characterizing the normal behavior of a cloud server, distinguishing
between benign anomalies and malicious ones (attacks), and preventing alert fatigue
due to false alarms.
We propose CloudShield, a practical and generalizable real-time anomaly and
attack detection system for cloud computing. CloudShield uses a general,
pretrained deep learning model, applied to different cloud workloads, to predict
normal behavior, and provides real-time, continuous detection by examining the
model's reconstruction error distributions. Once an anomaly is detected, to
reduce alert fatigue, CloudShield automatically distinguishes between benign
programs, known attacks, and zero-day attacks by examining the prediction
error distributions. We evaluate CloudShield on representative
cloud benchmarks. Our evaluation shows that CloudShield, using model
pretraining, applies to a wide range of cloud workloads. In particular, we
observe that CloudShield can detect recently proposed speculative execution
attacks, e.g., the Spectre and Meltdown attacks, in milliseconds. Furthermore, we
show that CloudShield accurately differentiates known attacks and potential
zero-day attacks from benign programs and prioritizes them accordingly. Thus, it
significantly reduces false alarms by up to 99.0%.
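The abstract describes a two-stage pipeline: error-distribution thresholding for detection, then error-distribution matching for triage. A minimal sketch of that logic follows; the function names, the quantile threshold, and the use of a two-sample KS test to compare distributions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_anomaly(predict, window, normal_errors, quantile=0.999):
    """Flag a workload window as anomalous when the prediction error of a
    pretrained behavior model exceeds a high quantile of the error
    distribution observed on known-normal runs (quantile is an assumption)."""
    err = float(np.mean((predict(window[:-1]) - window[-1]) ** 2))
    return err > np.quantile(normal_errors, quantile), err

def triage(observed_errors, benign_profiles, attack_profiles, alpha=0.01):
    """Once flagged, compare the observed error distribution against stored
    profiles of benign programs and known attacks; anything matching neither
    is escalated as a potential zero-day. The KS test is a stand-in for the
    paper's distribution comparison."""
    if any(ks_2samp(observed_errors, p).pvalue > alpha for p in benign_profiles):
        return "benign"
    if any(ks_2samp(observed_errors, p).pvalue > alpha for p in attack_profiles):
        return "known attack"
    return "potential zero-day"
```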
Related papers
- Why does Prediction Accuracy Decrease over Time? Uncertain Positive Learning for Cloud Failure Prediction [35.058991707881646]
We find that the prediction accuracy may decrease by about 9% after retraining the models.
Mitigation actions may produce uncertain positive instances, since they cannot be verified after mitigation, and these may introduce more noise when the prediction model is updated.
To tackle this problem, we design an Uncertain Positive Learning Risk Estimator (Uptake) approach.
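The summary does not spell out how Uptake works; a minimal sketch of the underlying idea, down-weighting unverifiable ("uncertain positive") labels during retraining, might look like the following. The weighting scheme and weight value are assumptions for illustration.

```python
import numpy as np

def weighted_log_loss(y_true, y_prob, uncertain_mask, w_uncertain=0.3):
    """Standard log loss, except positives that cannot be verified after
    mitigation (uncertain positives) contribute with reduced weight, so
    label noise from mitigation actions distorts retraining less."""
    eps = 1e-12
    per_sample = -(y_true * np.log(y_prob + eps)
                   + (1 - y_true) * np.log(1 - y_prob + eps))
    weights = np.where(uncertain_mask & (y_true == 1), w_uncertain, 1.0)
    return float(np.mean(weights * per_sample))
```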
arXiv Detail & Related papers (2024-01-08T03:13:09Z)
- Overload: Latency Attacks on Object Detection for Edge Devices [47.9744734181236]
This paper investigates latency attacks on deep learning applications.
Unlike common adversarial attacks for misclassification, the goal of latency attacks is to increase the inference time.
We use object detection to demonstrate how such attacks work.
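As a rough illustration of the attack class (not Overload's actual objective), a PGD-style loop can push more candidate boxes above the detector's confidence threshold so that post-processing such as NMS has more work to do. The `score_fn(img) -> (N,) box logits` interface, step counts, and budgets below are assumptions.

```python
import torch

def latency_attack(score_fn, image, steps=50, eps=8/255, alpha=1/255):
    """Perturb the input so more candidate detections survive thresholding,
    inflating inference time instead of causing a misclassification."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = torch.sigmoid(score_fn(image + delta)).sum()  # more confident candidates
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent step
            delta.clamp_(-eps, eps)              # keep perturbation bounded
        delta.grad.zero_()
    return (image + delta).detach()
```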
arXiv Detail & Related papers (2023-04-11T17:24:31Z)
- PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
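The Chamfer distance mentioned here is a standard similarity measure between point clouds; a minimal NumPy version of the symmetric form (the exact variant used by PointCA may differ) is:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a:(N,3) and b:(M,3).
    Attacks like PointCA keep this below a small budget (e.g. 0.01) so the
    adversarial cloud stays visually similar to the original."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1) ** 2
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```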
arXiv Detail & Related papers (2022-11-22T14:15:41Z)
- Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond [85.06231315901505]
Rain removal aims to remove rain streaks from images/videos and reduce the disruptive effects caused by rain.
This paper makes the first attempt to conduct a comprehensive study on the robustness of deep learning-based rain removal methods against adversarial attacks.
arXiv Detail & Related papers (2022-03-31T10:22:24Z)
- Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversarial effectiveness and invisibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
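One common way to build such a per-point sensitivity map is from gradient magnitudes; the sketch below is an illustrative stand-in for the paper's construction, assuming a PyTorch point-cloud classifier `model(points[None]) -> logits`.

```python
import torch

def point_sensitivity_map(model, points, label):
    """Per-point sensitivity as the gradient magnitude of the classification
    loss w.r.t. each point; low-sensitivity points can be perturbed with
    less visible effect."""
    pts = points.clone().detach().requires_grad_(True)   # (N, 3)
    loss = torch.nn.functional.cross_entropy(model(pts[None]), label[None])
    loss.backward()
    return pts.grad.norm(dim=1)                          # (N,) sensitivity scores
```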
arXiv Detail & Related papers (2022-03-08T12:21:35Z)
- Unsupervised Point Cloud Representation Learning with Deep Neural Networks: A Survey [104.71816962689296]
Unsupervised point cloud representation learning has attracted increasing attention due to the cost of labelling point clouds at scale.
This paper provides a comprehensive review of unsupervised point cloud representation learning using deep neural networks.
arXiv Detail & Related papers (2022-02-28T07:46:05Z)
- Adversarial Attacks against a Satellite-borne Multispectral Cloud Detector [33.11869627537352]
In this paper, we highlight the vulnerability of deep learning-based cloud detection towards adversarial attacks.
By optimising an adversarial pattern and superimposing it into a cloudless scene, we bias the neural network into detecting clouds in the scene.
This opens up the potential of multi-objective attacks, specifically, adversarial biasing in the cloud-sensitive bands and visual camouflage in the visible bands.
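A generic sketch of optimising such an additive pattern is shown below; the `detector(img) -> per-pixel cloud logits` interface, optimiser, and step count are assumptions, and the real attack additionally constrains the pattern to cloud-sensitive bands.

```python
import torch

def optimise_cloud_pattern(detector, scene, steps=200, lr=0.01):
    """Optimise an additive pattern so a cloud detector reports clouds in a
    cloudless scene (untargeted sketch of adversarial biasing)."""
    pattern = torch.zeros_like(scene, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = detector((scene + pattern).clamp(0, 1))
        loss = -torch.sigmoid(logits).mean()   # push pixels towards "cloud"
        loss.backward()
        opt.step()
    return pattern.detach()
```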
arXiv Detail & Related papers (2021-12-03T05:27:50Z)
- Online Self-Evolving Anomaly Detection in Cloud Computing Environments [6.480575492140354]
We present a self-evolving anomaly detection (SEAD) framework for cloud dependability assurance.
Our framework self-evolves by exploring newly verified anomaly records and continuously updating the anomaly detector online.
Our detectors can achieve 88.94% in sensitivity and 94.60% on average, which makes them suitable for real-world deployment.
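A minimal sketch of that online update loop, using an incremental scikit-learn classifier as a simplified stand-in for SEAD's detectors (the class design is an assumption, not the paper's implementation):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class SelfEvolvingDetector:
    """Whenever operators verify new anomaly records, the model is updated
    incrementally online instead of being retrained from scratch."""

    def __init__(self):
        self.clf = SGDClassifier(loss="log_loss")
        self.classes = np.array([0, 1])          # 0 = normal, 1 = anomaly

    def update(self, features, verified_labels):
        self.clf.partial_fit(features, verified_labels, classes=self.classes)

    def score(self, features):
        return self.clf.predict_proba(features)[:, 1]   # anomaly probability
```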
arXiv Detail & Related papers (2021-11-16T05:13:38Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
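The core mechanism, fitting a density model over hidden activations and flagging unlikely inputs, can be sketched as follows. DAAIN uses a normalizing flow; a multivariate Gaussian stands in here for brevity, and the quantile cut-off is an assumption.

```python
import numpy as np

class ActivationDensityMonitor:
    """Fit a density model over activations of in-distribution data and
    flag inputs whose activations are too unlikely under it."""

    def fit(self, activations):                     # (N, D) in-distribution
        self.mu = activations.mean(axis=0)
        cov = np.cov(activations, rowvar=False)
        self.prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        self.threshold = np.quantile(self._maha(activations), 0.999)
        return self

    def _maha(self, a):
        diff = a - self.mu                          # squared Mahalanobis distance
        return np.einsum("nd,de,ne->n", diff, self.prec, diff)

    def is_anomalous(self, activations):
        return self._maha(activations) > self.threshold
```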
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Cloud detection machine learning algorithms for PROBA-V [6.950862982117125]
The objective of the algorithms presented in this paper is to detect clouds accurately, providing a per-pixel cloud flag.
The effectiveness of the proposed method is successfully illustrated on a large number of real PROBA-V images.
arXiv Detail & Related papers (2020-12-09T18:23:59Z)
- Thick Cloud Removal of Remote Sensing Images Using Temporal Smoothness and Sparsity-Regularized Tensor Optimization [3.65794756599491]
In remote sensing images, thick cloud accompanied by cloud shadow occurs with high probability.
A novel thick cloud removal method for remote sensing images based on temporal smoothness and sparsity-regularized tensor optimization is proposed.
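The summary names the two regularizers but not the objective; one plausible form of such an optimization (illustrative, not the paper's exact formulation) is:

```latex
\min_{\mathcal{X},\,\mathcal{C}} \;
\underbrace{\|D_t \mathcal{X}\|_F^2}_{\text{temporal smoothness}}
\;+\; \lambda \underbrace{\|\mathcal{C}\|_1}_{\text{cloud sparsity}}
\quad \text{s.t.} \quad \mathcal{Y} = \mathcal{X} + \mathcal{C}
```

where $\mathcal{Y}$ is the observed image tensor, $\mathcal{X}$ the recovered cloud-free sequence, $\mathcal{C}$ the sparse cloud/shadow component, and $D_t$ a temporal difference operator.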
arXiv Detail & Related papers (2020-08-11T05:59:20Z)