CADeSH: Collaborative Anomaly Detection for Smart Homes
- URL: http://arxiv.org/abs/2303.01021v1
- Date: Thu, 2 Mar 2023 07:22:26 GMT
- Title: CADeSH: Collaborative Anomaly Detection for Smart Homes
- Authors: Yair Meidan, Dan Avraham, Hanan Libhaber, Asaf Shabtai
- Abstract summary: We propose a two-step collaborative anomaly detection method.
It first uses an autoencoder to differentiate frequent ('benign') and infrequent (possibly 'malicious') traffic flows.
Clustering is then used to analyze only the infrequent flows and classify them as either known ('rare yet benign') or unknown ('malicious').
- Score: 17.072108188004396
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although home IoT (Internet of Things) devices are typically plain and
task-oriented, the context of their daily use may affect their traffic patterns. For
this reason, anomaly-based intrusion detection systems tend to suffer from a
high false positive rate (FPR). To overcome this, we propose a two-step
collaborative anomaly detection method which first uses an autoencoder to
differentiate frequent ('benign') and infrequent (possibly 'malicious') traffic
flows. Clustering is then used to analyze only the infrequent flows and
classify them as either known ('rare yet benign') or unknown ('malicious'). Our
method is collaborative, in that (1) normal behaviors are characterized more
robustly, as they take into account a variety of user interactions and network
topologies, and (2) several features are computed based on a pool of identical
devices rather than just the inspected device.
We evaluated our method empirically, using 21 days of real-world traffic data
that emanated from eight identical IoT devices deployed on various networks,
one of which was located in our controlled lab where we implemented two popular
IoT-related cyber-attacks. Our collaborative anomaly detection method achieved
a macro-average area under the precision-recall curve of 0.841, an F1 score of
0.929, and an FPR of only 0.014. These promising results were obtained by using
labeled traffic data from our lab as the test set, while training the models on
the traffic of devices deployed outside the lab, and thus demonstrate a high
level of generalizability. In addition to its high generalizability and
promising performance, our proposed method also offers benefits such as privacy
preservation, resource savings, and model poisoning mitigation. On top of that,
as a contribution to the scientific community, our novel dataset is available
online.
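The two-step pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration on synthetic data: PCA reconstruction error stands in for the paper's autoencoder, and DBSCAN stands in for its clustering step. The features, models, thresholds, and hyperparameters here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Synthetic "traffic flow" feature vectors: many frequent benign flows
# plus a few out-of-distribution ones (hypothetical stand-in data).
benign = rng.normal(0.0, 1.0, size=(500, 8))
outliers = rng.normal(6.0, 1.0, size=(10, 8))
flows = np.vstack([benign, outliers])

# Step 1 (stand-in for the paper's autoencoder): flag flows whose
# reconstruction error under a model fit on benign traffic is high.
pca = PCA(n_components=3).fit(benign)
recon = pca.inverse_transform(pca.transform(flows))
errors = np.linalg.norm(flows - recon, axis=1)
threshold = np.percentile(errors[: len(benign)], 99)  # set on benign data only
infrequent = flows[errors > threshold]

# Step 2: cluster only the infrequent flows. Points falling into a known
# cluster would be 'rare yet benign'; DBSCAN noise points (label -1) are
# treated as unknown, i.e. possibly malicious.
labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(infrequent)
unknown = infrequent[labels == -1]
print(len(infrequent), len(unknown))
```

Because only the flows flagged in step 1 reach the clustering step, the expensive analysis runs on a small fraction of the traffic, which mirrors the resource-saving argument made in the abstract.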
Related papers
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS)
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
- Exploring Highly Quantised Neural Networks for Intrusion Detection in Automotive CAN [13.581341206178525]
Machine learning-based intrusion detection models have been shown to successfully detect multiple targeted attack vectors.
In this paper, we present a case for a custom-quantised multi-layer perceptron (CQMLP) as a multi-class classification model.
We show that the 2-bit CQMLP model, when integrated as the IDS, can detect malicious attack messages with a very high accuracy of 99.9%.
arXiv Detail & Related papers (2024-01-19T21:11:02Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses between normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- Towards an Awareness of Time Series Anomaly Detection Models' Adversarial Vulnerability [21.98595908296989]
We demonstrate that the performance of state-of-the-art anomaly detection methods is degraded substantially by adding only small adversarial perturbations to the sensor data.
We use different scoring metrics, such as prediction errors, anomaly scores, and classification scores, over several public and private datasets.
We demonstrate, for the first time, the vulnerabilities of anomaly detection systems against adversarial attacks.
arXiv Detail & Related papers (2022-08-24T01:55:50Z)
- Training a Bidirectional GAN-based One-Class Classifier for Network Intrusion Detection [8.158224495708978]
Existing generative adversarial networks (GANs) are primarily used for creating synthetic samples from real ones.
In our proposed method, we construct the trained encoder-discriminator as a one-class classifier based on Bidirectional GAN (Bi-GAN)
Our experimental results illustrate that our proposed method is highly effective for network intrusion detection tasks.
arXiv Detail & Related papers (2022-02-02T23:51:11Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA)
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Anomaly Detection in Cybersecurity: Unsupervised, Graph-Based and Supervised Learning Methods in Adversarial Environments [63.942632088208505]
Inherent to today's operating environment is the practice of adversarial machine learning.
In this work, we examine the feasibility of unsupervised learning and graph-based methods for anomaly detection.
We incorporate a realistic adversarial training mechanism when training our supervised models to enable strong classification performance in adversarial environments.
arXiv Detail & Related papers (2021-05-14T10:05:10Z)
- TELESTO: A Graph Neural Network Model for Anomaly Classification in Cloud Services [77.454688257702]
Machine learning (ML) and artificial intelligence (AI) are applied to IT system operation and maintenance.
One direction aims at the recognition of re-occurring anomaly types to enable remediation automation.
We propose a method that is invariant to dimensionality changes of given data.
arXiv Detail & Related papers (2021-02-25T14:24:49Z)
- Real-World Anomaly Detection by using Digital Twin Systems and Weakly-Supervised Learning [3.0100975935933567]
We present novel weakly-supervised approaches to anomaly detection for industrial settings.
The approaches make use of a Digital Twin to generate a training dataset which simulates the normal operation of the machinery.
The performance of the proposed methods is compared against various state-of-the-art anomaly detection algorithms on an application to a real-world dataset.
arXiv Detail & Related papers (2020-11-12T10:15:56Z)
- TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks [73.01104041298031]
TadGAN is an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs)
To capture the temporal correlations of time series, we use LSTM Recurrent Neural Networks as base models for Generators and Critics.
To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one.
arXiv Detail & Related papers (2020-09-16T15:52:04Z)
- Learning to Detect Anomalous Wireless Links in IoT Networks [1.0017195276758455]
We introduce four types of wireless network anomalies that are identified at the link layer.
We study the performance of threshold- and machine learning (ML)-based classifiers to automatically detect these anomalies.
Our results demonstrate that: (i) the selected supervised approaches are able to detect anomalies with F1 scores above 0.98, while unsupervised ones are also capable of detecting the said anomalies with F1 scores of, on average, 0.90; and (ii) OC-SVM outperforms all the other unsupervised ML approaches, reaching F1 scores of 0.99 for SuddenD, 0.95 for SuddenR, and 0.93 for InstaD.
arXiv Detail & Related papers (2020-08-12T11:03:57Z)
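Several of the papers above rely on one-class models trained on normal behaviour only, such as the OC-SVM results for wireless-link anomalies. The following hedged sketch shows that pattern with scikit-learn's OneClassSVM on synthetic data; the feature layout and hyperparameters are illustrative assumptions, not taken from any of the cited papers:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
normal_links = rng.normal(0.0, 1.0, size=(300, 4))  # features of healthy links
faulty_links = rng.normal(5.0, 0.5, size=(20, 4))   # e.g. a sudden degradation

# Train only on normal behaviour; the OC-SVM learns a boundary around it
# without any labeled anomalies (nu bounds the training-outlier fraction).
clf = OneClassSVM(nu=0.05, gamma="scale").fit(normal_links)

pred = clf.predict(faulty_links)  # -1 = anomaly, +1 = normal
print((pred == -1).mean())
```

The appeal of this setup in intrusion detection is the same as in the abstract above: anomalous traffic is rare and varied, so models are fit to the abundant normal class and anything outside its boundary is flagged.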
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.