Anticipated Network Surveillance -- An extrapolated study to predict
cyber-attacks using Machine Learning and Data Analytics
- URL: http://arxiv.org/abs/2312.17270v1
- Date: Wed, 27 Dec 2023 01:09:11 GMT
- Title: Anticipated Network Surveillance -- An extrapolated study to predict
cyber-attacks using Machine Learning and Data Analytics
- Authors: Aviral Srivastava, Dhyan Thakkar, Dr. Sharda Valiveti, Dr. Pooja Shah
and Dr. Gaurang Raval
- Abstract summary: This paper discusses a novel technique to predict an upcoming attack in a network based on several data parameters.
The proposed model comprises dataset pre-processing and training, followed by a testing phase.
Based on the results of the testing phase, the best model is selected and used to extract the event class that may lead to an attack.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning and data mining techniques are utilized to enhance
the security of any network. Researchers have used machine learning for pattern
detection, anomaly detection, dynamic policy setting, etc. These methods allow a
program to learn from data and make decisions without human intervention, at the
cost of long training periods and substantial computation power. This paper
discusses a novel technique to predict an upcoming attack in a network based on
several data parameters. The dataset is continuous in a real-time implementation.
The proposed model comprises dataset pre-processing and training, followed by a
testing phase. Based on the results of the testing phase, the best model is
selected and used to extract the event class that may lead to an attack. The
event statistics are then used for attack prediction.
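As a rough illustration of the pipeline the abstract outlines (pre-process the data, train several candidate models, select the best one in a testing phase, then inspect which event classes dominate among predicted attacks), here is a minimal sketch; the event log, features, and candidate models are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only: the event log, features, and candidate models are
# invented stand-ins, not the dataset or code used in the paper.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pkt_rate":    rng.normal(size=2000),
    "conn_count":  rng.poisson(5, size=2000),
    "event_class": rng.choice(["scan", "auth_fail", "normal"], size=2000),
})
df["attack"] = ((df["event_class"] == "scan") & (df["pkt_rate"] > 0)).astype(int)

# Pre-processing: encode categorical event classes, then split for the testing phase.
X = pd.get_dummies(df[["pkt_rate", "conn_count", "event_class"]])
X_tr, X_te, y_tr, y_te = train_test_split(X, df["attack"], test_size=0.3, random_state=0)

candidates = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "logistic_reg":  LogisticRegression(max_iter=1000),
}
scores = {name: f1_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in candidates.items()}
best = max(scores, key=scores.get)                 # "best model" from the testing phase
print("selected model:", best, "F1:", round(scores[best], 3))

# Event statistics: which event classes accompany predicted attacks.
pred = candidates[best].predict(X_te)
print(df.loc[y_te.index, "event_class"][pred == 1].value_counts())
```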
Related papers
- IT Intrusion Detection Using Statistical Learning and Testbed Measurements [8.493936898320673]
We study automated intrusion detection in an IT infrastructure, specifically the problem of identifying the start of an attack.
We apply statistical learning methods, including Hidden Markov Model (HMM), Long Short-Term Memory (LSTM), and Random Forest Classifier (RFC).
We find that both HMM and LSTM can be effective in predicting attack start time, attack type, and attack actions.
arXiv Detail & Related papers (2024-02-20T15:25:56Z)
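To make the setting of the entry above concrete, the sketch below trains one of the mentioned learners (a random forest) on sliding windows of infrastructure measurements and reports the first window flagged as an attack; the synthetic metrics and the attack start point are assumptions, and the HMM and LSTM variants are omitted.

```python
# Hedged illustration: synthetic measurements stand in for the testbed data;
# only a random-forest variant is shown, not the paper's HMM or LSTM models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
T, n_metrics, win = 500, 8, 10
metrics = rng.normal(size=(T, n_metrics))
metrics[300:] += 1.5                              # pretend an attack starts at t = 300
labels = (np.arange(T) >= 300).astype(int)

# Flatten each sliding window of measurements into one feature vector.
X = np.stack([metrics[t - win:t].ravel() for t in range(win, T)])
y = labels[win:]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
pred = clf.predict(X[400:])
first_alert = int(np.argmax(pred == 1))           # first window classified as attack
print("predicted attack start (time step):", win + 400 + first_alert)
```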
- An Approach to Abstract Multi-stage Cyberattack Data Generation for ML-Based IDS in Smart Grids [2.5655761752240505]
We propose a method to generate synthetic data using a graph-based approach for training machine learning models in smart grids.
We use an abstract form of multi-stage cyberattacks defined via graph formulations and simulate the propagation behavior of attacks in the network.
arXiv Detail & Related papers (2023-12-21T11:07:51Z)
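A toy sketch of the graph-based idea from the smart-grid entry above: an abstract network graph plus a simple propagation routine that emits synthetic multi-stage attack events. The topology, stage structure, and event fields are invented for illustration, not the paper's formulation.

```python
# Toy illustration only: the grid topology, attack stages, and emitted events
# are invented; the paper defines its own graph formulation and propagation.
import random
import networkx as nx

g = nx.Graph()
g.add_edges_from([("ied_1", "gateway"), ("ied_2", "gateway"),
                  ("gateway", "scada"), ("scada", "historian")])

def propagate(graph, entry_node, n_stages, seed=0):
    """Random-walk propagation from an entry point, yielding synthetic events."""
    random.seed(seed)
    node, events = entry_node, []
    for stage in range(n_stages):
        events.append({"stage": stage, "compromised": node})
        node = random.choice(list(graph.neighbors(node)))
    return events

for event in propagate(g, "ied_1", n_stages=4):
    print(event)
```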
- IoTGeM: Generalizable Models for Behaviour-Based IoT Attack Detection [3.3772986620114387]
We present an approach for modelling IoT network attacks that focuses on generalizability, yet also leads to better detection and performance.
First, we present an improved rolling window approach for feature extraction, and introduce a multi-step feature selection process that reduces overfitting.
Second, we build and test models using isolated train and test datasets, thereby avoiding common data leaks.
Third, we rigorously evaluate our methodology using a diverse portfolio of machine learning models, evaluation metrics and datasets.
arXiv Detail & Related papers (2023-10-17T21:46:43Z)
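As a small illustration of the rolling-window feature extraction described for IoTGeM above, the sketch below computes per-flow window statistics over packet lengths; the window size, statistics, and column names are assumptions rather than the paper's actual feature set.

```python
# Assumed feature layout; IoTGeM's actual features, window policy, and the
# multi-step feature selection process are not reproduced here.
import numpy as np
import pandas as pd

def rolling_features(capture: pd.DataFrame, window: int = 20) -> pd.DataFrame:
    """Per-flow rolling statistics (mean, std, max) over packet lengths."""
    grouped = capture.groupby("flow_id")["pkt_len"]
    return pd.DataFrame({
        "pkt_mean": grouped.transform(lambda s: s.rolling(window, min_periods=1).mean()),
        "pkt_std":  grouped.transform(lambda s: s.rolling(window, min_periods=1).std()),
        "pkt_max":  grouped.transform(lambda s: s.rolling(window, min_periods=1).max()),
    }).fillna(0.0)

rng = np.random.default_rng(0)
capture = pd.DataFrame({"flow_id": rng.integers(0, 5, 1000),
                        "pkt_len": rng.integers(60, 1500, 1000)})
print(rolling_features(capture).head())
# Train and test features should come from fully isolated captures, as the
# entry above stresses, to avoid window-level data leaks.
```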
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
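A rough sketch of the two ingredients combined in active selective prediction above: abstain when the model is unsure about a shifted target point, and spend the labeling budget on the points it is least sure about. The confidence threshold and acquisition rule below are simplifications for illustration, not ASPEST's actual algorithm.

```python
# Simplified illustration of abstention plus uncertainty-based querying;
# ASPEST's joint training and acquisition procedure is more involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src = rng.normal(size=(500, 2))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(loc=0.8, size=(300, 2))        # distribution-shifted target domain

model = LogisticRegression().fit(X_src, y_src)
confidence = model.predict_proba(X_tgt).max(axis=1)

# Selective prediction: abstain whenever confidence falls below a threshold.
abstain = confidence < 0.7
print("abstention rate on target:", round(abstain.mean(), 3))

# Active learning: query human labels for the least confident target points.
budget = 20
query_idx = np.argsort(confidence)[:budget]
print("target indices to label next:", query_idx[:5], "...")
```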
- CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z)
- Predicting Seriousness of Injury in a Traffic Accident: A New Imbalanced Dataset and Benchmark [62.997667081978825]
The paper introduces a new dataset to assess the performance of machine learning algorithms in the prediction of the seriousness of injury in a traffic accident.
The dataset is created by aggregating publicly available datasets from the UK Department for Transport.
arXiv Detail & Related papers (2022-05-20T21:15:26Z)
- Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z)
- Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z)
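For context on the entry above, here is what the simplest split-conformal recipe looks like for regression; the paper's contribution is handling the feedback-induced shift between the training data and the points a prediction algorithm chooses to test, which this plain sketch ignores.

```python
# Plain split conformal prediction under exchangeability; the paper extends
# the idea to the design setting, where test points are chosen by the model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=1000)

X_fit, y_fit = X[:600], y[:600]                   # fit the point predictor
X_cal, y_cal = X[600:800], y[600:800]             # calibrate residuals
X_new = X[800:]

model = Ridge().fit(X_fit, y_fit)
residuals = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1
n = len(residuals)
q = np.quantile(residuals, np.ceil((n + 1) * (1 - alpha)) / n)

pred = model.predict(X_new)
lower, upper = pred - q, pred + q                 # ~90% coverage intervals
print("first interval:", round(lower[0], 2), round(upper[0], 2))
```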
- Enhanced Membership Inference Attacks against Machine Learning Models [9.26208227402571]
Membership inference attacks are used to quantify the private information that a model leaks about the individual data points in its training set.
We derive new attack algorithms that can achieve a high AUC score while also highlighting the different factors that affect their performance.
Our algorithms capture a very precise approximation of privacy loss in models, and can be used as a tool to perform an accurate and informed estimation of privacy risk in machine learning models.
arXiv Detail & Related papers (2021-11-18T13:31:22Z)
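The baseline signal behind membership inference is that training points tend to have lower loss than held-out points; the enhanced attacks in the entry above build on this. Below is a minimal loss-threshold sketch on invented data, not the paper's attack.

```python
# Baseline loss-threshold membership inference on synthetic data; the paper's
# attacks are considerably more refined than this illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)
X_in, y_in = X[:1000], y[:1000]                   # training set ("members")
X_out, y_out = X[1000:], y[1000:]                 # held-out set ("non-members")

target_model = LogisticRegression().fit(X_in, y_in)

def per_example_loss(model, X_, y_):
    p = model.predict_proba(X_)[np.arange(len(y_)), y_]
    return -np.log(np.clip(p, 1e-12, None))

losses = np.concatenate([per_example_loss(target_model, X_in, y_in),
                         per_example_loss(target_model, X_out, y_out)])
is_member = np.concatenate([np.ones(1000), np.zeros(1000)])

# Lower loss -> more likely a member; AUC measures how well the signal separates them.
print("attack AUC:", round(roc_auc_score(is_member, -losses), 3))
```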
- Gradient-based Data Subversion Attack Against Binary Classifiers [9.414651358362391]
In this work, we focus on a label contamination attack, in which an attacker poisons the labels of data to compromise the functionality of the system.
We exploit the gradients of a differentiable convex loss function with respect to the predicted label as a warm-start and formulate different strategies to find a set of data instances to contaminate.
Our experiments show that the proposed approach outperforms the baselines and is computationally efficient.
arXiv Detail & Related papers (2021-05-31T09:04:32Z)
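A very loose reading of the strategy in the entry above: rank training points by the gradient of the loss with respect to the label and flip the labels of the highest-ranked ones. The logistic-regression stand-in below is an assumption for illustration, not the paper's formulation.

```python
# Illustration only: uses the gradient of the cross-entropy loss w.r.t. the
# label to rank flip candidates; the paper's attack strategies are richer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] > 0).astype(int)

clean_model = LogisticRegression().fit(X, y)
p = clean_model.predict_proba(X)[:, 1]

# For L(y) = -[y*log(p) + (1-y)*log(1-p)], dL/dy = log((1-p)/p); points where
# this gradient is largest in magnitude are flipped first.
grad_wrt_label = np.log((1 - p + 1e-12) / (p + 1e-12))
budget = 25
flip_idx = np.argsort(-np.abs(grad_wrt_label))[:budget]

y_poisoned = y.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]
victim_model = LogisticRegression().fit(X, y_poisoned)
print("clean acc:", clean_model.score(X, y), "poisoned acc:", victim_model.score(X, y))
```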
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model under test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)