Explaining RADAR features for detecting spoofing attacks in Connected
Autonomous Vehicles
- URL: http://arxiv.org/abs/2203.00150v1
- Date: Tue, 1 Mar 2022 00:11:46 GMT
- Title: Explaining RADAR features for detecting spoofing attacks in Connected
Autonomous Vehicles
- Authors: Nidhi Rastogi, Sara Rampazzi, Michael Clifford, Miriam Heller, Matthew
Bishop, Karl Levitt
- Abstract summary: Connected autonomous vehicles (CAVs) are anticipated to have built-in AI systems for defending against cyberattacks.
Machine learning (ML) models form the basis of many such AI systems.
We present a model that explains certainty and uncertainty in sensor input.
- Score: 2.8153045998456188
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Connected autonomous vehicles (CAVs) are anticipated to have built-in AI
systems for defending against cyberattacks. Machine learning (ML) models form
the basis of many such AI systems. These models are notorious for acting like
black boxes, transforming inputs into solutions with great accuracy, but no
explanations support their decisions. Explanations are needed to communicate
model performance, make decisions transparent, and establish trust in the
models with stakeholders. Explanations can also indicate when humans must take
control, for instance, when the ML model makes low confidence decisions or
offers multiple or ambiguous alternatives. Explanations also provide evidence
for post-incident forensic analysis. Research on explainable ML to security
problems is limited, and more so concerning CAVs. This paper surfaces a
critical yet under-researched sensor data \textit{uncertainty} problem for
training ML attack detection models, especially in highly mobile and
risk-averse platforms such as autonomous vehicles. We present a model that
explains \textit{certainty} and \textit{uncertainty} in sensor input -- a
missing characteristic in data collection. We hypothesize that model
explanation is inaccurate for a given system without explainable input data
quality. We estimate \textit{uncertainty} and mass functions for features in
radar sensor data and incorporate them into the training model through
experimental evaluation. The mass function allows the classifier to categorize
all spoofed inputs accurately with an incorrect class label.
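The abstract does not spell out the construction, but a minimal sketch of how a Dempster-Shafer-style mass function over {genuine, spoofed, uncertain} might be assigned to a single radar feature is shown below; the feature names, noise model, and thresholds are illustrative assumptions, not the paper's method.

```python
import numpy as np

def radar_feature_mass(measured, predicted, noise_std):
    """Assign Dempster-Shafer-style masses over {genuine, spoofed, uncertain}
    for a single radar feature (e.g. range or Doppler velocity).

    Illustrative sketch only: the deviation of the measurement from a
    model-predicted value, scaled by the sensor noise, is mapped to masses.
    """
    z = abs(measured - predicted) / noise_std                  # normalized deviation
    m_spoofed = 1.0 - np.exp(-0.5 * max(z - 1.0, 0.0) ** 2)    # grows once outside ~1 sigma
    m_uncertain = min(0.1 + 0.2 * np.exp(-0.5 * z ** 2),       # residual mass on the whole
                      1.0 - m_spoofed)                          # frame {genuine, spoofed}
    m_genuine = 1.0 - m_spoofed - m_uncertain
    return {"genuine": m_genuine, "spoofed": m_spoofed, "uncertain": m_uncertain}

# Example: a Doppler reading 4 m/s away from the track prediction, 0.5 m/s noise.
print(radar_feature_mass(measured=12.0, predicted=8.0, noise_std=0.5))
```

The three masses sum to one and could be appended to the raw radar features before training the attack-detection model, which is one way to read the experimental evaluation described above.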
Related papers
- Explainable AI for Comparative Analysis of Intrusion Detection Models [20.683181384051395]
This research applies various machine learning models to the tasks of binary and multi-class classification for intrusion detection from network traffic.
We trained all models to an accuracy of 90% on the UNSW-NB15 dataset.
We also discover that Random Forest provides the best performance in terms of accuracy, time efficiency, and robustness.
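As a rough illustration of the kind of experiment summarized here (the synthetic data below is a stand-in for the encoded UNSW-NB15 features, and the hyperparameters are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data; replace with the encoded UNSW-NB15 feature matrix and labels.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```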
arXiv Detail & Related papers (2024-06-14T03:11:01Z)
- Explainable Fraud Detection with Deep Symbolic Classification [4.1205832766381985]
We present Deep Symbolic Classification, an extension of the Deep Symbolic Regression framework to classification problems.
Because the learned functions are concise, closed-form mathematical expressions, the model is inherently explainable both at the level of a single classification decision and at the level of the model's overall decision process.
An evaluation on the PaySim data set demonstrates competitive predictive performance with state-of-the-art models, while surpassing them in terms of explainability.
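A toy sketch of why closed-form classifiers are readable: a learned expression over named transaction features is passed through a sigmoid to obtain a fraud probability. The expression below is invented for illustration and is not taken from the paper.

```python
import numpy as np

def learned_expression(amount, balance_delta, is_transfer):
    # Invented closed-form expression of the kind such a framework could
    # discover; every term can be read directly off the formula.
    return 0.8 * np.log1p(amount) - 1.5 * balance_delta + 2.0 * is_transfer - 6.0

def classify(amount, balance_delta, is_transfer, threshold=0.5):
    score = learned_expression(amount, balance_delta, is_transfer)
    p_fraud = 1.0 / (1.0 + np.exp(-score))   # sigmoid maps the expression to a probability
    return p_fraud, int(p_fraud >= threshold)

print(classify(amount=9000.0, balance_delta=-0.9, is_transfer=1))
```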
arXiv Detail & Related papers (2023-12-01T13:50:55Z)
- AI Model Disgorgement: Methods and Choices [127.54319351058167]
We introduce a taxonomy of possible disgorgement methods that are applicable to modern machine learning systems.
We investigate the meaning of "removing the effects" of data in the trained model in a way that does not require retraining from scratch.
arXiv Detail & Related papers (2023-04-07T08:50:18Z)
- AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models [1.8752655643513647]
XAI tools can increase models' vulnerability to model extraction attacks, which is a concern when model owners prefer black-box access.
We propose a novel retraining (learning) based model extraction attack framework against interpretable models under black-box settings.
We show that AUTOLYCUS is highly effective, requiring significantly fewer queries compared to state-of-the-art attacks.
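A minimal, generic sketch of retraining-based model extraction under black-box access; it omits the explanation-guided query selection that AUTOLYCUS adds, and the target model here is a stand-in:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Stand-in black-box target; in a real attack only its prediction API is callable.
def target_predict(X):
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

rng = np.random.default_rng(0)
queries = rng.uniform(-1, 1, size=(200, 2))          # limited query budget
labels = target_predict(queries)                     # responses from the target

surrogate = DecisionTreeClassifier(max_depth=4).fit(queries, labels)
test = rng.uniform(-1, 1, size=(2000, 2))
print("surrogate/target agreement:", (surrogate.predict(test) == target_predict(test)).mean())
```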
arXiv Detail & Related papers (2023-02-04T13:23:39Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- Utilizing XAI technique to improve autoencoder based model for computer network anomaly detection with shapley additive explanation(SHAP) [0.0]
Machine learning (ML) and Deep Learning (DL) methods are being adopted rapidly, especially in computer network security.
The lack of transparency of ML and DL based models is a major obstacle to their adoption, and these models are often criticized for their black-box nature.
XAI is a promising area that can improve the trustworthiness of these models by providing explanations and interpreting their outputs.
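A minimal sketch of the idea, assuming the shap package and a stand-in MLP autoencoder on synthetic features in place of the paper's network traffic data: SHAP attributes the reconstruction-error anomaly score back to individual input features.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 8))          # stand-in for normal traffic features

# Minimal autoencoder: an MLP trained to reconstruct its own input.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
ae.fit(X_normal, X_normal)

def anomaly_score(X):
    # Per-sample reconstruction error; high values indicate anomalies.
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

# Attribute the anomaly score of a suspicious sample back to input features.
explainer = shap.KernelExplainer(anomaly_score, X_normal[:50])
suspicious = X_normal[:1].copy()
suspicious[0, 0] += 5.0                       # inject a spike into feature 0
print(explainer.shap_values(suspicious))
```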
arXiv Detail & Related papers (2021-12-14T09:42:04Z)
- Attacking Open-domain Question Answering by Injecting Misinformation [116.25434773461465]
We study the risk of misinformation to Question Answering (QA) models by investigating the sensitivity of open-domain QA models to misinformation documents.
Experiments show that QA models are vulnerable to even small amounts of evidence contamination brought by misinformation.
We discuss the necessity of building a misinformation-aware QA system that integrates question-answering and misinformation detection.
arXiv Detail & Related papers (2021-10-15T01:55:18Z)
- AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses [66.49753193098356]
We investigate the reason behind the surprising adversarial brittleness of scoring models.
Our results indicate that autoscoring models, despite getting trained as "end-to-end" models, behave like bag-of-words models.
We propose detection-based protection models that can detect samples causing oversensitivity and overstability with high accuracy.
arXiv Detail & Related papers (2021-09-24T03:49:38Z)
- Efficacy of Statistical and Artificial Intelligence-based False Information Cyberattack Detection Models for Connected Vehicles [4.058429227214047]
Connected vehicles (CVs) are vulnerable to cyberattacks that can instantly compromise the safety of the vehicle itself and other connected vehicles and roadway infrastructure.
In this paper, we have evaluated three change point-based statistical models for cyberattack detection in the CV data.
We have used six AI models to detect false information attacks and compared their detection performance with that of our change point models.
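The summary does not name the three statistical models, but CUSUM is a common change point detector for this kind of streaming vehicle data; the sketch below is illustrative only, using synthetic speed readings from a connected-vehicle data stream.

```python
import numpy as np

def cusum_detect(stream, mean, std, k=0.5, h=5.0):
    """One-sided CUSUM: flag the first index where the cumulative positive
    deviation from the expected mean (in noise-standard-deviation units,
    minus the slack k) exceeds the threshold h."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - mean) / std - k)
        if s > h:
            return i
    return None

# Synthetic connected-vehicle speed stream (m/s); a false-information attack
# begins inflating the reported values at index 60.
rng = np.random.default_rng(1)
speeds = np.concatenate([rng.normal(25, 0.5, 60), rng.normal(28, 0.5, 40)])
print("change detected at index:", cusum_detect(speeds, mean=25.0, std=0.5))
```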
arXiv Detail & Related papers (2021-08-02T18:50:12Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
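A hedged sketch of one possible diversity-enforcing term: penalizing pairwise cosine similarity between a batch of candidate latent perturbations (the exact loss used in the paper may differ).

```python
import numpy as np

def diversity_penalty(perturbations, eps=1e-8):
    """Penalize high pairwise cosine similarity between candidate latent
    perturbations so the resulting counterfactuals differ from each other."""
    z = perturbations / (np.linalg.norm(perturbations, axis=1, keepdims=True) + eps)
    sim = z @ z.T                                      # pairwise cosine similarities
    off_diag = sim[~np.eye(len(z), dtype=bool)]        # ignore self-similarity
    return float(np.mean(np.clip(off_diag, 0.0, None)))

batch = np.random.default_rng(0).normal(size=(4, 16))  # 4 candidate perturbations
print("diversity penalty:", diversity_penalty(batch))
```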
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Probing Model Signal-Awareness via Prediction-Preserving Input Minimization [67.62847721118142]
We evaluate models' ability to capture the correct vulnerability signals to produce their predictions.
We measure the signal awareness of models using a new metric we propose: Signal-aware Recall (SAR).
The results show a sharp drop in the model's Recall from the high 90s to sub-60s with the new metric.
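A schematic reading of the metric from this summary (the paper's exact definition may differ): a true positive only counts toward Signal-aware Recall if the prediction-preserving minimized input still contains the real vulnerability signal.

```python
def signal_aware_recall(y_true, y_pred, signal_preserved):
    """Recall that only credits a true positive when the prediction-preserving
    minimized input still contains the real vulnerability signal.

    y_true, y_pred: 1/0 vulnerability labels and model predictions.
    signal_preserved: 1/0 flag per sample, whether the minimized input that
    keeps the prediction still carries the true vulnerability.
    """
    positives = sum(y_true)
    signal_tp = sum(t and p and s for t, p, s in zip(y_true, y_pred, signal_preserved))
    return signal_tp / positives if positives else 0.0

# Toy example: 4 of 5 vulnerable samples are flagged, but only 2 of those
# predictions actually rely on the vulnerability signal -> SAR = 0.4.
print(signal_aware_recall(
    y_true=[1, 1, 1, 1, 1, 0],
    y_pred=[1, 1, 1, 1, 0, 0],
    signal_preserved=[1, 1, 0, 0, 0, 0],
))
```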
arXiv Detail & Related papers (2020-11-25T20:05:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.