To See or Not to See -- Fingerprinting Devices in Adversarial Environments Amid Advanced Machine Learning
- URL: http://arxiv.org/abs/2504.08264v1
- Date: Fri, 11 Apr 2025 05:40:47 GMT
- Title: To See or Not to See -- Fingerprinting Devices in Adversarial Environments Amid Advanced Machine Learning
- Authors: Justin Feng, Nader Sehatbakhsh
- Abstract summary: Device fingerprinting is often employed to authenticate devices, detect adversaries, and identify eavesdroppers in an environment. This requires the ability to discern between legitimate and malicious devices. We propose a generic, yet simplified, model for device fingerprinting.
- Score: 0.725130576615102
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing use of the Internet of Things raises security concerns. To address this, device fingerprinting is often employed to authenticate devices, detect adversaries, and identify eavesdroppers in an environment. This requires the ability to discern between legitimate and malicious devices, which is achieved by analyzing the unique physical and/or operational characteristics of IoT devices. In the era of the latest progress in machine learning, particularly generative models, it is crucial to methodically examine the current studies in device fingerprinting. This involves explaining their approaches and underscoring their limitations when faced with adversaries armed with these ML tools. To systematically analyze existing methods, we propose a generic, yet simplified, model for device fingerprinting. Additionally, we thoroughly investigate existing methods to authenticate devices and detect eavesdropping, using our proposed model. We further study trends and similarities between works in authentication and eavesdropping detection and present the existing threats and attacks in these domains. Finally, we discuss future directions in fingerprinting based on these trends to develop more secure IoT fingerprinting schemes.
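The generic fingerprinting model is only summarized above. As a loose illustration of the kind of pipeline such a model abstracts (extract physical/operational features from a device's signal, then classify its identity), a minimal sketch follows; the feature set, the synthetic traces, and the random-forest classifier are assumptions made for illustration, not the authors' formalization.

```python
# Illustrative sketch only: a minimal fingerprinting pipeline in the spirit of
# "extract physical/operational features, then classify device identity".
# Feature choices and the classifier are assumptions, not the paper's model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(trace: np.ndarray) -> np.ndarray:
    """Summarize a raw per-device signal trace (e.g., RF or timing samples)
    with simple statistical features; real schemes use richer descriptors."""
    return np.array([
        trace.mean(),                                         # average level / DC offset
        trace.std(),                                          # variability
        np.abs(np.fft.rfft(trace))[:8].sum(),                 # coarse low-frequency energy
        np.percentile(trace, 95) - np.percentile(trace, 5),   # spread
    ])

# Hypothetical enrollment data: traces collected from known legitimate devices.
rng = np.random.default_rng(0)
traces = [rng.normal(loc=dev, scale=1.0, size=1024) for dev in range(5) for _ in range(20)]
labels = [dev for dev in range(5) for _ in range(20)]

X = np.stack([extract_features(t) for t in traces])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# At authentication time, an unseen trace is mapped to the most likely device.
new_trace = rng.normal(loc=2, scale=1.0, size=1024)
print("predicted device:", clf.predict(extract_features(new_trace).reshape(1, -1))[0])
```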
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
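The title above names rotation-invariant local binary patterns. A rough sketch of how such a descriptor can be computed is given below; the 8-neighbor layout and the histogram feature are common choices assumed here for illustration, not necessarily the paper's exact variant.

```python
# Rough sketch of rotation-invariant LBP features (an assumption about the
# feature referenced in the title, not the paper's exact implementation).
import numpy as np

def ri_lbp_histogram(img: np.ndarray) -> np.ndarray:
    """Compute a normalized histogram of rotation-invariant 8-neighbor LBP codes."""
    h, w = img.shape
    # Offsets of the 8 neighbors in clockwise order.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            bits = [int(img[y + dy, x + dx] >= img[y, x]) for dy, dx in offs]
            # Rotation invariance: take the minimum over all circular bit rotations.
            code = min(sum(b << i for i, b in enumerate(bits[r:] + bits[:r]))
                       for r in range(8))
            codes.append(code)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(len(codes), 1)
```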
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Locality Sensitive Hashing for Network Traffic Fingerprinting [5.062312533373298]
We use locality-sensitive hashing (LSH) for network traffic fingerprinting.
Our method increases the accuracy of the state of the art by 12%, achieving around 94% accuracy in identifying devices in a network.
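As a loose illustration of the underlying idea (random-hyperplane hashing over per-flow feature vectors is an assumption here, not necessarily the authors' scheme), an LSH-based traffic fingerprint lookup can be sketched as:

```python
# Minimal sketch of LSH-based traffic fingerprinting. Illustrative assumptions:
# random-hyperplane (SimHash-style) hashing over per-flow feature vectors.
import numpy as np
from collections import defaultdict

class SimHashIndex:
    def __init__(self, dim: int, n_bits: int = 16, seed: int = 0):
        self.planes = np.random.default_rng(seed).normal(size=(n_bits, dim))
        self.buckets = defaultdict(list)  # hash -> list of (features, device label)

    def _hash(self, v: np.ndarray) -> int:
        bits = (self.planes @ v > 0).astype(int)
        return int((bits << np.arange(bits.size)).sum())

    def add(self, v: np.ndarray, label: str) -> None:
        self.buckets[self._hash(v)].append((v, label))

    def identify(self, v: np.ndarray) -> str | None:
        """Return the label of the closest stored vector in the same bucket."""
        candidates = self.buckets.get(self._hash(v), [])
        if not candidates:
            return None
        best = min(candidates, key=lambda c: np.linalg.norm(c[0] - v))
        return best[1]

# Usage: per-flow features might be packet-size/timing statistics (hypothetical values).
idx = SimHashIndex(dim=4)
idx.add(np.array([60.0, 5.0, 0.2, 1.0]), "thermostat")
idx.add(np.array([1200.0, 300.0, 0.01, 40.0]), "camera")
print(idx.identify(np.array([62.0, 6.0, 0.25, 1.1])))  # likely "thermostat"
```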
arXiv Detail & Related papers (2024-02-12T21:14:37Z)
- An Intelligent Mechanism for Monitoring and Detecting Intrusions in IoT Devices [0.7219077740523682]
This work proposes a Host-based Intrusion Detection System that leverages Federated Learning and Multi-Layer Perceptron neural networks to detect cyberattacks on IoT devices with high accuracy while enhancing data privacy protection.
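A bare-bones sketch of the federated pattern referenced above, averaging per-device MLPs in the style of FedAvg, is shown below; the data shapes, model size, and round count are placeholder assumptions, not the paper's configuration.

```python
# Sketch of FedAvg over per-device MLPs (illustrative only; shapes, model size,
# and round count are assumptions, not the paper's configuration).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_CLIENTS, N_FEATURES, CLASSES = 3, 10, np.array([0, 1])  # 0 = benign, 1 = attack

# Each IoT device holds its own local traffic features and labels (never shared).
local_data = [(rng.normal(size=(200, N_FEATURES)), rng.integers(0, 2, 200))
              for _ in range(N_CLIENTS)]

clients = [MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
           for _ in range(N_CLIENTS)]

for _ in range(5):                                   # federated rounds
    for clf, (X, y) in zip(clients, local_data):
        clf.partial_fit(X, y, classes=CLASSES)       # local training step
    # Server aggregates: average weights layer by layer (plain, unweighted FedAvg).
    avg_coefs = [np.mean([c.coefs_[i] for c in clients], axis=0)
                 for i in range(len(clients[0].coefs_))]
    avg_inter = [np.mean([c.intercepts_[i] for c in clients], axis=0)
                 for i in range(len(clients[0].intercepts_))]
    for clf in clients:                              # broadcast the global model back
        clf.coefs_ = [w.copy() for w in avg_coefs]
        clf.intercepts_ = [b.copy() for b in avg_inter]
```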
arXiv Detail & Related papers (2023-06-23T11:26:00Z)
- CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an In-Vehicle CAN Bus Based on Deep Features of Voltage Signals [48.813942331065206]
We propose a security hardening system for in-vehicle networks.
The proposed system includes two mechanisms that process deep features extracted from voltage signals measured on the CAN bus.
arXiv Detail & Related papers (2021-06-15T06:12:33Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Machine Learning for the Detection and Identification of Internet of Things (IoT) Devices: A Survey [16.3730669259576]
The Internet of Things (IoT) is becoming an indispensable part of everyday life, enabling a variety of emerging services and applications.
The first step in securing the IoT is detecting rogue IoT devices and identifying legitimate ones.
We classify the IoT device identification and detection into four categories: device-specific pattern recognition, Deep Learning enabled device identification, unsupervised device identification, and abnormal device detection.
arXiv Detail & Related papers (2021-01-25T15:51:04Z)
- Responsible Disclosure of Generative Models Using Scalable Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns on how this technology can be misused to spoof sensors, generate deep fakes, and enable misinformation at scale.
Our work enables responsible disclosure of such state-of-the-art generative models, allowing researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z)
- Adversarial Attacks on Deep Learning Systems for User Identification based on Motion Sensors [24.182791316595576]
This study focuses on deep learning methods for explicit authentication based on motion sensor signals.
In this scenario, attackers could craft adversarial examples with the aim of gaining unauthorized access.
To our knowledge, this is the first study that aims to quantify the impact of adversarial attacks on machine learning models in this setting.
arXiv Detail & Related papers (2020-09-02T14:35:05Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
- Survey of Network Intrusion Detection Methods from the Perspective of the Knowledge Discovery in Databases Process [63.75363908696257]
We review the methods that have been applied to network data with the purpose of developing an intrusion detector.
We discuss the techniques used for the capture, preparation and transformation of the data, as well as the data mining and evaluation methods.
As a result of this literature review, we investigate some open issues which will need to be considered for further research in the area of network security.
arXiv Detail & Related papers (2020-01-27T11:21:05Z)
- An Overview of Fingerprint-Based Authentication: Liveness Detection and Beyond [0.0]
We focus on methods to detect physical liveness, defined as techniques that can be used to ensure that a living human user is attempting to authenticate on a system.
We analyze how effective these methods are at preventing attacks where a malicious entity tries to trick a fingerprint-based authentication system to accept a fake finger as a real one.
arXiv Detail & Related papers (2020-01-24T20:07:53Z)