Fingerprint Theft Using Smart Padlocks: Droplock Exploits and Defenses
- URL: http://arxiv.org/abs/2407.21398v1
- Date: Wed, 31 Jul 2024 07:40:05 GMT
- Title: Fingerprint Theft Using Smart Padlocks: Droplock Exploits and Defenses
- Authors: Steve Kerrison
- Abstract summary: A lack of attention to device security and user-awareness beyond the primary function of these IoT devices may be exposing users to invisible risks.
This paper extends upon prior work that defined the "droplock", an attack whereby a smart lock is turned into a wireless fingerprint harvester.
We perform a more in-depth analysis of a broader range of vulnerabilities and exploits that make a droplock attack easier to perform and harder to detect.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: There is growing adoption of smart devices such as digital locks with remote control and sophisticated authentication mechanisms. However, a lack of attention to device security and user awareness beyond the primary function of these IoT devices may be exposing users to invisible risks. This paper extends upon prior work that defined the "droplock", an attack whereby a smart lock is turned into a wireless fingerprint harvester. We perform a more in-depth analysis of a broader range of vulnerabilities and exploits that make a droplock attack easier to perform and harder to detect. Analysis is extended to a range of other smart lock models, and a threat model is used as the basis to recommend stronger security controls that may mitigate the risks of such an attack.
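The abstract stops at recommending "stronger security controls" without spelling them out. As a rough, hedged illustration of one such control (not taken from the paper), the sketch below models a companion app that refuses fingerprint enrolment unless the lock answers a fresh challenge with a manufacturer-provisioned per-device secret, so a substituted harvester device would be rejected. The device IDs, secret handling, and protocol shape are all illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's protocol): the companion app only
# enrols fingerprints after the lock proves knowledge of a per-device secret
# provisioned by the manufacturer. A swapped "droplock" without that secret fails.
import hmac, hashlib, secrets

DEVICE_SECRETS = {                      # app-side copy, e.g. fetched from the vendor
    "lock-0042": bytes.fromhex("aa" * 32),
}

def challenge() -> bytes:
    """Fresh random nonce sent to the lock before enrolment."""
    return secrets.token_bytes(16)

def lock_response(device_secret: bytes, nonce: bytes) -> bytes:
    """What a genuine lock would compute; a cloned/substituted lock cannot."""
    return hmac.new(device_secret, nonce, hashlib.sha256).digest()

def allow_enrolment(device_id: str, nonce: bytes, response: bytes) -> bool:
    secret = DEVICE_SECRETS.get(device_id)
    if secret is None:
        return False                    # unknown device: refuse to enrol
    expected = hmac.new(secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Usage: the genuine lock passes, an attacker-substituted lock is rejected.
nonce = challenge()
assert allow_enrolment("lock-0042", nonce,
                       lock_response(DEVICE_SECRETS["lock-0042"], nonce))
assert not allow_enrolment("lock-0042", nonce, b"\x00" * 32)
```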
Related papers
- The Impact of Logic Locking on Confidentiality: An Automated Evaluation [10.116593996661756]
We show that a single malicious logic locking key can expose over 70% of an encryption key.
This research uncovers a significant security vulnerability in logic locking.
arXiv Detail & Related papers (2025-02-03T11:01:11Z) - Cute-Lock: Behavioral and Structural Multi-Key Logic Locking Using Time Base Keys [1.104960878651584]
We propose, implement and evaluate a family of secure multi-key logic locking algorithms called Cute-Lock.
Our experimental results under a diverse range of attacks confirm that, compared to vulnerable state-of-the-art methods, employing the Cute-Lock family drives attacking attempts to a dead end without additional overhead.
arXiv Detail & Related papers (2025-01-29T03:44:55Z) - K-Gate Lock: Multi-Key Logic Locking Using Input Encoding Against Oracle-Guided Attacks [1.104960878651584]
K-Gate Lock encodes input patterns using multiple keys that are applied to one set of key inputs at different operational times.
Using multiple keys makes the circuit secure against oracle-guided attacks and increases the attacker's effort to an exponentially time-consuming brute-force search (see the toy sketch after this list).
arXiv Detail & Related papers (2025-01-03T22:07:38Z) - MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z) - Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning [49.174341192722615]
Backdoor attack poses a significant security threat to Deep Learning applications.
Recent papers have introduced attacks using sample-specific invisible triggers crafted through special transformation functions.
We introduce a novel backdoor attack LOTUS to address both evasiveness and resilience.
arXiv Detail & Related papers (2024-03-25T21:01:29Z) - LIPSTICK: Corruptibility-Aware and Explainable Graph Neural Network-based Oracle-Less Attack on Logic Locking [1.104960878651584]
We develop, train, and test a corruptibility-aware graph neural network-based oracle-less attack on logic locking.
Our model is explainable in the sense that we analyze what the machine learning model has interpreted in the training process and how it can perform a successful attack.
arXiv Detail & Related papers (2024-02-06T18:42:51Z) - Evil from Within: Machine Learning Backdoors through Hardware Trojans [51.81518799463544]
Backdoors pose a serious threat to machine learning, as they can compromise the integrity of security-critical systems, such as self-driving cars.
We introduce a backdoor attack that completely resides within a common hardware accelerator for machine learning.
We demonstrate the practical feasibility of our attack by implanting our hardware trojan into the Xilinx Vitis AI DPU.
arXiv Detail & Related papers (2023-04-17T16:24:48Z) - Exploiting Logic Locking for a Neural Trojan Attack on Machine Learning Accelerators [4.605674633999923]
We show how logic locking can be used to compromise the security of a neural accelerator it protects.
Specifically, we show how the deterministic errors caused by incorrect keys can be harnessed to produce neural-trojan-style backdoors.
arXiv Detail & Related papers (2023-04-12T17:55:34Z) - Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
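Several of the related papers above concern key-based logic locking; the K-Gate Lock summary in particular argues that multiple keys push an oracle-guided attacker towards an exponential brute-force search. The toy sketch below is an assumption for illustration only (not any of the cited constructions): it locks a 2-bit adder with XOR key gates, showing that a wrong key causes deterministic output errors and that recovering k key bits by exhaustive search costs 2**k trials.

```python
# Toy sketch (assumption): XOR-based logic locking of a 2-bit adder. Each key
# bit XOR-masks one sum bit, so an incorrect key yields deterministic wrong
# outputs, and brute-forcing k key bits means searching 2**k candidates.
from itertools import product

CORRECT_KEY = (1, 0, 1)  # hypothetical 3-bit key baked in at manufacture

def locked_adder(a: int, b: int, key: tuple) -> int:
    """2-bit adder whose sum bits are XORed with key-dependent masks."""
    s = (a + b) & 0b111
    # Each key gate cancels out only when the supplied bit matches CORRECT_KEY.
    for i, (k_supplied, k_true) in enumerate(zip(key, CORRECT_KEY)):
        s ^= (k_supplied ^ k_true) << i
    return s

# Correct key restores functionality; a wrong key corrupts outputs the same way every time.
assert locked_adder(2, 3, CORRECT_KEY) == 5
assert locked_adder(2, 3, (0, 0, 0)) != 5

# Oracle-guided brute force: test every key against known input/output pairs.
recovered = [k for k in product((0, 1), repeat=3)
             if all(locked_adder(a, b, k) == ((a + b) & 0b111)
                    for a, b in product(range(4), repeat=2))]
assert recovered == [CORRECT_KEY]
```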