Eroding Trust In Aerial Imagery: Comprehensive Analysis and Evaluation
Of Adversarial Attacks In Geospatial Systems
- URL: http://arxiv.org/abs/2312.07389v1
- Date: Tue, 12 Dec 2023 16:05:12 GMT
- Title: Eroding Trust In Aerial Imagery: Comprehensive Analysis and Evaluation
Of Adversarial Attacks In Geospatial Systems
- Authors: Michael Lanier, Aayush Dhakal, Zhexiao Xiong, Arthur Li, Nathan
Jacobs, Yevgeniy Vorobeychik
- Abstract summary: We show how adversarial attacks can degrade confidence in geospatial systems.
We empirically show their threat to remote sensing systems using high-quality SpaceNet datasets.
- Score: 24.953306643091484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In critical operations where aerial imagery plays an essential role, the
integrity and trustworthiness of data are paramount. The emergence of
adversarial attacks, particularly those that exploit control over labels or
employ physically feasible trojans, threatens to erode that trust, making the
analysis and mitigation of these attacks a matter of urgency. We demonstrate
how adversarial attacks can degrade confidence in geospatial systems,
specifically focusing on scenarios where the attacker's control over labels is
restricted and on the use of realistic threat vectors. Proposing and evaluating
several innovative attack methodologies, including those tailored to overhead
images, we empirically show their threat to remote sensing systems using
high-quality SpaceNet datasets. Our experimentation reflects the unique
challenges posed by aerial imagery, and these preliminary results not only
reveal the potential risks but also highlight the non-trivial nature of the
problem compared to recent works.
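To ground the class of attacks the abstract describes, here is a minimal, illustrative sketch of a gradient-based perturbation against a generic aerial-image classifier. The model is a hypothetical PyTorch classifier; this is the standard FGSM idea, not the paper's proposed methodology.

```python
# Illustrative FGSM-style perturbation against an image classifier.
# `model` is any differentiable classifier; pixel values assumed in [0, 1].
# This sketches the generic attack class, not the paper's method.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient to maximally increase the loss
    # within an L-infinity budget of epsilon.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```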
Related papers
- Preliminary Investigation into Uncertainty-Aware Attack Stage Classification [81.28215542218724]
This work addresses the problem of attack stage inference under uncertainty.
We propose a classification approach based on Evidential Deep Learning (EDL), which models predictive uncertainty by outputting parameters of a Dirichlet distribution over possible stages.
Preliminary experiments in a simulated environment demonstrate that the proposed model can accurately infer the stage of an attack with confidence.
arXiv Detail & Related papers (2025-08-01T06:58:00Z)
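As a rough sketch of the evidential formulation the summary mentions: in standard EDL, a network emits non-negative per-class evidence, which parameterizes a Dirichlet distribution whose total strength encodes confidence. The helper below is illustrative, assumes such an evidence head, and is not the paper's code.

```python
# Sketch of Dirichlet-based predictive uncertainty in Evidential Deep
# Learning: evidence -> Dirichlet concentrations -> expected class
# probabilities plus a scalar "vacuity" uncertainty. The network producing
# `evidence` (non-negative, e.g. softplus of logits) is assumed.
import torch

def edl_outputs(evidence: torch.Tensor):
    alpha = evidence + 1.0                       # Dirichlet parameters, alpha_k >= 1
    strength = alpha.sum(dim=-1, keepdim=True)   # Dirichlet strength S
    probs = alpha / strength                     # expected probabilities E[p_k]
    k = evidence.shape[-1]
    uncertainty = k / strength.squeeze(-1)       # high when total evidence is low
    return probs, uncertainty
```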
- Benchmarking Misuse Mitigation Against Covert Adversaries [80.74502950627736]
Existing language model safety evaluations focus on overt attacks and low-stakes tasks.
We develop Benchmarks for Stateful Defenses (BSD), a data generation pipeline that automates evaluations of covert attacks and corresponding defenses.
Our evaluations indicate that decomposition attacks are effective misuse enablers, and highlight stateful defenses as a countermeasure.
arXiv Detail & Related papers (2025-06-06T17:33:33Z)
- Lazarus Group Targets Crypto-Wallets and Financial Data while employing new Tradecrafts [0.0]
This report presents a comprehensive analysis of a malicious software sample, detailing its architecture, behavioral characteristics, and underlying intent.
The malware's core functionalities, including persistence mechanisms, command-and-control communication, and data exfiltration routines, are identified.
This malware analysis report not only reconstructs past adversary actions but also establishes a robust foundation for anticipating and mitigating future attacks.
arXiv Detail & Related papers (2025-05-27T20:13:29Z)
- The Silent Saboteur: Imperceptible Adversarial Attacks against Black-Box Retrieval-Augmented Generation Systems [101.68501850486179]
We explore adversarial attacks against retrieval-augmented generation (RAG) systems to identify their vulnerabilities.
This task aims to find imperceptible perturbations that cause a target document, originally excluded from the initial top-$k$ candidate set, to be retrieved.
We propose ReGENT, a reinforcement learning-based framework that tracks interactions between the attacker and the target RAG system.
arXiv Detail & Related papers (2025-05-24T08:19:25Z)
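To illustrate why such retrieval attacks work, here is a toy example with cosine-similarity retrieval over random embeddings. ReGENT itself perturbs text via reinforcement learning; this sketch only shows that a small embedding shift changes rankings.

```python
# Toy illustration of the retrieval-attack objective: a small shift to a
# document's embedding moves it up a cosine-similarity ranking. Real attacks
# (and the paper's RL-based ReGENT) perturb the document text itself.
import numpy as np

rng = np.random.default_rng(0)
query = rng.normal(size=64)
docs = rng.normal(size=(100, 64))
target = 42

def rank_of(docs, query, idx):
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
    return int(np.where(np.argsort(-sims) == idx)[0][0])

print("rank before:", rank_of(docs, query, target))
docs[target] += 2.0 * query / np.linalg.norm(query)  # small push toward the query
print("rank after:", rank_of(docs, query, target))   # ranking likely improves
```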
- Real-Time Detection of Insider Threats Using Behavioral Analytics and Deep Evidential Clustering [0.0]
We propose a novel framework for real-time detection of insider threats using behavioral analytics combined with deep evidential clustering.
Our system captures and analyzes user activities, applies context-rich behavioral features, and classifies potential threats.
We evaluate our framework on benchmark insider threat datasets such as CERT and TWOS, achieving an average detection accuracy of 94.7% and a 38% reduction in false positives compared to traditional clustering methods.
arXiv Detail & Related papers (2025-05-21T11:21:33Z)
- Modeling Electromagnetic Signal Injection Attacks on Camera-based Smart Systems: Applications and Mitigation [18.909937495767313]
Electromagnetic waves pose a threat to safety- or security-critical camera-based systems.
Such signal injection attacks enable attackers to manipulate the captured images remotely, leading to incorrect AI decisions.
We present a pilot study on adversarial training to improve the robustness of these systems against such attacks.
arXiv Detail & Related papers (2024-08-09T15:33:28Z)
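A compact sketch of the adversarial-training idea in that pilot study: craft perturbed inputs on the fly and train on them. The model, optimizer, and data are assumed, and this is a generic FGSM-based variant rather than the paper's training code.

```python
# Sketch of one adversarial-training step: perturb the batch with FGSM,
# then run a standard supervised update on the perturbed inputs.
# Model, optimizer, and data loader are assumed; illustrative only.
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # Craft FGSM adversarial examples for the current batch.
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    adv = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()
    # Standard supervised update on the adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```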
- A Survey and Evaluation of Adversarial Attacks for Object Detection [11.48212060875543]
Deep learning models excel in various computer vision tasks but are susceptible to adversarial examples: subtle perturbations in input data that lead to incorrect predictions.
This vulnerability poses significant risks in safety-critical applications such as autonomous vehicles, security surveillance, and aircraft health monitoring.
arXiv Detail & Related papers (2024-08-04T05:22:08Z)
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures [22.338269462708368]
Collaborative perception, which greatly enhances the sensing capability of connected and autonomous vehicles (CAVs), brings forth potential security risks.
We propose various real-time data fabrication attacks in which the attacker delivers crafted malicious data to victims in order to perturb their perception results.
Our attacks demonstrate a high success rate of over 86% on high-fidelity simulated scenarios and are realizable in real-world experiments.
We present a systematic anomaly detection approach that enables benign vehicles to jointly reveal malicious fabrication.
arXiv Detail & Related papers (2023-09-22T15:54:04Z)
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Contextual adversarial attack against aerial detection in the physical world [8.826711009649133]
Deep Neural Networks (DNNs) have been extensively utilized in aerial detection.
Physical attacks have gradually become a hot issue because they are more practical in the real world.
We propose an innovative contextual attack method against aerial detection in real scenarios.
arXiv Detail & Related papers (2023-02-27T02:57:58Z)
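For intuition on such physical attacks, here is a minimal adversarial-patch sketch: optimize a small patch pasted into an overhead image so a hypothetical classifier misbehaves. The paper's contextual method is more sophisticated; this shows only the basic mechanism.

```python
# Sketch of adversarial patch optimization: paste a learnable patch into an
# aerial image and maximize the classifier's loss on the true label.
# Model and image are assumed; this is not the paper's contextual attack.
import torch
import torch.nn.functional as F

def optimize_patch(model, image, label, size=40, steps=100, lr=0.1):
    patch = torch.rand(1, 3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = image.clone()
        patched[:, :, :size, :size] = patch.clamp(0, 1)  # paste at a fixed corner
        loss = -F.cross_entropy(model(patched), label)   # push prediction away
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```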
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA), which maximizes the agreement between clean images and their adversarial examples via a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals that could learn discriminative representations for segmentation tasks.
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
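A simplified sketch of the output-space agreement idea behind ASSUDA: penalize disagreement between a model's predictions on a clean image and its adversarial counterpart. ASSUDA uses a contrastive loss; the KL-based consistency term below is only an illustrative stand-in.

```python
# Simplified output-space consistency loss between clean and adversarial
# predictions of a segmentation model. ASSUDA uses a contrastive loss; this
# KL-based variant only illustrates the "maximize agreement" idea.
import torch.nn.functional as F

def agreement_loss(model, clean, adversarial):
    # Logits of shape (batch, classes, H, W).
    p_clean = F.log_softmax(model(clean), dim=1)
    p_adv = F.log_softmax(model(adversarial), dim=1)
    # KL(p_clean || p_adv), summed over pixels and averaged over the batch.
    return F.kl_div(p_adv, p_clean, log_target=True, reduction="batchmean")
```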
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.