Rainbow Artifacts from Electromagnetic Signal Injection Attacks on Image Sensors
- URL: http://arxiv.org/abs/2507.07773v1
- Date: Thu, 10 Jul 2025 13:55:35 GMT
- Title: Rainbow Artifacts from Electromagnetic Signal Injection Attacks on Image Sensors
- Authors: Youqian Zhang, Xinyu Ji, Zhihao Wang, Qinhong Jiang,
- Abstract summary: Image sensors are integral to a wide range of safety- and security-critical systems, including surveillance infrastructure, autonomous vehicles, and industrial automation. We investigate a novel class of electromagnetic signal injection attacks that target the analog domain of image sensors.
- Score: 16.11931222046411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image sensors are integral to a wide range of safety- and security-critical systems, including surveillance infrastructure, autonomous vehicles, and industrial automation. These systems rely on the integrity of visual data to make decisions. In this work, we investigate a novel class of electromagnetic signal injection attacks that target the analog domain of image sensors, allowing adversaries to manipulate raw visual inputs without triggering conventional digital integrity checks. We uncover a previously undocumented attack phenomenon on CMOS image sensors: rainbow-like color artifacts induced in captured images by carefully tuned electromagnetic interference. We further evaluate the impact of these attacks on state-of-the-art object detection models, showing that the injected artifacts propagate through the image signal processing pipeline and lead to significant mispredictions. Our findings highlight a critical and underexplored vulnerability in the visual perception stack, underscoring the need for more robust defenses against physical-layer attacks in such systems.
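The abstract describes an analog-domain effect whose downstream impact on detectors can be approximated digitally. Below is a minimal Python sketch, assuming, purely for illustration, that the tuned interference couples row-wise with the sensor readout and modulates each color channel with a phase-shifted sinusoid; the function `inject_rainbow_bands` and its parameters are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: simulate rainbow-like banding from electromagnetic
# interference coupling into the row-wise readout of a CMOS sensor.
# The coupling model (per-row sinusoidal gain with per-channel phase offsets)
# is an illustrative assumption, not the attack model from the paper.
import numpy as np

def inject_rainbow_bands(image: np.ndarray,
                         period_rows: int = 64,
                         strength: float = 0.35) -> np.ndarray:
    """Apply a per-row sinusoidal gain to each color channel.

    image: H x W x 3 uint8 RGB frame.
    period_rows: spatial period of the banding in rows (stand-in for the
                 ratio of carrier frequency to row readout rate).
    strength: relative modulation depth of the injected signal.
    """
    h, _, _ = image.shape
    rows = np.arange(h, dtype=np.float32)
    out = image.astype(np.float32)
    # Phase-shifted modulation per channel produces the rainbow appearance.
    for c, phase in enumerate((0.0, 2 * np.pi / 3, 4 * np.pi / 3)):
        gain = 1.0 + strength * np.sin(2 * np.pi * rows / period_rows + phase)
        out[:, :, c] *= gain[:, None]
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    clean = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in frame
    attacked = inject_rainbow_bands(clean)
    # A fuller experiment would run both frames through an object detector
    # and compare boxes/scores; here we only report the pixel-level shift.
    print("mean absolute pixel change:",
          np.abs(attacked.astype(int) - clean.astype(int)).mean())
```

In a fuller experiment one would feed both the clean and the perturbed frame to a pretrained object detector and compare the predicted boxes and confidence scores, mirroring the paper's evaluation of how the injected artifacts propagate through the processing pipeline.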
Related papers
- RAID: A Dataset for Testing the Adversarial Robustness of AI-Generated Image Detectors [57.81012948133832]
We present RAID (Robust evaluation of AI-generated image Detectors), a dataset of 72k diverse and highly transferable adversarial examples. Our methodology generates adversarial images that transfer with a high success rate to unseen detectors. Our findings indicate that current state-of-the-art AI-generated image detectors can be easily deceived by adversarial examples.
arXiv Detail & Related papers (2025-06-04T14:16:00Z) - Understanding Impacts of Electromagnetic Signal Injection Attacks on Object Detection [33.819549876354515]
This paper quantifies and analyzes the impacts of cyber-physical attacks on object detection models in practice.
Images captured by image sensors may be affected by different factors in real applications, including cyber-physical attacks.
arXiv Detail & Related papers (2024-07-23T09:22:06Z) - Detection of Adversarial Physical Attacks in Time-Series Image Data [12.923271427789267]
We propose VisionGuard* (VG*), which couples the VisionGuard (VG) detector with majority-vote methods, to detect adversarial physical attacks in time-series image data.
This is motivated by autonomous systems applications where images are collected over time using onboard sensors for decision-making purposes.
We have evaluated VG* on videos of both clean and physically attacked traffic signs generated by a state-of-the-art robust physical attack.
arXiv Detail & Related papers (2023-04-27T02:08:13Z) - Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z) - Detecting and Identifying Optical Signal Attacks on Autonomous Driving Systems [25.32946739108013]
We propose a framework to detect and identify sensors that are under attack.
Specifically, we first develop a new technique to detect attacks on a system that consists of three sensors.
In our study, we use real data sets and the state-of-the-art machine learning model to evaluate our attack detection scheme.
arXiv Detail & Related papers (2021-10-20T12:21:04Z) - Signal Injection Attacks against CCD Image Sensors [20.892354746682223]
We show how electromagnetic emanation can be used to manipulate the image information captured by a CCD image sensor.
Our results indicate that the injected distortion can disrupt automated vision-based intelligent systems.
arXiv Detail & Related papers (2021-08-19T19:05:28Z) - Privacy-Preserving Image Acquisition Using Trainable Optical Kernel [50.1239616836174]
We propose a trainable image acquisition method that removes the sensitive identity revealing information in the optical domain before it reaches the image sensor.
As the sensitive content is suppressed before it reaches the image sensor, it never enters the digital domain and is therefore unretrievable by any privacy attack.
arXiv Detail & Related papers (2021-06-28T11:08:14Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z) - No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z) - Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z) - Real-Time Detectors for Digital and Physical Adversarial Inputs to Perception Systems [11.752184033538636]
Deep neural network (DNN) models have proven to be vulnerable to adversarial digital and physical attacks.
We propose a novel attack- and dataset-agnostic and real-time detector for both types of adversarial inputs to DNN-based perception systems.
In particular, the proposed detector relies on the observation that adversarial images are sensitive to certain label-invariant transformations (a simplified sketch of this idea follows after this list).
arXiv Detail & Related papers (2020-02-23T00:03:57Z)
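The last entry above hinges on a consistency check under label-invariant transformations. The sketch below is a hypothetical, simplified illustration of that idea, not the paper's published detector: a benign input should keep its prediction under a mild blur, while an adversarial input is more likely to change it. The model, threshold, and choice of transformation are all illustrative assumptions.

```python
# Hypothetical sketch of a transformation-consistency check: benign inputs
# tend to keep their prediction under a label-invariant transformation
# (here, a mild box blur), while adversarial inputs are more likely to flip.
from typing import Callable
import numpy as np

def box_blur(image: np.ndarray, k: int = 3) -> np.ndarray:
    """Mild blur used as a stand-in for a label-invariant transformation."""
    pad = k // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(image.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1], :]
    return (out / (k * k)).astype(image.dtype)

def flags_as_adversarial(model: Callable[[np.ndarray], np.ndarray],
                         image: np.ndarray,
                         threshold: float = 0.3) -> bool:
    """Flag the input if predictions diverge under the transformation."""
    p_orig = model(image)            # class-probability vector
    p_blur = model(box_blur(image))
    divergence = np.abs(p_orig - p_blur).sum() / 2.0  # total variation distance
    return bool(divergence > threshold or p_orig.argmax() != p_blur.argmax())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_model = lambda img: np.full(10, 0.1)  # placeholder classifier: uniform probabilities
    frame = (rng.random((64, 64, 3)) * 255).astype(np.uint8)
    print("flagged:", flags_as_adversarial(dummy_model, frame))
```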
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.