Signal Injection Attacks against CCD Image Sensors
- URL: http://arxiv.org/abs/2108.08881v1
- Date: Thu, 19 Aug 2021 19:05:28 GMT
- Title: Signal Injection Attacks against CCD Image Sensors
- Authors: Sebastian Köhler, Richard Baker, Ivan Martinovic
- Abstract summary: We show how electromagnetic emanation can be used to manipulate the image information captured by a CCD image sensor.
Our results indicate that the injected distortion can disrupt automated vision-based intelligent systems.
- Score: 20.892354746682223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since cameras have become a crucial part in many safety-critical systems and
applications, such as autonomous vehicles and surveillance, a large body of
academic and non-academic work has shown attacks against their main component -
the image sensor. However, these attacks are limited to coarse-grained and
often suspicious injections because light is used as an attack vector.
Furthermore, due to the nature of optical attacks, they require the
line-of-sight between the adversary and the target camera.
In this paper, we present a novel post-transducer signal injection attack
against CCD image sensors, as they are used in professional, scientific, and
even military settings. We show how electromagnetic emanation can be used to
manipulate the image information captured by a CCD image sensor with the
granularity down to the brightness of individual pixels. We study the
feasibility of our attack and then demonstrate its effects in the scenario of
automatic barcode scanning. Our results indicate that the injected distortion
can disrupt automated vision-based intelligent systems.
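To make the threat concrete, the sketch below is a hedged, illustrative simulation of the attack's effect, not the authors' hardware setup or measured electromagnetic coupling: an attacker-controlled waveform is added to pixel values in readout order, producing a fine-grained, per-pixel brightness distortion on a barcode-like image. The sinusoidal waveform, amplitude, and readout model are assumptions made only for illustration.

```python
import numpy as np

def simulate_injection(image, amplitude=60.0, period_px=7.0, phase=0.0):
    """Toy model of a post-transducer signal injection: an attacker-controlled
    sinusoid is added to pixel values in sequential readout order (row by row),
    so the distortion appears as a per-pixel brightness modulation.
    Waveform, amplitude, and coupling model are illustrative assumptions."""
    h, w = image.shape
    t = np.arange(h * w, dtype=float)               # sequential readout index
    injected = amplitude * np.sin(2 * np.pi * t / period_px + phase)
    distorted = image.astype(float).ravel() + injected
    return np.clip(distorted, 0, 255).reshape(h, w).astype(np.uint8)

# A simple 1-D barcode rendered as an image: dark and light vertical stripes.
bars = np.array([0, 255, 0, 0, 255, 0, 255, 255, 0, 255], dtype=np.uint8)
barcode = np.tile(np.repeat(bars, 12), (64, 1))     # 64 x 120 pixel image

attacked = simulate_injection(barcode)

# A naive threshold-based scanline decoder now measures different bar widths.
clean_bits = (barcode[32] > 127).astype(int)
attacked_bits = (attacked[32] > 127).astype(int)
print("scanline pixels flipped by the injection:",
      int((clean_bits != attacked_bits).sum()))
```

In this toy model the injected sinusoid is enough to push pixels across the black/white threshold, changing the bar widths a scanline decoder measures, which is the kind of disruption the barcode-scanning scenario in the paper targets.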
Related papers
- Understanding Impacts of Electromagnetic Signal Injection Attacks on Object Detection [33.819549876354515]
This paper quantifies and analyzes the impacts of cyber-physical attacks on object detection models in practice.
Images captured by image sensors may be affected by different factors in real applications, including cyber-physical attacks.
arXiv Detail & Related papers (2024-07-23T09:22:06Z)
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- Detection of Adversarial Physical Attacks in Time-Series Image Data [12.923271427789267]
We propose VisionGuard* (VG*), which couples the VisionGuard (VG) detector with majority-vote methods to detect adversarial physical attacks in time-series image data (a minimal voting sketch appears after this list).
This is motivated by autonomous systems applications where images are collected over time using onboard sensors for decision-making purposes.
We have evaluated VG* on videos of both clean and physically attacked traffic signs generated by a state-of-the-art robust physical attack.
arXiv Detail & Related papers (2023-04-27T02:08:13Z)
- AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems [5.476763798688862]
"printed adversarial attacks", known as physical adversarial attacks, can successfully mislead perception models.
We propose a camera-based adversarial attack capable of fooling camera-based perception systems over all objects of the same class.
We achieve a drop in average model accuracy of more than 45% on VGG19 for ImageNet and 40% on ResNet34 for Caltech.
arXiv Detail & Related papers (2023-03-02T15:14:46Z)
- X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection [113.10386151761682]
Adversarial attacks targeting texture-free X-ray images are underexplored.
In this paper, we take the first step toward the study of adversarial attacks targeted at X-ray prohibited item detection.
We propose X-Adv to generate physically printable metals that act as an adversarial agent capable of deceiving X-ray detectors.
arXiv Detail & Related papers (2023-02-19T06:31:17Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution also integrates a fish-eye camera to monitor a wider part of the sky and steer the other cameras towards objects of interest.
The thermal camera is shown to be a feasible solution, performing as well as the video camera even though the sensor employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z)
- Exploring Frequency Adversarial Attacks for Face Forgery Detection [59.10415109589605]
We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
arXiv Detail & Related papers (2022-03-29T15:34:13Z)
- They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors [21.5487020124302]
A camera's electronic rolling shutter can be exploited to inject fine-grained image disruptions.
We show how an adversary can modulate a laser to hide up to 75% of objects perceived by state-of-the-art detectors.
Our results indicate that rolling shutter attacks can substantially reduce the performance and reliability of vision-based intelligent systems.
arXiv Detail & Related papers (2021-01-25T11:14:25Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems [6.637193297008101]
In vision-based object classification systems, imaging sensors perceive the environment, and machine learning is then used to detect and classify objects for decision-making purposes.
We demonstrate how the perception domain can be remotely and unobtrusively exploited to enable an attacker to create spurious objects or alter an existing object.
arXiv Detail & Related papers (2020-01-21T21:58:45Z)
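As referenced in the VisionGuard* entry above, the minimal sketch below illustrates only the majority-voting step over per-frame detector outputs. The window size, threshold, and the assumption that per-frame attack flags are already available are hypothetical simplifications, not the paper's full method.

```python
from collections import deque

def majority_vote_attack_flags(frame_flags, window=5):
    """Flag a video stream as physically attacked only when more than half of
    the frames in a sliding window are individually flagged by a per-frame
    detector. Window size and the per-frame flags are illustrative inputs."""
    recent = deque(maxlen=window)
    decisions = []
    for flag in frame_flags:
        recent.append(bool(flag))
        decisions.append(sum(recent) > len(recent) // 2)
    return decisions

# Example: an isolated false positive (frame 2) is suppressed, while a
# sustained attack starting at frame 5 is reported once it dominates the window.
flags = [0, 0, 1, 0, 0, 1, 1, 1, 1, 1]
print(majority_vote_attack_flags(flags))
```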
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.