Adversarial Imaging Pipelines
- URL: http://arxiv.org/abs/2102.03728v1
- Date: Sun, 7 Feb 2021 06:10:54 GMT
- Title: Adversarial Imaging Pipelines
- Authors: Buu Phan, Fahim Mannan, Felix Heide
- Abstract summary: We develop an attack that deceives a specific camera ISP while leaving others intact.
We validate the proposed method using recent state-of-the-art automotive hardware ISPs.
- Score: 28.178120782659878
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks play an essential role in understanding deep neural
network predictions and improving their robustness. Existing attack methods aim
to deceive convolutional neural network (CNN)-based classifiers by manipulating
RGB images that are fed directly to the classifiers. However, these approaches
typically neglect the influence of the camera optics and image processing
pipeline (ISP) that produce the network inputs. ISPs transform RAW measurements
to RGB images and traditionally are assumed to preserve adversarial patterns.
However, these low-level pipelines can, in fact, destroy, introduce or amplify
adversarial patterns that can deceive a downstream detector. As a result,
optimized patterns can become adversarial for the classifier after being
transformed by a certain camera ISP and optic but not for others. In this work,
we examine and develop such an attack that deceives a specific camera ISP while
leaving others intact, using the same downstream classifier. We frame
camera-specific attacks as a multi-task optimization problem, relying on a
differentiable approximation for the ISP itself. We validate the proposed
method using recent state-of-the-art automotive hardware ISPs, achieving a 92%
fooling rate when attacking a specific ISP. We demonstrate physical optics
attacks with a 90% fooling rate for a specific camera lens.
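To make the multi-task formulation concrete, here is a minimal sketch, assuming differentiable proxy ISPs (target_isp, other_isp) and a pretrained RGB classifier are available as PyTorch modules; the names and the simple sum of the two task losses are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def camera_specific_attack(raw, label, target_isp, other_isp, classifier,
                           steps=200, lr=1e-2, eps=8 / 255):
    """Optimize a RAW perturbation that fools the classifier only behind
    the target ISP (hypothetical sketch of the multi-task objective)."""
    delta = torch.zeros_like(raw, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv_raw = (raw + delta).clamp(0, 1)
        # Task 1: maximize classification loss behind the target ISP.
        fool = -F.cross_entropy(classifier(target_isp(adv_raw)), label)
        # Task 2: keep the prediction intact behind the other ISP.
        keep = F.cross_entropy(classifier(other_isp(adv_raw)), label)
        loss = fool + keep
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)  # bound the RAW-domain perturbation
    return (raw + delta).detach().clamp(0, 1)
```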
Related papers
- Chasing Better Deep Image Priors between Over- and Under-parameterization [63.8954152220162]
We study a novel "lottery image prior" (LIP) by exploiting the inherent sparsity of DNNs.
LIP works significantly outperform deep decoders under comparably compact model sizes.
We also extend LIP to compressive sensing image reconstruction, where a pre-trained GAN generator is used as the prior.
arXiv Detail & Related papers (2024-10-31T17:49:44Z)
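As one plausible reading of the sparsity angle above, a lottery-ticket-style subnetwork can be extracted from an image-prior network by magnitude pruning; the sketch below uses PyTorch's pruning utilities and is an assumption about the mechanism, not the paper's exact procedure.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def extract_sparse_prior(model: nn.Module, amount: float = 0.5):
    """Globally prune the smallest-magnitude weights of an image-prior
    network, leaving a sparse "lottery" subnetwork (illustrative only)."""
    targets = [(m, "weight") for m in model.modules()
               if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(targets,
                              pruning_method=prune.L1Unstructured,
                              amount=amount)
    return model  # masked weights act as the compact image prior
```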
- Simple Image Signal Processing using Global Context Guidance [56.41827271721955]
Deep learning-based ISPs aim to transform RAW images into DSLR-like RGB images using deep neural networks.
We propose a novel module that can be integrated into any neural ISP to capture the global context information from the full RAW images.
Our model achieves state-of-the-art results on different benchmarks using diverse and real smartphone images.
arXiv Detail & Related papers (2024-04-17T17:11:47Z)
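The summary above leaves the module unspecified, but a common way to inject global context into a neural ISP is to pool the full frame into a descriptor and modulate local features with it; the sketch below is a hypothetical form of such a block, not the paper's architecture.

```python
import torch.nn as nn

class GlobalContextModule(nn.Module):
    """Modulate per-pixel ISP features with a full-image descriptor
    (hypothetical sketch of a global-context block)."""
    def __init__(self, channels: int, ctx_dim: int = 64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # whole-frame statistics
        self.mlp = nn.Sequential(
            nn.Linear(channels, ctx_dim), nn.ReLU(),
            nn.Linear(ctx_dim, 2 * channels),      # per-channel scale, shift
        )

    def forward(self, feats):
        b, c, _, _ = feats.shape
        scale, shift = self.mlp(self.pool(feats).flatten(1)).chunk(2, dim=1)
        return feats * (1 + scale.view(b, c, 1, 1)) + shift.view(b, c, 1, 1)
```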
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
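A hedged sketch of the converging-time idea: run an iterative attack and record the first step at which the prediction flips. The paper's exact ACTS definition may differ; this is an illustrative stand-in.

```python
import torch
import torch.nn.functional as F

def converging_time(model, x, y, eps=8 / 255, alpha=2 / 255, max_steps=100):
    """Number of PGD steps until the attack first succeeds; larger values
    suggest a more robust model (illustrative stand-in for ACTS)."""
    x_adv = x.clone()
    for t in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the clean input.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        if (model(x_adv).argmax(dim=1) != y).all():  # attack converged
            return t
    return max_steps  # did not converge within the step budget
```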
- Deep Nonparametric Convexified Filtering for Computational Photography, Image Synthesis and Adversarial Defense [1.79487674052027]
We aim to provide a general framework for computational photography that recovers the real scene from imperfect images.
It consists of a nonparametric deep network that resembles the physical equations behind image formation.
We empirically verify its capability to defend image classification deep networks against adversarial attack algorithms in real time.
arXiv Detail & Related papers (2023-09-13T04:57:12Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
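The summary leaves the neuron model implicit; one plausible reading of "multi-threshold" is a graded spike whose amplitude counts how many firing thresholds the membrane potential crosses, as sketched below (an assumption, not the paper's exact formulation).

```python
import torch

def multi_threshold_spike(membrane: torch.Tensor,
                          thresholds=(1.0, 2.0, 3.0)) -> torch.Tensor:
    """Emit a graded spike: amplitude equals the number of thresholds the
    membrane potential exceeds, retaining more information per timestep
    than a binary spike (hypothetical sketch)."""
    spikes = torch.zeros_like(membrane)
    for th in thresholds:
        spikes = spikes + (membrane >= th).float()
    return spikes
```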
- Learning Degradation-Independent Representations for Camera ISP Pipelines [14.195578257521934]
We propose a novel approach to learn degradation-independent representations (DiR) through the refinement of a self-supervised learned baseline representation.
The proposed DiR learning technique has remarkable domain generalization capability and outperforms state-of-the-art methods across various downstream tasks.
arXiv Detail & Related papers (2023-07-03T05:38:28Z)
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
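A minimal sketch of the core idea above, assuming a toy MLP INR: because the representation is a differentiable function of continuous coordinates, operators such as the image gradient can be applied via autograd rather than on a pixel grid. The architecture is illustrative, not INSP-Net itself.

```python
import torch
import torch.nn as nn

# A toy coordinate-to-intensity INR (illustrative architecture).
inr = nn.Sequential(nn.Linear(2, 256), nn.Tanh(),
                    nn.Linear(256, 256), nn.Tanh(),
                    nn.Linear(256, 1))

coords = torch.rand(1024, 2, requires_grad=True)  # continuous (x, y) samples
intensity = inr(coords)

# Differential operator applied directly to the continuous representation:
(grad,) = torch.autograd.grad(intensity.sum(), coords, create_graph=True)
edge_strength = grad.norm(dim=1)  # gradient magnitude, no discretization
```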
- Adversarial RAW: Image-Scaling Attack Against Imaging Pipeline [5.036532914308395]
In this paper, we develop an image-scaling attack targeting the ISP pipeline, where the crafted adversarial RAW is transformed into an attack image.
To make the adversarial attack more practical, we consider the gradient-unavailable ISP pipeline, for which a proxy model that learns the RAW-to-RGB transformation well is proposed as the gradient oracle.
arXiv Detail & Related papers (2022-06-02T07:35:50Z)
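A hedged sketch of the proxy-as-gradient-oracle idea: with the hardware ISP treated as a black box, a learned RAW-to-RGB network supplies gradients, and the RAW input is optimized so that its developed, downscaled image matches an attacker-chosen target. Function and parameter names are illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_raw(raw, target_rgb, proxy_isp, steps=100, lr=1e-2):
    """Craft a RAW image whose ISP output, after scaling, turns into the
    attack image (illustrative image-scaling objective)."""
    delta = torch.zeros_like(raw, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        rgb = proxy_isp((raw + delta).clamp(0, 1))  # proxy = gradient oracle
        scaled = F.interpolate(rgb, size=target_rgb.shape[-2:],
                               mode="bilinear", align_corners=False)
        loss = F.mse_loss(scaled, target_rgb)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (raw + delta).detach().clamp(0, 1)
```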
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring [7.540176446791261]
We propose an adversarial attack, named peak suppression (PS), that suppresses the values of peak elements in the features of the data.
Experimental results show that PS and well-designed Gaussian blurring can form adversarial attacks that completely change the classification results of a well-trained target network.
arXiv Detail & Related papers (2020-12-21T15:47:14Z)
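A minimal sketch of the blurring side of this attack, assuming a torchvision-style classifier: sweep the blur strength and report the weakest Gaussian blur that flips the prediction. The PS component and the paper's particular blur design are omitted.

```python
import torchvision.transforms.functional as TF

def blur_attack(model, x, y, sigmas=(0.5, 1.0, 1.5, 2.0)):
    """Find the smallest Gaussian blur that changes the model's prediction
    (illustrative sketch of blurring as an adversarial attack)."""
    for sigma in sigmas:
        k = int(2 * round(3 * sigma) + 1)  # odd kernel covering ~3 sigma
        x_blur = TF.gaussian_blur(x, kernel_size=k, sigma=sigma)
        if (model(x_blur).argmax(dim=1) != y).all():
            return x_blur, sigma           # prediction flipped
    return None, None                      # blur alone did not fool the model
```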
- Defending Adversarial Examples via DNN Bottleneck Reinforcement [20.08619981108837]
This paper presents a reinforcement scheme to alleviate the vulnerability of Deep Neural Networks (DNN) against adversarial attacks.
By reinforcing the information bottleneck while maintaining the class-relevant information, any redundant information, be it adversarial or not, should be removed from the latent representation.
In order to reinforce the information bottleneck, we introduce the multi-scale low-pass objective and multi-scale high-frequency communication for better frequency steering in the network.
arXiv Detail & Related papers (2020-08-12T11:02:01Z)
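A hedged sketch of what a multi-scale low-pass objective could look like: match only the low-frequency content of intermediate features across several scales, steering the bottleneck away from high-frequency (often adversarial) components. The pooling-based low-pass filter and the loss form are illustrative assumptions, not the paper's exact objective.

```python
import torch.nn.functional as F

def multiscale_lowpass_loss(feats, clean_feats, scales=(2, 4, 8)):
    """Penalize low-frequency feature mismatch at multiple scales
    (illustrative stand-in for the paper's objective)."""
    loss = 0.0
    for s in scales:
        lp = F.avg_pool2d(feats, kernel_size=s)          # crude low-pass
        lp_clean = F.avg_pool2d(clean_feats, kernel_size=s)
        loss = loss + F.mse_loss(lp, lp_clean)
    return loss
```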