Generating Adversarial Examples with an Optimized Quality
- URL: http://arxiv.org/abs/2007.00146v1
- Date: Tue, 30 Jun 2020 23:05:12 GMT
- Title: Generating Adversarial Examples with an Optimized Quality
- Authors: Aminollah Khormali, DaeHun Nyang, David Mohaisen
- Abstract summary: Deep learning models are vulnerable to Adversarial Examples (AEs): carefully crafted samples designed to deceive those models.
Recent studies have introduced new adversarial attack methods, but none provided guaranteed quality for the crafted examples.
In this paper, we incorporate Image Quality Assessment (IQA) metrics into the design and generation process of AEs.
- Score: 12.747258403133035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models are widely used in a range of application areas, such as
computer vision and computer security. However, deep learning models are
vulnerable to Adversarial Examples (AEs): carefully crafted samples designed to
deceive those models. Recent studies have introduced new adversarial attack
methods, but, to the best of our knowledge, none has provided guaranteed quality
for the crafted examples as part of their creation, beyond simple quality
measures such as Misclassification Rate (MR). In this paper, we incorporate
Image Quality Assessment (IQA) metrics into the design and generation process of
AEs. We propose evolutionary-based single- and multi-objective optimization
approaches that generate AEs with a high misclassification rate and explicitly
improve the quality, and thus the indistinguishability, of the samples, while
perturbing only a limited number of pixels. In particular, several IQA metrics,
including edge analysis, Fourier analysis, and feature descriptors, are
leveraged in the process of generating AEs. Unique characteristics of the
evolutionary-based algorithm enable us to simultaneously optimize the
misclassification rate and the IQA metrics of the AEs. To evaluate the
performance of the proposed method, we conduct intensive experiments on
different well-known benchmark datasets (MNIST, CIFAR, GTSRB, and Open Image
Dataset V5), while considering various objective optimization configurations.
The results obtained from our experiments, when compared with the existing
attack methods, validate our initial hypothesis that the use of IQA metrics
within the generation process of AEs can substantially improve their quality,
while maintaining a high misclassification rate. Finally, transferability and
human perception studies are provided, demonstrating acceptable performance.
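To make the idea concrete, a minimal single-objective sketch of the approach follows. It is not the authors' implementation: the `toy_model` classifier, the PSNR quality term (the paper uses edge analysis, Fourier analysis, and feature descriptors), and all parameter values are illustrative assumptions. The evolutionary loop mutates only `k` pixels per candidate and accepts a candidate only when a combined fitness (misclassification signal plus a weighted image-quality term) improves.

```python
import numpy as np

def toy_model(x):
    # Hypothetical stand-in for the attacked classifier:
    # a fixed random linear map producing 10 class scores.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, x.size))
    return W @ x.ravel()

def psnr(clean, adv):
    # Simple image-quality proxy for images scaled to [0, 1];
    # the paper instead uses edge/Fourier/feature-descriptor IQA metrics.
    mse = np.mean((clean - adv) ** 2)
    return 100.0 if mse == 0 else 10.0 * np.log10(1.0 / mse)

def fitness(clean, adv, true_label, alpha=0.05):
    # Combined objective: push the true class below the top score,
    # while rewarding perceptual quality of the perturbed image.
    scores = toy_model(adv)
    misclass = scores.max() - scores[true_label]
    return misclass + alpha * psnr(clean, adv)

def evolve_ae(clean, true_label, k=5, pop=20, gens=50, eps=0.2, seed=1):
    # (1+pop)-style evolutionary search: each generation proposes `pop`
    # mutants of the current best, each perturbing only k pixels.
    rng = np.random.default_rng(seed)
    best = clean.copy()
    best_fit = fitness(clean, best, true_label)
    for _ in range(gens):
        for _ in range(pop):
            adv = best.copy()
            flat = adv.ravel()
            idx = rng.choice(flat.size, size=k, replace=False)
            flat[idx] = np.clip(flat[idx] + rng.uniform(-eps, eps, k), 0.0, 1.0)
            f = fitness(clean, adv, true_label)
            if f > best_fit:  # greedy acceptance
                best, best_fit = adv, f
    return best
```

A multi-objective variant would instead keep a Pareto front over (misclassification rate, IQA score) rather than collapsing both terms into one weighted fitness.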
Related papers
- Evaluating of Machine Unlearning: Robustness Verification Without Prior Modifications [15.257558809246524]
Unlearning is a process enabling pre-trained models to remove the influence of specific training samples.
Existing verification methods rely on machine learning attack techniques, such as membership inference attacks (MIAs) or backdoor attacks.
We propose a novel verification scheme that requires no prior modifications and can support verification on a much larger set.
arXiv Detail & Related papers (2024-10-14T03:19:14Z) - Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z) - Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA)
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z) - Uncertainty-aware No-Reference Point Cloud Quality Assessment [25.543217625958462]
This work presents the first probabilistic architecture for no-reference point cloud quality assessment (PCQA)
The proposed method can model the stochasticity of subjective quality judgments through a tailored conditional variational autoencoder.
Experiments indicate that our approach outperforms previous cutting-edge methods by a large margin and generalizes well in cross-dataset experiments.
arXiv Detail & Related papers (2024-01-17T02:25:42Z) - QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights, for example, improves the absolute performance of the Llama 2 model by up to 15 points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z) - From Static Benchmarks to Adaptive Testing: Psychometrics in AI Evaluation [60.14902811624433]
We discuss a paradigm shift from static evaluation methods to adaptive testing.
This involves estimating the characteristics and value of each test item in the benchmark and dynamically adjusting items in real-time.
We analyze the current approaches, advantages, and underlying reasons for adopting psychometrics in AI evaluation.
arXiv Detail & Related papers (2023-06-18T09:54:33Z) - Uncertainty-Driven Action Quality Assessment [67.20617610820857]
We propose a novel probabilistic model, named Uncertainty-Driven AQA (UD-AQA), to capture the diversity among multiple judge scores.
We generate the estimation of uncertainty for each prediction, which is employed to re-weight AQA regression loss.
Our proposed method achieves competitive results on three benchmarks including the Olympic events MTL-AQA and FineDiving, and the surgical skill JIGSAWS datasets.
arXiv Detail & Related papers (2022-07-29T07:21:15Z) - Few-shot Quality-Diversity Optimization [50.337225556491774]
Quality-Diversity (QD) optimization has been shown to be an effective tool in dealing with deceptive minima and sparse rewards in Reinforcement Learning.
We show that, given examples from a task distribution, information about the paths taken by optimization in parameter space can be leveraged to build a prior population, which when used to initialize QD methods in unseen environments, allows for few-shot adaptation.
Experiments carried in both sparse and dense reward settings using robotic manipulation and navigation benchmarks show that it considerably reduces the number of generations that are required for QD optimization in these environments.
arXiv Detail & Related papers (2021-09-14T17:12:20Z) - Comparison of Image Quality Models for Optimization of Image Processing Systems [41.57409136781606]
We use eleven full-reference IQA models to train deep neural networks for four low-level vision tasks.
Subjective testing on the optimized images allows us to rank the competing models in terms of their perceptual performance.
arXiv Detail & Related papers (2020-05-04T09:26:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.