A Deep Genetic Programming based Methodology for Art Media
Classification Robust to Adversarial Perturbations
- URL: http://arxiv.org/abs/2010.01238v1
- Date: Sat, 3 Oct 2020 00:36:34 GMT
- Title: A Deep Genetic Programming based Methodology for Art Media
Classification Robust to Adversarial Perturbations
- Authors: Gustavo Olague and Gerardo Ibarra-Vazquez and Mariana Chan-Ley and
Cesar Puente and Carlos Soubervielle-Montalvo and Axel Martinez
- Abstract summary: The Art Media Classification problem is a current research area that has attracted attention due to the complex extraction and analysis of features of high-value art pieces.
A major concern about its reliability has arisen because small perturbations made intentionally to the input image (an adversarial attack) can completely change its prediction.
This work presents a Deep Genetic Programming method, called Brain Programming, that competes with deep learning.
- Score: 1.6148039130053087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Art Media Classification problem is a current research area that
has attracted attention due to the complex extraction and analysis of features
of high-value art pieces. The perception of these attributes cannot be left to
subjective judgment, since humans sometimes follow a biased interpretation of
artworks, so the trustworthiness of automated observation must be ensured.
Machine Learning has outperformed traditional approaches in many areas by
learning to extract features from images instead of relying on handcrafted
feature detectors. However, its reliability has become a major concern because
small perturbations made intentionally to the input image (an adversarial
attack) can completely change its prediction. We therefore foresee two ways of
approaching the situation: (1) solve the problem of adversarial attacks in
current neural network methodologies, or (2) propose a different approach that
can challenge deep learning without being affected by adversarial attacks. The
first has not been solved yet, and adversarial attacks have become even more
difficult to defend against. Therefore, this work presents a Deep Genetic
Programming method, called Brain Programming, that competes with deep learning
and studies the transferability of adversarial attacks using two artwork
databases made by art experts. The results show that the Brain Programming
method preserves its performance in comparison with AlexNet, making it robust
to these perturbations and competitive with the performance of Deep Learning.
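For readers unfamiliar with adversarial attacks, the sketch below illustrates the idea with the Fast Gradient Sign Method (FGSM) in PyTorch: a perturbation bounded by a small epsilon is added to the input so that the classifier's prediction changes while the image looks essentially unchanged. This is a minimal, hypothetical example; the untrained AlexNet, the random stand-in image, and the epsilon value are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal FGSM sketch (assumes PyTorch and torchvision are available; an
# untrained AlexNet and a random tensor stand in for the paper's trained
# models and art-media images).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.alexnet(weights=None)  # placeholder weights, not the paper's trained model
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for an artwork image
label = torch.tensor([0])                                # stand-in for the true class index

# Loss of the clean image with respect to the (assumed) true label.
loss = F.cross_entropy(model(image), label)
loss.backward()

# FGSM step: move each pixel a small amount in the direction that increases the loss.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# The perturbation is barely visible, yet it can flip the model's prediction,
# which is the reliability concern described in the abstract.
print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The transferability question studied in the paper then asks whether perturbations crafted against one model (here, AlexNet) also degrade a different classifier, such as the Brain Programming method.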
Related papers
- The Anatomy of Adversarial Attacks: Concept-based XAI Dissection [1.2916188356754918]
We study the influence of AAs on the concepts learned by convolutional neural networks (CNNs) using XAI techniques.
AAs induce substantial alterations in the concept composition within the feature space, introducing new concepts or modifying existing ones.
Our findings pave the way for the development of more robust and interpretable deep learning models.
arXiv Detail & Related papers (2024-03-25T13:57:45Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks, which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of adversarial example transferability.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Adversarial Attacks Assessment of Salient Object Detection via Symbolic Learning [4.613806493425003]
Brain programming is a kind of symbolic learning in the vein of good old-fashioned artificial intelligence.
This work provides evidence that symbolic learning robustness is crucial in designing reliable visual attention systems.
We compare our methodology with five different deep learning approaches, proving that they do not match the symbolic paradigm regarding robustness.
arXiv Detail & Related papers (2023-09-12T01:03:43Z)
- Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques [1.0718756132502771]
Adversarial examples are subtle perturbations artfully injected into clean images or videos.
Deepfakes have emerged as a potent tool to manipulate public opinion and tarnish the reputations of public figures.
This article delves into the multifaceted world of adversarial examples, elucidating the underlying principles behind their capacity to deceive deep learning algorithms.
arXiv Detail & Related papers (2023-02-22T23:48:19Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Automated Design of Salient Object Detection Algorithms with Brain Programming [3.518016233072556]
This research work proposes expanding the artificial dorsal stream using a recent proposal to solve salient object detection problems.
We decided to apply the fusion of visual saliency and image segmentation algorithms as a template.
We present results on a benchmark designed by experts with outstanding results in comparison with the state-of-the-art.
arXiv Detail & Related papers (2022-04-07T20:21:30Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task and holistic framework is presented that jointly learns, effectively generalizes, and performs affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)