Adversarial Attacks Assessment of Salient Object Detection via Symbolic
Learning
- URL: http://arxiv.org/abs/2309.05900v1
- Date: Tue, 12 Sep 2023 01:03:43 GMT
- Title: Adversarial Attacks Assessment of Salient Object Detection via Symbolic
Learning
- Authors: Gustavo Olague, Roberto Pineda, Gerardo Ibarra-Vazquez, Matthieu
Olague, Axel Martinez, Sambit Bakshi, Jonathan Vargas and Isnardo Reducindo
- Abstract summary: Brain programming is a kind of symbolic learning in the vein of good old-fashioned artificial intelligence.
This work provides evidence that symbolic learning robustness is crucial in designing reliable visual attention systems.
We compare our methodology with five different deep learning approaches, showing that they do not match the robustness of the symbolic paradigm.
- Score: 4.613806493425003
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning is at the center of mainstream technology and outperforms
classical approaches based on handcrafted feature design. Beyond learning feature
representations automatically, it follows an end-to-end paradigm from input to output,
reaching outstandingly accurate results. However, security concerns about its robustness
to malicious and imperceptible perturbations have drawn attention, since its predictions
can be changed entirely. Salient object detection is a research area where deep
convolutional neural networks have proven effective, but whose trustworthiness remains a
significant issue requiring analysis and defenses against adversarial attacks. Brain
programming is a kind of symbolic learning in the vein of good old-fashioned artificial
intelligence. This work provides evidence that the robustness of symbolic learning is
crucial for designing reliable visual attention systems, since it can withstand even the
most intense perturbations. We test this evolutionary computation methodology against
several adversarial attacks and noise perturbations, using standard databases and a
real-world visual attention task involving a shorebird, the Snowy Plover. We compare our
methodology with five different deep learning approaches, showing that they do not match
the robustness of the symbolic paradigm. All neural networks suffer significant
performance losses, while brain programming stands its ground and remains unaffected.
Finally, through the Snowy Plover case study, we highlight the importance of security in
surveillance activities for wildlife protection and conservation.
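
The abstract above refers to malicious, imperceptible perturbations that can flip a model's prediction entirely. As an illustration only (the paper does not publish this code, and the specific attacks, models, and datasets it evaluates are not reproduced here), the following is a minimal sketch of one such attack, the fast gradient sign method (FGSM), in PyTorch; the model, data, and epsilon budget are placeholders.

```python
# Illustrative sketch only: a generic FGSM perturbation, not the specific
# attacks, models, or datasets evaluated in the paper. The tiny untrained
# CNN below is a placeholder so the example runs end to end.
import torch
import torch.nn as nn


def fgsm_perturb(model, x, y, loss_fn, epsilon=8 / 255):
    """Return x plus an epsilon-bounded, sign-of-gradient perturbation."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixels.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    # Placeholder model and random "images" purely for demonstration.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
    )
    x = torch.rand(4, 3, 64, 64)      # batch of images scaled to [0, 1]
    y = torch.randint(0, 2, (4,))     # dummy labels
    x_adv = fgsm_perturb(model, x, y, nn.CrossEntropyLoss())
    print("max per-pixel change:", (x_adv - x).abs().max().item())
```

The relevant point for the robustness assessment described above is that the perturbation is bounded per pixel by epsilon, so the adversarial image looks essentially unchanged to a human observer while the gradient-guided noise can still alter a network's output.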
Related papers
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to the limitations of purely neural learning is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- A reading survey on adversarial machine learning: Adversarial attacks and their understanding [6.1678491628787455]
Adversarial Machine Learning exploits and studies the vulnerabilities that cause neural networks to misclassify inputs nearly identical to the originals.
A class of algorithms called adversarial attacks has been proposed to make neural networks misclassify across various tasks and domains.
This article provides a survey of existing adversarial attacks and their understanding based on different perspectives.
arXiv Detail & Related papers (2023-08-07T07:37:26Z)
- Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques [1.0718756132502771]
Adversarial examples are subtle perturbations artfully injected into clean images or videos.
Deepfakes have emerged as a potent tool to manipulate public opinion and tarnish the reputations of public figures.
This article delves into the multifaceted world of adversarial examples, elucidating the underlying principles behind their capacity to deceive deep learning algorithms.
arXiv Detail & Related papers (2023-02-22T23:48:19Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [63.627760598441796]
We provide an in-depth review of the field of adversarial robustness in deep learning.
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks.
We provide an overview of the main emerging applications of adversarial robustness beyond security.
arXiv Detail & Related papers (2020-10-19T16:03:46Z)
- A Deep Genetic Programming based Methodology for Art Media Classification Robust to Adversarial Perturbations [1.6148039130053087]
The Art Media Classification problem is a current research area that has attracted attention due to the complexity of extracting and analyzing features of high-value art pieces.
A major reliability concern has emerged because small, intentional perturbations of the input image (adversarial attacks) can completely change a model's prediction.
This work presents a Deep Genetic Programming method, called Brain Programming, that competes with deep learning.
arXiv Detail & Related papers (2020-10-03T00:36:34Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (User and Entity Behaviour Analytics, UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates attacks against such systems by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)