Susceptibility of Continual Learning Against Adversarial Attacks
- URL: http://arxiv.org/abs/2207.05225v5
- Date: Sun, 8 Oct 2023 14:41:18 GMT
- Title: Susceptibility of Continual Learning Against Adversarial Attacks
- Authors: Hikmat Khan, Pir Masoom Shah, Syed Farhan Alam Zaidi, Saif ul Islam, Qasim Zia
- Abstract summary: We investigate the susceptibility of continually learned tasks, including current and previously acquired tasks, to adversarial attacks.
This vulnerability of learned tasks to adversarial attacks raises serious concerns about data integrity and privacy.
We explore the robustness of three regularization-based methods, three replay-based approaches, and one hybrid technique that combines replay and exemplar approaches.
- Score: 1.3749490831384268
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent continual learning approaches have primarily focused on mitigating
catastrophic forgetting. Nevertheless, two critical areas have remained
relatively unexplored: 1) evaluating the robustness of proposed methods and 2)
ensuring the security of learned tasks. This paper investigates the
susceptibility of continually learned tasks, including current and previously
acquired tasks, to adversarial attacks. Specifically, we observe that a
sample from any class of any task can be perturbed so that it is
misclassified as a chosen target class of any other task. This vulnerability
of learned tasks to adversarial attacks raises serious concerns about data
integrity and privacy. To assess robustness, we evaluate continual learning
approaches under all three scenarios, i.e., task-incremental,
domain-incremental, and class-incremental learning. Concretely, we examine
three regularization-based methods, three replay-based approaches, and one
hybrid technique that combines replay and exemplar approaches. We
empirically demonstrate that, in every continual learning setting, any
class, whether from the current task or a previously learned one, is
susceptible to targeted misclassification. Our observations expose
limitations of continual learning approaches under adversarial attack and
indicate that current continual learning algorithms may not be suitable for
deployment in real-world settings.
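
To make the threat model concrete, the sketch below shows a targeted projected gradient descent (PGD) attack of the kind commonly used in such robustness studies: a clean input is perturbed within a small L-infinity budget until the model assigns it a chosen class from another task. This is a minimal illustration, not the authors' implementation; the model, the budget `eps`, and the cross-task target label are assumptions.

```python
# Minimal sketch of a targeted PGD attack on a continually trained
# classifier. NOT the authors' code; `model`, `eps`, and the task/label
# layout are illustrative assumptions.
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target, eps=8 / 255, alpha=2 / 255, steps=10):
    """Perturb x within an L-inf ball of radius eps so the model predicts
    `target` (e.g., a class index from a previously learned task)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Targeted attack: step *down* the loss toward the target class.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)
    return x_adv

# Usage (hypothetical): make the model, after training on its latest task,
# label current-task images as old-task class 3.
# target = torch.full((x_batch.size(0),), 3, dtype=torch.long)
# x_adv = targeted_pgd(model.eval(), x_batch, target)
```

In a continual learning setting, `model` would be the network after training on the most recent task, and `target` a class index learned in an earlier task, matching the cross-task misclassification the paper reports.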
Related papers
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning [58.107474025048866]
Forgetting refers to the loss or deterioration of previously acquired knowledge.
Beyond continual learning, forgetting is a prevalent phenomenon observed in many other research domains within deep learning.
arXiv Detail & Related papers (2023-07-16T16:27:58Z) - A Comprehensive Survey of Continual Learning: Theory, Method and Application [64.23253420555989]
We present a comprehensive survey of continual learning, seeking to bridge the basic settings, theoretical foundations, representative methods, and practical applications.
We summarize the general objectives of continual learning as ensuring a proper stability-plasticity trade-off and an adequate intra/inter-task generalizability in the context of resource efficiency.
arXiv Detail & Related papers (2023-01-31T11:34:56Z) - Data Poisoning Attack Aiming the Vulnerability of Continual Learning [25.480762565632332]
We present a simple task-specific data poisoning attack that can be applied during the learning of a new task.
We experiment with the attack on two representative regularization-based continual learning methods (a generic sketch of this style of attack follows this entry).
arXiv Detail & Related papers (2022-11-29T02:28:05Z)
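
Since the summary gives no implementation details, the following is a purely illustrative sketch of what a task-specific poisoning attack could look like: a handful of perturbed inputs, labeled with a victim class from an earlier task, mixed into the new task's training data. All names and parameters are assumptions, not the cited paper's method.

```python
# Illustrative sketch only: task-specific data poisoning in continual
# learning. A few noisy inputs labeled with an OLD-task "victim" class
# are mixed into the NEW task's training set to degrade that class.
import torch
from torch.utils.data import ConcatDataset, TensorDataset

def poison_new_task(new_task_ds, victim_class, n_poison=50, noise_std=0.05):
    """Return the new task's dataset with n_poison poisoned samples mixed in."""
    xs = torch.stack([new_task_ds[i][0] for i in range(n_poison)])
    # Slightly perturbed copies of new-task inputs, mislabeled as the victim class.
    x_poison = (xs + noise_std * torch.randn_like(xs)).clamp(0.0, 1.0)
    y_poison = torch.full((n_poison,), victim_class, dtype=torch.long)
    return ConcatDataset([new_task_ds, TensorDataset(x_poison, y_poison)])
```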
- Unveiling the Tapestry: the Interplay of Generalization and Forgetting in Continual Learning [18.61040106667249]
In AI, generalization refers to a model's ability to perform well on out-of-distribution data related to a given task, beyond the data it was trained on.
Continual learning methods often include mechanisms to mitigate catastrophic forgetting, ensuring that knowledge from earlier tasks is retained.
We introduce a simple and effective technique known as Shape-Texture Consistency Regularization (STCR), designed for continual learning.
arXiv Detail & Related papers (2022-11-21T04:36:24Z) - Continual Learning for Pose-Agnostic Object Recognition in 3D Point Clouds [5.521693536291449]
This work focuses on pose-agnostic continual learning tasks, where the object's pose changes dynamically and unpredictably.
We propose a novel continual learning model that effectively distills previous tasks' geometric equivariance information.
The experiments show that our method overcomes the challenge of pose-agnostic scenarios in several mainstream point cloud datasets.
arXiv Detail & Related papers (2022-09-11T11:31:39Z) - Continual Object Detection: A review of definitions, strategies, and challenges [0.0]
The field of Continual Learning investigates the ability to learn consecutive tasks without losing performance on those previously learned.
We believe that research in continual object detection deserves even more attention due to its vast range of applications in robotics and autonomous vehicles.
arXiv Detail & Related papers (2022-05-30T21:57:48Z) - L2Explorer: A Lifelong Reinforcement Learning Assessment Environment [49.40779372040652]
Reinforcement learning solutions tend to generalize poorly when exposed to new tasks outside of the data distribution they are trained on.
We introduce a framework for continual reinforcement-learning development and assessment using Lifelong Learning Explorer (L2Explorer).
L2Explorer is a new, Unity-based, first-person 3D exploration environment that can be continuously reconfigured to generate a range of tasks and task variants structured into complex evaluation curricula.
arXiv Detail & Related papers (2022-03-14T19:20:26Z) - Rehearsal revealed: The limits and merits of revisiting samples in continual learning [43.40531878205344]
We provide insight into the limits and merits of rehearsal, one of continual learning's most established methods (a minimal sketch of the mechanism follows this entry).
We show that models trained sequentially with rehearsal tend to stay in the same low-loss region after a task has finished, but are at risk of overfitting to its sample memory.
arXiv Detail & Related papers (2021-04-15T13:28:14Z)
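
For readers unfamiliar with rehearsal, here is a minimal sketch of the mechanism the paper analyzes: a small memory of past samples replayed alongside current-task batches. The buffer policy, sizes, and training step are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of rehearsal (experience replay) in continual learning.
# Buffer policy and sizes are illustrative assumptions.
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Reservoir-style memory of past (x, y) pairs."""
    def __init__(self, capacity=200):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:  # reservoir sampling keeps a uniform sample of the stream
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def rehearsal_step(model, opt, x_new, y_new, buffer, replay_k=32):
    """One training step mixing current-task data with replayed memories."""
    loss = F.cross_entropy(model(x_new), y_new)
    if buffer.data:
        # Replaying the small memory repeatedly is exactly where the
        # paper's overfitting risk arises.
        x_old, y_old = buffer.sample(replay_k)
        loss = loss + F.cross_entropy(model(x_old), y_old)
    opt.zero_grad()
    loss.backward()
    opt.step()
    for xi, yi in zip(x_new, y_new):
        buffer.add(xi.detach(), yi.detach())
```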
- Importance Weighted Policy Learning and Adaptation [89.46467771037054]
We study a complementary approach that is conceptually simple, general, modular, and built on top of recent improvements in off-policy learning.
The framework is inspired by ideas from the probabilistic inference literature and combines robust off-policy learning with a behavior prior.
Our approach achieves competitive adaptation performance on hold-out tasks compared to meta reinforcement learning baselines and can scale to complex sparse-reward scenarios.
arXiv Detail & Related papers (2020-09-10T14:16:58Z) - Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z) - Understanding the Role of Training Regimes in Continual Learning [51.32945003239048]
Catastrophic forgetting affects the training of neural networks, limiting their ability to learn multiple tasks sequentially.
We study the effect of dropout, learning rate decay, and batch size on forming training regimes that widen the tasks' local minima.
arXiv Detail & Related papers (2020-06-12T06:00:27Z)