Insights on Adversarial Attacks for Tabular Machine Learning via a Systematic Literature Review
- URL: http://arxiv.org/abs/2506.15506v1
- Date: Wed, 18 Jun 2025 14:43:26 GMT
- Title: Insights on Adversarial Attacks for Tabular Machine Learning via a Systematic Literature Review
- Authors: Salijona Dyrmishi, Mohamed Djilani, Thibault Simonetto, Salah Ghamizi, Maxime Cordy,
- Abstract summary: Adversarial attacks in machine learning have been extensively reviewed in areas like computer vision and NLP. We highlight key trends, categorize attack strategies and analyze how they address practical considerations for real-world applicability.
- Score: 13.11649527605611
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Adversarial attacks in machine learning have been extensively reviewed in areas like computer vision and NLP, but research on tabular data remains scattered. This paper provides the first systematic literature review focused on adversarial attacks targeting tabular machine learning models. We highlight key trends, categorize attack strategies and analyze how they address practical considerations for real-world applicability. Additionally, we outline current challenges and open research questions. By offering a clear and structured overview, this review aims to guide future efforts in understanding and addressing adversarial vulnerabilities in tabular machine learning.
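To make the abstract's "practical considerations" concrete, the sketch below is a minimal, hypothetical illustration (not taken from the paper) of a gradient-based perturbation on tabular inputs that respects constraints typical of real-world tabular data, such as immutable columns and bounded numeric ranges. The toy logistic-regression model, feature names, step sizes, and bounds are assumptions made purely for demonstration.

```python
# Illustrative sketch (not the paper's method): a constrained gradient-step attack
# on a toy logistic-regression scorer over tabular features.
import numpy as np

# Assumed toy model: p(y=1|x) = sigmoid(w @ x + b); weights are made up.
w = np.array([0.8, -1.2, 0.5, 2.0])   # [age, income, n_accounts, is_employed]
b = -0.3

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attack(x, immutable_mask, lower, upper, eps=0.5, steps=20, lr=0.1):
    """Push the score toward the opposite class while honoring tabular constraints."""
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        # Gradient of the positive-class probability w.r.t. the input.
        grad = p * (1.0 - p) * w
        # Step to *decrease* the score (try to flip a positive prediction).
        x_adv -= lr * np.sign(grad)
        # Practical constraints typical of tabular settings:
        x_adv[immutable_mask] = x[immutable_mask]      # immutable features stay fixed
        x_adv = np.clip(x_adv, x - eps, x + eps)       # bounded perturbation budget
        x_adv = np.clip(x_adv, lower, upper)           # valid feature ranges
    return x_adv

x = np.array([35.0, 1.2, 3.0, 1.0])                    # one applicant (scaled units)
x_adv = attack(
    x,
    immutable_mask=np.array([True, False, False, True]),  # e.g. age, is_employed fixed
    lower=np.array([18.0, 0.0, 0.0, 0.0]),
    upper=np.array([100.0, 10.0, 20.0, 1.0]),
)
print("original score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```

Attacks surveyed in the paper must additionally cope with issues this toy example glosses over, such as categorical encodings and dependencies between features, which is why the review pays particular attention to real-world applicability.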
Related papers
- The Landscape of Memorization in LLMs: Mechanisms, Measurement, and Mitigation [97.0658685969199]
Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks, yet they also exhibit memorization of their training data. This paper synthesizes recent studies and investigates the landscape of memorization, the factors influencing it, and methods for its detection and mitigation.
arXiv Detail & Related papers (2025-07-08T01:30:46Z) - Machine Learning: a Lecture Note [51.31735291774885]
This lecture note is intended to prepare early-year master's and PhD students in data science or a related discipline with foundational ideas in machine learning. It starts with basic ideas in modern machine learning, with classification as the main target task. Based on these basic ideas, the lecture note explores in depth the probabilistic approach to unsupervised learning.
arXiv Detail & Related papers (2025-05-06T16:03:41Z) - A Survey of Adversarial Defenses in Vision-based Systems: Categorization, Methods and Challenges [4.716918459551686]
Adversarial attacks have emerged as a major challenge to the trustworthy deployment of machine learning models. We present a comprehensive systematization of knowledge on adversarial defenses, focusing on two key computer vision tasks. We map these defenses to the types of adversarial attacks and datasets where they are most effective.
arXiv Detail & Related papers (2025-03-01T07:17:18Z) - A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments [55.60375624503877]
Model Extraction Attacks (MEAs) threaten modern machine learning systems by enabling adversaries to steal models, exposing intellectual property and training data. This survey is motivated by the urgent need to understand how the unique characteristics of cloud, edge, and federated deployments shape attack vectors and defense requirements. We systematically examine the evolution of attack methodologies and defense mechanisms across these environments, demonstrating how environmental factors influence security strategies in critical sectors such as autonomous vehicles, healthcare, and financial services.
arXiv Detail & Related papers (2025-02-22T03:46:50Z) - Proactive Schemes: A Survey of Adversarial Attacks for Social Good [13.213478193134701]
Adversarial attacks in computer vision exploit the vulnerabilities of machine learning models by introducing subtle perturbations to input data.
We examine the rise of proactive schemes, methods that encrypt input data using additional signals termed templates to enhance the performance of deep learning models.
The survey delves into the methodologies behind these proactive schemes, the encryption and learning processes, and their application to modern computer vision and natural language processing applications.
arXiv Detail & Related papers (2024-09-24T22:31:56Z) - Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - A Survey on Poisoning Attacks Against Supervised Machine Learning [0.0]
We present a survey covering the most representative papers on poisoning attacks against supervised machine learning models.
We summarize and compare the methodology and limitations of existing literature.
We conclude this paper with potential improvements and future directions to further exploit and prevent poisoning attacks on supervised models.
arXiv Detail & Related papers (2022-02-05T08:02:22Z) - Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II [86.51135909513047]
Deep Learning is vulnerable to adversarial attacks that can manipulate its predictions.
This article reviews the contributions made by the computer vision community in adversarial attacks on deep learning.
It provides definitions of technical terminologies for non-experts in this domain.
arXiv Detail & Related papers (2021-08-01T08:54:47Z) - Adversarial Machine Learning in Text Analysis and Generation [1.116812194101501]
This paper focuses on studying aspects and research trends in adversarial machine learning specifically in text analysis and generation.
The paper summarizes main research trends in the field such as GAN algorithms, models, types of attacks, and defense against those attacks.
arXiv Detail & Related papers (2021-01-14T04:37:52Z) - Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)