Adversarial Machine Learning in Text Analysis and Generation
- URL: http://arxiv.org/abs/2101.08675v1
- Date: Thu, 14 Jan 2021 04:37:52 GMT
- Title: Adversarial Machine Learning in Text Analysis and Generation
- Authors: Izzat Alsmadi
- Abstract summary: This paper studies aspects and research trends in adversarial machine learning, specifically in text analysis and generation.
The paper summarizes the main research trends in the field, such as GAN algorithms, models, types of attacks, and defenses against those attacks.
- Score: 1.116812194101501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The research field of adversarial machine learning has attracted
significant interest in recent years. A machine learning model is considered
secure if it can deliver its main objectives with acceptable accuracy and
efficiency while also resisting different types of adversarial attacks. This
paper focuses on aspects and research trends in adversarial machine learning,
specifically in text analysis and generation. The paper summarizes the main
research trends in the field, such as GAN algorithms, models, types of attacks,
and defenses against those attacks.
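Many of the text-domain attacks surveyed here reduce to small, meaning-preserving perturbations that flip a model's decision. A minimal illustrative sketch follows; the classifier, blocklist, and perturbation rule are all invented for demonstration and are not taken from the paper.

```python
# Illustrative sketch: a character-level adversarial perturbation against
# a toy keyword-based text classifier. Everything here is synthetic.

def toy_spam_classifier(text: str) -> str:
    """Flags text as 'spam' if it contains any blocklisted keyword."""
    blocklist = {"free", "winner", "prize"}
    words = text.lower().split()
    return "spam" if any(w in blocklist for w in words) else "ham"

def perturb(text: str) -> str:
    """Insert a zero-width space inside each word to break exact keyword
    matching while keeping the text visually unchanged to a human."""
    return " ".join(w[:1] + "\u200b" + w[1:] if len(w) > 1 else w
                    for w in text.split())

original = "You are a winner claim your free prize"
adversarial = perturb(original)

print(toy_spam_classifier(original))     # spam
print(toy_spam_classifier(adversarial))  # ham
```

The point of the sketch is that the perturbation is imperceptible to a reader but fatal to a brittle feature representation, which is the core asymmetry most text attacks exploit.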
Related papers
- Humanizing Machine-Generated Content: Evading AI-Text Detection through Adversarial Attack [24.954755569786396]
We propose a framework for a broader class of adversarial attacks, designed to perform minor perturbations in machine-generated content to evade detection.
We consider two attack settings: white-box and black-box, and employ adversarial learning in dynamic scenarios to assess the potential enhancement of the current detection model's robustness.
The empirical results reveal that the current detection models can be compromised in as little as 10 seconds, leading to the misclassification of machine-generated text as human-written content.
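The black-box setting described above can be illustrated with a toy greedy loop: the attacker only queries a detector's score and keeps word substitutions that lower it. The detector, synonym table, and threshold below are stand-ins for demonstration, not the paper's actual method.

```python
# Hypothetical sketch of a black-box evasion loop: greedily apply small
# word-level substitutions until a (stand-in) detector's score for
# "machine-generated" drops below a threshold.

SYNONYMS = {"utilize": "use", "therefore": "so", "additionally": "also"}

def toy_detector_score(text: str) -> float:
    """Stand-in detector: fraction of words drawn from a small list of
    'machine-sounding' words."""
    machine_words = {"utilize", "therefore", "additionally", "furthermore"}
    words = text.lower().split()
    return sum(w in machine_words for w in words) / max(len(words), 1)

def greedy_evade(text: str, threshold: float = 0.1) -> str:
    """Swap one word at a time, keeping a swap only if it lowers the
    black-box score, and stop once the text falls below the threshold."""
    words = text.split()
    for i in range(len(words)):
        if toy_detector_score(" ".join(words)) < threshold:
            break
        repl = SYNONYMS.get(words[i].lower())
        if repl is not None:
            candidate = words[:i] + [repl] + words[i + 1:]
            if toy_detector_score(" ".join(candidate)) < toy_detector_score(" ".join(words)):
                words = candidate
    return " ".join(words)

text = "We utilize this method and therefore we additionally note results"
evaded = greedy_evade(text)
print(toy_detector_score(text) > toy_detector_score(evaded))  # True
```

A real attack would use a language model to propose fluent substitutions and query the actual detector, but the query-and-keep-improvements structure is the same.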
arXiv Detail & Related papers (2024-04-02T12:49:22Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Adversarial Learning in Real-World Fraud Detection: Challenges and
Perspectives [1.5373344688357016]
Fraudulent activities and adversarial attacks threaten machine learning models.
We describe how attacks against fraud detection systems differ from other applications of adversarial machine learning.
arXiv Detail & Related papers (2023-07-03T23:04:49Z) - Deviations in Representations Induced by Adversarial Attacks [0.0]
Research has shown that deep learning models are vulnerable to adversarial attacks.
This finding brought about a new direction in research, whereby algorithms were developed to attack and defend vulnerable networks.
We present a method for measuring and analyzing the deviations in representations induced by adversarial attacks.
arXiv Detail & Related papers (2022-11-07T17:40:08Z) - "Why do so?" -- A Practical Perspective on Machine Learning Security [21.538956161215555]
We analyze attack occurrence and concern with 139 industrial practitioners.
Our results shed light on real-world attacks on deployed machine learning.
Our work paves the way for more research about adversarial machine learning in practice.
arXiv Detail & Related papers (2022-07-11T19:58:56Z) - Poisoning Attacks and Defenses on Artificial Intelligence: A Survey [3.706481388415728]
Data poisoning attacks are a type of attack that consists of tampering with the data samples fed to the model during the training phase, degrading the model's accuracy during the inference phase.
This work compiles the most relevant insights and findings from the latest literature addressing this type of attack.
A thorough assessment is performed on the reviewed works, comparing the effects of data poisoning on a wide range of ML models in real-world conditions.
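The training-time tampering described above can be sketched with a toy label-flipping attack on a nearest-centroid classifier. The data, classifier, and poisoning strategy below are synthetic illustrations, not drawn from the surveyed works.

```python
# Hedged illustration: label-flipping poisoning against a nearest-centroid
# classifier built only on builtins. All data is synthetic.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    """data: list of (features, label). Returns one centroid per class."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid (squared distance)."""
    return min(model, key=lambda y: sum((a - b) ** 2 for a, b in zip(x, model[y])))

clean = [((0.0, 0.0), "neg"), ((0.1, 0.2), "neg"),
         ((1.0, 1.0), "pos"), ((0.9, 1.1), "pos")]

# Poison: inject points near the decision boundary with flipped labels,
# dragging the "neg" centroid toward the "pos" region at training time.
poison = [((0.7, 0.7), "neg")] * 4

clean_model = train(clean)
poisoned_model = train(clean + poison)

probe = (0.6, 0.6)
print(predict(clean_model, probe))     # pos
print(predict(poisoned_model, probe))  # neg
```

The same probe is classified differently by the two models, which is exactly the inference-phase degradation the survey describes.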
arXiv Detail & Related papers (2022-02-21T14:43:38Z) - Human-Robot Collaboration and Machine Learning: A Systematic Review of
Recent Research [69.48907856390834]
Human-robot collaboration (HRC) explores how a human and a robot interact and work together.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z) - Threat of Adversarial Attacks on Deep Learning in Computer Vision:
Survey II [86.51135909513047]
Deep Learning is vulnerable to adversarial attacks that can manipulate its predictions.
This article reviews the contributions made by the computer vision community in adversarial attacks on deep learning.
It provides definitions of technical terminologies for non-experts in this domain.
arXiv Detail & Related papers (2021-08-01T08:54:47Z) - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine
Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on a modular, reusable software tool, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
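The membership-inference attack named above rests on a simple intuition that can be sketched without ML-Doctor itself: overfit models tend to be more confident on their training points. The "model" below is a deliberately overfit scorer on synthetic data, invented for this illustration.

```python
# Hedged sketch of confidence-threshold membership inference. The model
# is an overfit 1-nearest-neighbor scorer whose confidence decays with
# distance to the nearest training point, so members score exactly 1.0.

import math

train_set = [(0.1, 0.2), (0.4, 0.4), (0.9, 0.8)]
non_members = [(0.25, 0.7), (0.6, 0.1), (0.95, 0.3)]

def confidence(model_train, x):
    """Confidence decays exponentially with distance to the nearest
    training point, mimicking memorization."""
    d = min(math.dist(x, t) for t in model_train)
    return math.exp(-5.0 * d)

def is_member(model_train, x, threshold=0.9):
    """Attack: guess 'member' whenever confidence exceeds a threshold."""
    return confidence(model_train, x) > threshold

guesses = [is_member(train_set, x) for x in train_set + non_members]
print(guesses)  # [True, True, True, False, False, False]
```

Real attacks calibrate the threshold against shadow models rather than picking it by hand, but the member/non-member confidence gap is the signal in both cases.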
arXiv Detail & Related papers (2021-02-04T11:35:13Z) - Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why deep neural networks perform poorly under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber
Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.