A LLM Assisted Exploitation of AI-Guardian
- URL: http://arxiv.org/abs/2307.15008v1
- Date: Thu, 20 Jul 2023 17:33:25 GMT
- Title: A LLM Assisted Exploitation of AI-Guardian
- Authors: Nicholas Carlini
- Abstract summary: We evaluate the robustness of AI-Guardian, a recent defense to adversarial examples published at IEEE S&P 2023.
We write none of the code to attack this model, and instead prompt GPT-4 to implement all attack algorithms following our instructions and guidance.
This process was surprisingly effective and efficient, with the language model at times producing code from ambiguous instructions faster than the author of this paper could have done.
- Score: 57.572998144258705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) are now highly capable at a diverse range of
tasks. This paper studies whether or not GPT-4, one such LLM, is capable of
assisting researchers in the field of adversarial machine learning. As a case
study, we evaluate the robustness of AI-Guardian, a recent defense to
adversarial examples published at IEEE S&P 2023, a top computer security
conference. We completely break this defense: the proposed scheme does not
increase robustness compared to an undefended baseline.
We write none of the code to attack this model, and instead prompt GPT-4 to
implement all attack algorithms following our instructions and guidance. This
process was surprisingly effective and efficient, with the language model at
times producing code from ambiguous instructions faster than the author of this
paper could have done. We conclude by discussing (1) the warning signs present
in the evaluation that suggested to us AI-Guardian would be broken, and (2) our
experience with designing attacks and performing novel research using the most
recent advances in language modeling.
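The abstract does not reproduce any of the attack code itself. As a rough illustration of the kind of routine the paper describes prompting GPT-4 to write, the sketch below implements a standard projected gradient descent (PGD) adversarial-example attack in PyTorch. The function name, hyperparameters, and the choice of PGD are illustrative assumptions; the paper's actual attack on AI-Guardian is more involved and is not shown here.

```python
# Illustrative sketch only: a generic PGD adversarial-example attack of the
# kind GPT-4 was reportedly prompted to implement. This is NOT the paper's
# AI-Guardian attack; eps/alpha/steps are hypothetical placeholder settings.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=40):
    """Return adversarial examples within an L-infinity ball of radius eps around x."""
    # Random start inside the epsilon ball, clipped to valid pixel range [0, 1].
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed gradient ascent step, then project back into the epsilon ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    return x_adv
```

A defense that genuinely increased robustness would noticeably reduce the success rate of attacks like this relative to an undefended baseline; the paper reports that AI-Guardian does not.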