Code Difference Guided Adversarial Example Generation for Deep Code Models
- URL: http://arxiv.org/abs/2301.02412v2
- Date: Sat, 19 Aug 2023 09:45:49 GMT
- Title: Code Difference Guided Adversarial Example Generation for Deep Code Models
- Authors: Zhao Tian, Junjie Chen, Zhi Jin
- Abstract summary: Adversarial examples are important to test and enhance the robustness of deep code models.
We propose a novel adversarial example generation technique (i.e., CODA) for testing deep code models.
- Score: 25.01072108219646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples are important to test and enhance the robustness of deep
code models. As source code is discrete and must strictly adhere to complex
grammar and semantic constraints, adversarial example generation
techniques in other domains are hardly applicable. Moreover, the adversarial
example generation techniques specific to deep code models still suffer from
unsatisfactory effectiveness due to the enormous ingredient search space. In
this work, we propose a novel adversarial example generation technique (i.e.,
CODA) for testing deep code models. Its key idea is to use code differences
between the target input (i.e., a given code snippet as the model input) and
reference inputs (i.e., inputs that have small code differences from the
target input but different prediction results) to guide the generation of
adversarial examples. It considers both structure differences and identifier
differences to preserve the original semantics. Hence, the ingredient search
space can be largely reduced to the one constituted by these two kinds of code
differences, and the testing process can be improved by designing and guiding
the corresponding equivalent structure transformations and identifier
renaming transformations. Our experiments on 15 deep code models demonstrate
the effectiveness and efficiency of CODA, the naturalness of its generated
examples, and its capability of enhancing model robustness after adversarial
fine-tuning. For example, CODA reveals 88.05% and 72.51% more faults in models
than the state-of-the-art techniques (i.e., CARROT and ALERT) on average,
respectively.
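As a concrete illustration of one of the two transformation kinds, semantics-preserving identifier renaming can be sketched in Python. This is a simplified sketch under our own assumptions: the function name, the hand-written renaming map, and the use of Python's `ast` module are ours, not CODA's actual implementation, and reference-guided renaming would draw names from the reference inputs rather than a fixed mapping.

```python
import ast

def rename_identifiers(code: str, mapping: dict) -> str:
    """Rename variables/parameters in a snippet without changing its semantics."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        # Variable uses and assignments.
        if isinstance(node, ast.Name) and node.id in mapping:
            node.id = mapping[node.id]
        # Function parameters.
        elif isinstance(node, ast.arg) and node.arg in mapping:
            node.arg = mapping[node.arg]
    return ast.unparse(tree)  # requires Python 3.9+

target = "def add(total, delta):\n    return total + delta"
# Names hypothetically borrowed from a reference input that the model
# predicts differently from the target.
adversarial = rename_identifiers(target, {"total": "count", "delta": "step"})
```

Because only names change, the transformed snippet computes exactly the same function as the target, which is the property that makes such renamings valid adversarial candidates.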
Related papers
- Binary Code Similarity Detection via Graph Contrastive Learning on Intermediate Representations [52.34030226129628]
Binary Code Similarity Detection (BCSD) plays a crucial role in numerous fields, including vulnerability detection, malware analysis, and code reuse identification.
In this paper, we propose IRBinDiff, which mitigates compilation differences by leveraging LLVM-IR with higher-level semantic abstraction.
Our extensive experiments, conducted under varied compilation settings, demonstrate that IRBinDiff outperforms other leading BCSD methods in both One-to-one comparison and One-to-many search scenarios.
arXiv Detail & Related papers (2024-10-24T09:09:20Z)
- A Constraint-Enforcing Reward for Adversarial Attacks on Text Classifiers [10.063169009242682]
We train an encoder-decoder paraphrase model to generate adversarial examples.
We adopt a reinforcement learning algorithm and propose a constraint-enforcing reward.
We show how key design choices impact the generated examples and discuss the strengths and weaknesses of the proposed approach.
arXiv Detail & Related papers (2024-05-20T09:33:43Z)
- Unified Generation, Reconstruction, and Representation: Generalized Diffusion with Adaptive Latent Encoding-Decoding [90.77521413857448]
Deep generative models are anchored in three core capabilities -- generating new instances, reconstructing inputs, and learning compact representations.
We introduce Generalized Encoding-Decoding Diffusion Probabilistic Models (EDDPMs).
EDDPMs generalize the Gaussian noising-denoising in standard diffusion by introducing parameterized encoding-decoding.
Experiments on text, proteins, and images demonstrate the flexibility to handle diverse data and tasks.
arXiv Detail & Related papers (2024-02-29T10:08:57Z)
- Between Lines of Code: Unraveling the Distinct Patterns of Machine and Human Programmers [14.018844722021896]
We study the specific patterns that characterize machine- and human-authored code.
We propose DetectCodeGPT, a novel method for detecting machine-generated code.
arXiv Detail & Related papers (2024-01-12T09:15:20Z)
- CONCORD: Clone-aware Contrastive Learning for Source Code [64.51161487524436]
Self-supervised pre-training has gained traction for learning generic code representations valuable for many downstream SE tasks.
We argue that it is also essential to factor in how developers code day-to-day for general-purpose representation learning.
In particular, we propose CONCORD, a self-supervised, contrastive learning strategy to place benign clones closer in the representation space while moving deviants further apart.
arXiv Detail & Related papers (2023-06-05T20:39:08Z)
- Surfacing Biases in Large Language Models using Contrastive Input Decoding [12.694066526722203]
Contrastive Input Decoding (CID) is a decoding algorithm to generate text given two inputs.
We use CID to highlight context-specific biases that are hard to detect with standard decoding strategies.
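The contrastive idea behind CID can be sketched with toy numbers. This is our own simplified reading, not the authors' algorithm: we assume log-probabilities for a few candidate continuations under two prompts (with and without a contrastive attribute), and the hypothetical `lam` weight and renormalization are our choices for illustration.

```python
import math

def contrastive_next_token(logp_with, logp_without, lam=1.0):
    """Rank candidate tokens by how much the contrastive input boosts them.

    logp_with / logp_without: token -> log-probability under each input.
    lam: strength of the contrast penalty (illustrative parameter).
    """
    scores = {t: logp_with[t] - lam * logp_without[t] for t in logp_with}
    # Renormalize the contrastive scores into a distribution.
    z = math.log(sum(math.exp(s) for s in scores.values()))
    return {t: s - z for t, s in scores.items()}

# Toy log-probabilities for three candidate continuations.
with_attr = {"nurse": math.log(0.5), "doctor": math.log(0.3), "pilot": math.log(0.2)}
without_attr = {"nurse": math.log(0.2), "doctor": math.log(0.4), "pilot": math.log(0.4)}
ranked = contrastive_next_token(with_attr, without_attr)
top = max(ranked, key=ranked.get)  # the token most amplified by the attribute
```

Tokens whose probability rises sharply when the attribute is added surface at the top of the contrastive ranking, which is what makes context-specific biases visible that standard decoding would average away.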
arXiv Detail & Related papers (2023-05-12T11:09:49Z)
- A Simple, Yet Effective Approach to Finding Biases in Code Generation [16.094062131137722]
This work shows that current code generation systems exhibit undesired biases inherited from their large language model backbones.
We propose the "block of influence" concept, which enables a modular decomposition and analysis of the coding challenges.
arXiv Detail & Related papers (2022-10-31T15:06:15Z)
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple, yet effective idea to improve the performance of VAE for the task.
In our experiments, the proposed VAE model particularly performs well for generating a sample from out-of-domain distribution.
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- Benchmarking Deep Models for Salient Object Detection [67.07247772280212]
We construct a general SALient Object Detection (SALOD) benchmark to conduct a comprehensive comparison among several representative SOD methods.
In the above experiments, we find that existing loss functions are usually specialized for some metrics but report inferior results on the others.
We propose a novel Edge-Aware (EA) loss that promotes deep networks to learn more discriminative features by integrating both pixel- and image-level supervision signals.
arXiv Detail & Related papers (2022-02-07T03:43:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.