Optimizing Code Embeddings and ML Classifiers for Python Source Code Vulnerability Detection
- URL: http://arxiv.org/abs/2509.13134v1
- Date: Tue, 16 Sep 2025 14:52:02 GMT
- Title: Optimizing Code Embeddings and ML Classifiers for Python Source Code Vulnerability Detection
- Authors: Talaya Farasat, Joachim Posegga
- Abstract summary: This study investigates the optimal combination of code embedding techniques and machine learning classifiers for vulnerability detection in Python source code. We evaluate three embedding techniques, i.e., Word2Vec, CodeBERT, and GraphCodeBERT, alongside two deep learning classifiers: Bidirectional Long Short-Term Memory (BiLSTM) networks and Convolutional Neural Networks (CNN). While CNN paired with GraphCodeBERT exhibits strong performance, the BiLSTM model using Word2Vec consistently achieves superior overall results.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the growing complexity and scale of source code have rendered manual software vulnerability detection increasingly impractical. To address this challenge, automated approaches leveraging machine learning and code embeddings have gained substantial attention. This study investigates the optimal combination of code embedding techniques and machine learning classifiers for vulnerability detection in Python source code. We evaluate three embedding techniques, i.e., Word2Vec, CodeBERT, and GraphCodeBERT, alongside two deep learning classifiers, i.e., Bidirectional Long Short-Term Memory (BiLSTM) networks and Convolutional Neural Networks (CNN). While CNN paired with GraphCodeBERT exhibits strong performance, the BiLSTM model using Word2Vec consistently achieves superior overall results. These findings suggest that, despite the advanced architectures of recent models like CodeBERT and GraphCodeBERT, classical embeddings such as Word2Vec, when used with sequence-based models like BiLSTM, can offer a slight yet consistent performance advantage. The study underscores the critical importance of selecting appropriate combinations of embeddings and classifiers to enhance the effectiveness of automated vulnerability detection systems, particularly for Python source code.
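To illustrate the embedding-plus-classifier pipeline the abstract describes, here is a minimal, hypothetical sketch: hash-based pseudo-embeddings stand in for a trained Word2Vec table, and an untrained logistic scorer stands in for the BiLSTM classifier. None of the names, dimensions, or parameters below come from the paper.

```python
import re
import zlib
import numpy as np

DIM = 16  # made-up embedding size; the paper does not specify this sketch

def tokenize(code: str) -> list[str]:
    # Crude identifier/punctuation split; real pipelines use a code-aware lexer.
    return re.findall(r"[A-Za-z_]\w*|\S", code)

def embed(token: str) -> np.ndarray:
    # Deterministic pseudo-embedding seeded by a CRC of the token,
    # standing in for a learned Word2Vec lookup table.
    g = np.random.default_rng(zlib.crc32(token.encode()))
    return g.standard_normal(DIM)

def score(code: str, w: np.ndarray, b: float) -> float:
    # Mean-pool the token vectors, then apply a logistic "vulnerability" score.
    x = np.mean([embed(t) for t in tokenize(code)], axis=0)
    return float(1.0 / (1.0 + np.exp(-(x @ w + b))))

rng = np.random.default_rng(0)
w = rng.standard_normal(DIM)  # untrained weights, for illustration only
s = score("eval(user_input)", w, 0.0)
assert 0.0 <= s <= 1.0  # a probability-like score
```

A real system would replace `embed` with a trained Word2Vec/CodeBERT model and `score` with a trained sequence classifier; the mean-pooling step is the simplification that makes the sketch self-contained.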
Related papers
- Automated Vulnerability Detection in Source Code Using Deep Representation Learning [0.0]
We present a convolutional neural network model that can successfully identify bugs in C code. We trained our model using two complementary datasets. We also demonstrate on a custom Linux kernel dataset that we are able to find real vulnerabilities in complex code with a low false-positive rate.
arXiv Detail & Related papers (2026-02-26T15:35:17Z) - Revisiting Nearest Neighbor for Tabular Data: A Deep Tabular Baseline Two Decades Later [76.66498833720411]
We introduce a differentiable version of $K$-nearest neighbors (KNN), originally designed to learn a linear projection that captures semantic similarities between instances. Surprisingly, our implementation of NCA using SGD and without dimensionality reduction already achieves decent performance on tabular data. We conclude our paper by analyzing the factors behind these improvements, including loss functions, prediction strategies, and deep architectures.
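The differentiable-KNN idea can be sketched with a soft nearest-neighbor rule: a softmax over negative squared distances replaces the hard argmin, making the classifier amenable to gradient descent. This toy version, on made-up data, omits the learned linear projection that NCA would add.

```python
import numpy as np

def soft_knn_predict(X, y, q, temperature=1.0):
    # p(i) ∝ exp(-||q - x_i||^2 / T); class score = total weight per class.
    d2 = np.sum((X - q) ** 2, axis=1)
    w = np.exp(-d2 / temperature)
    w /= w.sum()
    classes = np.unique(y)
    scores = np.array([w[y == c].sum() for c in classes])
    return classes[np.argmax(scores)]

# Two well-separated toy clusters, not data from the paper.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
print(soft_knn_predict(X, y, np.array([0.2, 0.1])))  # → 0
print(soft_knn_predict(X, y, np.array([4.8, 5.2])))  # → 1
```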
arXiv Detail & Related papers (2024-07-03T16:38:57Z) - Bi-Directional Transformers vs. word2vec: Discovering Vulnerabilities in Lifted Compiled Code [4.956066467858057]
This research explores vulnerability detection using natural language processing (NLP) embedding techniques with word2vec, BERT, and RoBERTa.
Long short-term memory (LSTM) neural networks were trained on embeddings from encoders created using approximately 48k LLVM functions from the Juliet dataset.
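For intuition, a single LSTM cell consuming a sequence of embedding vectors can be written in a few lines of NumPy. The weights below are random and the dimensions made up; the models above are trained (and, in the main paper, bidirectional) stacks, which this does not attempt to reproduce.

```python
import numpy as np

def lstm_forward(xs, Wx, Wh, b):
    """xs: (T, D) sequence; Wx: (D, 4H); Wh: (H, 4H); b: (4H,). Returns final h."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in xs:
        z = x @ Wx + h @ Wh + b          # all four gate pre-activations at once
        i, f, o, g = np.split(z, 4)      # input, forget, output, candidate
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h

rng = np.random.default_rng(1)
D, H, T = 8, 4, 5                        # toy sizes, not from the paper
h = lstm_forward(rng.standard_normal((T, D)),
                 0.1 * rng.standard_normal((D, 4 * H)),
                 0.1 * rng.standard_normal((H, 4 * H)),
                 np.zeros(4 * H))
assert h.shape == (H,) and np.all(np.abs(h) < 1.0)  # tanh-bounded state
```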
arXiv Detail & Related papers (2024-05-31T03:57:19Z) - Feature Engineering-Based Detection of Buffer Overflow Vulnerability in Source Code Using Neural Networks [2.9266864570485827]
We present a vulnerability detection method based on neural network models that learn features extracted from source code.
We maintain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText.
We have proposed a neural network model that can overcome issues associated with traditional neural networks.
arXiv Detail & Related papers (2023-06-01T01:44:49Z) - Compacting Binary Neural Networks by Sparse Kernel Selection [58.84313343190488]
This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed.
We develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords.
Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
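The general mechanism behind such binarized networks, a sign forward pass paired with a straight-through estimator (STE) on the backward pass, can be sketched as follows. The paper's PSTE additionally handles permutation-based codeword selection, which is not reproduced here; the values are illustrative.

```python
import numpy as np

def binarize(w):
    # Forward pass: sign(w) scaled by the mean magnitude, a common BNN choice.
    return np.sign(w) * np.abs(w).mean()

def ste_grad(grad_out, w, clip=1.0):
    # Backward pass: pass the gradient straight through the non-differentiable
    # sign, zeroing it where |w| exceeds the clip threshold.
    return grad_out * (np.abs(w) <= clip)

w = np.array([0.3, -0.7, 1.5, -0.2])
wb = binarize(w)
assert set(np.sign(wb)) <= {-1.0, 1.0}   # only two distinct magnitudes remain
g = ste_grad(np.ones_like(w), w)
print(g)  # gradient blocked for the weight with |w| = 1.5 > 1
```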
arXiv Detail & Related papers (2023-03-25T13:53:02Z) - Factorizers for Distributed Sparse Block Codes [45.29870215671697]
We propose a fast and highly accurate method for factorizing distributed sparse block codes (SBCs).
Our iterative factorizer introduces a threshold-based nonlinear activation, conditional random sampling, and an $\ell_\infty$-based similarity metric.
We demonstrate the feasibility of our method on four deep CNN architectures over CIFAR-100, ImageNet-1K, and RAVEN datasets.
arXiv Detail & Related papers (2023-03-24T12:31:48Z) - Automated Vulnerability Detection in Source Code Using Quantum Natural Language Processing [0.0]
Large volumes of C and C++ open-source code are now available, enabling a large-scale classical and quantum machine-learning system for function-level vulnerability identification.
We created an efficient and scalable vulnerability detection method based on a deep neural network model, Long Short-Term Memory (LSTM), and a quantum machine learning model, Quantum Long Short-Term Memory (QLSTM).
The QLSTM with semantic and syntactic features detects vulnerabilities with significantly higher accuracy and runs faster than its classical counterpart.
arXiv Detail & Related papers (2023-03-13T23:27:42Z) - A comparative study of neural network techniques for automatic software vulnerability detection [9.443081849443184]
The most commonly used method for detecting software vulnerabilities is static analysis.
Some researchers have proposed using neural networks, which can extract features automatically, to improve the intelligence of detection.
We have conducted extensive experiments to test the performance of the two most typical neural networks.
arXiv Detail & Related papers (2021-04-29T01:47:30Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
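A weighted finite automaton assigns a word the value $\alpha^\top A_{x_1} \cdots A_{x_n} \omega$, which is exactly the linear recurrence the paper relates to RNNs and tensor networks. As a hand-constructed toy example (not from the paper), the following WFA counts occurrences of the symbol `a`:

```python
import numpy as np

alpha = np.array([1.0, 0.0])   # initial weight vector
omega = np.array([0.0, 1.0])   # final weight vector
A = {
    # On 'a': upper-triangular matrix adds 1 to the running count.
    "a": np.array([[1.0, 1.0], [0.0, 1.0]]),
    # On 'b': identity leaves the count unchanged.
    "b": np.eye(2),
}

def wfa_value(word: str) -> float:
    v = alpha.copy()
    for s in word:
        v = v @ A[s]               # one matrix product per symbol
    return float(v @ omega)

print(wfa_value("abab"))   # → 2.0
print(wfa_value("bbb"))    # → 0.0
```

Reading the recurrence `v = v @ A[s]` as a state update makes the correspondence with a linear RNN explicit: the state vector plays the role of the hidden state, and each symbol selects its transition matrix.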
arXiv Detail & Related papers (2020-10-19T15:28:00Z) - Learning to map source code to software vulnerability using code-as-a-graph [67.62847721118142]
We explore the applicability of Graph Neural Networks in learning the nuances of source code from a security perspective.
We show that a code-as-graph encoding is more meaningful for vulnerability detection than existing code-as-photo and linear sequence encoding approaches.
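A minimal stdlib illustration of a code-as-graph encoding: parse Python source with the `ast` module and emit (parent, child) node-type edges, the kind of structure a GNN would consume instead of a flat token sequence. This is illustrative, not the paper's exact graph construction.

```python
import ast

def ast_edges(source: str) -> list[tuple[str, str]]:
    # Walk the AST and record an edge for every parent→child relation,
    # labeled by node type.
    tree = ast.parse(source)
    edges = []
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges.append((type(parent).__name__, type(child).__name__))
    return edges

edges = ast_edges("def f(x):\n    return x + 1\n")
print(edges[0])  # → ('Module', 'FunctionDef')
assert ("BinOp", "Name") in edges  # x participates in the addition
```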
arXiv Detail & Related papers (2020-06-15T16:05:27Z) - Improved Code Summarization via a Graph Neural Network [96.03715569092523]
In general, source code summarization techniques take source code as input and output a natural language description.
We present an approach that uses a graph-based neural architecture that better matches the default structure of the AST to generate these summaries.
arXiv Detail & Related papers (2020-04-06T17:36:42Z)