Smart Cuts: Enhance Active Learning for Vulnerability Detection by Pruning Bad Seeds
- URL: http://arxiv.org/abs/2506.20444v1
- Date: Wed, 25 Jun 2025 13:50:21 GMT
- Title: Smart Cuts: Enhance Active Learning for Vulnerability Detection by Pruning Bad Seeds
- Authors: Xiang Lan, Tim Menzies, Bowen Xu
- Abstract summary: Vulnerability detection is crucial for identifying security weaknesses in software systems. This paper proposes a novel dataset maps-empowered approach that identifies and mitigates hard-to-learn outliers. Our approach can categorize training examples based on learning difficulty and integrate this information into an active learning framework.
- Score: 15.490968013867562
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vulnerability detection is crucial for identifying security weaknesses in software systems. However, the effectiveness of machine learning models in this domain is often hindered by low-quality training datasets, which contain noisy, mislabeled, or imbalanced samples. This paper proposes a novel dataset maps-empowered approach that systematically identifies and mitigates hard-to-learn outliers, referred to as "bad seeds", to improve model training efficiency. Our approach categorizes training examples by learning difficulty and integrates this information into an active learning framework. Unlike traditional methods that focus on uncertainty-based sampling, our strategy prioritizes dataset quality by filtering out performance-harmful samples while emphasizing informative ones. Our experimental results show that our approach improves F1 score over random selection by 45.36% (DeepGini) and 45.91% (K-Means) and outperforms standard active learning by 61.46% (DeepGini) and 32.65% (K-Means) for CodeBERT on the Big-Vul dataset, demonstrating the effectiveness of integrating dataset maps for optimizing sample selection in vulnerability detection. Furthermore, our approach enhances model robustness, improves sample selection by filtering bad seeds, and stabilizes active learning performance across iterations. By analyzing the characteristics of these outliers, we provide insights for future improvements in dataset construction, making vulnerability detection more reliable and cost-effective.
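For a concrete picture of the dataset-maps idea the abstract describes, here is a minimal Python sketch of the underlying training-dynamics statistics (mean confidence and variability across epochs) and the pruning of hard-to-learn "bad seeds". The function names, the use of per-epoch gold-label probabilities, and the percentile cutoffs are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dataset_map_stats(gold_probs: np.ndarray):
    """Compute dataset-map statistics from training dynamics.

    gold_probs: array of shape (n_epochs, n_examples) holding the
    probability the model assigned to each example's gold label
    at the end of every training epoch.
    """
    confidence = gold_probs.mean(axis=0)   # mean gold-label probability
    variability = gold_probs.std(axis=0)   # spread across epochs
    return confidence, variability

def prune_bad_seeds(confidence, variability, conf_cut=0.25, var_cut=0.25):
    """Flag hard-to-learn outliers ("bad seeds"): examples the model is
    consistently unsure about (low confidence AND low variability).
    Percentile cutoffs are illustrative, not the paper's values."""
    low_conf = confidence <= np.quantile(confidence, conf_cut)
    low_var = variability <= np.quantile(variability, var_cut)
    return low_conf & low_var  # boolean mask over training examples

# Usage sketch: drop flagged examples before each active-learning round,
# then apply the acquisition function (e.g. DeepGini or K-Means) to the rest.
```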
Related papers
- Z-Error Loss for Training Neural Networks [0.0]
Outliers introduce significant training challenges in neural networks by propagating erroneous gradients, which can degrade model performance and generalization. We propose the Z-Error Loss, a statistically principled approach that minimizes outlier influence during training by masking the contribution of data points identified as out-of-distribution within each batch (a masking sketch follows this entry).
arXiv Detail & Related papers (2025-06-02T18:35:30Z)
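A minimal sketch of the batch-wise masking this entry describes, assuming the per-sample loss is used as the out-of-distribution signal and a z-score cutoff of 2.5; both are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def z_error_loss(logits, targets, z_thresh=2.5):
    """Masked cross-entropy: drop samples whose per-sample loss is a
    batch-level outlier (|z-score| above z_thresh; threshold illustrative)."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    z = (per_sample - per_sample.mean()) / (per_sample.std() + 1e-8)
    mask = (z.abs() < z_thresh).float()   # 1 = keep, 0 = masked outlier
    return (per_sample * mask).sum() / mask.sum().clamp(min=1.0)
```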
- Contrastive and Variational Approaches in Self-Supervised Learning for Complex Data Mining [36.772769830368475]
This study analyzes the role of self-supervised learning methods in complex data mining through systematic experiments. Results show that the model adapts well to different datasets, effectively extracts high-quality features from unlabeled data, and improves classification accuracy.
arXiv Detail & Related papers (2025-04-05T02:55:44Z)
- Improving the Efficiency of Self-Supervised Adversarial Training through Latent Clustering-Based Selection [2.7554677967598047]
Adversarially robust learning is widely recognized to demand significantly more training examples. Recent works propose self-supervised adversarial training (SSAT) with external or synthetically generated unlabeled data to enhance model robustness. We propose novel methods to strategically select a small subset of unlabeled data essential for SSAT and robustness improvement (a selection sketch follows this entry).
arXiv Detail & Related papers (2025-01-15T15:47:49Z)
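A hedged sketch of latent clustering-based subset selection: k-means over unlabeled embeddings, keeping the points nearest each centroid. The cluster count, per-cluster budget, and nearest-to-centroid rule are assumptions consistent with the summary above, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_by_latent_clusters(embeddings, n_clusters=50, per_cluster=20):
    """Pick a small, diverse subset of unlabeled data: k-means in latent
    space, then the per_cluster points closest to each centroid."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    selected = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[idx] - km.cluster_centers_[c], axis=1)
        selected.extend(idx[np.argsort(dists)[:per_cluster]].tolist())
    return selected  # indices into the unlabeled pool
```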
- Incremental Self-training for Semi-supervised Learning [56.57057576885672]
Incremental self-training (IST) is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbone, effectively improving the recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z)
- DRoP: Distributionally Robust Data Pruning [11.930434318557156]
We conduct the first systematic study of the impact of data pruning on the classification bias of trained models. We propose DRoP, a distributionally robust approach to pruning, and empirically demonstrate its performance on standard computer vision benchmarks (a pruning sketch follows this entry).
arXiv Detail & Related papers (2024-04-08T14:55:35Z)
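A speculative sketch of distributionally robust pruning in the spirit of DRoP: per-class retention quotas that grow with class difficulty, so pruning does not amplify bias against hard classes. The error-proportional quota rule and the difficulty signal are assumptions, not the paper's objective.

```python
import numpy as np

def robust_prune(labels, scores, class_err, keep_frac=0.5):
    """Prune a dataset to roughly keep_frac of its size while protecting
    hard classes.

    labels:    (n,) class label per example
    scores:    (n,) per-example utility score (higher = keep first)
    class_err: dict class -> validation error, used as a difficulty signal
    """
    classes = np.unique(labels)
    total_keep = int(keep_frac * len(labels))
    # Per-class budgets proportional to class error: harder classes keep more.
    weights = np.array([class_err[c] for c in classes], dtype=float)
    budgets = np.maximum(1, weights / weights.sum() * total_keep).astype(int)
    keep = []
    for c, b in zip(classes, budgets):
        idx = np.where(labels == c)[0]
        keep.extend(idx[np.argsort(-scores[idx])[:b]].tolist())
    return keep  # indices of retained examples
```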
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training (a weighting sketch follows this entry).
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
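A hedged sketch of SoftMatch-style sample weighting: instead of hard confidence thresholding, each pseudo-label receives a soft, truncated-Gaussian weight derived from an EMA estimate of the confidence distribution. The momentum value and initialization are illustrative.

```python
import torch

class SoftWeight:
    """Truncated-Gaussian confidence weighting for pseudo-labels (in the
    spirit of SoftMatch). The mean and variance of the confidence
    distribution are tracked with an EMA; momentum is illustrative."""
    def __init__(self, momentum=0.999):
        self.m = momentum
        self.mu = torch.tensor(0.5)
        self.var = torch.tensor(1.0)

    def __call__(self, probs):              # probs: (batch, n_classes)
        conf = probs.max(dim=1).values      # max predicted probability
        self.mu = self.m * self.mu + (1 - self.m) * conf.mean()
        self.var = self.m * self.var + (1 - self.m) * conf.var()
        w = torch.exp(-(conf - self.mu) ** 2 / (2 * self.var + 1e-8))
        # Full weight above the mean; Gaussian falloff below it.
        return torch.where(conf >= self.mu, torch.ones_like(w), w)
```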
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for annotation when an unlabeled sample is believed to incur high loss.
Our approach outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks (a scoring sketch follows this entry).
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
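A hedged sketch of temporal output discrepancy as an acquisition score: the distance between a model's outputs at two training snapshots acts as a loss proxy, and the highest-scoring unlabeled samples are queried. The L2 distance and top-k query rule are assumptions consistent with the summary above.

```python
import torch

@torch.no_grad()
def temporal_output_discrepancy(model_t, model_t_plus, unlabeled_loader, k=100):
    """Score unlabeled samples by the L2 distance between the outputs of
    two training snapshots; return indices of the k highest-scoring ones."""
    scores = []
    for x in unlabeled_loader:               # assumes the loader yields input batches
        d = (model_t_plus(x) - model_t(x)).pow(2).sum(dim=1).sqrt()
        scores.append(d)
    scores = torch.cat(scores)
    return torch.topk(scores, k).indices     # query these for annotation
```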
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector.
Our method can pseudo-label highly confident predictions, suppressing potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
- Auto-weighted Robust Federated Learning with Corrupted Data Sources [7.475348174281237]
Federated learning provides a communication-efficient and privacy-preserving training process.
Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruptions.
We propose Auto-weighted Robust Federated Learning (arfl) to provide robustness against corrupted data sources (a client-weighting sketch follows this entry).
arXiv Detail & Related papers (2021-01-14T21:54:55Z)
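A speculative sketch of auto-weighted aggregation in the spirit of arfl: clients whose empirical loss is unusually high, a possible sign of corruption, receive reduced aggregation weight. The hinge-style rule and the default threshold are illustrative, not the paper's exact objective.

```python
import numpy as np

def auto_weights(client_losses, lam=None):
    """Down-weight high-loss (possibly corrupted) clients.
    lam defaults to the median loss plus one std; illustrative choice."""
    losses = np.asarray(client_losses, dtype=float)
    if lam is None:
        lam = np.median(losses) + losses.std()
    w = np.maximum(0.0, lam - losses)        # hinge: zero weight above lam
    return w / w.sum() if w.sum() > 0 else np.full_like(losses, 1 / len(losses))

def aggregate(client_params, weights):
    """Weighted average of client parameter vectors (FedAvg-style)."""
    stacked = np.stack(client_params)        # (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)
```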
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data (an attack sketch follows this entry).
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
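A hedged sketch of the instance-level attack idea: PGD steps that maximize a contrastive (NT-Xent-style) loss between two augmented views, so the perturbation confuses instance identity rather than a class label. The simplified loss, step sizes, and epsilon are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, tau=0.5):
    """Simplified NT-Xent: row i of z1 should match row i of z2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau               # (batch, batch) similarities
    return F.cross_entropy(logits, torch.arange(len(z1), device=z1.device))

def instance_attack(encoder, x1, x2, eps=8/255, alpha=2/255, steps=5):
    """PGD on the first view to maximize instance-level contrastive loss."""
    delta = torch.zeros_like(x1, requires_grad=True)
    for _ in range(steps):
        loss = contrastive_loss(encoder(x1 + delta), encoder(x2))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)              # stay within the eps-ball
        delta.grad.zero_()
    return (x1 + delta).detach()
```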
This list is automatically generated from the titles and abstracts of the papers on this site.