Secret Breach Prevention in Software Issue Reports
- URL: http://arxiv.org/abs/2410.23657v3
- Date: Thu, 06 Nov 2025 02:17:44 GMT
- Title: Secret Breach Prevention in Software Issue Reports
- Authors: Sadif Ahmed, Md Nafiu Rahman, Zahin Wahab, Gias Uddin, Rifat Shahriyar
- Abstract summary: Accidental exposure of sensitive information is a growing security threat. This study fills that gap through a large-scale analysis and a practical detection pipeline for exposed secrets in GitHub issues. We build a benchmark of 54,148 instances from public GitHub issues, including 5,881 manually verified true secrets.
- Score: 4.177725820146491
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the digital era, accidental exposure of sensitive information such as API keys, tokens, and credentials is a growing security threat. While most prior work focuses on detecting secrets in source code, leakage in software issue reports remains largely unexplored. This study fills that gap through a large-scale analysis and a practical detection pipeline for exposed secrets in GitHub issues. Our pipeline combines regular expression-based extraction with large language model (LLM) based contextual classification to detect real secrets and reduce false positives. We build a benchmark of 54,148 instances from public GitHub issues, including 5,881 manually verified true secrets. Using this dataset, we evaluate entropy-based baselines and keyword heuristics used by prior secret detection tools, classical machine learning, deep learning, and LLM-based methods. Regex- and entropy-based approaches achieve high recall but poor precision, while smaller models such as RoBERTa and CodeBERT greatly improve performance (F1 = 92.70%). Proprietary models like GPT-4o perform moderately in few-shot settings (F1 = 80.13%), and fine-tuned larger open-source LLMs such as Qwen and LLaMA reach up to 94.49% F1. Finally, we also validate our approach on 178 real-world GitHub repositories, achieving an F1-score of 81.6%, which demonstrates our approach's strong ability to generalize to in-the-wild scenarios.
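The entropy-based baselines and regex extraction the abstract mentions can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual configuration: the pattern and the 4.0 bits-per-character threshold are placeholders for whatever rules and cutoffs the pipeline really uses.

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical character distribution."""
    if not s:
        return 0.0
    n = len(s)
    counts = {ch: s.count(ch) for ch in set(s)}
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical pattern: long runs of base64-like characters that often
# make up API keys and tokens (not the paper's actual regex set).
CANDIDATE_RE = re.compile(r"[A-Za-z0-9_\-+/=]{20,}")

def candidate_secrets(text: str, threshold: float = 4.0) -> list[str]:
    """Flag regex matches whose entropy exceeds the threshold.

    High recall, poor precision: random-looking non-secrets such as
    commit hashes get flagged too, which is why the pipeline adds an
    LLM-based contextual classification stage on top.
    """
    return [m for m in CANDIDATE_RE.findall(text)
            if shannon_entropy(m) >= threshold]
```

As the abstract notes, filters like this achieve high recall but poor precision on their own, motivating the contextual classification stage.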
Related papers
- Taint-Based Code Slicing for LLMs-based Malicious NPM Package Detection [2.398400814870029]
This paper introduces a novel framework that leverages code slicing techniques for an LLM-based malicious package detection task. We propose a specialized taint-based slicing technique for npm packages, augmented by a backtracking mechanism. An evaluation on a dataset of more than 5000 malicious and benign npm packages demonstrates that our approach isolates security-relevant code, reducing input volume by over 99%.
arXiv Detail & Related papers (2025-12-13T12:56:03Z)
- Evaluating Large Language Models in detecting Secrets in Android Apps [11.963737068221436]
Mobile apps often embed authentication secrets, such as API keys, tokens, and client IDs, to integrate with cloud services. Developers often hardcode these credentials into Android apps, exposing them to extraction through reverse engineering. We propose SecretLoc, an LLM-based approach for detecting hardcoded secrets in Android apps.
arXiv Detail & Related papers (2025-10-21T12:59:39Z)
- Decompiling Smart Contracts with a Large Language Model [51.49197239479266]
Of the 78,047,845 smart contracts deployed on Ethereum (per Etherscan, as of May 26, 2025), a mere 767,520 (~1%) are open source. This opacity necessitates the automated semantic analysis of on-chain smart contract bytecode. We introduce a pioneering decompilation pipeline that transforms bytecode into human-readable and semantically faithful Solidity code.
arXiv Detail & Related papers (2025-06-24T13:42:59Z)
- Detecting Hard-Coded Credentials in Software Repositories via LLMs [0.0]
Software developers frequently hard-code credentials such as passwords, generic secrets, private keys, and generic tokens in software repositories. These credentials create attack surfaces exploitable by a potential adversary to conduct malicious exploits such as backdoor attacks. Recent detection efforts utilize embedding models to vectorize textual credentials before passing them to classifiers for predictions. Our model outperforms the current state-of-the-art by 13% in F1 measure on the benchmark dataset.
arXiv Detail & Related papers (2025-06-16T04:33:48Z)
- Is Compression Really Linear with Code Intelligence? [60.123628177110206]
Format Annealing is a lightweight, transparent training methodology designed to assess the intrinsic capabilities of pre-trained models equitably. Our empirical results reveal a fundamental logarithmic relationship between measured code intelligence and bits-per-character (BPC). Our work provides a more nuanced understanding of compression's role in developing code intelligence and contributes a robust evaluation framework in the code domain.
arXiv Detail & Related papers (2025-05-16T16:59:14Z)
- Secret Breach Detection in Source Code with Large Language Models [2.5484785866796833]
Leaking sensitive information in source code remains a persistent security threat. This work aims to enhance secret detection in source code using large language models (LLMs). We evaluate the feasibility of using fine-tuned, smaller models for local deployment.
arXiv Detail & Related papers (2025-04-26T03:33:14Z)
- Case Study: Fine-tuning Small Language Models for Accurate and Private CWE Detection in Python Code [0.0]
Large Language Models (LLMs) have demonstrated significant capabilities in understanding and analyzing code for security vulnerabilities. This work explores the potential of Small Language Models (SLMs) as a viable alternative for accurate, on-premise vulnerability detection.
arXiv Detail & Related papers (2025-04-23T10:05:27Z)
- Trace Gadgets: Minimizing Code Context for Machine Learning-Based Vulnerability Prediction [8.056137513320065]
This work introduces Trace Gadgets, a novel code representation that minimizes code context by removing non-related code.
As input for ML models, Trace Gadgets provide a minimal but complete context, thereby improving the detection performance.
Our results show that state-of-the-art machine learning models perform best when using Trace Gadgets compared to previous code representations.
arXiv Detail & Related papers (2025-04-18T13:13:39Z)
- GaussMark: A Practical Approach for Structural Watermarking of Language Models [61.84270985214254]
GaussMark is a simple, efficient, and relatively robust scheme for watermarking large language models.
We show that GaussMark is reliable, efficient, and relatively robust to corruptions such as insertions, deletions, substitutions, and roundtrip translations.
arXiv Detail & Related papers (2025-01-17T22:30:08Z)
- Automating the Detection of Code Vulnerabilities by Analyzing GitHub Issues [6.6681265451722895]
We introduce a new dataset specifically designed for classifying GitHub issues relevant to vulnerability detection.
Results demonstrate the potential of this approach for real-world application in early vulnerability detection.
This work has the potential to enhance the security of open-source software ecosystems.
arXiv Detail & Related papers (2025-01-09T14:13:39Z)
- MMLU-CF: A Contamination-free Multi-task Language Understanding Benchmark [57.999567012489706]
We propose a contamination-free and more challenging benchmark called MMLU-CF. This benchmark reassesses LLMs' understanding of world knowledge by averting both unintentional and malicious data leakage. Our evaluation of mainstream LLMs reveals that the powerful GPT-4o achieves merely a 5-shot score of 73.4% and a 0-shot score of 71.9% on the test set.
arXiv Detail & Related papers (2024-12-19T18:58:04Z)
- A Combined Feature Embedding Tools for Multi-Class Software Defect and Identification [2.2020053359163305]
We present CodeGraphNet, an experimental method that combines GraphCodeBERT and Graph Convolutional Network approaches.
This method captures intricate relationships between features, enabling more precise identification and separation of vulnerabilities.
The DeepTree model, which is a hybrid of a Decision Tree and a Neural Network, outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2024-11-26T17:33:02Z)
- Exploring Automatic Cryptographic API Misuse Detection in the Era of LLMs [60.32717556756674]
This paper introduces a systematic evaluation framework to assess Large Language Models in detecting cryptographic misuses.
Our in-depth analysis of 11,940 LLM-generated reports highlights that the inherent instabilities in LLMs can lead to over half of the reports being false positives.
The optimized approach achieves a remarkable detection rate of nearly 90%, surpassing traditional methods and uncovering previously unknown misuses in established benchmarks.
arXiv Detail & Related papers (2024-07-23T15:31:26Z)
- Towards Efficient Verification of Constant-Time Cryptographic Implementations [5.433710892250037]
Constant-time programming discipline is an effective software-based countermeasure against timing side-channel attacks.
We put forward practical verification approaches based on a novel synergy of taint analysis and safety verification of self-composed programs.
Our approach is implemented as a cross-platform and fully automated tool CT-Prover.
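The discipline this entry describes can be illustrated with a generic sketch (not taken from the paper or from CT-Prover): a secret comparison that short-circuits on the first mismatching byte leaks the mismatch position through its running time, while a constant-time version inspects every byte regardless.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns as soon as a byte differs, so the
    # comparison time depends on how long the matching prefix is.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Accumulates differences over the full length; the running
    # time does not depend on where the first mismatch occurs.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

Python's standard library offers the same guarantee via `hmac.compare_digest(a, b)`; tools like the one described here aim to verify such properties automatically rather than rely on manual discipline.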
arXiv Detail & Related papers (2024-02-21T03:39:14Z)
- JAMDEC: Unsupervised Authorship Obfuscation using Constrained Decoding over Small Language Models [53.83273575102087]
We propose an unsupervised inference-time approach to authorship obfuscation.
We introduce JAMDEC, a user-controlled, inference-time algorithm for authorship obfuscation.
Our approach builds on small language models such as GPT2-XL to help avoid disclosing the original content to proprietary LLMs' APIs.
arXiv Detail & Related papers (2024-02-13T19:54:29Z)
- Zero-Shot Detection of Machine-Generated Codes [83.0342513054389]
This work proposes a training-free approach for the detection of LLMs-generated codes.
We find that existing training-based or zero-shot text detectors are ineffective in detecting code.
Our method exhibits robustness against revision attacks and generalizes well to Java codes.
arXiv Detail & Related papers (2023-10-08T10:08:21Z)
- The FormAI Dataset: Generative AI in Software Security Through the Lens of Formal Verification [3.2925005312612323]
This paper presents the FormAI dataset, a large collection of 112,000 AI-generated C programs with vulnerability classification.
Every program is labeled with the vulnerabilities found within the source code, indicating the type, line number, and vulnerable function name.
We make the source code available for the 112,000 programs, accompanied by a separate file containing the vulnerabilities detected in each program.
arXiv Detail & Related papers (2023-07-05T10:39:58Z)
- A Comparative Study of Software Secrets Reporting by Secret Detection Tools [5.9347272469695245]
According to GitGuardian's monitoring of public GitHub repositories, secret leaks continued to accelerate in 2022, growing 67% compared to 2021.
We present an evaluation of five open-source and four proprietary tools against a benchmark dataset.
The top three tools by precision are GitHub Secret Scanner (75%), Gitleaks (46%), and Commercial X (25%); by recall, Gitleaks (88%), SpectralOps (67%), and TruffleHog (52%).
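Assuming the standard definition F1 = 2PR/(P+R), the per-tool precision and recall figures above can be combined into a single score; for example, Gitleaks' reported 46% precision and 88% recall give an F1 of about 60% (our computation, not a number reported by the paper):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Gitleaks: precision 46%, recall 88% (figures quoted in the summary above)
print(round(f1(0.46, 0.88), 3))  # 0.604
```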
arXiv Detail & Related papers (2023-07-03T02:32:09Z)
- CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models [58.27254444280376]
Large language models (LLMs) for automatic code generation have achieved breakthroughs in several programming tasks.
Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure.
arXiv Detail & Related papers (2023-02-08T11:54:07Z)
- VELVET: a noVel Ensemble Learning approach to automatically locate VulnErable sTatements [62.93814803258067]
This paper presents VELVET, a novel ensemble learning approach to locate vulnerable statements in source code.
Our model combines graph-based and sequence-based neural networks to successfully capture the local and global context of a program graph.
VELVET achieves 99.6% and 43.6% top-1 accuracy over synthetic data and real-world data, respectively.
arXiv Detail & Related papers (2021-12-20T22:45:27Z)
- Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because code naturally admits graph structures with parsing, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
arXiv Detail & Related papers (2021-09-07T21:24:36Z)
- Security Vulnerability Detection Using Deep Learning Natural Language Processing [1.4591078795663772]
We model software vulnerability detection as a natural language processing (NLP) problem with source code treated as texts.
For training and testing, we have built a dataset of over 100,000 files in the C programming language with 123 types of vulnerabilities.
Experiments generate the best performance of over 93% accuracy in detecting security vulnerabilities.
arXiv Detail & Related papers (2021-05-06T01:28:21Z)
- Robust and Transferable Anomaly Detection in Log Data using Pre-Trained Language Models [59.04636530383049]
Anomalies or failures in large computer systems, such as the cloud, have an impact on a large number of users.
We propose a framework for anomaly detection in log data, a major source of system information for troubleshooting.
arXiv Detail & Related papers (2021-02-23T09:17:05Z)
- A ground-truth dataset and classification model for detecting bots in GitHub issue and PR comments [70.1864008701113]
Bots are used in GitHub repositories to automate repetitive activities that are part of the distributed software development process.
This paper proposes a ground-truth dataset, based on a manual analysis with high interrater agreement, of pull request and issue comments in 5,000 distinct GitHub accounts.
We propose an automated classification model to detect bots, taking as main features the number of empty and non-empty comments of each account, the number of comment patterns, and the inequality between comments within comment patterns.
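The "inequality between comments within comment patterns" feature suggests a dispersion statistic over per-pattern comment counts; a Gini coefficient is one common choice for such a measure (an illustrative assumption — the summary does not name the exact statistic used):

```python
def gini(values: list[float]) -> float:
    """Gini coefficient: 0 for perfect equality, approaching 1 for
    maximal inequality among non-negative values."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Equivalent to the mean-absolute-difference definition.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# An account whose comments are spread evenly across its patterns scores 0
# (human-like variety); one dominated by a single template scores higher.
```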
arXiv Detail & Related papers (2020-10-07T09:30:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.