Using AI/ML to Find and Remediate Enterprise Secrets in Code & Document
Sharing Platforms
- URL: http://arxiv.org/abs/2401.01754v1
- Date: Wed, 3 Jan 2024 14:15:25 GMT
- Title: Using AI/ML to Find and Remediate Enterprise Secrets in Code & Document
Sharing Platforms
- Authors: Gregor Kerr, David Algorry, Senad Ibraimoski, Peter Maciver, Sean
Moran
- Abstract summary: We introduce a new challenge to the software development community: leveraging AI to accurately detect and flag secrets in code and on popular document sharing platforms, and automatically remediating the detections.
We introduce two baseline AI models with good detection performance and propose an automatic mechanism for remediating secrets found in code.
- Score: 2.9248916859490173
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce a new challenge to the software development community: 1)
leveraging AI to accurately detect and flag secrets in code and on popular
document sharing platforms that are frequently used by developers, such as
Confluence, and 2) automatically remediating the detections (e.g. by suggesting
password vault functionality). This is a challenging and mostly unaddressed
task. Existing methods leverage heuristics and regular expressions, which can
be very noisy and therefore increase toil on developers. The next step,
modifying the code itself to automatically remediate a detection, is a complex
task. We introduce two baseline AI models that have good detection performance
and propose an automatic mechanism for remediating secrets found in code,
opening up the study of this task to the wider community.
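To make the baseline concrete, below is a minimal sketch of the regex-and-entropy heuristics the abstract describes as noisy, with the vault-style remediation it proposes shown as a comment. The patterns, the entropy threshold, and the `vault.get_secret` call are illustrative assumptions, not the paper's models or tooling.

```python
import math
import re

# Illustrative patterns only; production scanners ship far larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "keyword_assignment": re.compile(
        r"(?i)(password|secret|api_key|token)\s*[:=]\s*['\"]([^'\"]{8,})['\"]"
    ),
}

def shannon_entropy(s: str) -> float:
    """Bits per character; high-entropy strings are more likely to be secrets."""
    if not s:
        return 0.0
    freqs = (s.count(c) / len(s) for c in set(s))
    return -sum(p * math.log2(p) for p in freqs)

def scan_line(line: str, entropy_threshold: float = 3.5) -> list[tuple[str, str]]:
    """Flag candidate secrets. The entropy check trims some false positives,
    but heuristics like these remain noisy, which is the toil the paper targets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(line):
            candidate = match.group(match.lastindex or 0)
            if shannon_entropy(candidate) >= entropy_threshold:
                findings.append((name, candidate))
    return findings

# Remediation in the spirit of the paper's vault suggestion (hypothetical API):
#   before: db_password = "q7GpX2vLm9ZrT4kW"
#   after:  db_password = vault.get_secret("prod/db/password")

print(scan_line('db_password = "q7GpX2vLm9ZrT4kW"'))  # flagged: high entropy
print(scan_line('password = "aaaaaaaaaaaa"'))         # ignored: low entropy
```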
Related papers
- No Man is an Island: Towards Fully Automatic Programming by Code Search, Code Generation and Program Repair [9.562123938545522]
The proposed framework can integrate various code search, generation, and repair tools, combining these three research areas for the first time.
We conduct preliminary experiments to demonstrate the potential of our framework, e.g. helping CodeLlama solve 267 programming problems with an improvement of 62.53%.
arXiv Detail & Related papers (2024-09-05T06:24:29Z)
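A rough sketch of how the three areas could be looped together: retrieve examples, generate a candidate, and iterate repair until the tests pass. All four components below are hypothetical stubs standing in for a real search index, model, and repair tool, not the paper's actual system.

```python
from typing import Optional

# Hypothetical stubs for the three components the paper unifies; a real system
# would back these with a code-search index, an LLM, and a program-repair tool.
def search_examples(task: str) -> list[str]:
    return ["def add(a, b):\n    return a + b"]      # toy retrieved snippet

def generate_candidate(task: str, examples: list[str]) -> str:
    return "def solve(a, b):\n    return a - b"      # imagine an LLM call here

def repair(candidate: str) -> str:
    return candidate.replace("a - b", "a + b")       # imagine a repair tool here

def passes_tests(candidate: str) -> bool:
    scope: dict = {}
    exec(candidate, scope)                           # toy check; sandbox in practice
    return scope["solve"](2, 3) == 5

def pipeline(task: str, max_rounds: int = 3) -> Optional[str]:
    """Search for examples, generate a candidate, then iterate repair until the
    tests pass -- the loop that ties the three research areas together."""
    candidate = generate_candidate(task, search_examples(task))
    for _ in range(max_rounds):
        if passes_tests(candidate):
            return candidate
        candidate = repair(candidate)
    return None

print(pipeline("add two integers"))
```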
- Vulnerability Handling of AI-Generated Code -- Existing Solutions and Open Challenges [0.0]
We focus on approaches for vulnerability detection, localization, and repair in AI-generated code.
We highlight open challenges that must be addressed in order to establish a reliable and scalable vulnerability handling process of AI-generated code.
arXiv Detail & Related papers (2024-08-16T06:31:44Z)
- VersiCode: Towards Version-controllable Code Generation [58.82709231906735]
Large Language Models (LLMs) have made tremendous strides in code generation, but existing research fails to account for the dynamic nature of software development.
We propose two novel tasks aimed at bridging this gap: version-specific code completion (VSCC) and version-aware code migration (VACM).
We conduct an extensive evaluation on VersiCode, which reveals that version-controllable code generation is indeed a significant challenge.
arXiv Detail & Related papers (2024-06-11T16:15:06Z)
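To make the version sensitivity concrete, the toy below resolves the same intent to different completions depending on the target library version. It uses the real removal of `DataFrame.append` in pandas 2.0; the lookup-table completer itself is a hypothetical illustration of the VSCC task, not VersiCode's method.

```python
# The same intent needs different completions per library version: pandas
# removed DataFrame.append in 2.0 in favor of pd.concat.
PANDAS_COMPLETIONS = {
    "append_row": {
        (1, 0): "df = df.append(row, ignore_index=True)",
        (2, 0): "df = pd.concat([df, row.to_frame().T], ignore_index=True)",
    }
}

def complete(intent: str, version: tuple[int, int]) -> str:
    """Pick the newest snippet whose minimum supported version does not
    exceed the target version."""
    candidates = PANDAS_COMPLETIONS[intent]
    floor = max(v for v in candidates if v <= version)
    return candidates[floor]

print(complete("append_row", (1, 5)))  # legacy df.append form
print(complete("append_row", (2, 1)))  # pd.concat form required on 2.x
```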
- CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code [56.019447113206006]
Large Language Models (LLMs) have achieved remarkable progress in code generation.
CodeIP is a novel multi-bit watermarking technique that embeds additional information to preserve provenance details.
Experiments conducted on a real-world dataset across five programming languages demonstrate the effectiveness of CodeIP.
arXiv Detail & Related papers (2024-04-24T04:25:04Z)
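CodeIP itself is grammar-guided; the toy below only illustrates the underlying multi-bit idea, embedding payload bits by restricting each sampling step to a pseudo-randomly chosen half of a tiny vocabulary and reading the bits back at detection time. Everything here is a simplified assumption, not CodeIP's algorithm.

```python
import hashlib
import random

# Toy vocabulary of code tokens; a real model's vocabulary is vastly larger.
VOCAB = ["def", "return", "if", "else", "for", "while",
         "import", "class", "try", "except", "lambda", "yield"]

def buckets(step: int, seed: str) -> tuple[list[str], list[str]]:
    """Pseudo-randomly split the vocabulary per step; bucket 0 encodes bit 0."""
    rng = random.Random(hashlib.sha256(f"{seed}:{step}".encode()).digest())
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def embed(bits: str, seed: str = "demo") -> list[str]:
    """Toy generation: sample each token only from the bucket matching the next
    payload bit (a real scheme biases logits rather than hard-restricting)."""
    return [random.choice(buckets(step, seed)[int(bit)])
            for step, bit in enumerate(bits)]

def extract(tokens: list[str], seed: str = "demo") -> str:
    """Detection: recompute the buckets and read off each token's half."""
    return "".join("0" if tok in buckets(step, seed)[0] else "1"
                   for step, tok in enumerate(tokens))

payload = "1011"
tokens = embed(payload)
assert extract(tokens) == payload
print(tokens, "->", extract(tokens))
```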
- Enhancing Security of AI-Based Code Synthesis with GitHub Copilot via Cheap and Efficient Prompt-Engineering [1.7702475609045947]
One reason developers and companies avoid harnessing the full potential of AI code generators is the questionable security of the generated code.
This paper first reviews the current state-of-the-art and identifies areas for improvement on this issue.
We propose a systematic approach based on prompt-altering methods to achieve better code security of AI-based code generators such as GitHub Copilot.
arXiv Detail & Related papers (2024-03-19T12:13:33Z)
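The gist of prompt-altering is cheap to illustrate: wrap every completion request with explicit security instructions before it reaches the model. The preamble text and the `request_completion` stub below are hypothetical, not the paper's prompts or Copilot's API.

```python
# Prompt-altering sketch: prepend security guidance to every code-generation
# request. The preamble and model stub are hypothetical assumptions.
SECURITY_PREAMBLE = (
    "You are generating production code. Follow these rules:\n"
    "- Validate and sanitize all external input.\n"
    "- Use parameterized queries; never concatenate SQL.\n"
    "- Never hardcode credentials; read them from a secret store.\n"
)

def harden_prompt(user_prompt: str) -> str:
    """Cheap prompt-engineering: the security preamble rides along unchanged."""
    return f"{SECURITY_PREAMBLE}\nTask:\n{user_prompt}"

def request_completion(prompt: str) -> str:
    return f"<completion for a {len(prompt)}-char prompt>"  # stand-in model call

print(request_completion(harden_prompt("Write a login handler in Flask.")))
```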
- CodeAgent: Autonomous Communicative Agents for Code Review [12.163258651539236]
This work introduces CodeAgent, a novel multi-agent Large Language Model (LLM) system for code review automation.
CodeAgent incorporates a supervisory agent, QA-Checker, to ensure that all the agents' contributions address the initial review question.
Results demonstrate CodeAgent's effectiveness, contributing to a new state-of-the-art in code review automation.
arXiv Detail & Related papers (2024-02-03T14:43:14Z)
- CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules [51.82044734879657]
We propose CodeChain, a novel framework for inference that elicits modularized code generation through a chain of self-revisions.
We find that CodeChain can significantly boost both the modularity and the correctness of the generated solutions, achieving relative pass@1 improvements of 35% on APPS and 76% on CodeContests.
arXiv Detail & Related papers (2023-10-13T10:17:48Z)
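A toy of the chain-of-self-revisions idea: sample several candidates, distill the most common sub-modules, and condition the next generation on reusing them. The canned `llm` stub and the exact-match clustering are stand-ins for a real model and CodeChain's embedding-based clustering.

```python
import re
from collections import Counter

def llm(prompt: str) -> list[str]:
    # Canned candidates so the sketch runs; imagine sampled model outputs here.
    return [
        "def parse(s):\n    return s.split(',')\n\ndef solve(s):\n    return sum(map(int, parse(s)))",
        "def parse(s):\n    return s.split(',')\n\ndef solve(s):\n    return max(map(int, parse(s)))",
        "def parse(s):\n    return s.split(';')\n\ndef solve(s):\n    return sum(map(int, parse(s)))",
    ]

def extract_submodules(code: str) -> list[str]:
    """Split a candidate into its top-level function definitions."""
    return [m.strip() for m in re.findall(r"def .+?(?=\ndef |\Z)", code, flags=re.S)]

def representative_submodules(candidates: list[str]) -> list[str]:
    """Keep the most common variant of each function name -- a crude stand-in
    for CodeChain's embedding-based clustering of sub-modules."""
    by_name: dict[str, Counter] = {}
    for cand in candidates:
        for mod in extract_submodules(cand):
            name = mod.split("(")[0].removeprefix("def ").strip()
            by_name.setdefault(name, Counter())[mod] += 1
    return [counter.most_common(1)[0][0] for counter in by_name.values()]

def codechain_round(task: str) -> str:
    """One self-revision: sample candidates, distill representative sub-modules,
    and condition the next generation on reusing them."""
    reps = representative_submodules(llm(task))
    revision_prompt = task + "\nReuse these sub-modules:\n" + "\n\n".join(reps)
    return llm(revision_prompt)[0]   # toy choice of the next round's candidate

print(codechain_round("Sum the comma-separated integers in a string."))
```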
- FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios [87.12753459582116]
A wider range of tasks now faces an increasing risk of containing factual errors when handled by generative models.
We propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models.
arXiv Detail & Related papers (2023-07-25T14:20:51Z)
- Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions [54.55334589363247]
We study whether conveying information about uncertainty enables programmers to more quickly and accurately produce code.
We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits.
arXiv Detail & Related papers (2023-02-14T18:43:34Z)
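A miniature of the highlighting idea: given per-token scores for how likely each token is to be edited, bracket the top few so the programmer looks there first. The scores below are hypothetical; the paper derives them from a trained edit-prediction model rather than raw generation probabilities.

```python
# (token, predicted edit likelihood) pairs; scores are hypothetical and would
# come from a trained edit-prediction model, not raw generation probabilities.
COMPLETION = [
    ("response", 0.02), ("=", 0.01), ("requests", 0.04), (".", 0.01),
    ("get", 0.06), ("(", 0.01), ("url", 0.61), (",", 0.09),
    ("timeout", 0.58), ("=", 0.07), ("30", 0.83), (")", 0.02),
]

def highlight(tokens: list[tuple[str, float]], k: int = 3) -> str:
    """Bracket the k tokens with the highest predicted edit likelihood -- the
    cue the study found leads to faster, more targeted fixes."""
    flagged = set(sorted(range(len(tokens)), key=lambda i: -tokens[i][1])[:k])
    return " ".join(f"[{tok}]" if i in flagged else tok
                    for i, (tok, _) in enumerate(tokens))

print(highlight(COMPLETION))  # response = requests . get ( [url] , [timeout] = [30] )
```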
- Chatbots As Fluent Polyglots: Revisiting Breakthrough Code Snippets [0.0]
The research applies AI-driven code assistants to analyze a selection of influential computer code that has shaped modern technology.
The original contribution of this study was to examine half of the most significant code advances in the last 50 years.
arXiv Detail & Related papers (2023-01-05T23:17:17Z)
- Measuring Coding Challenge Competence With APPS [54.22600767666257]
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
arXiv Detail & Related papers (2021-05-20T17:58:42Z)