Time to Separate from StackOverflow and Match with ChatGPT for Encryption
- URL: http://arxiv.org/abs/2406.06164v1
- Date: Mon, 10 Jun 2024 10:56:59 GMT
- Title: Time to Separate from StackOverflow and Match with ChatGPT for Encryption
- Authors: Ehsan Firouzi, Mohammad Ghafari
- Abstract summary: Security is a top concern among developers, but security issues are pervasive in code snippets.
ChatGPT can effectively aid developers when they engage with it properly.
- Score: 0.09208007322096533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cryptography is known as a challenging topic for developers. We studied StackOverflow posts to identify the problems that developers encounter when using the Java Cryptography Architecture (JCA) for symmetric encryption. We investigated the security risks disseminated in these posts, and we examined whether ChatGPT helps avoid cryptography issues. We found that developers frequently struggle with key and IV generation, as well as with padding. Security is a top concern among developers, but security issues are pervasive in code snippets. ChatGPT can effectively aid developers when they engage with it properly. Nevertheless, it does not substitute human expertise, and developers should remain alert.
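The abstract's three pain points, key generation, IV handling, and padding, map directly onto a few JCA calls. Below is a minimal sketch of how they are commonly addressed, assuming AES-GCM; it is an illustration, not code taken from the paper.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class JcaSymmetricSketch {
    public static void main(String[] args) throws Exception {
        // Key generation: use KeyGenerator, never a hard-coded or string-derived key.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // IV generation: a fresh random 12-byte IV per message; GCM must never
        // reuse an IV under the same key.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        // Padding: GCM is an authenticated, stream-like mode, so NoPadding applies;
        // the 128-bit tag authenticates the ciphertext.
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("attack at dawn".getBytes(StandardCharsets.UTF_8));

        // Decryption reuses the same key and IV; the IV is typically sent
        // alongside the ciphertext, since it need not be secret.
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```

GCM sidesteps the padding questions that dominate CBC-related StackOverflow posts, since it needs none, but it makes IV uniqueness a hard requirement.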
Related papers
- Secure Semantic Communication With Homomorphic Encryption [52.5344514499035]
This paper explores the feasibility of applying homomorphic encryption to SemCom.
We propose a task-oriented SemCom scheme secured through homomorphic encryption (the homomorphic property itself is sketched after this entry).
arXiv Detail & Related papers (2025-01-17T13:26:14Z)
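As a generic illustration of what "homomorphic" buys: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a party can aggregate data it cannot read. The sketch below is textbook Paillier with demo-sized parameters, not the paper's SemCom scheme.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Minimal textbook Paillier demo: additively homomorphic encryption.
// Illustrative only (no parameter validation or encoding); not a production scheme.
public class PaillierDemo {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rnd);
        BigInteger q = BigInteger.probablePrime(512, rnd);
        BigInteger n = p.multiply(q);
        BigInteger n2 = n.multiply(n);
        BigInteger g = n.add(BigInteger.ONE);                        // g = n + 1
        BigInteger lambda = p.subtract(BigInteger.ONE)
                             .multiply(q.subtract(BigInteger.ONE));  // phi(n)
        BigInteger mu = lambda.modInverse(n);                        // valid since g = n + 1

        BigInteger m1 = BigInteger.valueOf(42), m2 = BigInteger.valueOf(58);
        BigInteger c1 = encrypt(m1, n, g, n2, rnd);
        BigInteger c2 = encrypt(m2, n, g, n2, rnd);

        // Homomorphic property: multiplying ciphertexts adds the plaintexts.
        BigInteger cSum = c1.multiply(c2).mod(n2);
        System.out.println(decrypt(cSum, n, n2, lambda, mu)); // prints 100
    }

    static BigInteger encrypt(BigInteger m, BigInteger n, BigInteger g,
                              BigInteger n2, SecureRandom rnd) {
        BigInteger r;
        do { r = new BigInteger(n.bitLength(), rnd); }               // random r in [1, n)
        while (r.signum() == 0 || r.compareTo(n) >= 0);
        return g.modPow(m, n2).multiply(r.modPow(n, n2)).mod(n2);
    }

    static BigInteger decrypt(BigInteger c, BigInteger n, BigInteger n2,
                              BigInteger lambda, BigInteger mu) {
        // L(x) = (x - 1) / n, an exact integer division here
        BigInteger x = c.modPow(lambda, n2).subtract(BigInteger.ONE).divide(n);
        return x.multiply(mu).mod(n);
    }
}
```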
- From Struggle to Simplicity with a Usable and Secure API for Encryption in Java [0.07499722271664144]
SafEncrypt is an API that streamlines encryption tasks for Java developers.
It is built on top of the native Java Cryptography Architecture and shields developers from cryptographic complexity and error-prone low-level details.
arXiv Detail & Related papers (2024-09-08T15:16:12Z)
- Where do developers admit their security-related concerns? [0.8180494308743708]
We analyzed different sources of code documentation from four large-scale, real-world, open-source projects in an industrial setting.
We found that developers prefer to document security concerns in source code comments and issue trackers.
arXiv Detail & Related papers (2024-05-17T16:43:58Z)
- An Investigation into Misuse of Java Security APIs by Large Language Models [9.453671056356837]
This paper systematically assesses ChatGPT's trustworthiness in code generation for security API use cases in Java.
Around 70% of the code instances across 30 attempts per task contain security API misuse, with 20 distinct misuse types identified.
For roughly half of the tasks, this rate reaches 100%, indicating that there is a long way to go before developers can rely on ChatGPT to implement security API code securely (a typical JCA misuse is sketched after this entry).
arXiv Detail & Related papers (2024-04-04T22:52:41Z)
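One misuse class this line of work repeatedly flags is relying on provider defaults. As a hedged illustration (not an example from the paper): on standard JDKs the bare "AES" transformation resolves to ECB mode with PKCS5 padding, which leaks plaintext patterns.

```java
import javax.crypto.Cipher;

public class DefaultModeMisuse {
    public static void main(String[] args) throws Exception {
        // Misuse: the bare "AES" transformation falls back to the provider
        // default, "AES/ECB/PKCS5Padding" on standard JDKs. ECB encrypts
        // identical blocks to identical ciphertext blocks and leaks patterns.
        Cipher insecure = Cipher.getInstance("AES");

        // Safer: name the full transformation explicitly so an authenticated
        // mode is used (see the AES/GCM sketch earlier on this page).
        Cipher explicit = Cipher.getInstance("AES/GCM/NoPadding");
    }
}
```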
- Just another copy and paste? Comparing the security vulnerabilities of ChatGPT generated code and StackOverflow answers [4.320393382724067]
This study empirically compares the vulnerabilities of ChatGPT and StackOverflow snippets.
ChatGPT-generated snippets contained 248 vulnerabilities, compared to 302 found in StackOverflow snippets: about 20% fewer, a statistically significant difference.
Our findings suggest developers are under-educated about insecure code propagation from both platforms.
arXiv Detail & Related papers (2024-03-22T20:06:41Z)
- CodeChameleon: Personalized Encryption Framework for Jailbreaking Large Language Models [49.60006012946767]
We propose CodeChameleon, a novel jailbreak framework based on personalized encryption tactics.
We conduct extensive experiments on 7 Large Language Models, achieving a state-of-the-art average Attack Success Rate (ASR).
Remarkably, our method achieves an 86.6% ASR on GPT-4-1106.
arXiv Detail & Related papers (2024-02-26T16:35:59Z)
- Exploring ChatGPT's Capabilities on Vulnerability Management [56.4403395100589]
We explore ChatGPT's capabilities on 6 tasks involving the complete vulnerability management process with a large-scale dataset containing 70,346 samples.
One notable example is ChatGPT's proficiency in tasks like generating titles for software bug reports.
Our findings reveal the difficulties encountered by ChatGPT and shed light on promising future directions.
arXiv Detail & Related papers (2023-11-11T11:01:13Z)
- GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher [85.18213923151717]
Experimental results show that certain ciphers succeed almost 100% of the time in bypassing the safety alignment of GPT-4 across several safety domains.
We propose a novel SelfCipher that uses only role play and several natural-language demonstrations to evoke this capability (the cipher encoding step is sketched after this entry).
arXiv Detail & Related papers (2023-08-12T04:05:57Z)
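As a rough illustration of the encoding step only (the surrounding prompts and demonstrations come from the paper and are not shown here), a Caesar shift, one of the classic ciphers such attacks rely on, looks like this:

```java
public class CaesarEncode {
    // Shift each ASCII letter forward by `shift` positions, wrapping at 'z'/'Z'.
    static String encode(String text, int shift) {
        StringBuilder out = new StringBuilder();
        for (char ch : text.toCharArray()) {
            if (Character.isLowerCase(ch)) {
                out.append((char) ('a' + (ch - 'a' + shift) % 26));
            } else if (Character.isUpperCase(ch)) {
                out.append((char) ('A' + (ch - 'A' + shift) % 26));
            } else {
                out.append(ch); // leave digits, spaces, punctuation unchanged
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode("hello world", 3)); // prints "khoor zruog"
    }
}
```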
- Evaluating Privacy Questions From Stack Overflow: Can ChatGPT Compete? [1.231476564107544]
ChatGPT has been used as an alternative source of code and of answers to developers' questions.
Our results show that most privacy-related questions concern choice/consent, aggregation, and identification.
arXiv Detail & Related papers (2023-06-19T21:33:04Z)
- RiDDLE: Reversible and Diversified De-identification with Latent Encryptor [57.66174700276893]
This work presents RiDDLE, short for Reversible and Diversified De-identification with Latent Encryptor.
Built upon a pre-learned StyleGAN2 generator, RiDDLE manages to encrypt and decrypt the facial identity within the latent space.
arXiv Detail & Related papers (2023-03-09T11:03:52Z)
- TextHide: Tackling Data Privacy in Language Understanding Tasks [54.11691303032022]
TextHide mitigates privacy risks without slowing down training or reducing accuracy.
It requires all participants to add a simple encryption step to prevent an eavesdropping attacker from recovering private text data.
We evaluate TextHide on the GLUE benchmark, and our experiments show that TextHide can effectively defend attacks on shared gradients or representations.
arXiv Detail & Related papers (2020-10-12T22:22:15Z)