Artificial Bugs for Crowdsearch
- URL: http://arxiv.org/abs/2403.09484v1
- Date: Thu, 14 Mar 2024 15:27:14 GMT
- Title: Artificial Bugs for Crowdsearch
- Authors: Hans Gersbach, Fikri Pitsuwan, Pio Blieske
- Abstract summary: We suggest augmenting such programs by inserting artificial bugs to increase the incentives to search for real (organic) bugs.
We show that for this, it is sufficient to insert only one artificial bug.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bug bounty programs, where external agents are invited to search for and report vulnerabilities (bugs) in exchange for rewards (bounty), have become a major tool for companies to improve their systems. We suggest augmenting such programs by inserting artificial bugs to increase the incentives to search for real (organic) bugs. Using a model of crowdsearch, we identify the efficiency gains from artificial bugs, and we show that for this, it is sufficient to insert only one artificial bug. Artificial bugs are particularly beneficial, for instance, if the designer places high valuations on finding organic bugs or if the budget for bounty is not sufficiently high. We discuss how to implement artificial bugs and outline their further benefits.
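As a rough, hypothetical illustration of why even one extra bug can raise search incentives, the sketch below computes the expected bounty pool under a simple independent-discovery assumption. The searcher count, the per-searcher find probability, and the flat per-bug bounty are all invented for illustration; this is not the paper's actual crowdsearch model.

```python
def expected_bugs_found(n_bugs, n_searchers, p_find):
    """Expected number of distinct bugs found when each of n_searchers
    independently finds each bug with probability p_find."""
    # probability that at least one searcher finds a given bug
    p_any = 1.0 - (1.0 - p_find) ** n_searchers
    return n_bugs * p_any

# Expected payout pool (flat bounty per bug times expected bugs found),
# with and without one artificial bug added to 5 organic bugs.
bounty = 100.0
without = bounty * expected_bugs_found(5, n_searchers=3, p_find=0.5)
with_artificial = bounty * expected_bugs_found(6, n_searchers=3, p_find=0.5)
assert with_artificial > without  # the extra bug enlarges the expected reward pool
```

Under these toy assumptions the artificial bug raises the expected payout that searchers compete for, which is the intuition behind the incentive effect; the paper's model works out when this gain justifies the cost of the extra bounty.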
Related papers
- A Deep Dive Into How Open-Source Project Maintainers Review and Resolve Bug Bounty Reports [6.814841205623832]
We primarily investigate the perspective of open-source software (OSS) maintainers who have used huntr, a bug bounty platform.
As a result, we categorize 40 identified characteristics into benefits, challenges, helpful features, and wanted features.
We find that private disclosure and project visibility are the most important benefits, while hunters who focus on money or CVEs, and the pressure to review reports, are the most challenging issues to overcome.
arXiv Detail & Related papers (2024-09-12T00:15:21Z) - BioDiscoveryAgent: An AI Agent for Designing Genetic Perturbation Experiments [112.25067497985447]
We introduce BioDiscoveryAgent, an agent that designs new experiments, reasons about their outcomes, and efficiently navigates the hypothesis space to reach desired solutions.
BioDiscoveryAgent can uniquely design new experiments without the need to train a machine learning model.
It achieves an average of 21% improvement in predicting relevant genetic perturbations across six datasets.
arXiv Detail & Related papers (2024-05-27T19:57:17Z) - Developers' Perception: Fixed Bugs Often Overlooked as Quality Contributions [0.0]
Only a third of programmers perceive the quantity of bugs found and rectified in a repository as indicative of higher quality.
This finding substantiates the notion that programmers often misinterpret the significance of testing and bug reporting.
arXiv Detail & Related papers (2024-03-16T04:40:19Z) - Bugsplainer: Leveraging Code Structures to Explain Software Bugs with Neural Machine Translation [4.519754139322585]
Bugsplainer generates natural language explanations for software bugs by learning from a large corpus of bug-fix commits.
Bugsplainer leverages code structures to reason about a bug and employs the fine-tuned version of a text generation model, CodeT5.
arXiv Detail & Related papers (2023-08-23T17:35:16Z) - Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions [54.55334589363247]
We study whether conveying information about uncertainty enables programmers to more quickly and accurately produce code.
We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits.
arXiv Detail & Related papers (2023-02-14T18:43:34Z) - Explaining Software Bugs Leveraging Code Structures in Neural Machine Translation [5.079750706023254]
Bugsplainer generates natural language explanations for software bugs by learning from a large corpus of bug-fix commits.
Our evaluation using three performance metrics shows that Bugsplainer generates explanations that are understandable and of good quality according to Google's standard.
We also conduct a developer study involving 20 participants where the explanations from Bugsplainer were found to be more accurate, more precise, more concise and more useful than the baselines.
arXiv Detail & Related papers (2022-12-08T22:19:45Z) - Using Developer Discussions to Guide Fixing Bugs in Software [51.00904399653609]
We propose using bug report discussions, which are available before the task is performed and are also naturally occurring, avoiding the need for additional information from developers.
We demonstrate that various forms of natural language context derived from such discussions can aid bug-fixing, even leading to improved performance over using commit messages corresponding to the oracle bug-fixing commits.
arXiv Detail & Related papers (2022-11-11T16:37:33Z) - HyperPUT: Generating Synthetic Faulty Programs to Challenge Bug-Finding Tools [3.8520163964103835]
We propose a complementary approach that automatically generates programs with seeded bugs.
Our technique, called HyperPUT, builds C programs from a "seed" bug by incrementally applying program transformations.
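HyperPUT itself generates C programs; as a loose, hypothetical sketch of the incremental-transformation idea, the snippet below composes wrappers around a seeded fault. The seed bug, the transformation names, and the composition order are invented for illustration and are not HyperPUT's actual transformations.

```python
def seed_bug():
    # a "seed" bug as a C fragment: a null-pointer dereference
    return "int *p = 0; *p = 1;"

def guard(cond):
    # transformation: make the bug reachable only through a branch
    return lambda body: f"if ({cond}) {{ {body} }}"

def loop(n):
    # transformation: bury the bug inside an iteration
    return lambda body: f"for (int i = 0; i < {n}; i++) {{ {body} }}"

def build_program(transforms):
    """Apply transformations incrementally, innermost first,
    then wrap the result in a main function."""
    body = seed_bug()
    for t in transforms:
        body = t(body)
    return f"int main(int argc, char **argv) {{ {body} return 0; }}"

program = build_program([guard("argc > 2"), loop(10)])
```

Each added wrapper deepens the control-flow path a bug-finding tool must traverse to reach the seeded fault, which is how such generators can scale the difficulty of the resulting test programs.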
arXiv Detail & Related papers (2022-09-14T13:09:41Z) - BigIssue: A Realistic Bug Localization Benchmark [89.8240118116093]
BigIssue is a benchmark for realistic bug localization.
We provide a general benchmark with a diversity of real and synthetic Java bugs.
We hope to advance the state of the art in bug localization, in turn improving APR performance and increasing its applicability to the modern development cycle.
arXiv Detail & Related papers (2022-07-21T20:17:53Z) - DapStep: Deep Assignee Prediction for Stack Trace Error rePresentation [61.99379022383108]
We propose new deep learning models to solve the bug triage problem.
The models are based on a bidirectional recurrent neural network with attention and on a convolutional neural network.
To improve the quality of ranking, we propose using additional information from version control system annotations.
arXiv Detail & Related papers (2022-01-14T00:16:57Z) - Semi-supervised reward learning for offline reinforcement learning [71.6909757718301]
Training agents usually requires reward functions, but rewards are seldom available in practice and their engineering is challenging and laborious.
We propose semi-supervised learning algorithms that learn from limited annotations and incorporate unlabelled data.
In our experiments with a simulated robotic arm, we greatly improve upon behavioural cloning and closely approach the performance achieved with ground truth rewards.
arXiv Detail & Related papers (2020-12-12T20:06:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.