Teaching DevOps Security Education with Hands-on Labware: Automated Detection of Security Weakness in Python
- URL: http://arxiv.org/abs/2311.16944v2
- Date: Wed, 3 Apr 2024 20:00:08 GMT
- Title: Teaching DevOps Security Education with Hands-on Labware: Automated Detection of Security Weakness in Python
- Authors: Mst Shapna Akter, Juanjose Rodriguez-Cardenas, Md Mostafizur Rahman, Hossain Shahriar, Akond Rahman, Fan Wu
- Abstract summary: We introduce hands-on learning modules that enable learners to become familiar with identifying known security weaknesses.
To cultivate an engaging and motivating learning environment, our hands-on approach includes pre-lab, hands-on, and post-lab sections.
- Score: 4.280051038571455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of DevOps security education necessitates innovative approaches to effectively address the ever-evolving challenges of cybersecurity. Adopting a student-centered approach requires the design and development of a comprehensive set of hands-on learning modules. In this paper, we introduce hands-on learning modules that enable learners to become familiar with identifying known security weaknesses, based on taint tracking to accurately pinpoint vulnerable code. To cultivate an engaging and motivating learning environment, our hands-on approach includes pre-lab, hands-on, and post-lab sections. Each provides an introduction to specific DevOps topics and the software security problems at hand, followed by practice with real-world code examples that contain security issues, which learners detect using tools. Initial evaluation results from a number of courses across multiple schools show that the hands-on modules increase student interest in software security and cybersecurity, while preparing them to address DevOps security vulnerabilities.
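To illustrate the kind of taint-tracking analysis the abstract describes, here is a minimal sketch (not the authors' labware) of a toy taint check in Python: it marks variables assigned from `input()` as tainted sources and flags `os.system()` calls (a command-injection sink, CWE-78) that reference a tainted name. The `SAMPLE` snippet and `find_tainted_sinks` helper are hypothetical names introduced for this illustration.

```python
import ast

# Toy illustration only: a vulnerable snippet where untrusted user input
# flows into a shell command (OS command injection, CWE-78).
SAMPLE = """
import os
cmd = input("enter a filename: ")   # taint source: untrusted user input
os.system("cat " + cmd)             # sink: shell command execution
"""

def find_tainted_sinks(source: str) -> list[int]:
    """Return line numbers where a tainted value reaches os.system()."""
    tree = ast.parse(source)
    tainted: set[str] = set()
    findings: list[int] = []
    for node in ast.walk(tree):
        # Mark variables assigned directly from input() as tainted.
        if isinstance(node, ast.Assign):
            value = node.value
            if (isinstance(value, ast.Call)
                    and isinstance(value.func, ast.Name)
                    and value.func.id == "input"):
                for target in node.targets:
                    if isinstance(target, ast.Name):
                        tainted.add(target.id)
        # Flag os.system(...) calls whose arguments mention a tainted name.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "os"
                and node.func.attr == "system"):
            names = {n.id for n in ast.walk(node) if isinstance(n, ast.Name)}
            if names & tainted:
                findings.append(node.lineno)
    return findings

print(find_tainted_sinks(SAMPLE))  # → [4], the os.system() line
```

Real taint trackers (and tools such as Bandit or CodeQL that the labs might employ) handle many more sources, sinks, and propagation paths; this sketch only shows the source-to-sink idea the modules teach.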
Related papers
- Using Real-world Bug Bounty Programs in Secure Coding Course: Experience Report [1.099532646524593]
Training new cybersecurity professionals is a challenging task due to the broad scope of the area.
We propose a solution: integrating a real-world bug bounty programme into cybersecurity curriculum.
We let students choose to participate in a bug bounty programme as an option for the semester assignment in a secure coding course.
arXiv Detail & Related papers (2024-04-18T09:53:49Z)
- Modular Neural Network Policies for Learning In-Flight Object Catching with a Robot Hand-Arm System [55.94648383147838]
We present a modular framework designed to enable a robot hand-arm system to learn how to catch flying objects.
Our framework consists of core modules including: (i) an object state estimator that learns object trajectory prediction, (ii) a catching pose quality network that learns to score and rank object poses for catching, (iii) a reaching control policy trained to move the robot hand to pre-catch poses, and (iv) a grasping control policy trained to perform soft catching motions.
We conduct extensive evaluations of our framework in simulation for each module and the integrated system, to demonstrate high success rates of in-flight object catching.
arXiv Detail & Related papers (2023-12-21T16:20:12Z)
- CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models [58.27254444280376]
Large language models (LLMs) for automatic code generation have achieved breakthroughs in several programming tasks.
Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure.
arXiv Detail & Related papers (2023-02-08T11:54:07Z)
- Developing Hands-on Labs for Source Code Vulnerability Detection with AI [0.0]
We propose a framework including learning modules and hands-on labs to guide future IT professionals toward developing secure programming habits.
In this thesis, our goal is to design learning modules with a set of hands-on labs that introduce students to secure programming practices using source code and log file analysis tools.
arXiv Detail & Related papers (2023-02-01T20:53:58Z)
- Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning [63.45532264721498]
Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data.
We perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
arXiv Detail & Related papers (2022-12-06T21:35:35Z)
- XSS for the Masses: Integrating Security in a Web Programming Course using a Security Scanner [3.387494280613737]
Cybersecurity education is an important part of undergraduate computing curricula.
Many institutions teach it only in dedicated courses or tracks.
An alternative approach is to integrate cybersecurity concepts across non-security courses.
arXiv Detail & Related papers (2022-04-26T16:20:36Z)
- Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges [0.76146285961466]
Securely developing machine learning-based modern software systems (MLBSS) remains a major challenge.
Latent vulnerabilities and privacy issues exposed to external users and attackers are largely neglected and hard to identify.
We consider that security for machine learning-based software systems may arise from inherent system defects or external adversarial attacks.
arXiv Detail & Related papers (2022-01-12T23:20:25Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop a unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.