Cyber Security Requirements for Platforms Enhancing AI Reproducibility
- URL: http://arxiv.org/abs/2309.15525v1
- Date: Wed, 27 Sep 2023 09:43:46 GMT
- Title: Cyber Security Requirements for Platforms Enhancing AI Reproducibility
- Authors: Polra Victor Falade
- Abstract summary: This study focuses on the field of artificial intelligence (AI) and introduces a new framework for evaluating AI platforms.
Five popular AI platforms (Floydhub, BEAT, Codalab, Kaggle, and OpenML) were assessed.
The analysis revealed that none of these platforms fully incorporates the necessary cyber security measures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scientific research is increasingly reliant on computational methods, posing
challenges for ensuring research reproducibility. This study focuses on the
field of artificial intelligence (AI) and introduces a new framework for
evaluating AI platforms for reproducibility from a cyber security standpoint to
address the security challenges associated with AI research. Using this
framework, five popular AI reproducibility platforms (Floydhub, BEAT, Codalab,
Kaggle, and OpenML) were assessed. The analysis revealed that none of these
platforms fully incorporates the necessary cyber security measures essential
for robust reproducibility. Kaggle and Codalab, however, performed better in
terms of implementing cyber security measures covering aspects like security,
privacy, usability, and trust. Consequently, the study provides tailored
recommendations for different user scenarios, including individual researchers,
small laboratories, and large corporations. It emphasizes the importance of
integrating specific cyber security features into AI platforms to address the
challenges associated with AI reproducibility, ultimately advancing
reproducibility in this field. Moreover, the proposed framework can be applied
beyond AI platforms, serving as a versatile tool for evaluating a wide range of
systems and applications from a cyber security perspective.
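The paper does not publish its framework as code, but the evaluation it describes can be pictured as a weighted checklist over security, privacy, usability, and trust criteria, with each platform scored by the share of criteria it satisfies. The sketch below is a minimal Python illustration of that idea; every criterion name and weight is a hypothetical placeholder, not the paper's actual rubric.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    category: str  # "security", "privacy", "usability", or "trust"
    weight: float

# Illustrative criteria only; the paper's actual checklist is richer.
CRITERIA = [
    Criterion("multi-factor authentication", "security", 2.0),
    Criterion("encryption of data at rest", "privacy", 2.0),
    Criterion("clear access-control model", "usability", 1.0),
    Criterion("audit logging", "trust", 1.5),
]

def score_platform(satisfied: set) -> float:
    """Weighted fraction of the checklist the platform satisfies (0.0-1.0)."""
    total = sum(c.weight for c in CRITERIA)
    met = sum(c.weight for c in CRITERIA if c.name in satisfied)
    return met / total

# Hypothetical findings for one platform:
observed = {"multi-factor authentication", "audit logging"}
print(f"score: {score_platform(observed):.2f}")  # score: 0.54
```

Under this reading, a finding such as "Kaggle and Codalab performed better" corresponds to those platforms satisfying a larger weighted share of the checklist than the other three.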
Related papers
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z) - Explainable AI-based Intrusion Detection System for Industry 5.0: An Overview of the Literature, associated Challenges, the existing Solutions, and Potential Research Directions [3.99098935469955]
Industry 5.0 focuses on collaboration between humans and Artificial Intelligence (AI) for performing different tasks in manufacturing.
The extensive involvement and interconnection of devices in critical areas, such as the economy, health, education, and defense systems, pose several types of potential security flaws.
XAI has proven to be an effective and powerful tool in different areas of cybersecurity, such as intrusion detection, malware detection, and phishing detection.
arXiv Detail & Related papers (2024-07-21T09:28:05Z) - Confronting the Reproducibility Crisis: A Case Study of Challenges in Cybersecurity AI [0.0]
A key area in AI-based cybersecurity focuses on defending deep neural networks against malicious perturbations.
We attempt to validate results from prior work on certified robustness using the VeriGauge toolkit.
Our findings underscore the urgent need for standardized methodologies, containerization, and comprehensive documentation.
arXiv Detail & Related papers (2024-05-29T04:37:19Z) - Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z) - Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z) - Software Repositories and Machine Learning Research in Cyber Security [0.0]
The integration of robust cyber security defenses has become essential across all phases of software development.
Attempts have been made to leverage topic modeling and machine learning for the detection of these early-stage vulnerabilities in the software requirements process.
arXiv Detail & Related papers (2023-11-01T17:46:07Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS)
Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - TanksWorld: A Multi-Agent Environment for AI Safety Research [5.218815947097599]
The ability to create artificial intelligence capable of performing complex tasks is rapidly outpacing our ability to ensure the safe and assured operation of AI-enabled systems.
Recent simulation environments designed to illustrate AI safety risks are relatively simple or narrowly focused on a particular issue.
We introduce the AI safety TanksWorld as an environment for AI safety research with three essential aspects.
arXiv Detail & Related papers (2020-02-25T21:00:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.