Proceedings of the 2nd International Workshop on Adaptive Cyber Defense
- URL: http://arxiv.org/abs/2308.09520v5
- Date: Tue, 2 Jul 2024 16:45:39 GMT
- Title: Proceedings of the 2nd International Workshop on Adaptive Cyber Defense
- Authors: Marco Carvalho, Damian Marriott, Mark Bilinski, Ahmad Ridley
- Abstract summary: The 2nd International Workshop on Adaptive Cyber Defense was held at the Florida Institute of Technology, Florida.
This workshop was organized to share research that explores unique applications of Artificial Intelligence (AI) and Machine Learning (ML) as foundational capabilities for the pursuit of adaptive cyber defense.
The cyber domain cannot currently be reliably and effectively defended without extensive reliance on human experts.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The 2nd International Workshop on Adaptive Cyber Defense was held at the Florida Institute of Technology, Florida. This workshop was organized to share research that explores unique applications of Artificial Intelligence (AI) and Machine Learning (ML) as foundational capabilities for the pursuit of adaptive cyber defense. The cyber domain cannot currently be reliably and effectively defended without extensive reliance on human experts. Skilled cyber defenders are in short supply and often cannot respond fast enough to cyber threats. Building on recent advances in AI and ML, the cyber defense research community has been motivated to develop new dynamic and sustainable defenses through the adoption of AI and ML techniques in cyber settings. Bridging critical gaps between AI and cyber researchers and practitioners can accelerate efforts to create semi-autonomous cyber defenses that can learn to recognize and respond to cyber attacks, or discover and mitigate weaknesses in cooperation with other cyber operation systems and human experts. Furthermore, these defenses are expected to be adaptive and able to evolve over time to thwart changes in attacker behavior, changes in system health and readiness, and natural shifts in user behavior. The workshop comprised invited keynote talks, technical presentations, and a panel discussion on how AI/ML can enable autonomous mitigation of current and future cyber attacks. Workshop submissions were peer reviewed by a panel of domain experts, with the proceedings consisting of six technical articles exploring challenging problems of critical importance to national and global security. Participation in this workshop offered new opportunities to stimulate research and innovation in the emerging domain of adaptive and autonomous cyber defense.
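As a rough illustration of the kind of semi-autonomous, adaptive defense the abstract describes, the sketch below trains a toy tabular Q-learning defender against a simple simulated attacker. It is a minimal, hypothetical example: the environment, action set, and reward shaping are assumptions made for illustration and do not come from any paper in these proceedings.

```python
"""Illustrative only: a toy tabular Q-learning defender. All names
(ToyNetworkEnv, QLearningDefender, the action set) are hypothetical."""
import random


class ToyNetworkEnv:
    """Minimal abstraction of a monitored host: it is either clean or compromised."""
    ACTIONS = ["monitor", "patch", "restore"]

    def __init__(self, attack_prob=0.3):
        self.attack_prob = attack_prob
        self.compromised = False

    def reset(self):
        self.compromised = False
        return int(self.compromised)

    def step(self, action):
        # Defender acts first; remediation clears an existing compromise.
        if action in ("patch", "restore") and self.compromised:
            self.compromised = False
        # Then the attacker may (re)compromise the host.
        if random.random() < self.attack_prob:
            self.compromised = True
        # Reward availability (clean host) and penalize costly remediation.
        reward = (1.0 if not self.compromised else -1.0) - (0.2 if action != "monitor" else 0.0)
        return int(self.compromised), reward


class QLearningDefender:
    """Tabular Q-learning: the defender adapts its policy from observed rewards."""

    def __init__(self, n_states=2, n_actions=3, lr=0.1, gamma=0.9, eps=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.randrange(len(self.q[state]))
        return max(range(len(self.q[state])), key=lambda a: self.q[state][a])

    def learn(self, s, a, r, s_next):
        target = r + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.lr * (target - self.q[s][a])


if __name__ == "__main__":
    env, agent = ToyNetworkEnv(), QLearningDefender()
    state = env.reset()
    for _ in range(5000):
        action = agent.act(state)
        next_state, reward = env.step(env.ACTIONS[action])
        agent.learn(state, action, reward, next_state)
        state = next_state
    for s, label in enumerate(["clean", "compromised"]):
        best = env.ACTIONS[max(range(3), key=lambda a: agent.q[s][a])]
        print(f"state={label}: preferred action={best}")
```

A real adaptive defense would replace the toy environment with an emulated network and the tabular learner with deep, often multi-agent, policies, as in the related papers listed below.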
Related papers
- Multi-Agent Actor-Critics in Autonomous Cyber Defense [0.5261718469769447]
Multi-Agent Deep Reinforcement Learning (MADRL) presents a promising approach to enhancing the efficacy and resilience of autonomous cyber operations.
We demonstrate that each agent is able to learn quickly and counteract threats autonomously using MADRL in simulated cyber-attack scenarios.
arXiv Detail & Related papers (2024-10-11T15:15:09Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Proceedings of the 1st International Workshop on Adaptive Cyber Defense [0.0]
The 1st International Workshop on Adaptive Cyber Defense was held as part of the 2021 International Joint Conference on Artificial Intelligence.
The cyber domain cannot currently be reliably and effectively defended without extensive reliance on human experts.
Bridging critical gaps between AI and cyber researchers and practitioners can accelerate efforts to create semi-autonomous cyber defenses.
arXiv Detail & Related papers (2021-08-19T03:41:48Z)
- Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI [71.70949497737655]
ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI.
Deep neural networks (DNNs) have undoubtedly brought great success to a wide range of applications in computer vision, computational linguistics, and AI.
However, foundational principles underlying the DNNs' success and their resilience to adversarial attacks are still largely missing.
This workshop pays special attention to theoretical foundations, limitations, and new application trends within the scope of XAI.
arXiv Detail & Related papers (2021-07-16T13:14:16Z)
- Prospective Artificial Intelligence Approaches for Active Cyber Defence [1.443536831322927]
Some cybersecurity professionals speculate that AI will enable corresponding new classes of active cyber defence measures.
This position paper updates the roadmap for two of the most promising AI approaches.
It describes why they could help tip the balance back towards defenders.
arXiv Detail & Related papers (2021-04-20T14:07:34Z)
- Security and Privacy for Artificial Intelligence: Opportunities and Challenges [11.368470074697747]
In recent years, most AI models have been shown to be vulnerable to advanced and sophisticated hacking techniques.
This challenge has motivated concerted research efforts into adversarial AI.
We present a holistic cyber security review that demonstrates adversarial attacks against AI applications.
arXiv Detail & Related papers (2021-02-09T06:06:13Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.