Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: a focus group report
- URL: http://arxiv.org/abs/2311.00903v1
- Date: Thu, 2 Nov 2023 00:08:07 GMT
- Title: Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: a focus group report
- Authors: Diane Jackson, Sorin Adam Matei, and Elisa Bertino
- Abstract summary: The emergence of AI tools in cybersecurity creates many opportunities and uncertainties.
Confronting the "black box" mentality in AI cybersecurity work is also of the greatest importance.
Future AI educators and practitioners need to address these issues by implementing rigorous technical training curricula.
- Score: 10.547686057159309
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The emergence of AI tools in cybersecurity creates many opportunities and
uncertainties. A focus group with advanced graduate students in cybersecurity
revealed the potential depth and breadth of the challenges and opportunities.
The salient issues are access to open source or free tools, documentation,
curricular diversity, and clear articulation of ethical principles for AI
cybersecurity education. Confronting the "black box" mentality in AI
cybersecurity work is also of the greatest importance, coupled with deeper and
earlier education in foundational AI work. Systems thinking and effective
communication were considered relevant areas of educational improvement. Future
AI educators and practitioners need to address these issues by implementing
rigorous technical training curricula, clear documentation, and frameworks for
ethically monitoring AI, combined with critical thinking, systems thinking, and
communication skills.
Related papers
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Ontology-Aware RAG for Improved Question-Answering in Cybersecurity Education [13.838970688067725]
AI-driven question-answering (QA) systems can actively manage uncertainty in cybersecurity problem-solving.
Large language models (LLMs) have gained prominence in AI-driven QA systems, offering advanced language understanding and user engagement.
We propose CyberRAG, an ontology-aware retrieval-augmented generation (RAG) approach for developing a reliable and safe QA system in cybersecurity education.
arXiv Detail & Related papers (2024-12-10T21:52:35Z)
- Generative AI Literacy: Twelve Defining Competencies [48.90506360377104]
This paper introduces a competency-based model for generative artificial intelligence (AI) literacy covering essential skills and knowledge areas necessary to interact with generative AI.
The competencies range from foundational AI literacy to prompt engineering and programming skills, including ethical and legal considerations.
These twelve competencies offer a framework for individuals, policymakers, government officials, and educators looking to navigate and take advantage of the potential of generative AI responsibly.
arXiv Detail & Related papers (2024-11-29T14:55:15Z)
- Artificial Intelligence in Cybersecurity: Building Resilient Cyber Diplomacy Frameworks [0.0]
This paper explores how automation and artificial intelligence (AI) are transforming U.S. cyber diplomacy.
Leveraging these technologies helps the U.S. manage the complexity and urgency of cyber diplomacy.
arXiv Detail & Related papers (2024-11-17T17:57:17Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Educating for AI Cybersecurity Work and Research: Ethics, Systems Thinking, and Communication Requirements [14.004571093079623]
Managers and professors perceive preparedness to use AI tools in cybersecurity to be significantly associated with all three non-technical skill sets.
Contrary to expectations, ethical concerns are left behind in the rush to adopt the most advanced AI tools in security.
Professors over-estimate students' preparedness in ethics, systems thinking, and communication.
arXiv Detail & Related papers (2023-11-07T20:06:38Z)
- AI Potentiality and Awareness: A Position Paper from the Perspective of Human-AI Teaming in Cybersecurity [18.324118502535775]
We argue that human-AI teaming is worthwhile in cybersecurity.
We emphasize the importance of a balanced approach that incorporates AI's computational power with human expertise.
arXiv Detail & Related papers (2023-09-28T01:20:44Z)
- Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications [0.4665186371356556]
In July 2022, the Center for Security and Emerging Technology at Georgetown University and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center convened a workshop of experts to examine the relationship between vulnerabilities in artificial intelligence systems and more traditional types of software vulnerabilities.
Topics discussed included the extent to which AI vulnerabilities can be handled under standard cybersecurity processes, the barriers currently preventing the accurate sharing of information about AI vulnerabilities, legal issues associated with adversarial attacks on AI systems, and potential areas where government support could improve AI vulnerability management and mitigation.
arXiv Detail & Related papers (2023-05-23T22:27:53Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing this data effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.