The Opportunity to Regulate Cybersecurity in the EU (and the World):
Recommendations for the Cybersecurity Resilience Act
- URL: http://arxiv.org/abs/2205.13196v1
- Date: Thu, 26 May 2022 07:20:44 GMT
- Title: The Opportunity to Regulate Cybersecurity in the EU (and the World):
Recommendations for the Cybersecurity Resilience Act
- Authors: Kaspar Rosager Ludvigsen, Shishir Nagaraja
- Abstract summary: Safety is becoming cybersecurity under most circumstances.
This should be reflected in the Cybersecurity Resilience Act when it is proposed and agreed upon in the European Union.
It is based on what the cybersecurity research community has long asked for, and on what constitutes clear, hard legal rules instead of soft ones.
- Score: 1.2691047660244335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safety is becoming cybersecurity under most circumstances. This should be
reflected in the Cybersecurity Resilience Act when it is proposed and agreed
upon in the European Union. In this paper, we define a range of principles
which this future Act should build upon, propose a structure for it, and argue
why it should be as broad as possible. It is based on what the cybersecurity
research community has long asked for, and on what constitutes clear, hard
legal rules instead of soft ones. Important areas such as cybersecurity should
be taken seriously, by regulating them in the same way we regulate other types
of critical infrastructure and physical structures, and the regulation should
be uncompromising and logical, to encompass the risks and potential for chaos
which cybersecurity's ubiquitous nature entails.
We find that principles which regulate cybersecurity systems' life-cycles in
detail are needed, as is clearly stating what technology is being used, due to
Kerckhoffs's principle, and dismissing the idea of technosolutionism.
Furthermore, carefully analysing risks is always necessary, but so is
understanding when and how the systems' manufacturers may fail or almost fail.
We do this through the following principles:
Ex Ante and Ex Post Assessment, Safety and Security by Design, Denial of
Obscurity, Dismissal of Infallibility, Systems Acknowledgement, Full
Transparency, Movement towards a Zero-Trust Security Model, Cybersecurity
Resilience, Enforced Circular Risk Management, Dependability, Hazard Analysis
and Mitigation or Limitation, Liability, A Clear Reporting Regime, Enforcement
of Certification and Standards, Mandated Verification of Security, and
Continuous Servicing.
To this, we suggest that the Act employ similar authorities and mechanisms
as the GDPR and create strong national authorities to coordinate inspection and
enforcement in each Member State, with ENISA as the top, coordinating organ.
Related papers
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
Their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z) - The Artificial Intelligence Act: critical overview [0.0]
This article provides a critical overview of the recently approved Artificial Intelligence Act.
It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689.
The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose.
arXiv Detail & Related papers (2024-08-30T21:38:02Z) - Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z) - AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - Critical Infrastructure Protection: Generative AI, Challenges, and Opportunities [3.447031974719732]
Critical National Infrastructure (CNI) encompasses a nation's essential assets that are fundamental to the operation of society and the economy.
Growing cybersecurity threats targeting these infrastructures can potentially interfere with operations and seriously risk national security and public safety.
We examine the intricate issues raised by cybersecurity risks to vital infrastructure, highlighting these systems' vulnerability to different types of cyberattacks.
arXiv Detail & Related papers (2024-05-08T08:08:50Z) - Assessing The Effectiveness Of Current Cybersecurity Regulations And Policies In The US [0.0]
The study evaluates the impact of these regulations on different sectors and analyzes trends in cybercrime data from 2000 to 2022.
The findings highlight the challenges, successes, and the need for continuous adaptation in the face of evolving cyber threats.
arXiv Detail & Related papers (2024-04-17T15:26:55Z) - Enhancing Energy Sector Resilience: Integrating Security by Design Principles [20.817229569050532]
Security by Design (SbD) is a concept for developing and maintaining systems that are impervious to security attacks.
This document presents the security requirements for the implementation of SbD in industrial control systems.
arXiv Detail & Related papers (2024-02-18T11:04:22Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - Constraints Satisfiability Driven Reinforcement Learning for Autonomous Cyber Defense [7.321728608775741]
We present a new hybrid autonomous agent architecture that aims to optimize and verify reinforcement learning (RL) defense policies.
We use constraints verification (using satisfiability modulo theory (SMT)) to steer the RL decision-making toward safe and effective actions.
Our evaluation of the presented approach in a simulated CPS environment shows that the agent learns the optimal policy fast and defeats diversified attack strategies in 99% of cases.
arXiv Detail & Related papers (2021-04-19T01:08:30Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.