The Opportunity to Regulate Cybersecurity in the EU (and the World):
Recommendations for the Cybersecurity Resilience Act
- URL: http://arxiv.org/abs/2205.13196v1
- Date: Thu, 26 May 2022 07:20:44 GMT
- Authors: Kaspar Rosager Ludvigsen, Shishir Nagaraja
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safety is becoming cybersecurity under most circumstances. This should be
reflected in the Cybersecurity Resilience Act when it is proposed and agreed
upon in the European Union. In this paper, we define a range of principles on
which this future Act should build, propose a structure, and argue why its
scope should be as broad as possible. It is based on what the cybersecurity
research community has long asked for, and on what constitutes clear, hard
legal rules rather than soft ones. Important areas such as cybersecurity should
be taken seriously by regulating them the same way we regulate other types of
critical infrastructure and physical structures, and the regulation should be
uncompromising and logical, to encompass the risks and potential for chaos
which cybersecurity's ubiquitous nature entails.
We find that principles which regulate cybersecurity systems' life-cycles in
detail are needed, as is clearly stating what technology is being used, per
Kerckhoffs's principle, and dismissing the idea of technosolutionism.
Furthermore, carefully analysing risks is always necessary, but so is
understanding when and how systems manufacturers may fail or almost fail. We do
this through the following principles:
Ex Ante and Ex Post Assessment, Safety and Security by Design, Denial of
Obscurity, Dismissal of Infallibility, Systems Acknowledgement, Full
Transparency, Movement towards a Zero-Trust Security Model, Cybersecurity
Resilience, Enforced Circular Risk Management, Dependability, Hazard Analysis
and Mitigation or Limitation, Liability, A Clear Reporting Regime, Enforcement
of Certification and Standards, Mandated Verification of Security, and
Continuous Servicing.
To this, we suggest that the Act employ authorities and mechanisms similar to
those of the GDPR and create strong national authorities to coordinate
inspection and enforcement in each Member State, with ENISA as the top
coordinating organ.
Related papers
- Integrating Cybersecurity Frameworks into IT Security: A Comprehensive Analysis of Threat Mitigation Strategies and Adaptive Technologies [0.0]
The cybersecurity threat landscape is constantly evolving, making it imperative to develop sound frameworks to protect IT structures.
This paper discusses the application of cybersecurity frameworks to IT security, with a focus on the role of such frameworks in addressing the changing nature of cybersecurity threats.
The discussion also singles out technologies such as Artificial Intelligence (AI) and Machine Learning (ML) as the core of real-time threat detection and response mechanisms.
arXiv Detail & Related papers (2025-02-02T03:38:48Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
We propose the Authenticated Cyclic Redundancy Integrity Check (ACRIC).
ACRIC preserves backward compatibility without requiring additional hardware and is protocol agnostic.
We show that ACRIC offers robust security with minimal transmission overhead (1 ms).
arXiv Detail & Related papers (2024-11-21T18:26:05Z)
- SafetyAnalyst: Interpretable, transparent, and steerable safety moderation for AI behavior [56.10557932893919]
We present SafetyAnalyst, a novel AI safety moderation framework.
Given an AI behavior, SafetyAnalyst uses chain-of-thought reasoning to analyze its potential consequences.
It aggregates all harmful and beneficial effects into a harmfulness score using fully interpretable weight parameters.
arXiv Detail & Related papers (2024-10-22T03:38:37Z)
- The Artificial Intelligence Act: critical overview [0.0]
This article provides a critical overview of the recently approved Artificial Intelligence Act.
It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689.
The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose.
arXiv Detail & Related papers (2024-08-30T21:38:02Z)
- AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Critical Infrastructure Protection: Generative AI, Challenges, and Opportunities [3.447031974719732]
Critical National Infrastructure (CNI) encompasses a nation's essential assets that are fundamental to the operation of society and the economy.
Growing cybersecurity threats targeting these infrastructures can potentially interfere with operations and seriously risk national security and public safety.
We examine the intricate issues raised by cybersecurity risks to vital infrastructure, highlighting these systems' vulnerability to different types of cyberattacks.
arXiv Detail & Related papers (2024-05-08T08:08:50Z)
- Assessing The Effectiveness Of Current Cybersecurity Regulations And Policies In The US [0.0]
The study evaluates the impact of these regulations on different sectors and analyzes trends in cybercrime data from 2000 to 2022.
The findings highlight the challenges, successes, and the need for continuous adaptation in the face of evolving cyber threats.
arXiv Detail & Related papers (2024-04-17T15:26:55Z)
- Enhancing Energy Sector Resilience: Integrating Security by Design Principles [20.817229569050532]
Security by design (SbD) is a concept for developing and maintaining systems that are impervious to security attacks.
This document presents the security requirements for implementing SbD in industrial control systems.
arXiv Detail & Related papers (2024-02-18T11:04:22Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.