The Dangers of Computational Law and Cybersecurity; Perspectives from
Engineering and the AI Act
- URL: http://arxiv.org/abs/2207.00295v1
- Date: Fri, 1 Jul 2022 09:42:11 GMT
- Title: The Dangers of Computational Law and Cybersecurity; Perspectives from
Engineering and the AI Act
- Authors: Kaspar Rosager Ludvigsen, Shishir Nagaraja, Angela Daly
- Abstract summary: Computational Law has begun to take on the role in society which has been predicted for some time.
In this paper we go through known issues and discuss them at the various levels, from design to the physical realm.
We present three recommendations which are necessary for computational law to function globally.
- Score: 1.332091725929965
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational Law has begun to take on the role in society which has been predicted for some time. Automated decision-making and systems which assist users are now used in various jurisdictions, but with this maturity come certain caveats. Computational Law exists on the platforms which enable it, in this case digital systems, and therefore inherits their flaws; cybersecurity addresses these potential weaknesses. In this paper we go through known issues and discuss them at the various levels, from design to the physical realm. We also look at machine-learning-specific adversarial problems, and we consider computational law in relation to existing and future legislation. Finally, we present three recommendations, which follow ideas in safety and security engineering and which are necessary for computational law to function globally. We find that computational law must seriously consider that it not only faces the same risks as other types of software and computer systems, but that failures within it may cause financial or physical damage, as well as injustice; the consequences of computational legal systems failing are greater than those of failures in mere software and hardware. If a system employs machine learning, it must take note of the very specific dangers this brings, of which data poisoning is the classic example. Computational law must also be explicitly legislated for, which we show is currently not the case in the EU, and the same is true for the cybersecurity aspects that will be relevant to it. There is, however, great hope in the EU's proposed AI Act, which makes an important attempt at bringing the specific problems that Computational Law raises into the legal sphere. Our recommendations for Computational Law and Cybersecurity are: accommodation of threats, adequate use, and keeping humans at the centre of deployment.
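The abstract singles out data poisoning as the classic machine-learning-specific danger. A minimal sketch of label-flipping poisoning follows, assuming a scikit-learn-style pipeline; the synthetic dataset, the logistic-regression model, and the 30% flip rate are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of label-flipping data poisoning; dataset, model,
# and poison rate are illustrative, not from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary task standing in for a legal decision-support model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def poison_labels(y, rate, rng):
    """Flip the labels of a random fraction of the training points."""
    y_poisoned = y.copy()
    n_flip = int(rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, rate=0.3, rng=rng))

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

In a computational-law setting, the same mechanism would turn quiet corruption of training records into systematically wrong automated decisions, which is precisely the kind of failure the paper argues goes beyond ordinary software faults.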
Related papers
- AI threats to national security can be countered through an incident regime [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems.
Our proposed AI incident regime is split into three phases. The first phase revolves around a novel operationalization of what counts as an 'AI incident'.
The second and third phases spell out that AI providers should notify a government agency about incidents, and that the government agency should be involved in amending AI providers' security and safety procedures.
arXiv Detail & Related papers (2025-03-25T17:51:50Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas such as privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z)
- Assessing the Benefits and Risks of Quantum Computers [0.7224497621488283]
We review what is currently known on the potential uses and risks of quantum computers.
We identify two large-scale trends: new approximate methods and the commercial exploration of business-relevant quantum applications.
We conclude there is a credible expectation that quantum computers will be capable of performing computations which are economically impactful.
arXiv Detail & Related papers (2024-01-29T17:21:31Z)
- A Case for AI Safety via Law [0.0]
How to make artificial intelligence (AI) systems safe and aligned with human values is an open research question.
Proposed solutions tend toward relying on human intervention in uncertain situations.
This paper makes a case that effective legal systems are the best way to address AI safety.
arXiv Detail & Related papers (2023-07-31T19:55:27Z)
- Quantitative study about the estimated impact of the AI Act [0.0]
We suggest a systematic approach, which we applied to the initial draft of the AI Act released in April 2021.
We went through several iterations of compiling the list of AI products and projects in and from Germany that the Lernende Systeme platform lists.
It turns out that only about 30% of the AI systems considered would be regulated by the AI Act; the rest would be classified as low-risk.
arXiv Detail & Related papers (2023-03-29T06:23:16Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders and system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible (one such pitfall is sketched after this list).
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Imposing Regulation on Advanced Algorithms [0.0]
The book examines universally applicable patterns in administrative decisions and judicial rulings.
It investigates the role and significance of national and indeed supranational regulatory bodies for advanced algorithms.
It considers ENISA, an EU agency that focuses on network and information security, as an interesting candidate for a European regulator of advanced algorithms.
arXiv Detail & Related papers (2020-05-16T20:26:54Z)
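The "Dos and Don'ts" entry above flags subtle pitfalls in learning-based security systems; one classic example from that literature is temporal data snooping, where randomly splitting time-ordered data lets the model train on the future it will be evaluated on. The sketch below illustrates this under stated assumptions: the drifting synthetic data and the random-forest model are hypothetical stand-ins, not taken from that paper.

```python
# A minimal sketch of temporal data snooping. Synthetic stand-in for
# time-ordered security telemetry; not from the referenced paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Time-ordered samples whose feature distribution drifts upward over
# time; the true label depends on the drift-adjusted first feature.
n = 4000
drift = np.arange(n) / n
X = rng.normal(size=(n, 10)) + drift[:, None]
y = (X[:, 0] - drift + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = RandomForestClassifier(random_state=0)

# Pitfall: a random split mixes past and future, so the model is
# evaluated on time periods it has already (partially) seen.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)
print("random split accuracy:  ", model.fit(X_tr, y_tr).score(X_te, y_te))

# Sounder evaluation: train strictly on the past, test on the future.
cut = int(0.75 * n)
print("temporal split accuracy:",
      model.fit(X[:cut], y[:cut]).score(X[cut:], y[cut:]))
```

On data like this, the random split typically reports a flatteringly high accuracy, while the temporal split better reflects how the model would fare on genuinely unseen future samples.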