Complying with the EU AI Act
- URL: http://arxiv.org/abs/2307.10458v1
- Date: Wed, 19 Jul 2023 21:04:46 GMT
- Title: Complying with the EU AI Act
- Authors: Jacintha Walters, Diptish Dey, Debarati Bhaumik, Sophie Horsman
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The EU AI Act is the proposed EU legislation concerning AI systems. This
paper identifies several compliance categories within the AI Act. Based on this
categorization, a questionnaire is developed that serves as a tool to offer
insights by generating quantitative data. Analysis of the data shows various
challenges for organizations in different compliance categories. The influence
of organization characteristics, such as size and sector, is examined to
determine their impact on compliance. The paper also shares qualitative data
on which questions were prevalent among respondents, both on the content of the
AI Act and on its application. The paper concludes that there is still
room for improvement in terms of compliance with the AIA and refers to a
related project that examines a solution to help these organizations.
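The methodology above, a questionnaire producing per-organization compliance data that is then broken down by organization characteristics such as size and sector, can be sketched as follows. This is a minimal illustrative sketch only: the field names, group labels, and scores are assumptions for demonstration, not data from the paper.

```python
# Illustrative sketch (not the paper's dataset): compare mean questionnaire
# compliance scores across groups defined by one organization characteristic.
from collections import defaultdict

# Hypothetical questionnaire responses, one record per organization.
responses = [
    {"size": "SME",   "sector": "finance", "compliance_score": 0.62},
    {"size": "SME",   "sector": "health",  "compliance_score": 0.48},
    {"size": "large", "sector": "finance", "compliance_score": 0.81},
    {"size": "large", "sector": "health",  "compliance_score": 0.74},
]

def mean_score_by(records, characteristic):
    """Average compliance score grouped by one organization characteristic."""
    groups = defaultdict(list)
    for r in records:
        groups[r[characteristic]].append(r["compliance_score"])
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

print(mean_score_by(responses, "size"))
print(mean_score_by(responses, "sector"))
```

With a real survey dataset, the same split-apply-combine step would be run per compliance category to surface which groups of organizations face the greatest challenges.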
Related papers
- Automation Bias in the AI Act: On the Legal Implications of Attempting to De-Bias Human Oversight of AI [0.0]
This paper examines the legal implications of the explicit mentioning of automation bias (AB) in the Artificial Intelligence Act (AIA)
The AIA mandates human oversight for high-risk AI systems and requires providers to enable awareness of AB.
arXiv Detail & Related papers (2025-02-14T09:26:59Z) - On Algorithmic Fairness and the EU Regulations [0.2538209532048867]
The paper discusses algorithmic fairness, focusing on non-discrimination and important laws in the European Union (EU).
The discussion is based on EU regulations recently enacted for artificial intelligence (AI).
The paper contributes to algorithmic fairness research with a few legal insights, also enlarging and strengthening the growing research domain of compliance in software engineering.
arXiv Detail & Related papers (2024-11-13T06:23:54Z) - The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA).
This article outlines the main building blocks of a model template for the FRIA.
It can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act)
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin the decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA)
The proposed methodology is tested in concrete case studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z) - Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness [1.5029560229270191]
The topic of fairness in AI has sparked meaningful discussions in the past years.
From a legal perspective, many open questions remain.
The AI Act might present a tremendous step towards bridging these two approaches.
arXiv Detail & Related papers (2024-03-29T09:54:09Z) - Assessing AI Impact Assessments: A Classroom Study [14.768235460961876]
Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that provide structured processes to imagine the possible impacts of a proposed AI system, have become an increasingly popular proposal to govern AI systems.
Recent efforts from government or private-sector organizations have proposed many diverse instantiations of AIIAs, which take a variety of forms ranging from open-ended questionnaires to graded score-cards.
We conduct a classroom study at a large research-intensive university (R1) in an elective course focused on the societal and ethical implications of AI.
We find preliminary evidence that impact assessments can influence participants' perceptions of the potential
arXiv Detail & Related papers (2023-11-19T01:00:59Z) - Responsible AI Considerations in Text Summarization Research: A Review
of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z) - APPRAISE: a governance framework for innovation with AI systems [0.0]
The EU Artificial Intelligence Act (AIA) is the first serious legislative attempt to contain the harmful effects of AI systems.
This paper proposes a governance framework for AI innovation.
The framework bridges the gap between strategic variables and responsible value creation.
arXiv Detail & Related papers (2023-09-26T12:20:07Z) - Fairness in Agreement With European Values: An Interdisciplinary
Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - A Survey on Aspect-Based Sentiment Analysis: Tasks, Methods, and
Challenges [58.97831696674075]
ABSA aims to analyze and understand people's opinions at the aspect level.
We provide a new taxonomy for ABSA which organizes existing studies from the axes of concerned sentiment elements.
We summarize the utilization of pre-trained language models for ABSA, which has advanced ABSA performance to a new level.
arXiv Detail & Related papers (2022-03-02T12:01:46Z) - A Survey on Methods and Metrics for the Assessment of Explainability
under the Proposed AI Act [2.294014185517203]
This study identifies the requirements that such a metric should possess to ease compliance with the AI Act.
Our analysis proposes that metrics to measure the kind of explainability endorsed by the proposed AI Act should be risk-focused, model-agnostic, goal-aware, intelligible, and accessible.
arXiv Detail & Related papers (2021-10-21T14:27:24Z) - A Methodology for Creating AI FactSheets [67.65802440158753]
This paper describes a methodology for creating the form of AI documentation we call FactSheets.
Within each step of the methodology, we describe the issues to consider and the questions to explore.
This methodology will accelerate the broader adoption of transparent AI documentation.
arXiv Detail & Related papers (2020-06-24T15:08:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.