Complying with the EU AI Act
- URL: http://arxiv.org/abs/2307.10458v1
- Date: Wed, 19 Jul 2023 21:04:46 GMT
- Title: Complying with the EU AI Act
- Authors: Jacintha Walters, Diptish Dey, Debarati Bhaumik, Sophie Horsman
- Abstract summary: The EU AI Act is proposed EU legislation governing AI systems.
This paper identifies several compliance categories within the AI Act.
The influence of organization characteristics, such as size and sector, on compliance is examined.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The EU AI Act is the proposed EU legislation concerning AI systems. This
paper identifies several categories of the AI Act. Based on this
categorization, a questionnaire is developed that serves as a tool to offer
insights by creating quantitative data. Analysis of the data shows various
challenges for organizations in different compliance categories. The influence
of organization characteristics, such as size and sector, is examined to
determine the impact on compliance. The paper will also share qualitative data
on which questions were prevalent among respondents, both on the content of the
AI Act as the application. The paper concludes by stating that there is still
room for improvement in terms of compliance with the AIA and refers to a
related project that examines a solution to help these organizations.
Related papers
- Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness [1.5029560229270191]
The topic of fairness in AI has sparked meaningful discussions in the past years.
From a legal perspective, many open questions remain.
The AI Act might present a tremendous step towards bridging non-discrimination law and algorithmic fairness.
arXiv Detail & Related papers (2024-03-29T09:54:09Z) - Assessing AI Impact Assessments: A Classroom Study [14.768235460961876]
Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that provide structured processes to imagine the possible impacts of a proposed AI system, have become an increasingly popular proposal to govern AI systems.
Recent efforts from government or private-sector organizations have proposed many diverse instantiations of AIIAs, which take a variety of forms ranging from open-ended questionnaires to graded score-cards.
We conduct a classroom study at a large research-intensive university (R1) in an elective course focused on the societal and ethical implications of AI.
We find preliminary evidence that impact assessments can influence participants' perceptions of the potential
arXiv Detail & Related papers (2023-11-19T01:00:59Z) - Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z) - Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw)
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
arXiv Detail & Related papers (2023-11-11T04:13:37Z) - APPRAISE: a governance framework for innovation with AI systems [0.0]
The EU Artificial Intelligence Act (AIA) is the first serious legislative attempt to contain the harmful effects of AI systems.
This paper proposes a governance framework for AI innovation.
The framework bridges the gap between strategic variables and responsible value creation.
arXiv Detail & Related papers (2023-09-26T12:20:07Z) - Quantitative study about the estimated impact of the AI Act [0.0]
We suggest a systematic approach, which we applied to the initial draft of the AI Act released in April 2021.
We went through several iterations of compiling the list of AI products and projects in and from Germany that the Lernende Systeme platform lists.
It turns out that only about 30% of the AI systems considered would be regulated by the AI Act; the rest would be classified as low-risk.
arXiv Detail & Related papers (2023-03-29T06:23:16Z) - The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy [67.44950222243865]
We argue that technology tools like Google Maps and Large Language Models (LLM) are often perceived as impartial and objective.
We highlight the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - A Survey on Aspect-Based Sentiment Analysis: Tasks, Methods, and Challenges [58.97831696674075]
ABSA aims to analyze and understand people's opinions at the aspect level.
We provide a new taxonomy for ABSA which organizes existing studies from the axes of concerned sentiment elements.
We summarize the utilization of pre-trained language models for ABSA, which has raised ABSA performance to a new level.
arXiv Detail & Related papers (2022-03-02T12:01:46Z) - A Survey on Methods and Metrics for the Assessment of Explainability under the Proposed AI Act [2.294014185517203]
This study identifies the requirements that such a metric should possess to ease compliance with the AI Act.
Our analysis proposes that metrics to measure the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible, and accessible.
arXiv Detail & Related papers (2021-10-21T14:27:24Z) - A Methodology for Creating AI FactSheets [67.65802440158753]
This paper describes a methodology for creating the form of AI documentation we call FactSheets.
Within each step of the methodology, we describe the issues to consider and the questions to explore.
This methodology will accelerate the broader adoption of transparent AI documentation.
arXiv Detail & Related papers (2020-06-24T15:08:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.