AI at work -- Mitigating safety and discriminatory risk with technical
standards
- URL: http://arxiv.org/abs/2108.11844v1
- Date: Thu, 26 Aug 2021 15:13:42 GMT
- Title: AI at work -- Mitigating safety and discriminatory risk with technical
standards
- Authors: Nikolas Becker, Pauline Junginger, Lukas Martinez, Daniel Krupka,
Leonie Beining
- Abstract summary: The paper provides an overview and assessment of existing international, European and German standards.
The paper is part of the research project "ExamAI - Testing and Auditing of AI systems"
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The use of artificial intelligence (AI) and AI methods in the workplace holds
both great opportunities and risks for occupational safety and
discrimination. In addition to legal regulation, technical standards will play
a key role in mitigating such risks by defining technical requirements for the
development and testing of AI systems. This paper provides an overview and
assessment of existing international, European and German standards as well as
those currently under development. The paper is part of the research project
"ExamAI - Testing and Auditing of AI systems" and focuses on the use of AI in
industrial production environments as well as in the realm of human resource
management (HR).
Related papers
- Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems [2.3266896180922187]
We compile an extensive catalog of risk sources and risk management measures for general-purpose AI systems.
This work involves identifying technical, operational, and societal risks across model development, training, and deployment stages.
The catalog is released under a public domain license for ease of direct use by stakeholders in AI governance and standards.
arXiv Detail & Related papers (2024-10-30T21:32:56Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- The Necessity of AI Audit Standards Boards [0.0]
We argue that merely creating auditing standards is not only insufficient but actively harmful, as it proliferates unheeded and inconsistent standards.
Instead, the paper proposes the establishment of an AI Audit Standards Board, responsible for developing and updating auditing methods and standards.
arXiv Detail & Related papers (2024-04-11T15:08:24Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- No Trust without regulation! [0.0]
The explosion in performance of Machine Learning (ML) and the potential of its applications are encouraging us to consider its use in industrial systems.
Yet the issue of safety, and its corollary of regulation and standards, is still too often left to one side.
The European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy and respect European ethical values.
arXiv Detail & Related papers (2023-09-27T09:08:41Z)
- Trustworthy, responsible, ethical AI in manufacturing and supply chains: synthesis and emerging research questions [59.34177693293227]
We explore the applicability of responsible, ethical, and trustworthy AI within the context of manufacturing.
We then use a broadened adaptation of a machine learning lifecycle to discuss, through the use of illustrative examples, how each step may result in a given AI trustworthiness concern.
arXiv Detail & Related papers (2023-05-19T10:43:06Z)
- Legal Provocations for HCI in the Design and Development of Trustworthy Autonomous Systems [2.575172714412997]
We consider a series of legal provocations emerging from the proposed European Union AI Act 2021 (AIA).
The AIA targets AI developments that pose risks to society and to citizens' fundamental rights, introducing mandatory design and development requirements for high-risk AI systems (HRAIS).
These requirements open up new opportunities for HCI that reach beyond established concerns with the ethics and explainability of AI: they situate AI development in human-centered design processes and methods that enable compliance with regulation and foster societal trust in AI.
arXiv Detail & Related papers (2022-06-15T13:03:43Z)
- Beyond Fairness Metrics: Roadblocks and Challenges for Ethical AI in Practice [2.1485350418225244]
We review practical challenges in building and deploying ethical AI at the scale of contemporary industrial and societal uses.
We argue that a holistic consideration of ethics in the development and deployment of AI systems is necessary for building ethical AI in practice.
arXiv Detail & Related papers (2021-08-11T18:33:17Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.