Trustworthy, responsible, ethical AI in manufacturing and supply chains:
synthesis and emerging research questions
- URL: http://arxiv.org/abs/2305.11581v1
- Date: Fri, 19 May 2023 10:43:06 GMT
- Title: Trustworthy, responsible, ethical AI in manufacturing and supply chains:
synthesis and emerging research questions
- Authors: Alexandra Brintrup, George Baryannis, Ashutosh Tiwari, Svetan Ratchev,
Giovanna Martinez-Arellano, Jatinder Singh
- Abstract summary: We explore the applicability of responsible, ethical, and trustworthy AI within the context of manufacturing.
We then use a broadened adaptation of a machine learning lifecycle to discuss, through the use of illustrative examples, how each step may result in a given AI trustworthiness concern.
- Score: 59.34177693293227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While the increased use of AI in the manufacturing sector has been widely
noted, there is little understanding of the risks that it may raise in a
manufacturing organisation. Although various high-level frameworks and
definitions have been proposed to consolidate potential risks, practitioners
struggle to understand and implement them.
This lack of understanding exposes the organisation, its workers, as well as
suppliers and clients, to a multitude of risks. In
this paper, we explore and interpret the applicability of responsible, ethical,
and trustworthy AI within the context of manufacturing. We then use a broadened
adaptation of a machine learning lifecycle to discuss, through the use of
illustrative examples, how each step may result in a given AI trustworthiness
concern. We additionally propose a number of research questions to the
manufacturing research community, to help guide future research so that
the economic and societal benefits envisaged by AI in manufacturing are
delivered safely and responsibly.
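To make the lifecycle framing concrete, below is a minimal sketch of the kind of step-to-concern mapping the abstract describes. The stage names and the concerns attached to them are illustrative assumptions, not taken from the paper; the authors' broadened lifecycle and examples may differ.

```python
# Illustrative sketch only: pairing simplified ML lifecycle stages with
# example trustworthiness concerns in a manufacturing setting. Stage names
# and concerns are hypothetical, not drawn from the paper.
from dataclasses import dataclass


@dataclass
class LifecycleStage:
    """One step of a (simplified) machine learning lifecycle."""
    name: str
    example_concerns: list[str]  # hypothetical concerns, for illustration


ML_LIFECYCLE = [
    LifecycleStage("data collection", ["worker privacy", "supplier confidentiality"]),
    LifecycleStage("model training", ["bias inherited from historical production data"]),
    LifecycleStage("deployment", ["operator safety", "over-reliance on automation"]),
    LifecycleStage("monitoring", ["drift as processes, machines, or suppliers change"]),
]

if __name__ == "__main__":
    for stage in ML_LIFECYCLE:
        print(f"{stage.name}: {', '.join(stage.example_concerns)}")
```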
Related papers
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Towards Trustworthy AI: A Review of Ethical and Robust Large Language Models [1.7466076090043157]
Large Language Models (LLMs) could transform many fields, but their fast development creates significant challenges for oversight, ethical creation, and building user trust.
This comprehensive review looks at key trust issues in LLMs, such as unintended harms, lack of transparency, vulnerability to attacks, alignment with human values, and environmental impact.
To tackle these issues, we suggest combining ethical oversight, industry accountability, regulation, and public involvement.
arXiv Detail & Related papers (2024-06-01T14:47:58Z)
- The Narrow Depth and Breadth of Corporate Responsible AI Research [3.364518262921329]
We find that the majority of AI firms show limited or no engagement in this critical subfield of AI.
Leading AI firms exhibit significantly lower output in responsible AI research compared to their conventional AI research.
Our results highlight the urgent need for industry to publicly engage in responsible AI research.
arXiv Detail & Related papers (2024-05-20T17:26:43Z)
- The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in research institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results have revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limitations in awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Applications and Societal Implications of Artificial Intelligence in Manufacturing: A Systematic Review [0.3867363075280544]
The study finds that there is a predominantly optimistic outlook in prior literature regarding AI's impact on firms.
The paper draws analogies to historical cases and other examples to provide a contextual perspective on potential societal effects of industrial AI.
arXiv Detail & Related papers (2023-07-25T07:17:37Z)
- An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence [0.0]
We propose guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI.
We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers vis-a-vis the entities that deploy such AI.
We end with recommendations for researchers, developers, operators, as well as regulators and law-makers.
arXiv Detail & Related papers (2021-07-29T03:57:53Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.