A Case Study in Acceleration AI Ethics: The TELUS GenAI Conversational Agent
- URL: http://arxiv.org/abs/2501.18038v1
- Date: Wed, 29 Jan 2025 22:57:56 GMT
- Title: A Case Study in Acceleration AI Ethics: The TELUS GenAI Conversational Agent
- Authors: James Brusseau
- Abstract summary: Acceleration ethics addresses the tension between innovation and safety in artificial intelligence.
It is composed of five elements: innovation solves innovation problems, innovation is intrinsically valuable, the unknown is encouraging, governance is decentralized, ethics is embedded.
The TELUS experience indicates that acceleration AI ethics is a way of maximizing social responsibility through innovation, as opposed to sacrificing social responsibility for innovation.
- Abstract: Acceleration ethics addresses the tension between innovation and safety in artificial intelligence. The acceleration argument is that the most effective way to approach risks raised by innovation is with still more innovating. This paper begins by defining acceleration ethics theoretically. It is composed of five elements: innovation solves innovation problems, innovation is intrinsically valuable, the unknown is encouraging, governance is decentralized, ethics is embedded. Subsequently, the paper illustrates the acceleration framework with a use-case, a generative artificial intelligence language tool developed by the Canadian telecommunications company TELUS. While the purity of theoretical positions is blurred by real-world ambiguities, the TELUS experience indicates that acceleration AI ethics is a way of maximizing social responsibility through innovation, as opposed to sacrificing social responsibility for innovation, or sacrificing innovation for social responsibility.
Related papers
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms.
I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z)
- Shaping AI's Impact on Billions of Lives [27.78474296888659]
We argue for the community of AI practitioners to consciously and proactively work for the common good.
This paper offers a blueprint for a new type of innovation infrastructure.
arXiv Detail & Related papers (2024-12-03T16:29:37Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- The Dual Imperative: Innovation and Regulation in the AI Era [0.0]
This article addresses the societal costs associated with the lack of regulation in Artificial Intelligence.
Over fifty years of AI research have propelled AI into the mainstream, promising significant economic benefits.
The discourse is polarized between accelerationists, advocating for unfettered technological advancement, and doomers, calling for a slowdown to prevent dystopian outcomes.
arXiv Detail & Related papers (2024-05-23T08:26:25Z)
- An ethical study of generative AI from the Actor-Network Theory perspective [3.0224187843434]
We analyze ChatGPT as a case study within the framework of Actor-Network Theory.
We examine the actors and processes of translation involved in the ethical issues related to ChatGPT.
arXiv Detail & Related papers (2024-04-10T02:32:19Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Artificial Intelligence for Real Sustainability? -- What is Artificial Intelligence and Can it Help with the Sustainability Transformation? [0.0]
This article briefly explains, classifies, and theorises AI technology.
It then politically contextualises that analysis in light of the sustainability discourse.
It argues that AI can play a small role in moving towards sustainable societies.
arXiv Detail & Related papers (2023-06-15T15:40:00Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI's Diffusion versus OpenAI's Dall-E [0.0]
This presentation responds by reconfiguring ethics as an innovation accelerator.
The work of ethics is embedded in AI development and application, instead of functioning from outside.
arXiv Detail & Related papers (2022-12-04T14:54:13Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)