A Case Study in Acceleration AI Ethics: The TELUS GenAI Conversational Agent
- URL: http://arxiv.org/abs/2501.18038v1
- Date: Wed, 29 Jan 2025 22:57:56 GMT
- Title: A Case Study in Acceleration AI Ethics: The TELUS GenAI Conversational Agent
- Authors: James Brusseau
- Abstract summary: Acceleration ethics addresses the tension between innovation and safety in artificial intelligence. It is composed of five elements: innovation solves innovation problems, innovation is intrinsically valuable, the unknown is encouraging, governance is decentralized, ethics is embedded. The TELUS experience indicates that acceleration AI ethics is a way of maximizing social responsibility through innovation, as opposed to sacrificing social responsibility for innovation.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Acceleration ethics addresses the tension between innovation and safety in artificial intelligence. The acceleration argument is that the most effective way to approach risks raised by innovation is with still more innovating. This paper begins by defining acceleration ethics theoretically. It is composed of five elements: innovation solves innovation problems, innovation is intrinsically valuable, the unknown is encouraging, governance is decentralized, ethics is embedded. Subsequently, the paper illustrates the acceleration framework with a use-case, a generative artificial intelligence language tool developed by the Canadian telecommunications company TELUS. While the purity of theoretical positions is blurred by real-world ambiguities, the TELUS experience indicates that acceleration AI ethics is a way of maximizing social responsibility through innovation, as opposed to sacrificing social responsibility for innovation, or sacrificing innovation for social responsibility.
Related papers
- The California Report on Frontier AI Policy [110.35302787349856]
Continued progress in frontier AI carries the potential for profound advances in scientific discovery, economic productivity, and broader social well-being. As the epicenter of global AI innovation, California has a unique opportunity to continue supporting developments in frontier AI. The report derives policy principles that can inform how California approaches the use, assessment, and governance of frontier AI.
arXiv Detail & Related papers (2025-06-17T23:33:21Z) - Enterprise Architecture as a Dynamic Capability for Scalable and Sustainable Generative AI adoption: Bridging Innovation and Governance in Large Organisations [55.2480439325792]
Generative Artificial Intelligence is a powerful new technology with the potential to boost innovation and reshape governance in many industries. However, organisations face major challenges in scaling GenAI, including technology complexity, governance gaps and resource misalignments. This study explores how Enterprise Architecture Management can meet the complex requirements of GenAI adoption within large enterprises.
arXiv Detail & Related papers (2025-05-09T07:41:33Z) - From Generative AI to Innovative AI: An Evolutionary Roadmap [0.0]
This paper explores the transition from Generative Artificial Intelligence (GenAI) to Innovative Artificial Intelligence (InAI).
In this context, innovation is defined as the ability to generate novel and useful outputs that go beyond mere replication of learned data.
The paper proposes a roadmap for developing AI systems that can generate content and engage in autonomous problem-solving and creative ideation.
arXiv Detail & Related papers (2025-03-14T14:03:28Z) - Responsible Artificial Intelligence Systems: A Roadmap to Society's Trust through Trustworthy AI, Auditability, Accountability, and Governance [37.10526074040908]
This paper explores the concept of a responsible AI system from a holistic perspective. The final goal of the paper is to propose a roadmap in the design of responsible AI systems.
arXiv Detail & Related papers (2025-02-04T14:47:30Z) - Bridging the Communication Gap: Evaluating AI Labeling Practices for Trustworthy AI Development [41.64451715899638]
High-level AI labels, inspired by frameworks like EU energy labels, have been proposed to make the properties of AI models more transparent. This study evaluates AI labeling through qualitative interviews along four key research questions.
arXiv Detail & Related papers (2025-01-21T06:00:14Z) - Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z) - Delegating Responsibilities to Intelligent Autonomous Systems: Challenges and Benefits [1.7205106391379026]
As AI systems operate with autonomy and adaptability, the traditional boundaries of moral responsibility in techno-social systems are being challenged.
This paper explores the evolving discourse on the delegation of responsibilities to intelligent autonomous agents and the ethical implications of such practices.
arXiv Detail & Related papers (2024-11-06T18:40:38Z) - Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z) - Boardwalk Empire: How Generative AI is Revolutionizing Economic Paradigms [0.0]
Deep generative models, an integration of generative and deep learning techniques, excel at creating new data rather than merely analyzing existing data.
By automating design, optimization, and innovation cycles, Generative AI is reshaping core industrial processes.
In the financial sector, it is transforming risk assessment, trading strategies, and forecasting, demonstrating its profound impact.
arXiv Detail & Related papers (2024-10-19T20:57:16Z) - The Dual Imperative: Innovation and Regulation in the AI Era [0.0]
This article addresses the societal costs associated with the lack of regulation in Artificial Intelligence.
Over fifty years of AI research have propelled AI into the mainstream, promising significant economic benefits.
The discourse is polarized between accelerationists, advocating for unfettered technological advancement, and doomers, calling for a slowdown to prevent dystopian outcomes.
arXiv Detail & Related papers (2024-05-23T08:26:25Z) - An ethical study of generative AI from the Actor-Network Theory perspective [3.0224187843434]
We analyze ChatGPT as a case study within the framework of Actor-Network Theory.
We examine the actors and processes of translation involved in the ethical issues related to ChatGPT.
arXiv Detail & Related papers (2024-04-10T02:32:19Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing bias in the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Artificial Intelligence for Real Sustainability? -- What is Artificial Intelligence and Can it Help with the Sustainability Transformation? [0.0]
This article briefly explains, classifies, and theorises AI technology.
It then politically contextualises that analysis in light of the sustainability discourse.
It argues that AI can play a small role in moving towards sustainable societies.
arXiv Detail & Related papers (2023-06-15T15:40:00Z) - Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI's Diffusion versus OpenAI's Dall-E [0.0]
This presentation responds to the debate between innovation and safety by reconfiguring ethics as an innovation accelerator.
The work of ethics is embedded in AI development and application, instead of functioning from outside.
arXiv Detail & Related papers (2022-12-04T14:54:13Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems were found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.