TVS Sidekick: Challenges and Practical Insights from Deploying Large Language Models in the Enterprise
- URL: http://arxiv.org/abs/2509.26482v1
- Date: Tue, 30 Sep 2025 16:29:02 GMT
- Title: TVS Sidekick: Challenges and Practical Insights from Deploying Large Language Models in the Enterprise
- Authors: Paula Reyero Lobo, Kevin Johnson, Bill Buchanan, Matthew Shardlow, Ashley Williams, Samuel Attwood
- Abstract summary: In response to public concern and new regulations for the ethical and responsible use of AI, implementing AI governance frameworks could help to integrate AI within organisations and mitigate associated risks. This paper presents a real-world AI application at TVS Supply Chain Solutions, reporting on the experience developing an AI assistant underpinned by large language models and the ethical, regulatory, and sociotechnical challenges in deployment for enterprise use.
- Score: 5.190160602478253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many enterprises are increasingly adopting Artificial Intelligence (AI) to make internal processes more competitive and efficient. In response to public concern and new regulations for the ethical and responsible use of AI, implementing AI governance frameworks could help to integrate AI within organisations and mitigate associated risks. However, the rapid technological advances and lack of shared ethical AI infrastructures create barriers to their practical adoption in businesses. This paper presents a real-world AI application at TVS Supply Chain Solutions, reporting on the experience developing an AI assistant underpinned by large language models and the ethical, regulatory, and sociotechnical challenges in deployment for enterprise use.
Related papers
- Governing AI Agents [0.2913760942403036]
Article looks at the economic theory of principal-agent problems and the common law doctrine of agency relationships. It identifies problems arising from AI agents, including issues of information asymmetry, discretionary authority, and loyalty. It argues that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.
arXiv Detail & Related papers (2025-01-14T07:55:18Z)
- Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security [56.219994752894294]
We argue that adapting existing processes for AI security reporting is doomed to fail due to fundamental shortcomings in addressing the distinctive characteristics of AI systems. Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting and how the new AI paradigm, AI agents, will further reinforce the need for specialized AI security incident reporting advancements.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- Challenges and Best Practices in Corporate AI Governance: Lessons from the Biopharmaceutical Industry [0.0]
We discuss challenges that any organization attempting to operationalize AI governance will have to face.
These include questions concerning how to define the material scope of AI governance.
We hope to provide project managers, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks with general best practices.
arXiv Detail & Related papers (2024-07-07T12:01:42Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- International Institutions for Advanced AI [47.449762587672986]
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity.
This paper identifies a set of governance functions that could be performed at an international level to address these challenges.
It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations.
arXiv Detail & Related papers (2023-07-10T16:55:55Z)
- A Brief Overview of AI Governance for Responsible Machine Learning Systems [3.222802562733787]
This position paper seeks to present a brief introduction to AI governance, which is a framework designed to oversee the responsible use of AI.
Due to the probabilistic nature of AI, the risks associated with it are far greater than those of traditional technologies.
arXiv Detail & Related papers (2022-11-21T23:48:51Z)
- Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance [0.0]
We present an AI governance framework, which targets organizations that develop and use AI systems.
The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice.
arXiv Detail & Related papers (2022-06-01T08:55:27Z)
- On the Current and Emerging Challenges of Developing Fair and Ethical AI Solutions in Financial Services [1.911678487931003]
We show how practical considerations reveal the gaps between high-level principles and concrete, deployed AI applications.
arXiv Detail & Related papers (2021-11-02T00:15:04Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.