Responsible AI in Business
- URL: http://arxiv.org/abs/2602.13244v1
- Date: Sat, 31 Jan 2026 08:24:20 GMT
- Title: Responsible AI in Business
- Authors: Stephan Sandfuchs, Diako Farooghi, Janis Mohr, Sarah Grewe, Markus Lemmen, Jörg Frochte,
- Abstract summary: It structures Responsible AI along four focal areas that are central for introducing and operating AI systems in a legally compliant, comprehensible, sustainable, and data-sovereign manner. First, it discusses the EU AI Act as a risk-based regulatory framework, including the distinction between provider and deployer roles. Second, it addresses Explainable AI as a basis for transparency and trust, clarifying key notions such as transparency, interpretability, and explainability. Third, it covers Green AI, emphasizing that AI systems should be evaluated not only by performance but also by energy and resource consumption.
- Score: 0.8213113085481418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) and Machine Learning (ML) have moved from research and pilot projects into everyday business operations, with generative AI accelerating adoption across processes, products, and services. This paper introduces the concept of Responsible AI for organizational practice, with a particular focus on small and medium-sized enterprises. It structures Responsible AI along four focal areas that are central for introducing and operating AI systems in a legally compliant, comprehensible, sustainable, and data-sovereign manner. First, it discusses the EU AI Act as a risk-based regulatory framework, including the distinction between provider and deployer roles and the resulting obligations such as risk assessment, documentation, transparency requirements, and AI literacy measures. Second, it addresses Explainable AI as a basis for transparency and trust, clarifying key notions such as transparency, interpretability, and explainability and summarizing practical approaches to make model behavior and decisions more understandable. Third, it covers Green AI, emphasizing that AI systems should be evaluated not only by performance but also by energy and resource consumption, and outlines levers such as model reuse, resource-efficient adaptation, continuous learning, model compression, and monitoring. Fourth, it examines local models (on-premise and edge) as an operating option that supports data protection, control, low latency, and strategic independence, including domain adaptation via fine-tuning and retrieval-augmented generation. The paper concludes with a consolidated set of next steps for establishing governance, documentation, secure operation, sustainability considerations, and an implementation roadmap.
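The abstract names retrieval-augmented generation (RAG) as one route to domain adaptation for local models. As a rough illustration of the pattern, the sketch below retrieves the most relevant document from a local store via bag-of-words cosine similarity and assembles it into a prompt for a local model; the document store, scoring method, and prompt format are illustrative assumptions, not the paper's implementation.

```python
# Minimal RAG sketch: retrieve local context, then build a model prompt.
# Store contents and prompt layout are hypothetical placeholders.
from collections import Counter
import math

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = Counter(tokenize(query))
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(tokenize(d))), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Inject retrieved context ahead of the question for a local LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The EU AI Act classifies AI systems by risk level.",
    "Green AI focuses on energy-efficient model training.",
]
prompt = build_prompt("What does the EU AI Act do?", docs)
print(prompt.splitlines()[1])  # the retrieved context line
```

In a production setup the bag-of-words scorer would typically be replaced by dense embeddings and a vector index, but the control flow (retrieve, then augment the prompt) is the same.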
Related papers
- Toward Third-Party Assurance of AI Systems: Design Requirements, Prototype, and Early Testing [16.53658640529767]
We introduce a third-party AI assurance framework that addresses gaps in AI evaluation. We focus on third-party assurance to prevent conflicts of interest and ensure the credibility and accountability of the process. Our findings show early evidence that our AI assurance framework is sound, comprehensive, and usable across different organizational contexts.
arXiv Detail & Related papers (2026-01-30T00:37:12Z) - Toward Safe and Responsible AI Agents: A Three-Pillar Model for Transparency, Accountability, and Trustworthiness [0.0]
This paper presents a conceptual and operational framework for developing and operating safe and trustworthy AI agents. The framework is based on a Three-Pillar Model grounded in transparency, accountability, and trustworthiness.
arXiv Detail & Related papers (2026-01-09T07:27:43Z) - Towards Responsible and Explainable AI Agents with Consensus-Driven Reasoning [4.226647687395254]
This paper presents a Responsible AI (RAI) and Explainable AI (XAI) agent architecture for production-grade agentic systems, based on multi-model consensus and reasoning-layer governance. In the proposed design, a consortium of heterogeneous LLM and VLM agents independently generates candidate outputs from a shared input context. A dedicated reasoning agent then performs structured consolidation across these outputs, enforcing safety and policy constraints, mitigating hallucinations and bias, and producing auditable, evidence-backed decisions.
arXiv Detail & Related papers (2025-12-25T14:49:25Z) - Empowering Real-World: A Survey on the Technology, Practice, and Evaluation of LLM-driven Industry Agents [63.03252293761656]
This paper systematically reviews the technologies, applications, and evaluation methods of industry agents based on large language models (LLMs). We examine the three key technological pillars that support the advancement of agent capabilities: Memory, Planning, and Tool Use. We provide an overview of the application of industry agents in real-world domains such as digital engineering, scientific discovery, embodied intelligence, collaborative business execution, and complex system simulation.
arXiv Detail & Related papers (2025-10-20T12:46:55Z) - Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned [45.44933002008943]
This white paper presents the TÜV AUSTRIA Trusted AI framework. It is an end-to-end audit catalog and methodology for assessing and certifying machine learning systems. Building on three pillars - Secure Software Development, Functional Requirements, and Ethics & Data Privacy - it translates the high-level obligations of the EU AI Act into specific, testable criteria.
arXiv Detail & Related papers (2025-09-08T17:52:08Z) - Web3 x AI Agents: Landscape, Integrations, and Foundational Challenges [49.69200207497795]
The convergence of Web3 technologies and AI agents represents a rapidly evolving frontier poised to reshape decentralized ecosystems. This paper presents the first and most comprehensive analysis of the intersection between Web3 and AI agents, examining five critical dimensions: landscape, economics, governance, security, and trust mechanisms.
arXiv Detail & Related papers (2025-08-04T15:44:58Z) - An Outlook on the Opportunities and Challenges of Multi-Agent AI Systems [32.48561526824382]
A multi-agent AI system (MAS) is composed of multiple autonomous agents that interact, exchange information, and make decisions based on internal generative models. This paper outlines a formal framework for analyzing MAS, focusing on two core aspects: effectiveness and safety.
arXiv Detail & Related papers (2025-05-23T22:05:19Z) - HH4AI: A methodological Framework for AI Human Rights impact assessment under the EUAI ACT [1.7754875105502606]
The paper highlights AI's transformative nature, driven by autonomy, data, and goal-oriented design. A key challenge is defining and assessing "high-risk" AI systems across industries. It proposes a Fundamental Rights Impact Assessment (FRIA) methodology, a gate-based framework designed to isolate and assess risks.
arXiv Detail & Related papers (2025-03-23T19:10:14Z) - Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z) - Responsible Artificial Intelligence Systems: A Roadmap to Society's Trust through Trustworthy AI, Auditability, Accountability, and Governance [37.10526074040908]
This paper explores the concept of a responsible AI system from a holistic perspective. The final goal of the paper is to propose a roadmap for the design of responsible AI systems.
arXiv Detail & Related papers (2025-02-04T14:47:30Z) - Bridging the Communication Gap: Evaluating AI Labeling Practices for Trustworthy AI Development [41.64451715899638]
High-level AI labels, inspired by frameworks like EU energy labels, have been proposed to make the properties of AI models more transparent. This study evaluates AI labeling through qualitative interviews along four key research questions.
arXiv Detail & Related papers (2025-01-21T06:00:14Z) - Decentralized Governance of Autonomous AI Agents [0.0]
ETHOS is a decentralized governance (DeGov) model leveraging Web3 technologies, including blockchain, smart contracts, and decentralized autonomous organizations (DAOs). It establishes a global registry for AI agents, enabling dynamic risk classification, proportional oversight, and automated compliance monitoring. By integrating philosophical principles of rationality, ethical grounding, and goal alignment, ETHOS aims to create a robust research agenda for promoting trust, transparency, and participatory governance.
arXiv Detail & Related papers (2024-12-22T18:01:49Z) - OML: A Primitive for Reconciling Open Access with Owner Control in AI Model Distribution [35.68672391812135]
We introduce OML, a primitive that enables a new distribution paradigm for AI models. OML can be freely distributed for local execution while maintaining cryptographically enforced usage authorization. This work opens a new research direction at the intersection of cryptography, machine learning, and mechanism design.
arXiv Detail & Related papers (2024-11-01T18:46:03Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all listed content) and is not responsible for any consequences arising from its use.