Workday's Approach to Secure and Compliant Cloud ERP Systems
- URL: http://arxiv.org/abs/2511.02856v1
- Date: Fri, 31 Oct 2025 12:25:06 GMT
- Title: Workday's Approach to Secure and Compliant Cloud ERP Systems
- Authors: Monu Sharma
- Abstract summary: Workday's compliance with global standards shows its ability to protect critical financial, healthcare, and government data. A comparative review demonstrates enhanced risk management, operational flexibility, and breach mitigation. The paper also explores emerging trends, including the integration of AI, machine learning, and blockchain technologies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Workday's compliance with global standards -- such as GDPR, SOC 2, HIPAA, ISO 27001, and FedRAMP -- shows its ability to protect critical financial, healthcare, and government data. Automated compliance features such as audit trails, behavioral analytics, and continuous reporting streamline the process and reduce manual audit effort. A comparative review demonstrates enhanced risk management, operational flexibility, and breach mitigation. The paper also explores emerging trends, including the integration of AI, machine learning, and blockchain technologies to enhance next-generation threat detection and data integrity. The findings position Workday as a reliable, compliant, and future-ready ERP solution, setting a new benchmark for secure enterprise cloud management.
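The tamper-evident audit trails and data-integrity mechanisms mentioned above can be illustrated with a minimal sketch: a hash-chained log in which each entry commits to its predecessor, so any edit breaks the chain. This is a generic illustration of the technique, not Workday's actual implementation; all names and fields are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(trail, actor, action):
    """Append a tamper-evident entry: each record hashes its predecessor."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    record = {
        "actor": actor,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(record)
    return record

def verify_trail(trail):
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for record in trail:
        if record["prev_hash"] != prev_hash:
            return False
        payload = {k: v for k, v in record.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if digest != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

trail = []
append_entry(trail, "analyst@example.com", "viewed payroll report")
append_entry(trail, "admin@example.com", "changed access policy")
```

An auditor (or a continuous-reporting job) can then call `verify_trail` on the stored log; a single altered field invalidates every subsequent hash, which is the same integrity property blockchain-based designs generalize.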
Related papers
- Towards Verifiably Safe Tool Use for LLM Agents [53.55621104327779]
Large language model (LLM)-based AI agents extend capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents. LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records. Current approaches to mitigate these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z) - The SMART+ Framework for AI Systems [0.2039123720459736]
Artificial Intelligence (AI) systems are now an integral part of multiple industries. In clinical research, AI supports automated adverse event detection in clinical trials, patient eligibility screening for protocol enrollment, and data quality validation. Beyond healthcare, AI is transforming finance through real-time fraud detection, automated loan risk assessment, and algorithmic decision-making.
arXiv Detail & Related papers (2025-12-09T13:33:14Z) - AI-Enabled Orchestration of Event-Driven Business Processes in Workday ERP for Healthcare Enterprises [0.0]
This study proposes an AI-enabled event-driven orchestration framework within Workday ERP. The framework intelligently synchronizes financial and supply-chain operations across distributed healthcare entities. Results confirm that embedding AI capabilities into Workday's event-based architecture enhances operational resilience, governance, and scalability.
arXiv Detail & Related papers (2025-11-19T20:18:10Z) - Comparative Security Performance of Workday Cloud ERP Across Key Dimensions [4.994627793890095]
This study examines five key dimensions -- confidentiality, integrity, availability, authentication, and compliance -- using weighted sub-metric analysis and qualitative document review. The platform uses encryption protocols, granular access controls, network safeguards, and continuous verification mechanisms to enable least-privilege access and adaptive defense.
arXiv Detail & Related papers (2025-11-19T19:57:37Z) - Rethinking Autonomy: Preventing Failures in AI-Driven Software Engineering [1.6766200616088744]
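The weighted sub-metric analysis described in the entry above reduces to a weighted sum of per-dimension scores. A minimal sketch follows; the weights and scores here are hypothetical placeholders, not figures from the study.

```python
def weighted_score(sub_scores, weights):
    """Aggregate per-dimension sub-metric scores into one weighted score.

    Both arguments are dicts keyed by dimension; weights must sum to 1.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(sub_scores[d] * weights[d] for d in weights)

# Hypothetical scores (0-10) and weights for the study's five dimensions.
scores = {
    "confidentiality": 9.0,
    "integrity": 8.5,
    "availability": 9.5,
    "authentication": 8.0,
    "compliance": 9.0,
}
weights = {
    "confidentiality": 0.25,
    "integrity": 0.20,
    "availability": 0.20,
    "authentication": 0.15,
    "compliance": 0.20,
}
overall = weighted_score(scores, weights)
```

With these placeholder values the overall score is 8.85; in practice the weights would come from the study's methodology rather than being chosen ad hoc.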
The SAFE-AI Framework is a holistic approach emphasizing Safety, Auditability, Feedback, and Explainability. We introduce a novel taxonomy of AI behaviors categorizing suggestive, generative, autonomous, and destructive actions to guide risk assessment and oversight. This paper provides a roadmap for responsible AI integration in software engineering, aligning with emerging regulations like the EU AI Act and Canada's AIDA.
arXiv Detail & Related papers (2025-08-15T22:13:54Z) - Machine Learning based Enterprise Financial Audit Framework and High Risk Identification [6.433444278723668]
This study proposes an AI-driven framework for enterprise financial audits and high-risk identification. Using a dataset from the Big Four accounting firms (EY, PwC, Deloitte, KPMG) from 2020 to 2025, the research examines trends in risk assessment, compliance violations, and fraud detection. The study recommends adopting Random Forest as a core model, enhancing features via engineering, and implementing real-time risk monitoring.
arXiv Detail & Related papers (2025-07-08T00:22:49Z) - Rethinking Data Protection in the (Generative) Artificial Intelligence Era [138.07763415496288]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z) - SafeAgent: Safeguarding LLM Agents via an Automated Risk Simulator [77.86600052899156]
Large Language Model (LLM)-based agents are increasingly deployed in real-world applications. We propose AutoSafe, the first framework that systematically enhances agent safety through fully automated synthetic data generation. We show that AutoSafe boosts safety scores by 45% on average and achieves a 28.91% improvement on real-world tasks.
arXiv Detail & Related papers (2025-05-23T10:56:06Z) - Engineering Risk-Aware, Security-by-Design Frameworks for Assurance of Large-Scale Autonomous AI Models [0.0]
This paper presents an enterprise-level, risk-aware, security-by-design approach for large-scale autonomous AI systems. We detail a unified pipeline that delivers provable guarantees of model behavior under adversarial and operational stress. Case studies in national security, open-source model governance, and industrial automation demonstrate measurable reductions in vulnerability and compliance overhead.
arXiv Detail & Related papers (2025-05-09T20:14:53Z) - AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - Resilient Cloud cluster with DevSecOps security model, automates a data analysis, vulnerability search and risk calculation [0.0]
The article presents the main methods of deploying web applications and ways to increase the level of information security at all stages of product development. The cloud cluster was deployed using Terraform and a Jenkins pipeline, which checks program code for vulnerabilities. The algorithm for calculating risk and losses is based on statistical data and the concept of the FAIR information risk assessment methodology.
arXiv Detail & Related papers (2024-12-15T13:11:48Z) - AutoPT: How Far Are We from the End2End Automated Web Penetration Testing? [54.65079443902714]
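The FAIR-style risk calculation mentioned in the entry above is, at its core, loss event frequency multiplied by loss magnitude, often estimated over sampled ranges. A minimal Monte Carlo sketch follows; the ranges are hypothetical and the uniform sampling is a simplification of FAIR's calibrated distributions.

```python
import random

def fair_annualized_loss(freq_min, freq_max, loss_min, loss_max,
                         trials=10_000, seed=42):
    """Monte Carlo estimate of annualized loss in the spirit of FAIR:
    sample loss event frequency and loss magnitude from ranges,
    average frequency x magnitude over many trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        frequency = rng.uniform(freq_min, freq_max)  # events per year
        magnitude = rng.uniform(loss_min, loss_max)  # loss per event
        total += frequency * magnitude
    return total / trials

# Hypothetical figures: 0.5-2 incidents/year, $10k-$50k per incident.
estimate = fair_annualized_loss(0.5, 2.0, 10_000, 50_000)
```

For independent uniform ranges the estimate converges to the product of the midpoints (here 1.25 x $30,000 = $37,500); real FAIR analyses replace the uniforms with calibrated PERT-style distributions and report percentiles, not just the mean.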
We introduce AutoPT, an automated penetration testing agent based on the principle of PSM driven by LLMs.
Our results show that AutoPT outperforms the baseline framework ReAct on the GPT-4o mini model.
arXiv Detail & Related papers (2024-11-02T13:24:30Z) - Enhancing Enterprise Security with Zero Trust Architecture [0.0]
Zero Trust Architecture (ZTA) represents a transformative approach to modern cybersecurity.
ZTA shifts the security paradigm by assuming that no user, device, or system can be trusted by default.
This paper explores the key components of ZTA, such as identity and access management (IAM), micro-segmentation, continuous monitoring, and behavioral analytics.
arXiv Detail & Related papers (2024-10-23T21:53:16Z) - The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
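The ZTA components listed above (IAM, micro-segmentation, continuous verification) can be sketched as a per-request policy check in which nothing is trusted by default. This is an illustrative toy, not a real ZTA product's API; all field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str
    mfa_verified: bool
    device_compliant: bool
    segment: str
    resource_segment: str

def authorize(req, allowed_roles):
    """Zero-trust check: every request must pass every control;
    failing any single check denies access."""
    if req.role not in allowed_roles:        # IAM: least privilege
        return False
    if not req.mfa_verified:                 # continuous identity verification
        return False
    if not req.device_compliant:             # device posture check
        return False
    if req.segment != req.resource_segment:  # micro-segmentation boundary
        return False
    return True

ok = authorize(
    Request("alice", "finance", True, True, "finance-net", "finance-net"),
    allowed_roles={"finance"},
)
```

The key design point is that the checks are conjunctive and re-evaluated on every request, in contrast to perimeter models that trust anything already inside the network.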
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.