Multi-Scenario Empirical Assessment of Agile Governance Theory: A
Technical Report
- URL: http://arxiv.org/abs/2307.13635v1
- Date: Mon, 3 Jul 2023 18:50:36 GMT
- Title: Multi-Scenario Empirical Assessment of Agile Governance Theory: A
Technical Report
- Authors: Alexandre J. H. de O. Luna, Marcelo L. M. Marinho
- Abstract summary: Agile Governance Theory (AGT) has emerged as a potential model for organizational chains of responsibility across business units and teams.
This study aims to assess how AGT is reflected in practice.
- Score: 55.2480439325792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Context: Agile Governance Theory (AGT) has emerged as a potential model for
organizational chains of responsibility across business units and teams.
Objective: This study aims to assess how AGT is reflected in practice. Method:
AGT was operationalized down into 16 testable hypotheses. All hypotheses were
tested by arranging eight theoretical scenarios with 118 practitioners from 86
organizations and 19 countries who completed an in-depth explanatory
scenario-based survey. The feedback results were analyzed using Structural
Equation Modeling (SEM) and Confirmatory Factor Analysis (CFA). Results: The
analyses supported key theory components and hypotheses, such as mediation
between agile capabilities and business operations through governance
capabilities. Conclusion: This study supports the theory and suggests that AGT
can assist teams in gaining a better understanding of their organization's
governance in an agile context. Such an understanding can help remove the
delays and misunderstandings that arise from unclear decision-making channels,
which can jeopardize the fulfillment of the overall strategy.
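The mediation finding reported above (agile capabilities influencing business operations through governance capabilities) can be illustrated with a minimal ordinary-least-squares mediation sketch. The data, variable names, and effect sizes below are simulated assumptions for illustration only, not the authors' survey data or their SEM model:

```python
# Hypothetical sketch of the mediation structure described in the abstract:
# agile capabilities -> governance capabilities -> business operations.
import numpy as np

rng = np.random.default_rng(42)
n = 118  # mirrors the number of practitioners surveyed

# Simulated continuous proxies for the latent constructs (an assumption)
agile = rng.normal(size=n)
governance = 0.6 * agile + rng.normal(scale=0.5, size=n)
business = 0.7 * governance + 0.1 * agile + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """Least-squares slope coefficients, with an intercept column added."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols(agile[:, None], governance)[0]               # agile -> governance
b, c_prime = ols(np.column_stack([governance, agile]), business)
indirect = a * b                                      # mediated (indirect) effect
total = ols(agile[:, None], business)[0]              # total effect of agile

# For linear OLS the total effect decomposes exactly: total = direct + indirect
print(f"indirect={indirect:.3f}, direct={c_prime:.3f}, total={total:.3f}")
```

A full analysis like the paper's would instead fit a structural equation model over the survey items; the sketch only shows why a positive indirect effect supports the governance-mediation hypothesis.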
Related papers
- RL-STaR: Theoretical Analysis of Reinforcement Learning Frameworks for Self-Taught Reasoner [2.779063752888881]
The self-taught reasoner (STaR) framework uses reinforcement learning to automatically generate reasoning steps.
STaR and its variants have demonstrated empirical success, but a theoretical foundation explaining these improvements is lacking.
This work provides a theoretical framework for understanding the effectiveness of reinforcement learning on CoT reasoning and STaR.
arXiv Detail & Related papers (2024-10-31T13:17:53Z)
- Make LLMs better zero-shot reasoners: Structure-orientated autonomous reasoning [52.83539473110143]
We introduce a novel structure-oriented analysis method to help Large Language Models (LLMs) better understand a question.
To further improve the reliability of complex question-answering tasks, we propose a multi-agent reasoning system, Structure-oriented Autonomous Reasoning Agents (SARA).
Extensive experiments verify the effectiveness of the proposed reasoning system. Surprisingly, in some cases, the system even surpasses few-shot methods.
arXiv Detail & Related papers (2024-10-18T05:30:33Z)
- StructRAG: Boosting Knowledge Intensive Reasoning of LLMs via Inference-time Hybrid Information Structurization [94.31508613367296]
Retrieval-augmented generation (RAG) is a key means of effectively enhancing large language models (LLMs).
We propose StructRAG, which can identify the optimal structure type for the task at hand, reconstruct original documents into this structured format, and infer answers based on the resulting structure.
Experiments show that StructRAG achieves state-of-the-art performance, particularly excelling in challenging scenarios.
arXiv Detail & Related papers (2024-10-11T13:52:44Z)
- RATT: A Thought Structure for Coherent and Correct LLM Reasoning [23.28162642780579]
We introduce the Retrieval Augmented Thought Tree (RATT), a novel thought structure that considers both overall logical soundness and factual correctness at each step of the thinking process.
A range of experiments on different types of tasks showcases that the RATT structure significantly outperforms existing methods in factual correctness and logical coherence.
arXiv Detail & Related papers (2024-06-04T20:02:52Z)
- Self-Discover: Large Language Models Self-Compose Reasoning Structures [136.48389510481758]
We introduce SELF-DISCOVER, a framework for self-discovering task-intrinsic reasoning structures.
SELF-DISCOVER substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks.
We show that the self-discovered reasoning structures are universally applicable across model families.
arXiv Detail & Related papers (2024-02-06T01:13:53Z)
- Understanding What Affects the Generalization Gap in Visual Reinforcement Learning: Theory and Empirical Evidence [53.51724434972605]
This paper theoretically identifies the key factors that contribute to the generalization gap when the testing environment contains distractors.
Our theory indicates that minimizing the representation distance between training and testing environments, which aligns with human intuition, is the most critical factor in reducing the generalization gap.
arXiv Detail & Related papers (2024-02-05T03:27:52Z)
- A Principled Framework for Knowledge-enhanced Large Language Model [58.1536118111993]
Large Language Models (LLMs) are versatile, yet they often falter in tasks requiring deep and reliable reasoning.
This paper introduces a rigorously designed framework for creating LLMs that effectively anchor knowledge and employ a closed-loop reasoning process.
arXiv Detail & Related papers (2023-11-18T18:10:02Z)
- Generalizing Goal-Conditioned Reinforcement Learning with Variational Causal Reasoning [24.09547181095033]
A causal graph is a structure built upon the relations between objects and events.
We propose a framework with theoretical performance guarantees that alternates between two steps.
Our performance improvement is attributed to the virtuous cycle of causal discovery, transition modeling, and policy training.
arXiv Detail & Related papers (2022-07-19T05:31:16Z)
- Principles to Practices for Responsible AI: Closing the Gap [0.1749935196721634]
We argue that an impact assessment framework is a promising approach to close the principles-to-practices gap.
We review a case study of AI's use in forest ecosystem restoration, demonstrating how an impact assessment framework can translate into effective and responsible AI practices.
arXiv Detail & Related papers (2020-06-08T16:04:44Z)
- Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing [6.654552816487819]
We present GAQCorpus: the first large-scale English multi-domain (community Q&A forums, debate forums, review forums) corpus annotated with theory-based AQ scores.
We demonstrate the feasibility of large-scale AQ annotation, show that exploiting relations between dimensions yields performance improvements, and explore the synergies between theory-based prediction and practical AQ assessment.
arXiv Detail & Related papers (2020-06-01T10:39:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.