On the Effectiveness of Minisum Approval Voting in an Open Strategy
Setting: An Agent-Based Approach
- URL: http://arxiv.org/abs/2009.04912v2
- Date: Fri, 25 Sep 2020 15:27:58 GMT
- Title: On the Effectiveness of Minisum Approval Voting in an Open Strategy
Setting: An Agent-Based Approach
- Authors: Joop van de Heijning, Stephan Leitner, Alexandra Rausch
- Abstract summary: This work researches the impact of including a wider range of participants in the strategy-making process on the performance of organizations.
Agent-based simulation demonstrates that the increased number of ideas generated from larger and more diverse crowds and subsequent preference aggregation lead to rapid discovery of higher peaks in the organization's performance landscape.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work researches the impact of including a wider range of participants in
the strategy-making process on the performance of organizations which operate
in either moderately or highly complex environments. Agent-based simulation
demonstrates that the increased number of ideas generated from larger and
more diverse crowds and subsequent preference aggregation lead to rapid discovery of
higher peaks in the organization's performance landscape. However, this is not
the case when the expansion in the number of participants is small. The results
confirm the most frequently mentioned benefit in the Open Strategy literature:
the discovery of better-performing strategies.
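The aggregation rule studied in the paper, minisum approval voting, selects the outcome that minimizes the total Hamming distance to the participants' approval ballots. As a minimal sketch of that rule (not the authors' simulation code; the ballot encoding, the fixed outcome size k, and the tie-breaking are illustrative assumptions), the winning size-k set is simply the k most-approved alternatives:

```python
def minisum_outcome(ballots, k):
    """Minisum approval voting over binary ballots.

    ballots: list of 0/1 vectors, one per participant, over m alternatives.
    Returns the 0/1 outcome vector with exactly k ones that minimizes the
    total Hamming distance to the ballots, i.e. the k most-approved
    alternatives (ties broken by index here, an illustrative assumption).
    """
    m = len(ballots[0])
    approvals = [sum(b[j] for b in ballots) for j in range(m)]
    winners = set(sorted(range(m), key=lambda j: (-approvals[j], j))[:k])
    return [1 if j in winners else 0 for j in range(m)]

# Example: three participants approving among four candidate strategy elements.
print(minisum_outcome([[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0]], k=2))  # [1, 1, 0, 0]
```

Without the size constraint, the same rule reduces to approving every alternative supported by more than half of the participants.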
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- Accelerating Task Generalisation with Multi-Level Hierarchical Options [1.6574413179773757]
Fracture Cluster Options (FraCOs) is a hierarchical reinforcement learning method that achieves state-of-the-art performance on difficult generalisation tasks.
We evaluate FraCOs against state-of-the-art deep reinforcement learning algorithms in several complex procedurally generated environments.
arXiv Detail & Related papers (2024-11-05T11:00:09Z)
- Action abstractions for amortized sampling [49.384037138511246]
We propose an approach to incorporate the discovery of action abstractions, or high-level actions, into the policy optimization process.
Our approach involves iteratively extracting action subsequences commonly used across many high-reward trajectories and 'chunking' them into a single action that is added to the action space.
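As a purely generic illustration of the chunking idea described above (not the paper's algorithm; the reward cutoff, n-gram lengths, and function name are assumptions), one could count action subsequences across high-return trajectories and promote the most frequent one to a macro-action:

```python
from collections import Counter

def most_frequent_chunk(trajectories, returns, top_frac=0.1, max_len=3):
    """Return the most common action subsequence (length 2..max_len)
    among the top `top_frac` fraction of trajectories by return."""
    n_top = max(1, int(len(returns) * top_frac))
    cutoff = sorted(returns, reverse=True)[n_top - 1]
    elite = [t for t, r in zip(trajectories, returns) if r >= cutoff]
    counts = Counter(
        tuple(traj[i:i + n])
        for traj in elite
        for n in range(2, max_len + 1)
        for i in range(len(traj) - n + 1)
    )
    return counts.most_common(1)[0][0] if counts else None
```

The returned tuple would then be appended to the action space as a single high-level action, and the procedure repeated.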
arXiv Detail & Related papers (2024-10-19T19:22:50Z)
- Large Multimodal Agents: A Survey [78.81459893884737]
Large language models (LLMs) have achieved superior performance in powering text-based AI agents.
There is an emerging research trend focused on extending these LLM-powered AI agents into the multimodal domain.
This review aims to provide valuable insights and guidelines for future research in this rapidly evolving field.
arXiv Detail & Related papers (2024-02-23T06:04:23Z)
- AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents [76.95062553043607]
Evaluating large language models (LLMs) is essential for understanding their capabilities and facilitating their integration into practical applications.
We introduce AgentBoard, a pioneering comprehensive benchmark and accompanying open-source evaluation framework tailored to the analytical evaluation of LLM agents.
arXiv Detail & Related papers (2024-01-24T01:51:00Z)
- Quantifying Agent Interaction in Multi-agent Reinforcement Learning for Cost-efficient Generalization [63.554226552130054]
Generalization poses a significant challenge in Multi-agent Reinforcement Learning (MARL).
The extent to which an agent is influenced by unseen co-players depends on the agent's policy and the specific scenario.
We present the Level of Influence (LoI), a metric quantifying the interaction intensity among agents within a given scenario and environment.
arXiv Detail & Related papers (2023-10-11T06:09:26Z)
- Enhance Multi-domain Sentiment Analysis of Review Texts through Prompting Strategies [1.335032286337391]
We formulate the process of prompting for sentiment analysis tasks and introduce two novel strategies tailored for sentiment analysis.
We conduct comparative experiments on three distinct domain datasets to evaluate the effectiveness of the proposed sentiment analysis strategies.
The results demonstrate that the adoption of the proposed prompting strategies leads to an improvement in sentiment analysis accuracy.
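The two strategies are not spelled out in the summary above, so the following is only a generic sketch of prompt construction for domain-specific sentiment analysis (both templates and the `strategy` parameter are assumptions, not the paper's prompts):

```python
def build_sentiment_prompt(review: str, domain: str, strategy: str = "role") -> str:
    """Build a sentiment-classification prompt for a review text."""
    if strategy == "role":
        # Role-based prompting: tell the model what kind of analyst it is.
        return (
            f"You are a sentiment analyst for {domain} reviews.\n"
            f"Classify the following review as positive, negative, or neutral.\n"
            f"Review: {review}\nSentiment:"
        )
    # Few-shot prompting: prepend a labeled in-domain example.
    return (
        f"Review: This {domain} product exceeded my expectations.\nSentiment: positive\n"
        f"Review: {review}\nSentiment:"
    )
```

The resulting string would then be passed to whichever LLM is being evaluated.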
arXiv Detail & Related papers (2023-09-05T08:44:23Z)
- Strategically Efficient Exploration in Competitive Multi-agent Reinforcement Learning [25.041622707261897]
This work seeks to understand the role of optimistic exploration in non-cooperative multi-agent settings.
We show that, in zero-sum games, optimistic exploration can cause the learner to waste time sampling parts of the state space that are irrelevant to strategic play.
To address this issue, we introduce a formal notion of strategically efficient exploration in Markov games, and use this to develop two strategically efficient learning algorithms for finite Markov games.
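For reference, the optimistic exploration that the entry argues can be wasteful in zero-sum games is typically implemented as a count-based bonus on the action values; the sketch below shows that generic baseline (not the paper's strategically efficient algorithms; the bonus form and constant `c` are assumptions):

```python
import math

def optimistic_action(q, counts, state, actions, c=1.0):
    """Pick the action maximizing Q plus a UCB-style exploration bonus
    that shrinks as the (state, action) visit count grows."""
    total = sum(counts.get((state, a), 0) for a in actions) + 1
    def score(a):
        n = counts.get((state, a), 0)
        if n == 0:
            return float("inf")  # optimism: untried actions look best
        return q.get((state, a), 0.0) + c * math.sqrt(math.log(total) / n)
    return max(actions, key=score)
```

In zero-sum games, such a bonus can pull the learner toward parts of the state space that a strategic opponent makes irrelevant, which is the inefficiency the entry describes.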
arXiv Detail & Related papers (2021-07-30T15:22:59Z)
- Heterogeneous Explore-Exploit Strategies on Multi-Star Networks [0.0]
We study a class of distributed bandit problems in which agents communicate over a multi-star network.
We propose new heterogeneous explore-exploit strategies, using the multi-star as a model of an irregular network graph.
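The strategies themselves are not specified in the summary, so the following is only a generic sketch of a networked bandit agent (UCB1 with optionally shared observations; the class name and sharing scheme are assumptions, not the paper's strategies):

```python
import math

class NetworkedUCB1Agent:
    """UCB1 bandit agent that can also incorporate reward observations
    shared by its neighbours on the communication graph."""

    def __init__(self, n_arms, c=2.0):
        self.counts = [0] * n_arms
        self.sums = [0.0] * n_arms
        self.c = c

    def observe(self, arm, reward):
        # Called for the agent's own pulls and for any shared observations.
        self.counts[arm] += 1
        self.sums[arm] += reward

    def select_arm(self):
        t = sum(self.counts) + 1
        def ucb(a):
            if self.counts[a] == 0:
                return float("inf")
            mean = self.sums[a] / self.counts[a]
            return mean + math.sqrt(self.c * math.log(t) / self.counts[a])
        return max(range(len(self.counts)), key=ucb)
```

On a multi-star graph, hub agents receive far more shared observations than leaf agents, which is the kind of asymmetry heterogeneous strategies can exploit.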
arXiv Detail & Related papers (2020-09-02T20:56:49Z)