Fairness in Multi-Agent Planning
- URL: http://arxiv.org/abs/2212.00506v2
- Date: Mon, 22 May 2023 10:55:25 GMT
- Title: Fairness in Multi-Agent Planning
- Authors: Alberto Pozanco, Daniel Borrajo
- Abstract summary: This paper adapts well-known fairness schemes to Multi-Agent Planning (MAP).
It introduces two novel approaches to generate cost-aware fair plans.
Empirical results in several standard MAP benchmarks show that these approaches outperform different baselines.
- Score: 2.7184224088243356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In cooperative Multi-Agent Planning (MAP), a set of goals has to be achieved
by a set of agents. Regardless of whether they pre-assign goals to agents or
directly search for a solution without any goal assignment, most previous
works have not focused on a fair distribution or achievement of goals across
agents. This paper adapts well-known
fairness schemes to MAP, and introduces two novel approaches to generate
cost-aware fair plans. The first one solves an optimization problem to
pre-assign goals to agents, and then solves a centralized MAP task using that
assignment. The second one consists of a planning-based compilation that allows
solving the joint problem of goal assignment and planning while taking into
account the given fairness scheme. Empirical results in several standard MAP
benchmarks show that these approaches outperform different baselines. They also
show that there is no need to sacrifice much plan cost to generate fair plans.
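As an illustration of the first approach, here is a minimal, hypothetical Python sketch (not the paper's actual formulation) of a cost-aware, egalitarian-style goal pre-assignment: goals are distributed greedily so that the maximum estimated per-agent cost stays low, and the resulting assignment would then be handed to a centralized planner. The agent names, goal names, and cost estimates are illustrative assumptions.

```python
def egalitarian_assignment(cost, agents, goals):
    """Greedily pre-assign goals to agents, balancing estimated cost.

    cost[(agent, goal)] is a heuristic estimate of how expensive it is for
    `agent` to achieve `goal` (e.g., a relaxed-plan or distance estimate).
    Goals are assigned hardest-first to the agent whose total estimated
    cost stays lowest, an egalitarian (min-max style) criterion.
    """
    load = {a: 0.0 for a in agents}      # estimated cost assigned so far
    assignment = {a: [] for a in agents}
    hardest_first = sorted(goals,
                           key=lambda g: max(cost[(a, g)] for a in agents),
                           reverse=True)
    for g in hardest_first:
        best = min(agents, key=lambda a: load[a] + cost[(a, g)])
        assignment[best].append(g)
        load[best] += cost[(best, g)]
    return assignment, load


if __name__ == "__main__":
    agents = ["a1", "a2"]
    goals = ["g1", "g2", "g3", "g4"]
    # Hypothetical cost estimates; in MAP these would come from a planning heuristic.
    cost = {("a1", "g1"): 2, ("a1", "g2"): 5, ("a1", "g3"): 4, ("a1", "g4"): 1,
            ("a2", "g1"): 3, ("a2", "g2"): 2, ("a2", "g3"): 6, ("a2", "g4"): 2}
    assignment, load = egalitarian_assignment(cost, agents, goals)
    print(assignment)  # per-agent goal sets, to be fed to a centralized planner
    print(load)        # balanced estimated per-agent costs
```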
Related papers
- Propose, Assess, Search: Harnessing LLMs for Goal-Oriented Planning in Instructional Videos [48.15438373870542]
VidAssist is an integrated framework designed for zero/few-shot goal-oriented planning in instructional videos.
It employs a breadth-first search algorithm for optimal plan generation.
Experiments demonstrate that VidAssist offers a unified framework for different goal-oriented planning setups.
arXiv Detail & Related papers (2024-09-30T17:57:28Z)
- Meta-Task Planning for Language Agents [13.550774629515843]
Large language model-based agents (LLM agents) have emerged as a promising paradigm for achieving artificial general intelligence (AGI).
This paper introduces Meta-Task Planning (MTP), a zero-shot methodology for collaborative LLM-based multi-agent systems.
MTP achieved an average ~40% success rate on TravelPlanner, significantly higher than the state-of-the-art (SOTA) baseline.
arXiv Detail & Related papers (2024-05-26T10:33:17Z)
- TwoStep: Multi-agent Task Planning using Classical Planners and Large Language Models [7.653791106386385]
Decomposing planning goals between two agents leads to faster planning times than solving multi-agent PDDL problems directly.
We find that LLM-based approximations of subgoals can achieve multi-agent execution steps similar to those specified by human experts.
arXiv Detail & Related papers (2024-03-25T22:47:13Z)
- On Computing Plans with Uniform Action Costs [10.621487250485897]
This paper adapts three uniformity metrics to automated planning, and introduces planning-based compilations that allow lexicographic optimization of the sum of action costs and action-cost uniformity (a minimal sketch of this ranking idea appears after this list).
Experimental results both in well-known and novel planning benchmarks show that the reformulated tasks can be effectively solved in practice to generate uniform plans.
arXiv Detail & Related papers (2024-02-15T11:00:28Z)
- Optimal Task Assignment and Path Planning using Conflict-Based Search with Precedence and Temporal Constraints [5.265273282482319]
This paper examines the Task Assignment and Path Finding with Precedence and Temporal Constraints (TAPF-PTC) problem.
We augment Conflict-Based Search (CBS) to simultaneously generate task assignments and collision-free paths that adhere to precedence and temporal constraints.
Experimentally, we demonstrate that our algorithm, CBS-TA-PTC, can solve highly challenging bomb-defusing tasks with precedence and temporal constraints efficiently.
arXiv Detail & Related papers (2024-02-13T20:07:58Z)
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performance in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z)
- Imitating Graph-Based Planning with Goal-Conditioned Policies [72.61631088613048]
We present a self-imitation scheme which distills a subgoal-conditioned policy into the target-goal-conditioned policy.
We empirically show that our method can significantly boost the sample-efficiency of the existing goal-conditioned RL methods.
arXiv Detail & Related papers (2023-03-20T14:51:10Z)
- Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space [76.46113138484947]
General-purpose robots require diverse repertoires of behaviors to complete challenging tasks in real-world unstructured environments.
To address this issue, goal-conditioned reinforcement learning aims to acquire policies that can reach goals for a wide range of tasks on command.
We propose Planning to Practice, a method that makes it practical to train goal-conditioned policies for long-horizon tasks.
arXiv Detail & Related papers (2022-05-17T06:58:17Z)
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
- Divide-and-Conquer Monte Carlo Tree Search For Goal-Directed Planning [78.65083326918351]
We consider alternatives to an implicit sequential planning assumption.
We propose Divide-and-Conquer Monte Carlo Tree Search (DC-MCTS) for approximating the optimal plan.
We show that this algorithmic flexibility over planning order leads to improved results in navigation tasks in grid-worlds.
arXiv Detail & Related papers (2020-04-23T18:08:58Z)
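As a rough illustration of the idea behind "On Computing Plans with Uniform Action Costs" above, the sketch below ranks candidate plans lexicographically by total cost and then by how uniform their per-action (or per-agent) costs are. The Gini coefficient is used here purely as an example uniformity metric, and the plan costs are made-up values; this is an assumption-laden sketch, not the paper's actual compilation.

```python
def gini(values):
    """Gini coefficient of a list of non-negative costs (0.0 = perfectly uniform)."""
    vals = sorted(values)
    n = len(vals)
    total = sum(vals)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over the sorted values.
    cum = sum((i + 1) * v for i, v in enumerate(vals))
    return (2 * cum) / (n * total) - (n + 1) / n


def lexicographic_key(plan_costs):
    """Rank plans first by total cost, then prefer more uniform cost profiles."""
    return (sum(plan_costs), gini(plan_costs))


if __name__ == "__main__":
    plan_a = [4, 4, 4]   # hypothetical per-agent costs of one plan
    plan_b = [10, 1, 1]  # same total cost, far less uniform
    best = min([plan_a, plan_b], key=lexicographic_key)
    print(best)  # -> [4, 4, 4]
```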