Large Language Models for Designing Participatory Budgeting Rules
- URL: http://arxiv.org/abs/2602.09349v2
- Date: Wed, 11 Feb 2026 19:55:50 GMT
- Title: Large Language Models for Designing Participatory Budgeting Rules
- Authors: Nguyen Thach, Xingchen Sha, Hau Chan
- Abstract summary: Large language models (LLMs) have been increasingly employed for automated algorithmic design. We introduce a novel framework, named LLMRule, that addresses the limitations of existing works by incorporating LLMs into an evolutionary search procedure. Our results show that the LLM-generated rules generally outperform existing handcrafted rules in terms of overall utility while still maintaining a similar degree of fairness.
- Score: 16.29630050703722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Participatory budgeting (PB) is a democratic paradigm for deciding the funding of public projects given the residents' preferences, which has been adopted in numerous cities across the world. The main focus of PB is designing rules, functions that return feasible budget allocations for a set of projects subject to some budget constraint. Designing PB rules that optimize both utility and fairness objectives based on agent preferences has been challenging due to the extensive domain knowledge required and the proven trade-off between the two notions. Recently, large language models (LLMs) have been increasingly employed for automated algorithmic design. Given the resemblance of PB rules to algorithms for classical knapsack problems, in this paper, we introduce a novel framework, named LLMRule, that addresses the limitations of existing works by incorporating LLMs into an evolutionary search procedure for automating the design of PB rules. Our experimental results, evaluated on more than 600 real-world PB instances obtained from the U.S., Canada, Poland, and the Netherlands with different representations of agent preferences, demonstrate that the LLM-generated rules generally outperform existing handcrafted rules in terms of overall utility while still maintaining a similar degree of fairness.
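To make the notion of a "PB rule" concrete, here is a minimal sketch of one of the simplest handcrafted rules for approval ballots: greedily fund projects by approvals per unit cost until the budget is exhausted. This is an illustrative example only, not one of the LLM-generated rules from the paper; the project names, costs, and tie-breaking behavior are hypothetical.

```python
# A minimal greedy PB rule for approval ballots (illustrative sketch only).
# Like a greedy knapsack heuristic: rank projects by approvals-per-cost,
# then fund them in order, skipping any project that no longer fits.

def greedy_approval_rule(costs, approvals, budget):
    """Return the list of funded projects under the given budget.

    costs: dict mapping project -> cost
    approvals: dict mapping project -> number of approving voters
    budget: total available budget
    """
    # Score each project by approval count divided by its cost.
    order = sorted(costs, key=lambda p: approvals.get(p, 0) / costs[p],
                   reverse=True)
    funded, remaining = [], budget
    for p in order:
        if costs[p] <= remaining:
            funded.append(p)
            remaining -= costs[p]
    return funded

# Example: three hypothetical projects competing for a budget of 100.
costs = {"park": 60, "library": 50, "bikes": 40}
approvals = {"park": 30, "library": 20, "bikes": 25}
print(greedy_approval_rule(costs, approvals, budget=100))  # ['bikes', 'park']
```

A rule in this sense is just a function from preferences and costs to a feasible allocation; the paper's framework searches the space of such functions, trading off total utility against fairness notions that a pure greedy rule ignores.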
Related papers
- Universal Legal Article Prediction via Tight Collaboration between Supervised Classification Model and LLM [42.11889345473452]
Legal Article Prediction (LAP) is a critical task in legal text classification. We propose Uni-LAP, a universal framework for legal article prediction.
arXiv Detail & Related papers (2025-09-26T09:42:20Z) - Revisiting LLM Reasoning via Information Bottleneck [57.519119962528166]
Large language models (LLMs) have recently demonstrated remarkable progress in reasoning capabilities through reinforcement learning with verifiable rewards (RLVR). We present a theoretical characterization of LLM reasoning grounded in the information bottleneck (IB) principle. We propose IB-aware reasoning optimization (IBRO), a framework that encourages reasoning trajectories to be both informative about the final correct answer and generalizable.
arXiv Detail & Related papers (2025-07-24T13:14:25Z) - PGPO: Enhancing Agent Reasoning via Pseudocode-style Planning Guided Preference Optimization [58.465778756331574]
We propose a pseudocode-style Planning Guided Preference Optimization method called PGPO for effective agent learning. With two planning-oriented rewards, PGPO further enhances LLM agents' ability to generate high-quality P-code Plans. Experiments show that PGPO achieves superior performance on representative agent benchmarks and outperforms the current leading baselines.
arXiv Detail & Related papers (2025-06-02T09:35:07Z) - Latent Preference Coding: Aligning Large Language Models via Discrete Latent Codes [54.93980123979578]
We introduce Latent Preference Coding (LPC), a novel framework that models the implicit factors as well as their combinations behind holistic preferences. LPC seamlessly integrates with various offline alignment algorithms, automatically inferring the underlying factors and their importance from data.
arXiv Detail & Related papers (2025-05-08T06:59:06Z) - PIPA: Preference Alignment as Prior-Informed Statistical Estimation [57.24096291517857]
We introduce Prior-Informed Preference Alignment (PIPA), a unified, RL-free probabilistic framework. PIPA accommodates both paired and unpaired data, as well as answer and step-level annotations. By integrating different types of prior information, we developed two variations of PIPA: PIPA-M and PIPA-N.
arXiv Detail & Related papers (2025-02-09T04:31:30Z) - LLM-Generated Heuristics for AI Planning: Do We Even Need Domain-Independence Anymore? [87.71321254733384]
Large language models (LLMs) can generate planning approaches tailored to specific planning problems. LLMs can achieve state-of-the-art performance on some standard IPC domains. We discuss whether these results signify a paradigm shift and how they can complement existing planning approaches.
arXiv Detail & Related papers (2025-01-30T22:21:12Z) - Learning Aggregation Rules in Participatory Budgeting: A Data-Driven Approach [8.93709693107744]
Participatory Budgeting (PB) offers a democratic process for communities to allocate public funds across various projects through voting. In practice, PB organizers face challenges in selecting aggregation rules, either because they are not familiar with the literature or because no existing rule echoes their expectations. This paper presents a novel data-driven approach utilizing machine learning to address this challenge.
arXiv Detail & Related papers (2024-12-01T18:13:27Z) - Deliberate Reasoning in Language Models as Structure-Aware Planning with an Accurate World Model [14.480267340831542]
We propose Structure-aware Planning with an Accurate World Model (SWAP). SWAP integrates structured knowledge representation with learned planning. We evaluate SWAP across diverse reasoning-intensive benchmarks including math reasoning, logical reasoning, and coding tasks.
arXiv Detail & Related papers (2024-10-04T04:23:36Z) - Personalized Reinforcement Learning with a Budget of Policies [9.846353643883443]
Personalization in machine learning (ML) tailors models' decisions to the individual characteristics of users.
We propose a novel framework termed represented Markov Decision Processes (r-MDPs) that is designed to balance the need for personalization with the regulatory constraints.
In an r-MDP, we cater to a diverse user population, each with unique preferences, through interaction with a small set of representative policies.
We develop two deep reinforcement learning algorithms that efficiently solve r-MDPs.
arXiv Detail & Related papers (2024-01-12T11:27:55Z) - Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration [52.57055162778548]
Legal Judgment Prediction (LJP) has become an increasingly crucial task in Legal AI.
Precedents are the previous legal cases with similar facts, which are the basis for the judgment of the subsequent case in national legal systems.
Recent advances in deep learning have enabled a variety of techniques to be used to solve the LJP task.
arXiv Detail & Related papers (2023-10-13T16:47:20Z) - Maxmin Participatory Budgeting [1.1602089225841632]
Participatory Budgeting (PB) is a popular voting method by which a limited budget is divided among a set of projects, based on the preferences of voters over the projects.
Egalitarianism, an important objective in PB, has not received much attention in the context of indivisible PB.
This paper addresses this gap through a detailed study of a natural egalitarian rule, Maxmin Participatory Budgeting (MPB).
arXiv Detail & Related papers (2022-04-29T07:45:44Z) - Participatory Budgeting with Project Groups [27.39571821668551]
We study a generalization of the standard approval-based model of participatory budgeting (PB).
We show that the problem is generally intractable and describe efficient exact algorithms for several special cases.
Our results could allow, e.g., municipalities to hold richer PB processes that are thematically and geographically inclusive.
arXiv Detail & Related papers (2020-12-09T18:23:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.