Learning Aggregation Rules in Participatory Budgeting: A Data-Driven Approach
- URL: http://arxiv.org/abs/2412.01864v1
- Date: Sun, 01 Dec 2024 18:13:27 GMT
- Title: Learning Aggregation Rules in Participatory Budgeting: A Data-Driven Approach
- Authors: Roy Fairstein, Dan Vilenchik, Kobi Gal
- Abstract summary: Participatory Budgeting (PB) offers a democratic process for communities to allocate public funds across various projects through voting. In practice, PB organizers face challenges in selecting aggregation rules, either because they are not familiar with the literature or because no existing rule matches their expectations. This paper presents a novel data-driven approach utilizing machine learning to address this challenge.
- Score: 8.93709693107744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Participatory Budgeting (PB) offers a democratic process for communities to allocate public funds across various projects through voting. In practice, PB organizers face challenges in selecting aggregation rules, either because they are not familiar with the exact details of every existing rule or because no existing rule matches their expectations. This paper presents a novel data-driven approach utilizing machine learning to address this challenge. By training neural networks on PB instances, our approach learns aggregation rules that balance social welfare, representation, and other societally beneficial goals. It generalizes from small-scale synthetic PB examples to large, real-world PB instances, and it can learn existing aggregation rules as well as generate new rules that adapt to diverse objectives, providing a more nuanced, compromise-driven solution for PB processes. The effectiveness of our approach is demonstrated through extensive experiments with synthetic and real-world PB data, and it can expand the use and deployment of PB solutions.
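As a rough illustration of the idea the abstract describes, and not the authors' actual architecture, features, or training objective (none of which are specified here), the following sketch trains a small per-project scoring network to imitate a simple greedy approval rule on synthetic PB instances. All names and hyperparameters are illustrative.

```python
# Hedged sketch: learn an aggregation rule by imitation on synthetic
# participatory-budgeting instances. The target rule, features, and
# architecture are illustrative stand-ins, not the paper's method.
import torch
import torch.nn as nn

def synthetic_pb_instance(n_voters=20, n_projects=8, budget=10.0):
    """Random approval votes and project costs for one PB instance."""
    approvals = (torch.rand(n_voters, n_projects) < 0.3).float()
    costs = torch.rand(n_projects) * 5 + 0.5
    return approvals, costs, budget

def greedy_approval(approvals, costs, budget):
    """Target rule to imitate: fund most-approved projects that fit the budget."""
    order = approvals.sum(0).argsort(descending=True)
    chosen, spent = torch.zeros(len(costs)), 0.0
    for p in order:
        if spent + costs[p] <= budget:
            chosen[p], spent = 1.0, spent + costs[p]
    return chosen

# Per-project features (approval share, relative cost) -> funding score.
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    approvals, costs, budget = synthetic_pb_instance()
    feats = torch.stack([approvals.mean(0), costs / budget], dim=1)
    target = greedy_approval(approvals, costs, budget)
    loss = loss_fn(net(feats).squeeze(-1), target)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the network scores each project from features that are relative to the instance (approval share, cost as a fraction of the budget), the same weights apply to instances of any size, which is one plausible way a rule learned on small synthetic examples could run on larger real-world ones.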
Related papers
- Large Language Models for Designing Participatory Budgeting Rules [16.29630050703722]
Large language models (LLMs) have been increasingly employed for automated algorithmic design. We introduce a novel framework, named LLMRule, that addresses the limitations of existing works by incorporating LLMs into an evolutionary search procedure. Our results show that the LLM-generated rules generally outperform existing handcrafted rules in terms of overall utility while still maintaining a similar degree of fairness.
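A loose, hypothetical sketch of an LLM-in-the-loop evolutionary search of the kind this summary describes; the real LLMRule prompts, rule representation, and fitness function are not given here, and `llm_mutate` and `fitness` are stand-ins.

```python
# Hedged sketch of an evolutionary search over candidate PB rules,
# with an LLM as the mutation operator. Both helpers are placeholders.
import random

def llm_mutate(rule_source: str) -> str:
    """Stand-in for an LLM call that rewrites a candidate rule's source code."""
    return rule_source  # a real system would prompt an LLM for a variant here

def fitness(rule_source: str) -> float:
    """Stand-in for scoring a rule's utility and fairness on PB instances."""
    return random.random()

population = ["def rule(votes, costs, budget): ..."] * 4  # seed candidates
for generation in range(10):
    offspring = [llm_mutate(r) for r in population]       # LLM-driven mutation
    ranked = sorted(population + offspring, key=fitness, reverse=True)
    population = ranked[:4]                               # keep the fittest
best_rule = population[0]
```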
arXiv Detail & Related papers (2026-02-10T02:55:19Z) - Learning Branching Policies for MILPs with Proximal Policy Optimization [0.0]
Branch-and-Bound (B&B) is the dominant exact solution method for Mixed Integer Linear Programs (MILPs). Current approaches rely on Imitation Learning (IL), which tends to overfit to expert demonstrations and struggles to generalize to structurally diverse or unseen instances. In this work, we propose Tree-Gate Proximal Policy Optimization, a novel framework that employs Proximal Policy Optimization (PPO), a Reinforcement Learning (RL) algorithm, to train a branching policy.
arXiv Detail & Related papers (2025-11-17T05:16:14Z) - Accessible, Realistic, and Fair Evaluation of Positive-Unlabeled Learning Algorithms [54.58593451541316]
We propose the first PU learning benchmark to systematically compare PU learning algorithms. We identify subtle yet critical factors that affect the realistic and fair evaluation of PU learning algorithms.
arXiv Detail & Related papers (2025-09-29T03:13:00Z) - DeepVoting: Learning Voting Rules with Tailored Embeddings [13.037431161285971]
We recast the problem of designing a good voting rule into one of learning probabilistic versions of voting rules.
We show that embeddings of preference profiles derived from the social choice literature allow us to learn existing voting rules more efficiently.
We also show that rules learned using embeddings can be tweaked to create novel voting rules with improved axiomatic properties.
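In the spirit of this entry, though not necessarily DeepVoting's actual embedding or training setup, a minimal sketch: embed a preference profile as its candidate-by-position frequency matrix and train a network to output a distribution over winners, here imitating the Borda rule as an illustrative target.

```python
# Hedged sketch: a probabilistic voting rule learned from a profile
# embedding. Embedding choice, target rule, and sizes are illustrative.
import torch
import torch.nn as nn

n_voters, n_candidates = 50, 5

def random_profile():
    """Each row is one voter's ranking (a permutation of candidates)."""
    return torch.stack([torch.randperm(n_candidates) for _ in range(n_voters)])

def positional_embedding(profile):
    """Fraction of voters placing each candidate at each rank position."""
    emb = torch.zeros(n_candidates, n_candidates)
    for ranking in profile:
        for pos, cand in enumerate(ranking):
            emb[cand, pos] += 1.0
    return (emb / n_voters).flatten()

def borda_winner(profile):
    scores = torch.zeros(n_candidates)
    for ranking in profile:
        for pos, cand in enumerate(ranking):
            scores[cand] += n_candidates - 1 - pos
    return scores.argmax()

net = nn.Sequential(nn.Linear(n_candidates**2, 64), nn.ReLU(),
                    nn.Linear(64, n_candidates))  # logits over candidates
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    profile = random_profile()
    logits = net(positional_embedding(profile)).unsqueeze(0)
    loss = nn.functional.cross_entropy(logits, borda_winner(profile).unsqueeze(0))
    opt.zero_grad(); loss.backward(); opt.step()
```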
arXiv Detail & Related papers (2024-08-24T17:15:20Z) - Personalized Reinforcement Learning with a Budget of Policies [9.846353643883443]
Personalization in machine learning (ML) tailors models' decisions to the individual characteristics of users.
We propose a novel framework, termed represented Markov Decision Processes (r-MDPs), that is designed to balance the need for personalization with regulatory constraints.
In an r-MDP, we cater to a diverse user population, each with unique preferences, through interaction with a small set of representative policies.
We develop two deep reinforcement learning algorithms that efficiently solve r-MDPs.
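A minimal sketch of the assignment idea in an r-MDP-style setup: each user is served by whichever of a small budget of representative policies maximizes that user's expected return. The deep-RL training of the policies themselves is omitted, and `estimated_return` is a stand-in for rollout estimates.

```python
# Hedged sketch: assign a diverse user population to a small "budget"
# of representative policies. All numbers are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_policies = 100, 3  # budget of 3 representative policies

# Stand-in for per-user estimated returns under each policy,
# e.g. from rollouts; rows are users, columns are policies.
estimated_return = rng.normal(size=(n_users, n_policies))

assignment = estimated_return.argmax(axis=1)  # each user -> best policy
social_welfare = estimated_return[np.arange(n_users), assignment].mean()
print("users per policy:", np.bincount(assignment, minlength=n_policies))
print(f"average return under assignment: {social_welfare:.3f}")
```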
arXiv Detail & Related papers (2024-01-12T11:27:55Z) - Is Inverse Reinforcement Learning Harder than Standard Reinforcement
Learning? A Theoretical Perspective [55.36819597141271]
Inverse Reinforcement Learning (IRL) -- the problem of learning reward functions from demonstrations of an expert policy -- plays a critical role in developing intelligent systems.
This paper provides the first line of results on efficient IRL in vanilla offline and online settings using polynomial samples and runtime.
As an application, we show that the learned rewards can transfer to another target MDP with suitable guarantees.
arXiv Detail & Related papers (2023-11-29T00:09:01Z) - Abstracting Concept-Changing Rules for Solving Raven's Progressive
Matrix Problems [54.26307134687171]
Raven's Progressive Matrices (RPM) is a classic test of abstract reasoning ability in machine intelligence, posed as selecting the correct answer from candidates.
Recent studies suggest that solving RPM in an answer-generation way boosts a more in-depth understanding of rules.
We propose a deep latent variable model for Concept-changing Rule ABstraction (CRAB) by learning interpretable concepts and parsing concept-changing rules in the latent space.
arXiv Detail & Related papers (2023-07-15T07:16:38Z) - Goal-Conditioned Imitation Learning using Score-based Diffusion Policies [3.49482137286472]
We propose a new policy representation based on score-based diffusion models (SDMs).
We apply our new policy representation in the domain of Goal-Conditioned Imitation Learning (GCIL).
We show how BESO can even be used to learn a goal-independent policy from play-data using classifier-free guidance.
arXiv Detail & Related papers (2023-04-05T15:52:34Z) - Machine Learning with Probabilistic Law Discovery: A Concise
Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic-based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
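A toy sketch of the flavor of probabilistic rule learning described above, assuming (as an illustration only, not PLD's exact definition) that a rule's probability is the conditional frequency of its conclusion given its premise:

```python
# Hedged sketch: estimate the probability of an IF-THEN rule from data
# as a conditional frequency. Dataset and rule are illustrative.
from typing import Callable

# Toy tabular dataset: each row is a dict of attribute values.
rows = [
    {"outlook": "sunny", "windy": False, "play": True},
    {"outlook": "sunny", "windy": True,  "play": False},
    {"outlook": "rain",  "windy": False, "play": True},
    {"outlook": "sunny", "windy": False, "play": True},
]

def rule_probability(premise: Callable[[dict], bool],
                     conclusion: Callable[[dict], bool]) -> float:
    """P(conclusion | premise) estimated from the data."""
    matching = [r for r in rows if premise(r)]
    if not matching:
        return 0.0
    return sum(conclusion(r) for r in matching) / len(matching)

# Rule: outlook = sunny AND not windy  =>  play = True
p = rule_probability(lambda r: r["outlook"] == "sunny" and not r["windy"],
                     lambda r: r["play"])
print(f"rule probability: {p:.2f}")  # 1.00 on this toy data
```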
arXiv Detail & Related papers (2022-12-22T17:40:13Z) - PBRE: A Rule Extraction Method from Trained Neural Networks Designed for
Smart Home Services [2.599882743586164]
PBRE is proposed to extract rules from learning methods in order to realize dynamic rule generation for smart home systems.
We also apply PBRE to extract rules from a smart home service represented by an NRL (Neural Network-based Reinforcement Learning) model.
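A generic, hypothetical illustration of rule extraction from a trained network (PBRE's actual procedure is more involved, and the network below is an untrained stand-in for a smart-home policy): probe the model over discretized inputs and record each input region together with its predicted action as an IF-THEN rule.

```python
# Hedged sketch: turn a black-box policy network into readable rules
# by sampling a grid of inputs. Features and actions are illustrative.
import itertools
import torch
import torch.nn as nn

torch.manual_seed(0)
# Stand-in for a trained smart-home policy: (temperature, occupancy) -> action
net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
actions = ["heating_off", "heating_on"]

rules = []
for temp, occupied in itertools.product(range(10, 31, 5), [0, 1]):
    x = torch.tensor([[temp / 30.0, float(occupied)]])
    action = actions[net(x).argmax().item()]
    rules.append(f"IF temp={temp} AND occupied={bool(occupied)} THEN {action}")

for r in rules:
    print(r)
```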
arXiv Detail & Related papers (2022-07-18T05:19:24Z) - A Survey on Programmatic Weak Supervision [74.13976343129966]
We give a brief introduction to the PWS learning paradigm and review representative approaches for each component of the PWS learning workflow.
We identify several critical challenges that remain underexplored in the area to hopefully inspire future directions in the field.
arXiv Detail & Related papers (2022-02-11T04:05:38Z) - Constructing a Good Behavior Basis for Transfer using Generalized Policy
Updates [63.58053355357644]
We study the problem of learning a good set of policies, so that, when combined, they can solve a wide variety of unseen reinforcement learning tasks.
We show theoretically that having access to a specific set of diverse policies, which we call a set of independent policies, can allow for instantaneously achieving high-level performance.
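The mechanism this entry builds on is generalized policy improvement (GPI): act greedily with respect to the best action-value of any policy in the set, i.e. pi(s) = argmax_a max_i Q^{pi_i}(s, a). A minimal numeric sketch with stand-in Q-values:

```python
# Hedged sketch of generalized policy improvement (GPI) over a policy set.
# Q-values are random stand-ins for the learned values of each policy.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, n_policies = 6, 4, 3

# Q[i, s, a]: action-value of the i-th policy in the set.
Q = rng.normal(size=(n_policies, n_states, n_actions))

# GPI: in each state, pick the action whose best value over the
# policy set is highest: pi(s) = argmax_a max_i Q[i, s, a].
gpi_policy = Q.max(axis=0).argmax(axis=1)
print("GPI action per state:", gpi_policy)
```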
arXiv Detail & Related papers (2021-12-30T12:20:46Z) - Learning to Synthesize Programs as Interpretable and Generalizable
Policies [25.258598215642067]
We present a framework that learns to synthesize a program, which details the procedure to solve a task in a flexible and expressive manner.
Experimental results demonstrate that the proposed framework not only learns to reliably synthesize task-solving programs but also outperforms DRL and program synthesis baselines.
arXiv Detail & Related papers (2021-08-31T07:03:06Z) - Scalable Bayesian Inverse Reinforcement Learning [93.27920030279586]
We introduce Approximate Variational Reward Imitation Learning (AVRIL).
Our method addresses the ill-posed nature of the inverse reinforcement learning problem.
Applying our method to real medical data alongside classic control simulations, we demonstrate Bayesian reward inference in environments beyond the scope of current methods.
arXiv Detail & Related papers (2021-02-12T12:32:02Z) - Hierarchical Variational Imitation Learning of Control Programs [131.7671843857375]
We propose a variational inference method for imitation learning of a control policy represented by parametrized hierarchical procedures (PHP).
Our method discovers the hierarchical structure in a dataset of observation-action traces of teacher demonstrations, by learning an approximate posterior distribution over the latent sequence of procedure calls and terminations.
We demonstrate a novel benefit of variational inference in the context of hierarchical imitation learning: in decomposing the policy into simpler procedures, inference can leverage acausal information that is unused by other methods.
arXiv Detail & Related papers (2019-12-29T08:57:02Z)