Improving City Life via Legitimate and Participatory Policy-making: A
Data-driven Approach in Switzerland
- URL: http://arxiv.org/abs/2306.13696v1
- Date: Fri, 23 Jun 2023 13:38:39 GMT
- Title: Improving City Life via Legitimate and Participatory Policy-making: A
Data-driven Approach in Switzerland
- Authors: Thomas Wellings, Srijoni Majumdar, Regula Hänggli Fricker, and
Evangelos Pournaras
- Abstract summary: This paper focuses on a case study of 1,204 citizens in the city of Aarau, Switzerland.
We analyze survey data containing insightful indicators that can impact the legitimacy of decision-making.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel data-driven approach to address challenges
faced by city policymakers concerning the distribution of public funds.
Grounding budgeting processes for improving quality of life in objective
(data-driven) evidence has so far been a missing element in policy-making. This
paper focuses on a case study of 1,204 citizens in the city of Aarau,
Switzerland, and analyzes survey data containing insightful indicators that can
impact the legitimacy of decision-making. Our approach is twofold. On the one
hand, we aim to optimize the legitimacy of policymakers' decisions by
identifying the level of investment in neighborhoods and projects that offer
the greatest return in legitimacy. To do so, we introduce a new
context-independent legitimacy metric for policymakers. This metric allows us
to distinguish decisive vs. indecisive collective preferences for neighborhoods
or projects on which to invest, enabling policymakers to prioritize impactful
bottom-up consultations and participatory initiatives (e.g., participatory
budgeting). The metric also allows policymakers to identify the optimal number
of investments in various project sectors and neighborhoods (in terms of
legitimacy gain). On the other hand, we aim to offer guidance to policymakers
concerning which satisfaction and participation factors influence citizens'
quality of life through an accurate classification model and an evaluation of
relocations. By doing so, policymakers may be able to further refine their
strategy, making targeted investments with significant benefits to citizens'
quality of life. These findings are expected to provide transformative insights
for practicing direct democracy in Switzerland and a blueprint for
policy-making to adopt worldwide.
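The abstract does not specify how the context-independent legitimacy metric is computed. As a purely hypothetical illustration of the idea of separating decisive from indecisive collective preferences, one simple approach is an entropy-based decisiveness score over vote shares; the function and project names below are invented for the sketch and are not the authors' actual metric:

```python
import math

def decisiveness(votes):
    """Decisiveness of a collective preference: 1 minus the normalized
    Shannon entropy of the vote shares. Returns 1.0 for unanimous
    support and 0.0 for an even split (an indecisive preference)."""
    total = sum(votes)
    shares = [v / total for v in votes if v > 0]
    if len(shares) <= 1:
        return 1.0
    entropy = -sum(p * math.log(p) for p in shares)
    return 1.0 - entropy / math.log(len(votes))

# Hypothetical neighborhood projects with (for, against) vote counts.
projects = {
    "playground": [90, 10],   # strong consensus: decisive
    "bike_lane": [52, 48],    # near tie: indecisive
}
# Rank projects so policymakers can prioritize the most decisive ones.
ranked = sorted(projects, key=lambda k: decisiveness(projects[k]), reverse=True)
```

A policymaker could then direct further bottom-up consultation toward the low-decisiveness projects, where the collective preference is not yet clear.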
Related papers
- Recommender Systems for Democracy: Toward Adversarial Robustness in Voting Advice Applications [18.95453617434051]
Voting advice applications (VAAs) help millions of voters understand which political parties or candidates best align with their views. This paper explores the potential risks these applications pose to the democratic process when targeted by adversarial entities.
arXiv Detail & Related papers (2025-05-19T16:38:06Z)
- Benchmarking LLMs for Political Science: A United Nations Perspective [34.000742556609126]
Large Language Models (LLMs) have achieved significant advances in natural language processing, yet their potential for high-stakes political decision-making remains largely unexplored.
This paper addresses the gap by focusing on the application of LLMs to the United Nations (UN) decision-making process.
We introduce a novel dataset comprising publicly available UN Security Council (UNSC) records from 1994 to 2024, including draft resolutions, voting records, and diplomatic speeches.
arXiv Detail & Related papers (2025-02-19T21:51:01Z)
- Policy Aggregation [21.21314301021803]
We consider the challenge of AI value alignment with multiple individuals with different reward functions and optimal policies in an underlying Markov decision process.
We formalize this problem as one of policy aggregation, where the goal is to identify a desirable collective policy.
The key insight is that social choice methods can be reinterpreted by identifying ordinal preferences with volumes of subsets of the state-action occupancy polytope.
arXiv Detail & Related papers (2024-11-06T04:19:50Z)
- A framework for expected capability sets [1.3654846342364306]
We focus on cases where a policy maker chooses an act that, combined with a state of the world, leads to a set of choices for citizens.
We propose two procedures that merge the potential set of choices for each state of the world taking into account their respective likelihoods.
arXiv Detail & Related papers (2024-05-22T13:51:00Z)
- Off-Policy Evaluation for Large Action Spaces via Policy Convolution [60.6953713877886]
Policy Convolution family of estimators uses latent structure within actions to strategically convolve the logging and target policies.
Experiments on synthetic and benchmark datasets demonstrate remarkable mean squared error (MSE) improvements when using PC.
arXiv Detail & Related papers (2023-10-24T01:00:01Z)
- Subjective-objective policy making approach: Coupling of resident-values
multiple regression analysis with value-indices, multi-agent-based simulation [0.0]
This study proposes a new combined subjective-objective policy evaluation approach for choosing better policies.
The proposed approach establishes a subjective target function based on a multiple regression analysis of the results of a residents' questionnaire survey.
Using the new approach to compare several policies enables concrete expression of the will of stakeholders with diverse values.
arXiv Detail & Related papers (2023-06-14T02:33:32Z)
- Conformal Off-Policy Evaluation in Markov Decision Processes [53.786439742572995]
Reinforcement Learning aims at identifying and evaluating efficient control policies from data.
Most methods for this learning task, referred to as Off-Policy Evaluation (OPE), do not come with accuracy and certainty guarantees.
We present a novel OPE method based on Conformal Prediction that outputs an interval containing the true reward of the target policy with a prescribed level of certainty.
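The summary above does not detail how the conformal interval is constructed. As a hedged sketch of the general split-conformal recipe it builds on (not the paper's specific OPE estimator), one can widen a point estimate of the target policy's reward by a quantile of absolute calibration residuals; all data in the example are invented:

```python
import math

def conformal_interval(calib_preds, calib_rewards, test_pred, alpha=0.1):
    """Split-conformal interval: widen a point estimate by the
    finite-sample-corrected (1 - alpha) quantile of the absolute
    residuals on a held-out calibration set."""
    scores = sorted(abs(p - r) for p, r in zip(calib_preds, calib_rewards))
    n = len(scores)
    # Rank of the ceil((n + 1)(1 - alpha))-th smallest score, clipped to n.
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    return (test_pred - q, test_pred + q)

# Hypothetical calibration data: estimated vs. observed policy rewards.
calib_preds = [0.0] * 10
calib_rewards = [0.1 * i for i in range(1, 11)]
low, high = conformal_interval(calib_preds, calib_rewards, test_pred=5.0)
```

Under exchangeability of calibration and test points, such an interval covers the true value with probability at least 1 - alpha, which is the kind of certainty guarantee the paper highlights.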
arXiv Detail & Related papers (2023-04-05T16:45:11Z)
- Generalizing Off-Policy Learning under Sample Selection Bias [15.733136147164032]
We propose a novel framework for learning policies that generalize to the target population.
We prove that, if the uncertainty set is well-specified, our policies generalize to the target population, as they cannot do worse than on the training data.
arXiv Detail & Related papers (2021-12-02T16:18:16Z)
- Identification of Subgroups With Similar Benefits in Off-Policy Policy
Evaluation [60.71312668265873]
We develop a method to balance the need for personalization with confident predictions.
We show that our method can be used to form accurate predictions of heterogeneous treatment effects.
arXiv Detail & Related papers (2021-11-28T23:19:12Z)
- Safe Policy Learning through Extrapolation: Application to Pre-trial
Risk Assessment [0.0]
We develop a robust optimization approach that partially identifies the expected utility of a policy, and then finds an optimal policy.
We extend this approach to common and important settings where humans make decisions with the aid of algorithmic recommendations.
We derive new classification and recommendation rules that retain the transparency and interpretability of the existing risk assessment instrument.
arXiv Detail & Related papers (2021-09-22T00:52:03Z)
- Building a Foundation for Data-Driven, Interpretable, and Robust Policy
Design using the AI Economist [67.08543240320756]
We show that the AI Economist framework enables effective, flexible, and interpretable policy design using two-level reinforcement learning and data-driven simulations.
We find that log-linear policies trained using RL significantly improve social welfare, based on both public health and economic outcomes, compared to past outcomes.
arXiv Detail & Related papers (2021-08-06T01:30:41Z)
- Supervised Off-Policy Ranking [145.3039527243585]
Off-policy evaluation (OPE) leverages data generated by other policies to evaluate a target policy.
We propose supervised off-policy ranking that learns a policy scoring model by correctly ranking training policies with known performance.
Our method outperforms strong baseline OPE methods in terms of both rank correlation and performance gap between the truly best and the best of the ranked top three policies.
arXiv Detail & Related papers (2021-07-03T07:01:23Z)
- Offline Policy Selection under Uncertainty [113.57441913299868]
We consider offline policy selection as learning preferences over a set of policy prospects given a fixed experience dataset.
Access to the full distribution over one's belief of the policy value enables more flexible selection algorithms under a wider range of downstream evaluation metrics.
We show how BayesDICE may be used to rank policies with respect to any arbitrary downstream policy selection metric.
arXiv Detail & Related papers (2020-12-12T23:09:21Z)
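The offline policy selection summary above describes ranking policies with respect to an arbitrary downstream metric applied to a belief distribution over policy values. As a hedged, schematic illustration (not the BayesDICE estimator itself, and with invented policy names and samples), the ranking step can be expressed as applying any scalar metric to posterior samples of each policy's value:

```python
def rank_policies(posterior_samples, metric):
    """Rank policies best-first by applying an arbitrary downstream
    selection metric to posterior samples of each policy's value."""
    scores = {name: metric(samples) for name, samples in posterior_samples.items()}
    return sorted(scores, key=scores.get, reverse=True)

def lower_quantile(samples, q=0.1):
    """A risk-averse metric: a low quantile of the value distribution."""
    s = sorted(samples)
    return s[int(q * (len(s) - 1))]

# Hypothetical posterior samples of two policies' values.
samples = {
    "policy_a": [0.9, 1.0, 1.1, 1.0, 0.95],   # high mean, low variance
    "policy_b": [0.2, 2.0, 0.1, 2.2, 0.3],    # similar mean, high variance
}
# A risk-averse metric prefers the low-variance policy; swapping in a
# different metric (e.g., the mean) changes the ranking criterion freely.
ranking = rank_policies(samples, lambda s: lower_quantile(s, 0.1))
```

Having the full distribution, rather than a point estimate, is what makes this kind of metric swapping possible.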
This list is automatically generated from the titles and abstracts of the papers in this site.