Subjective-objective policy making approach: Coupling of resident-values
multiple regression analysis with value-indices, multi-agent-based simulation
- URL: http://arxiv.org/abs/2306.08208v1
- Date: Wed, 14 Jun 2023 02:33:32 GMT
- Title: Subjective-objective policy making approach: Coupling of resident-values
multiple regression analysis with value-indices, multi-agent-based simulation
- Authors: Misa Owa, Junichi Miyakoshi, Takeshi Kato
- Abstract summary: This study proposes a new combined subjective-objective policy evaluation approach to choose a better policy.
The proposed approach establishes a subjective target function based on a multiple regression analysis of the results of a residents' questionnaire survey.
Using the new approach to compare several policies enables concrete expression of the will of stakeholders with diverse values.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the concerns around the existing subjective and objective policy
evaluation approaches, this study proposes a new combined subjective-objective
policy evaluation approach to choose better policy that reflects the will of
citizens and is backed up by objective facts. Subjective approaches, such as
the Life Satisfaction Approach and the Contingent Valuation Method, convert
subjectivity into economic value, raising the question of whether a higher
economic value really accords with what citizens want. Objective policy
evaluation approaches, such as Evidence Based Policy Making and
Multi-Agent-Based Simulation, do not take subjectivity into account, making it
difficult to choose from diverse and pluralistic candidate policies. The
proposed approach establishes a subjective target function based on a multiple
regression analysis of the results of a residents' questionnaire survey, and
uses MABS to calculate the objective evaluation indices for a number of
candidate policies. Next, a new subjective-objective coupling target function,
combining the explanatory variables of the subjective target function with
objective evaluation indices, is set up and optimized to select the preferred
policies from numerous candidates. To evaluate this approach, we conducted a
verification of renewable energy introduction policies at Takaharu Town in
Miyazaki Prefecture, Japan. The results show a good potential for using a new
subjective-objective coupling target function to select policies consistent
with the residents' values for well-being from among 20,000 policy candidates
whose social, ecological, and economic values were obtained in MABS. Using the new
approach to compare several policies enables concrete expression of the will of
stakeholders with diverse values, and contributes to constructive discussions
and consensus-building.
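The pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration only, not the authors' implementation: the survey columns, the three value indices, and the randomly generated 20,000-candidate set are hypothetical stand-ins (in the paper the objective indices come from a multi-agent-based simulation, not random draws).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: subjective target function via multiple regression ---
# Hypothetical survey: each row is a resident; columns are ratings of
# social, ecological, and economic value, plus an overall well-being score.
n_residents = 200
X_survey = rng.uniform(0, 1, size=(n_residents, 3))
beta_true = np.array([0.5, 0.3, 0.2])                  # illustrative only
y_wellbeing = X_survey @ beta_true + rng.normal(0, 0.05, n_residents)

# Ordinary least squares yields the weights of the subjective target function.
X_design = np.column_stack([np.ones(n_residents), X_survey])
coef, *_ = np.linalg.lstsq(X_design, y_wellbeing, rcond=None)
intercept, weights = coef[0], coef[1:]

# --- Step 2: objective evaluation indices per candidate policy ---
# Stand-in for MABS output: one row of (social, ecological, economic)
# indices for each of 20,000 candidate policies.
n_policies = 20_000
policy_indices = rng.uniform(0, 1, size=(n_policies, 3))

# --- Step 3: coupled target function and policy selection ---
# Plug the simulated objective indices into the regression weights and
# pick the candidate maximizing the coupled score.
scores = intercept + policy_indices @ weights
best = int(np.argmax(scores))
print("best policy index:", best)
```

The key coupling step is simply evaluating the regression-derived subjective function on simulation-derived inputs, so any richer regression model or simulator slots in without changing the selection logic.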
Related papers
- OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators [13.408838970377035]
Offline policy evaluation (OPE) allows us to evaluate and estimate a new sequential decision-making policy's performance.
We propose a new algorithm that adaptively blends a set of OPE estimators given a dataset without relying on an explicit selection using a statistical procedure.
Our work contributes to improving ease of use for a general-purpose, estimator-agnostic, off-policy evaluation framework for offline RL.
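Blending several OPE estimates can be sketched generically as a weighted average; the sketch below uses inverse-variance weights, which is a standard heuristic and not OPERA's actual weighting procedure. The estimator names and all numbers are illustrative.

```python
# Hypothetical per-estimator value estimates for one target policy,
# each with an (illustrative) bootstrap variance.
estimates = {"IS": 4.2, "DR": 3.8, "FQE": 4.0}
variances = {"IS": 0.9, "DR": 0.2, "FQE": 0.4}

# Inverse-variance weighting: lower-variance estimators get larger weights.
inv_var = {k: 1.0 / v for k, v in variances.items()}
total = sum(inv_var.values())
weights = {k: w / total for k, w in inv_var.items()}

# Re-weighted aggregate of the individual estimates.
blended = sum(weights[k] * estimates[k] for k in estimates)
print(round(blended, 3))
```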
arXiv Detail & Related papers (2024-05-27T23:51:20Z)
- Value Preferences Estimation and Disambiguation in Hybrid Participatory Systems [3.7846812749505134]
We envision a hybrid participatory system where participants make choices and provide motivations for those choices.
We focus on situations where a conflict is detected between participants' choices and motivations.
We propose methods for estimating value preferences while addressing detected inconsistencies by interacting with the participants.
arXiv Detail & Related papers (2024-02-26T17:16:28Z)
- Well-being policy evaluation methodology based on WE pluralism [0.0]
This study shifts from pluralism based on objective indicators to conceptual pluralism that emphasizes subjective context.
By combining well-being and joint fact-finding on the narrow-wide WE consensus, the policy evaluation method is formulated.
arXiv Detail & Related papers (2023-05-08T06:51:43Z)
- Reinforcement Learning with Heterogeneous Data: Estimation and Inference [84.72174994749305]
We introduce the K-Heterogeneous Markov Decision Process (K-Hetero MDP) to address sequential decision problems with population heterogeneity.
We propose the Auto-Clustered Policy Evaluation (ACPE) for estimating the value of a given policy, and the Auto-Clustered Policy Iteration (ACPI) for estimating the optimal policy in a given policy class.
We present simulations to support our theoretical findings, and we conduct an empirical study on the standard MIMIC-III dataset.
arXiv Detail & Related papers (2022-01-31T20:58:47Z)
- Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation [60.71312668265873]
We develop a method to balance the need for personalization with confident predictions.
We show that our method can be used to form accurate predictions of heterogeneous treatment effects.
arXiv Detail & Related papers (2021-11-28T23:19:12Z)
- Active Offline Policy Selection [19.18251239758809]
This paper addresses the problem of policy selection in domains with abundant logged data, but with a very restricted interaction budget.
Several off-policy evaluation (OPE) techniques have been proposed to assess the value of policies using only logged data.
We introduce a novel *active* offline policy selection problem formulation, which combines logged data and limited online interactions to identify the best policy.
arXiv Detail & Related papers (2021-06-18T17:33:13Z)
- Offline Policy Selection under Uncertainty [113.57441913299868]
We consider offline policy selection as learning preferences over a set of policy prospects given a fixed experience dataset.
Access to the full distribution over one's belief of the policy value enables more flexible selection algorithms under a wider range of downstream evaluation metrics.
We show how BayesDICE may be used to rank policies with respect to any arbitrary downstream policy selection metric.
arXiv Detail & Related papers (2020-12-12T23:09:21Z)
- Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged datasets.
arXiv Detail & Related papers (2020-11-08T23:16:19Z)
- Confident Off-Policy Evaluation and Selection through Self-Normalized Importance Weighting [15.985182419152197]
We propose a new method to compute a lower bound on the value of an arbitrary target policy.
The new approach is evaluated on a number of synthetic and real datasets and is found to be superior to its main competitors.
arXiv Detail & Related papers (2020-06-18T12:15:37Z)
- Learning Goal-oriented Dialogue Policy with Opposite Agent Awareness [116.804536884437]
We propose an opposite behavior aware framework for policy learning in goal-oriented dialogues.
We estimate the opposite agent's policy from its behavior and use this estimation to improve the target agent by regarding it as part of the target policy.
arXiv Detail & Related papers (2020-04-21T03:13:44Z)
- Efficient Policy Learning from Surrogate-Loss Classification Reductions [65.91730154730905]
We consider the estimation problem given by a weighted surrogate-loss classification reduction of policy learning.
We show that, under a correct specification assumption, the weighted classification formulation need not be efficient for policy parameters.
We propose an estimation approach based on generalized method of moments, which is efficient for the policy parameters.
arXiv Detail & Related papers (2020-02-12T18:54:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.