Towards Public Administration Research Based on Interpretable Machine Learning
- URL: http://arxiv.org/abs/2601.06205v1
- Date: Thu, 08 Jan 2026 11:48:10 GMT
- Title: Towards Public Administration Research Based on Interpretable Machine Learning
- Authors: Zhanyu Liu, Yang Yu
- Abstract summary: The article delves into the fundamental principles of interpretable machine learning and its current applications in social science research. It explores the disciplinary value of interpretable machine learning within the field of public administration. As a complement to traditional causal inference methods, interpretable machine learning ushers in a new era of credibility in quantitative research within the realm of public administration.
- Score: 8.921486397408342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal relationships play a pivotal role in research within the field of public administration. Ensuring reliable causal inference requires validating the predictability of these relationships, which is a crucial precondition. However, prediction has not garnered adequate attention within the realm of quantitative research in public administration and the broader social sciences. The advent of interpretable machine learning presents a significant opportunity to integrate prediction into quantitative research conducted in public administration. This article delves into the fundamental principles of interpretable machine learning while also examining its current applications in social science research. Building upon this foundation, the article further expounds upon the implementation process of interpretable machine learning, encompassing key aspects such as dataset construction, model training, model evaluation, and model interpretation. Lastly, the article explores the disciplinary value of interpretable machine learning within the field of public administration, highlighting its potential to enhance the generalization of inference, facilitate the selection of optimal explanations for phenomena, stimulate the construction of theoretical hypotheses, and provide a platform for the translation of knowledge. As a complement to traditional causal inference methods, interpretable machine learning ushers in a new era of credibility in quantitative research within the realm of public administration.
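The implementation process the abstract outlines (dataset construction, model training, model evaluation, and model interpretation) can be illustrated with a minimal, self-contained sketch. The toy dataset, the pure-Python logistic regression, and the permutation-importance step below are illustrative assumptions chosen for this sketch, not methods taken from the article:

```python
import math
import random

random.seed(0)

# 1) Dataset construction: synthetic records in which, by design,
#    only feature 0 carries signal (a hypothetical toy dataset).
def make_dataset(n=400):
    X, y = [], []
    for _ in range(n):
        x = [random.uniform(-1, 1) for _ in range(3)]
        label = 1 if x[0] + random.gauss(0, 0.1) > 0 else 0
        X.append(x)
        y.append(label)
    return X, y

# 2) Model training: logistic regression fitted by stochastic
#    gradient descent on the log-loss.
def train(X, y, epochs=200, lr=0.5):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0

# 3) Model evaluation: accuracy on a held-out test split.
def accuracy(w, b, X, y):
    return sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)

# 4) Model interpretation: permutation importance -- shuffle one
#    feature column and measure the resulting drop in accuracy.
def permutation_importance(w, b, X, y):
    base = accuracy(w, b, X, y)
    importances = []
    for j in range(len(X[0])):
        col = [xi[j] for xi in X]
        random.shuffle(col)
        Xp = [xi[:j] + [c] + xi[j + 1:] for xi, c in zip(X, col)]
        importances.append(base - accuracy(w, b, Xp, y))
    return importances

X, y = make_dataset()
X_train, y_train, X_test, y_test = X[:300], y[:300], X[300:], y[300:]
w, b = train(X_train, y_train)
acc = accuracy(w, b, X_test, y_test)
imp = permutation_importance(w, b, X_test, y_test)
print(acc, imp)
```

Run end to end, the sketch recovers the designed structure: shuffling feature 0 degrades held-out accuracy sharply while shuffling the noise features barely moves it, which is the kind of interpretable evidence about predictive mechanisms the article argues should complement causal inference.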
Related papers
- Towards a Mechanistic Understanding of Large Reasoning Models: A Survey of Training, Inference, and Failures [72.27391760972445]
Large Reasoning Models (LRMs) have pushed reasoning capabilities to new heights. This paper organizes recent findings into three core dimensions: 1) training dynamics, 2) reasoning mechanisms, and 3) unintended behaviors.
arXiv Detail & Related papers (2026-01-11T08:48:46Z) - When Predictions Shape Reality: A Socio-Technical Synthesis of Performative Predictions in Machine Learning [1.3750624267664158]
This paper provides a comprehensive review of the literature on performative predictions. We provide an overview of the primary mechanisms through which performativity manifests, present a typology of associated risks, and survey the proposed solutions. Our primary contribution is the "Performative Strength vs. Impact Matrix" assessment framework.
arXiv Detail & Related papers (2026-01-07T23:28:29Z) - The Quest for the Right Mediator: Surveying Mechanistic Interpretability Through the Lens of Causal Mediation Analysis [51.046457649151336]
We propose a perspective on interpretability research grounded in causal mediation analysis. We describe the history and current state of interpretability, taxonomized according to the types of causal units (mediators) employed. We discuss the pros and cons of each mediator, providing insights as to when particular kinds of mediators and search methods are most appropriate.
arXiv Detail & Related papers (2024-08-02T17:51:42Z) - The Compute Divide in Machine Learning: A Threat to Academic Contribution and Scrutiny? [1.0985060632689174]
We show that a compute divide has coincided with a reduced representation of academic-only research teams in compute-intensive research topics.
To address the challenges arising from this trend, we recommend approaches aimed at thoughtfully expanding academic insights.
arXiv Detail & Related papers (2024-01-04T01:26:11Z) - Improving Prediction Performance and Model Interpretability through Attention Mechanisms from Basic and Applied Research Perspectives [3.553493344868414]
This bulletin is based on the summary of the author's dissertation.
Deep learning models achieve much higher prediction performance than traditional machine learning models, but their specific prediction process remains difficult to interpret or explain.
arXiv Detail & Related papers (2023-03-24T16:24:08Z) - Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - Statistical Foundation Behind Machine Learning and Its Impact on Computer Vision [8.974457198386414]
This paper revisits the principle of uniform convergence in statistical learning, discusses how it acts as the foundation behind machine learning, and attempts to gain a better understanding of the essential problem that current deep learning algorithms are solving.
Using computer vision as an example domain in machine learning, the discussion shows that recent research trends in leveraging increasingly large-scale data to perform pre-training for representation learning are largely to reduce the discrepancy between a practically tractable empirical loss and its ultimately desired but intractable expected loss.
arXiv Detail & Related papers (2022-09-06T17:59:04Z) - Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z) - Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)