Beyond Interaction Effects: Two Logics for Studying Population Inequalities
- URL: http://arxiv.org/abs/2601.04223v1
- Date: Fri, 26 Dec 2025 19:25:39 GMT
- Title: Beyond Interaction Effects: Two Logics for Studying Population Inequalities
- Authors: Adel Daoud
- Abstract summary: We show that the choice between deduction and induction reflects a tradeoff between interpretability and flexibility. Our framework is particularly relevant for inequality research, where understanding how treatment effects vary across intersecting social subpopulations is substantively central.
- Score: 2.710807780228189
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: When sociologists and other social scientists ask whether the return to college differs by race and gender, they face a choice between two fundamentally different modes of inquiry. Traditional interaction models follow deductive logic: the researcher specifies which variables moderate effects and tests these hypotheses. Machine learning methods follow inductive logic: algorithms search across vast combinatorial spaces to discover patterns of heterogeneity. This article develops a framework for navigating between these approaches. We show that the choice between deduction and induction reflects a tradeoff between interpretability and flexibility, and we demonstrate through simulation when each approach excels. Our framework is particularly relevant for inequality research, where understanding how treatment effects vary across intersecting social subpopulations is substantively central.
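The contrast between the two logics can be illustrated on synthetic data. The sketch below is a minimal, hypothetical example (the simulation, variable names, and the use of a regression tree as the inductive learner are all illustrative assumptions, not the paper's method): the deductive step estimates a pre-specified interaction term by least squares, while the inductive step lets a tree-based learner discover the same heterogeneity without being told the moderator.

```python
# Minimal sketch: deductive interaction model vs. inductive heterogeneity search.
# The data-generating process and all names here are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)      # binary social group indicator (illustrative)
college = rng.integers(0, 2, n)    # "treatment": college attendance
# True return to college differs by group: 2.0 for group 0, 5.0 for group 1.
y = 1.0 + (2.0 + 3.0 * group) * college + rng.normal(0.0, 1.0, n)

# Deductive logic: the researcher specifies the moderator and fits
#   y = b0 + b1*college + b2*group + b3*college*group
X = np.column_stack([np.ones(n), college, group, college * group])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print("estimated interaction b3:", b[3])  # should be close to 3.0

# Inductive logic: an algorithm searches for heterogeneity on its own.
# Here, separate trees per treatment arm give a plug-in (T-learner style)
# effect estimate; a causal forest would be the standard tool in practice.
treated = DecisionTreeRegressor(max_depth=2).fit(group[college == 1, None], y[college == 1])
control = DecisionTreeRegressor(max_depth=2).fit(group[college == 0, None], y[college == 0])
tau_hat = treated.predict(group[:, None]) - control.predict(group[:, None])
print("discovered effect, group 0:", tau_hat[group == 0].mean())  # ~2.0
print("discovered effect, group 1:", tau_hat[group == 1].mean())  # ~5.0
```

The deductive model recovers the interaction only because the moderator was specified in advance; the inductive learner finds the same split without that specification, which is the flexibility-for-interpretability tradeoff the abstract describes.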
Related papers
- Physics-based phenomenological characterization of cross-modal bias in multimodal models [11.525886296936413]
Multimodal large language models (MLLMs) are breaking new ground in multimodal understanding, reasoning, and generation. We argue that inconspicuous distortions arising from complex multimodal interaction dynamics can lead to systematic bias.
arXiv Detail & Related papers (2026-02-24T07:21:08Z) - Dynamics Within Latent Chain-of-Thought: An Empirical Study of Causal Structure [58.89643769707751]
We study latent chain-of-thought as a manipulable causal process in representation space. We find that latent-step budgets behave less like homogeneous extra depth and more like staged functionality with non-local routing. These results motivate mode-conditional and stability-aware analyses as more reliable tools for interpreting and improving latent reasoning systems.
arXiv Detail & Related papers (2026-02-09T15:25:12Z) - Schoenfeld's Anatomy of Mathematical Reasoning by Language Models [56.656180566692946]
We adopt Schoenfeld's Episode Theory as an inductive, intermediate-scale lens and introduce ThinkARM (Anatomy of Reasoning in Models). ThinkARM explicitly abstracts reasoning traces into functional reasoning steps such as Analysis, Explore, Implement, and Verify. We show that episode-level representations make reasoning steps explicit, enabling systematic analysis of how reasoning is structured, stabilized, and altered in modern language models.
arXiv Detail & Related papers (2025-12-23T02:44:25Z) - LogiDynamics: Unraveling the Dynamics of Inductive, Abductive and Deductive Logical Inferences in LLM Reasoning [74.0242521818214]
This paper systematically investigates the comparative dynamics of inductive (System 1) versus abductive/deductive (System 2) inference in large language models (LLMs). We utilize a controlled analogical reasoning environment, varying modality (textual, visual, symbolic), difficulty, and task format (MCQ / free-text). Our analysis reveals that System 2 pipelines generally excel, particularly in visual/symbolic modalities and harder tasks, while System 1 is competitive for textual and easier problems.
arXiv Detail & Related papers (2025-02-16T15:54:53Z) - Argumentation and Machine Learning [4.064849471241967]
This chapter provides an overview of research works that present approaches with some degree of cross-fertilisation between Computational Argumentation and Machine Learning.
Two broad themes representing the purpose of the interaction between these two areas were identified.
We evaluate the spectrum of works across various dimensions, including the type of learning and the form of argumentation framework used.
arXiv Detail & Related papers (2024-10-31T08:19:58Z) - The Cognitive Revolution in Interpretability: From Explaining Behavior to Interpreting Representations and Algorithms [3.3653074379567096]
mechanistic interpretability (MI) has emerged as a distinct research area studying the features and implicit algorithms learned by foundation models such as large language models.
We argue that current methods are ripe to facilitate a transition in deep learning interpretation echoing the "cognitive revolution" in 20th-century psychology.
We propose a taxonomy mirroring key parallels in computational neuroscience to describe two broad categories of MI research.
arXiv Detail & Related papers (2024-08-11T20:50:16Z) - Reconciling Heterogeneous Effects in Causal Inference [44.99833362998488]
We apply the Reconcile algorithm for model multiplicity in machine learning to reconcile heterogeneous effects in causal inference.
Our results have tangible implications for ensuring fair outcomes in high-stakes domains such as healthcare, insurance, and housing.
arXiv Detail & Related papers (2024-06-05T18:43:46Z) - Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z) - Understanding the wiring evolution in differentiable neural architecture search [114.31723873105082]
Controversy exists on whether differentiable neural architecture search methods discover wiring topology effectively.
We study the underlying mechanism of several existing differentiable NAS frameworks.
arXiv Detail & Related papers (2020-09-02T18:08:34Z) - On the Relationship Between Active Inference and Control as Inference [62.997667081978825]
Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence.
Control-as-Inference (CAI) is a framework within reinforcement learning which casts decision making as a variational inference problem.
arXiv Detail & Related papers (2020-06-23T13:03:58Z) - Pairwise Supervision Can Provably Elicit a Decision Boundary [84.58020117487898]
Similarity learning is a problem to elicit useful representations by predicting the relationship between a pair of patterns.
We show that similarity learning is capable of solving binary classification by directly eliciting a decision boundary.
arXiv Detail & Related papers (2020-06-11T05:35:16Z) - Neural Analogical Matching [8.716086137563243]
The importance of analogy to humans has made it an active area of research in the broader field of artificial intelligence.
We introduce the Analogical Matching Network, a neural architecture that learns to produce analogies between structured, symbolic representations.
arXiv Detail & Related papers (2020-04-07T17:50:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.