Seeing Through Risk: A Symbolic Approximation of Prospect Theory
- URL: http://arxiv.org/abs/2504.14448v1
- Date: Sun, 20 Apr 2025 01:44:54 GMT
- Title: Seeing Through Risk: A Symbolic Approximation of Prospect Theory
- Authors: Ali Arslan Yousaf, Umair Rehman, Muhammad Umair Danish
- Abstract summary: We propose a novel symbolic modeling framework for decision-making under risk. Our approach replaces opaque utility curves and probability weighting functions with transparent, effect-size-guided features. We mathematically formalize the method, demonstrate its ability to replicate well-known framing and loss-aversion phenomena, and provide an end-to-end empirical validation on synthetic datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel symbolic modeling framework for decision-making under risk that merges interpretability with the core insights of Prospect Theory. Our approach replaces opaque utility curves and probability weighting functions with transparent, effect-size-guided features. We mathematically formalize the method, demonstrate its ability to replicate well-known framing and loss-aversion phenomena, and provide an end-to-end empirical validation on synthetic datasets. The resulting model achieves competitive predictive performance while yielding clear coefficients mapped onto psychological constructs, making it suitable for applications ranging from AI safety to economic policy analysis.
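As a concrete illustration of the contrast the abstract draws, the sketch below compares the classic (opaque) cumulative-prospect-theory value and probability-weighting functions with a transparent, coefficient-based scoring of a simple gamble. The Tversky-Kahneman functional forms and parameter estimates are standard; `symbolic_score`, its features, and its coefficients are hypothetical stand-ins, not the paper's actual effect-size-guided feature set.

```python
import numpy as np

# Standard Tversky-Kahneman (1992) parameter estimates for the opaque baseline.
ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** ALPHA, -LAMBDA * np.abs(x) ** BETA)

def weight(p):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    p = np.asarray(p, dtype=float)
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1.0 / GAMMA)

def prospect_utility(outcomes, probs):
    """Opaque baseline: weighted sum of transformed outcomes for a simple gamble."""
    return float(np.sum(weight(probs) * value(outcomes)))

def symbolic_score(outcomes, probs, coefs=(1.0, -1.2, 0.4)):
    """Transparent stand-in: a linear score over interpretable features, each
    coefficient mapping onto a psychological construct (illustrative only)."""
    outcomes, probs = np.asarray(outcomes, float), np.asarray(probs, float)
    expected_value = float(np.sum(probs * outcomes))
    loss_exposure = float(np.sum(probs * np.abs(np.minimum(outcomes, 0.0))))
    rare_gain_exposure = float(np.sum((probs < 0.1) * probs * np.maximum(outcomes, 0.0)))
    b_ev, b_loss, b_rare = coefs
    return b_ev * expected_value + b_loss * loss_exposure + b_rare * rare_gain_exposure

gamble = (np.array([100.0, -50.0]), np.array([0.5, 0.5]))
print(prospect_utility(*gamble), symbolic_score(*gamble))
```

The appeal of the second form is that each coefficient can be read directly as the strength of a psychological effect (loss aversion, overweighting of rare gains), whereas the baseline buries those effects inside nonlinear transforms.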
Related papers
- Exploring Energy Landscapes for Minimal Counterfactual Explanations: Applications in Cybersecurity and Beyond [3.6963146054309597]
Counterfactual explanations have emerged as a prominent method in Explainable Artificial Intelligence (XAI). We present a novel framework that integrates perturbation theory and statistical mechanics to generate minimal counterfactual explanations. Our approach systematically identifies the smallest modifications required to change a model's prediction while maintaining plausibility.
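The summary above describes searching for the smallest prediction-flipping change. The snippet below is a generic, model-agnostic sketch of that search (plain random search over perturbations), not the paper's perturbation-theory or energy-landscape machinery; `minimal_counterfactual` and the toy classifier are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def minimal_counterfactual(predict, x, n_samples=5000, max_scale=1.0):
    """Generic sketch: among random perturbations of x that flip the model's
    prediction, return the one closest to the original point."""
    x = np.asarray(x, dtype=float)
    original_label = predict(x)
    best, best_dist = None, np.inf
    for _ in range(n_samples):
        candidate = x + rng.uniform(0.0, max_scale) * rng.standard_normal(x.shape)
        if predict(candidate) != original_label:
            dist = np.linalg.norm(candidate - x)
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best

# Toy linear classifier (hypothetical): predicts 1 when the feature sum exceeds 1.
predict = lambda z: int(np.sum(z) > 1.0)
print(minimal_counterfactual(predict, np.array([0.2, 0.3])))
```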
arXiv Detail & Related papers (2025-03-23T19:48:37Z)
- From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks [4.293083690039339]
We formalize and characterize the risks and inherent complexity of model reconstruction. We present the first formal analysis of model extraction attacks through the lens of competitive analysis. We introduce novel reconstruction algorithms that achieve provably perfect fidelity while demonstrating strong anytime performance.
arXiv Detail & Related papers (2025-02-07T20:51:06Z)
- Fair Risk Minimization under Causal Path-Specific Effect Constraints [3.0232957374216953]
This paper introduces a framework for estimating fair optimal predictions using machine learning.
We derive closed-form solutions for constrained optimization based on mean squared error and cross-entropy risk criteria.
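As a rough picture of what a closed-form constrained solution under a mean-squared-error criterion can look like, the sketch below solves an equality-constrained least-squares problem via its KKT linear system. The constraint matrix and data are hypothetical, and this generic form is not the paper's causal path-specific formulation.

```python
import numpy as np

def constrained_least_squares(X, y, A, b):
    """Closed form for min ||Xw - y||^2 subject to Aw = b: solve the KKT system
    for the primal weights and the Lagrange multipliers, then keep the weights."""
    n_features, n_constraints = X.shape[1], A.shape[0]
    kkt = np.block([[X.T @ X, A.T],
                    [A, np.zeros((n_constraints, n_constraints))]])
    rhs = np.concatenate([X.T @ y, b])
    solution = np.linalg.solve(kkt, rhs)
    return solution[:n_features]  # drop the multipliers

rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 3)), rng.standard_normal(50)
A, b = np.array([[1.0, 1.0, 1.0]]), np.array([1.0])  # e.g., coefficients must sum to 1
w = constrained_least_squares(X, y, A, b)
print(w, w.sum())
```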
arXiv Detail & Related papers (2024-08-03T02:05:43Z)
- Bayesian Nonparametrics Meets Data-Driven Distributionally Robust Optimization [29.24821214671497]
Training machine learning and statistical models often involves optimizing a data-driven risk criterion.
We propose a novel robust criterion by combining insights from Bayesian nonparametric (i.e., Dirichlet process) theory and a recent decision-theoretic model of smooth ambiguity-averse preferences.
For practical implementation, we propose and study tractable approximations of the criterion based on well-known Dirichlet process representations.
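For intuition, the sketch below uses a Bayesian-bootstrap-style approximation: Dirichlet-distributed weights over observed per-example losses induce a distribution of plausible empirical risks, which an ambiguity-averse criterion can penalize for spread. The functional form and penalty here are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def ambiguity_averse_risk(losses, n_draws=1000, alpha=1.0, spread_penalty=1.0):
    """Bayesian-bootstrap-style sketch: draw Dirichlet weights over the observed
    per-example losses, then summarize the induced distribution of weighted
    risks with its mean plus a penalty on its spread (an ambiguity-aversion proxy)."""
    losses = np.asarray(losses, dtype=float)
    weights = rng.dirichlet(np.full(len(losses), alpha), size=n_draws)  # (n_draws, n)
    risks = weights @ losses                                            # one risk per draw
    return float(risks.mean() + spread_penalty * risks.std())

per_example_losses = np.array([0.2, 0.9, 0.4, 1.3, 0.1])  # from some fitted model
print(ambiguity_averse_risk(per_example_losses))
```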
arXiv Detail & Related papers (2024-01-28T21:19:15Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [47.30190559449236]
We propose a neural-symbolic framework based on statistical relational learning, referred to as NSF-SRL. Results of symbolic reasoning are utilized to refine and correct the predictions made by deep learning models, while deep learning models enhance the efficiency of the symbolic reasoning process. We believe that this approach sets a new standard for neural-symbolic systems and will drive future research in the field of general artificial intelligence.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Planning with Diffusion for Flexible Behavior Synthesis [125.24438991142573]
We consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem.
The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories.
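The planning-by-iterative-denoising idea can be pictured with the schematic below: the whole trajectory is treated as a single array that starts as noise and is refined step by step. The denoiser here is a trivial placeholder; in the actual approach it is a learned diffusion model, optionally guided toward high reward.

```python
import numpy as np

rng = np.random.default_rng(0)

def plan_by_denoising(denoise_step, horizon=32, state_action_dim=6, n_steps=100):
    """Schematic diffusion-style planner: sample a pure-noise trajectory and
    iteratively denoise the whole (horizon x state_action_dim) array at once."""
    trajectory = rng.standard_normal((horizon, state_action_dim))
    for t in reversed(range(n_steps)):
        trajectory = denoise_step(trajectory, t)  # one reverse-diffusion update
    return trajectory

def toy_denoise_step(trajectory, t):
    """Placeholder for a learned denoiser; here it merely shrinks the noise."""
    return 0.95 * trajectory

plan = plan_by_denoising(toy_denoise_step)
print(plan.shape)  # (32, 6): a full state-action plan refined from noise
```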
arXiv Detail & Related papers (2022-05-20T07:02:03Z)
- Learnability of Competitive Threshold Models [11.005966612053262]
We study the learnability of the competitive threshold model from a theoretical perspective.
We demonstrate how competitive threshold models can be seamlessly simulated by artificial neural networks.
arXiv Detail & Related papers (2022-05-08T01:11:51Z)
- Reachability analysis in stochastic directed graphs by reinforcement learning [67.87998628083218]
We show that the dynamics of the transition probabilities in a Markov digraph can be modeled via a difference inclusion.
We offer a methodology to design reward functions to provide upper and lower bounds on the reachability probabilities of a set of nodes.
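This builds on the classical link between reachability and reward design: with an indicator-style reward of 1 on the target set (made absorbing) and 0 elsewhere, the resulting value function equals the probability of eventually reaching the target. The sketch below computes that baseline quantity for a fixed Markov chain; it is not the paper's reinforcement-learning bounding procedure, and the transition matrix is hypothetical.

```python
import numpy as np

def reachability_probabilities(P, target, n_iters=1000):
    """Value iteration with an indicator-style reward: target states are pinned
    at value 1, so the fixed point gives, for every state, the probability of
    eventually reaching the target set under transition matrix P."""
    v = np.zeros(P.shape[0])
    target = list(target)
    v[target] = 1.0
    for _ in range(n_iters):
        v = P @ v
        v[target] = 1.0  # keep target states absorbed at value 1
    return v

# Tiny 4-state chain (hypothetical): state 2 is the target, state 3 an absorbing failure.
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.3, 0.4, 0.1],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(reachability_probabilities(P, target=[2]))
```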
arXiv Detail & Related papers (2022-02-25T08:20:43Z)
- Generalization Properties of Optimal Transport GANs with Latent Distribution Learning [52.25145141639159]
We study how the interplay between the latent distribution and the complexity of the pushforward map affects performance.
Motivated by our analysis, we advocate learning the latent distribution as well as the pushforward map within the GAN paradigm.
arXiv Detail & Related papers (2020-07-29T07:31:33Z)
- SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low and high-dimensional state representations.
We provide intuition as to the effectiveness of the framework by a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences of its use.