Explaining Risks: Axiomatic Risk Attributions for Financial Models
- URL: http://arxiv.org/abs/2506.06653v1
- Date: Sat, 07 Jun 2025 04:15:27 GMT
- Title: Explaining Risks: Axiomatic Risk Attributions for Financial Models
- Authors: Dangxing Chen
- Abstract summary: In recent years, machine learning models have achieved great success at the expense of highly complex black-box structures. In high-risk sectors such as finance, risk is just as important as mean predictions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, machine learning models have achieved great success at the expense of highly complex black-box structures. By using axiomatic attribution methods, we can fairly allocate the contributions of each feature, thus allowing us to interpret the model predictions. In high-risk sectors such as finance, risk is just as important as mean predictions. Throughout this work, we address the following risk attribution problem: how to fairly allocate the risk given a model with data? We demonstrate with analysis and empirical examples that risk can be well allocated by extending the Shapley value framework.
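The abstract's core idea, allocating a model's risk across features by extending the Shapley value, can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the choice of variance as the risk measure, the baseline used to "deactivate" features, and the exact-enumeration value function are all assumptions for the example.

```python
from itertools import combinations
from math import factorial

import numpy as np


def shapley_risk_attribution(model, X, baseline, risk=np.var):
    """Exact Shapley attribution of a risk measure over model predictions.

    For a coalition S of features, features outside S are replaced by
    `baseline`, and v(S) is the risk (here: variance) of the resulting
    predictions. phi[i] is feature i's Shapley share of the total risk.
    """
    n = X.shape[1]

    def v(S):
        # Deactivate features outside the coalition by fixing them at baseline.
        Xs = np.tile(baseline, (X.shape[0], 1))
        idx = list(S)
        Xs[:, idx] = X[:, idx]
        return risk(model(Xs))

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi
```

By the efficiency axiom, the attributions sum to v(N) - v(empty set); with a constant-baseline value function and variance as the risk, v(empty set) = 0, so the phi values exactly decompose the variance of the model's predictions. Exact enumeration costs O(2^n) coalition evaluations, so in practice one would sample coalitions for models with many features.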
Related papers
- Mapping AI Benchmark Data to Quantitative Risk Estimates Through Expert Elicitation [0.7889270818022226]
We show how existing AI benchmarks can be used to facilitate the creation of risk estimates. We describe the results of a pilot study in which experts use information from Cybench, an AI benchmark, to generate probability estimates.
arXiv Detail & Related papers (2025-03-06T10:39:47Z)
- Data-Centric AI Governance: Addressing the Limitations of Model-Focused Policies [40.92400015183777]
Current regulations on powerful AI capabilities are narrowly focused on "foundation" or "frontier" models.
These terms are vague and inconsistently defined, leading to an unstable foundation for governance efforts.
In this work, we illustrate the importance of considering dataset size and content as essential factors in assessing the risks posed by models.
arXiv Detail & Related papers (2024-09-25T17:59:01Z)
- Attribution Methods in Asset Pricing: Do They Account for Risk? [5.9007954155974645]
We present and study several axioms derived from asset pricing domain knowledge.
We show that while the Shapley value and Integrated Gradients preserve most axioms, neither can satisfy all of them.
Using extensive analytical and empirical examples, we demonstrate how attribution methods can reflect risks and when they should not be used.
arXiv Detail & Related papers (2024-07-12T03:16:54Z)
- Towards Probing Speech-Specific Risks in Large Multimodal Models: A Taxonomy, Benchmark, and Insights [50.89022445197919]
We propose a speech-specific risk taxonomy covering 8 risk categories under hostility (malicious sarcasm and threats), malicious imitation (age, gender, ethnicity), and stereotypical biases (age, gender, ethnicity).
Based on the taxonomy, we create a small-scale dataset for evaluating current LMMs' capability in detecting these categories of risk.
arXiv Detail & Related papers (2024-06-25T10:08:45Z)
- CRiskEval: A Chinese Multi-Level Risk Evaluation Benchmark Dataset for Large Language Models [46.93425758722059]
CRiskEval is a Chinese dataset meticulously designed for gauging the risk proclivities inherent in large language models (LLMs).
We define a new risk taxonomy with 7 types of frontier risks and 4 safety levels: extremely hazardous, moderately hazardous, neutral, and safe.
The dataset consists of 14,888 questions that simulate scenarios related to the 7 predefined types of frontier risks.
arXiv Detail & Related papers (2024-06-07T08:52:24Z)
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
arXiv Detail & Related papers (2024-03-28T17:28:06Z)
- On the Societal Impact of Open Foundation Models [93.67389739906561]
We focus on open foundation models, defined here as those with broadly available model weights.
We identify five distinctive properties of open foundation models that lead to both their benefits and risks.
arXiv Detail & Related papers (2024-02-27T16:49:53Z)
- C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models [57.10361282229501]
We propose C-RAG, the first framework to certify generation risks for RAG models.
Specifically, we provide conformal risk analysis for RAG models and certify an upper confidence bound of generation risks.
We prove that RAG achieves a lower conformal generation risk than that of a single LLM when the quality of the retrieval model and transformer is non-trivial.
arXiv Detail & Related papers (2024-02-05T16:46:16Z)
- DeRisk: An Effective Deep Learning Framework for Credit Risk Prediction over Real-World Financial Data [13.480823015283574]
We propose DeRisk, an effective deep learning risk prediction framework for credit risk prediction on real-world financial data.
DeRisk is the first deep risk prediction model that outperforms statistical learning approaches deployed in our company's production system.
arXiv Detail & Related papers (2023-08-07T16:22:59Z)
- Distributional Model Equivalence for Risk-Sensitive Reinforcement Learning [20.449497882324785]
We leverage distributional reinforcement learning to introduce two new notions of model equivalence.
We demonstrate how our framework can be used to augment any model-free risk-sensitive algorithm.
arXiv Detail & Related papers (2023-07-04T13:23:21Z)
- Modeling Multivariate Cyber Risks: Deep Learning Dating Extreme Value Theory [6.451038884092264]
The proposed model combines highly accurate point predictions via deep learning with high-quantile prediction via extreme value theory.
The empirical evidence based on real honeypot attack data also shows that the proposed model has very satisfactory prediction performances.
arXiv Detail & Related papers (2021-03-15T15:18:53Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.