Insights into Fairness through Trust: Multi-scale Trust Quantification
for Financial Deep Learning
- URL: http://arxiv.org/abs/2011.01961v1
- Date: Tue, 3 Nov 2020 19:05:07 GMT
- Title: Insights into Fairness through Trust: Multi-scale Trust Quantification
for Financial Deep Learning
- Authors: Alexander Wong, Andrew Hryniowski, and Xiao Yu Wang
- Abstract summary: A fundamental aspect of fairness that has not been explored in financial deep learning is the concept of trust.
We conduct multi-scale trust quantification on a deep neural network for the purpose of credit card default prediction.
- Score: 94.65749466106664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of deep learning in recent years has led to a significant
increase in interest and prevalence for its adoption to tackle financial
services tasks. One particular question that often arises as a barrier to
adopting deep learning for financial services is whether the developed
financial deep learning models are fair in their predictions, particularly in
light of strong governance and regulatory compliance requirements in the
financial services industry. A fundamental aspect of fairness that has not been
explored in financial deep learning is the concept of trust, whose variations
may point to an egocentric view of fairness and thus provide insights into the
fairness of models. In this study we explore the feasibility and utility of a
multi-scale trust quantification strategy to gain insights into the fairness of
a financial deep learning model, particularly under different scenarios at
different scales. More specifically, we conduct multi-scale trust
quantification on a deep neural network for the purpose of credit card default
prediction to study: 1) the overall trustworthiness of the model, 2) the trust
level under all possible prediction-truth relationships, 3) the trust level
across the spectrum of possible predictions, 4) the trust level across
different demographic groups (e.g., age, gender, and education), and 5)
the distribution of overall trust for an individual prediction scenario. The
insights from this proof-of-concept study demonstrate that such a multi-scale
trust quantification strategy may be helpful for data scientists and regulators
in financial services as part of the verification and certification of
financial deep learning solutions to gain insights into fairness and trust of
these solutions.
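To make the five scales above concrete, here is a minimal sketch of scales
1 and 4 (overall trustworthiness and trust per demographic group), assuming
the question-answer trust formulation of the authors' companion papers
listed under Related papers below; the function names, the exponents alpha
and beta, and the toy credit-default data are illustrative assumptions, not
the paper's reference implementation.

    import numpy as np

    def question_answer_trust(confidence, correct, alpha=1.0, beta=1.0):
        # Illustrative question-answer trust: reward confidence when the
        # prediction matches the oracle label, penalize it otherwise.
        return confidence ** alpha if correct else (1.0 - confidence) ** beta

    def multiscale_trust(confidences, predictions, labels, groups,
                         alpha=1.0, beta=1.0):
        # Scale 1: overall trust; scale 4: trust per demographic group.
        trust = np.array([question_answer_trust(c, p == y, alpha, beta)
                          for c, p, y in zip(confidences, predictions, labels)])
        overall = trust.mean()  # NetTrustScore-style aggregate (assumption)
        by_group = {g: trust[groups == g].mean()  # e.g., age, gender, education
                    for g in np.unique(groups)}
        return overall, by_group

    # Hypothetical toy data: confidence in the predicted class, predicted
    # and true default labels, and a single demographic attribute.
    conf = np.array([0.9, 0.6, 0.8, 0.7])
    pred = np.array([1, 0, 1, 0])
    true = np.array([1, 1, 1, 0])
    grp = np.array(["<30", "30+", "<30", "30+"])
    print(multiscale_trust(conf, pred, true, grp))  # 0.7 overall; 0.85 vs 0.55

A gap between group-level trust values, as in the toy output, is the kind of
signal such an audit would surface when probing fairness across demographic
groups.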
Related papers
- Benchmarking Trustworthiness of Multimodal Large Language Models: A Comprehensive Study [51.19622266249408]
MultiTrust is the first comprehensive and unified benchmark on the trustworthiness of MLLMs.
Our benchmark employs a rigorous evaluation strategy that addresses both multimodal risks and cross-modal impacts.
Extensive experiments with 21 modern MLLMs reveal some previously unexplored trustworthiness issues and risks.
arXiv Detail & Related papers (2024-06-11T08:38:13Z) - U-Trustworthy Models.Reliability, Competence, and Confidence in
Decision-Making [0.21756081703275998]
We present a precise mathematical definition of trustworthiness, termed $\mathcal{U}$-trustworthiness.
Within the context of $\mathcal{U}$-trustworthiness, we prove that properly-ranked models are inherently $\mathcal{U}$-trustworthy.
We advocate for the adoption of the AUC metric as the preferred measure of trustworthiness.
arXiv Detail & Related papers (2024-01-04T04:58:02Z) - Calibrating Multimodal Learning [94.65232214643436]
We propose a novel regularization technique, i.e., Calibrating Multimodal Learning (CML) regularization, to calibrate the predictive confidence of previous methods.
This technique could be flexibly equipped by existing models and improve the performance in terms of confidence calibration, classification accuracy, and model robustness.
arXiv Detail & Related papers (2023-06-02T04:29:57Z) - Trustworthy Federated Learning: A Survey [0.5089078998562185]
Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI).
We provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL.
We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy.
arXiv Detail & Related papers (2023-05-19T09:11:26Z) - Measuring Consistency in Text-based Financial Forecasting Models [10.339586273664725]
FinTrust is an evaluation tool that assesses logical consistency in financial text.
We show that the consistency of state-of-the-art NLP models for financial forecasting is poor.
Our analysis of the performance degradation caused by meaning-preserving alterations suggests that current text-based methods are not suitable for robustly predicting market information.
arXiv Detail & Related papers (2023-05-15T10:32:26Z) - Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
arXiv Detail & Related papers (2022-10-22T14:47:11Z) - MACEst: The reliable and trustworthy Model Agnostic Confidence Estimator [0.17188280334580192]
We argue that any confidence estimates based upon standard machine learning point prediction algorithms are fundamentally flawed.
We present MACEst, a Model Agnostic Confidence Estimator, which provides reliable and trustworthy confidence estimates.
arXiv Detail & Related papers (2021-09-02T14:34:06Z) - Where Does Trust Break Down? A Quantitative Trust Analysis of Deep
Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario; a brief code sketch follows this list.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z) - How Much Can We Really Trust You? Towards Simple, Interpretable Trust
Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.