Generating Plausible Counterfactual Explanations for Deep Transformers
in Financial Text Classification
- URL: http://arxiv.org/abs/2010.12512v1
- Date: Fri, 23 Oct 2020 16:29:26 GMT
- Title: Generating Plausible Counterfactual Explanations for Deep Transformers
in Financial Text Classification
- Authors: Linyi Yang, Eoin M. Kenny, Tin Lok James Ng, Yi Yang, Barry Smyth, and
Ruihai Dong
- Abstract summary: This paper proposes a novel methodology for producing plausible counterfactual explanations.
It also explores the regularization benefits of adversarial training on language models in the domain of FinTech.
- Score: 33.026285180536036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Corporate mergers and acquisitions (M&A) account for billions of dollars of
investment globally every year, and offer an interesting and challenging domain
for artificial intelligence. However, in these highly sensitive domains, it is
crucial to not only have a highly robust and accurate model, but be able to
generate useful explanations to garner a user's trust in the automated system.
Regrettably, eXplainable AI (XAI) in financial text classification has
received little to no research attention, and many current
methods for generating textual-based explanations result in highly implausible
explanations, which damage a user's trust in the system. To address these
issues, this paper proposes a novel methodology for producing plausible
counterfactual explanations, whilst exploring the regularization benefits of
adversarial training on language models in the domain of FinTech. Exhaustive
quantitative experiments demonstrate that not only does this approach improve
the model accuracy when compared to the current state-of-the-art and human
performance, but it also generates counterfactual explanations which are
significantly more plausible based on human trials.
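The abstract describes, but does not reproduce, the counterfactual-generation procedure. As a purely illustrative sketch of the general idea (not the authors' method), the snippet below performs a greedy word-substitution search that flips a text classifier's decision with as few edits as possible; `predict_proba`, the substitution lexicon, and the label framing are all assumptions introduced here for illustration.

```python
# Minimal, hypothetical sketch of greedy counterfactual search for a text
# classifier. Not the method of Yang et al.; it only illustrates the idea of
# swapping words until the predicted label flips, preferring substitutions
# drawn from a plausible, domain-appropriate lexicon.
from typing import Callable, Dict, List, Tuple

def greedy_counterfactual(
    text: str,
    predict_proba: Callable[[str], float],  # assumed: P(positive class) in [0, 1]
    substitutions: Dict[str, List[str]],    # e.g. {"acquire": ["abandon", "delay"]}
    threshold: float = 0.5,
    max_edits: int = 3,
) -> Tuple[str, List[Tuple[str, str]]]:
    """Flip the classifier's decision with as few plausible word swaps as possible."""
    original_side = predict_proba(text) >= threshold
    tokens = text.split()
    edits: List[Tuple[str, str]] = []

    for _ in range(max_edits):
        best = None  # (gain, position, replacement, new probability)
        for i, tok in enumerate(tokens):
            for repl in substitutions.get(tok.lower(), []):
                candidate = " ".join(tokens[:i] + [repl] + tokens[i + 1:])
                p = predict_proba(candidate)
                # Gain = how far this swap moves the score toward the other side.
                gain = (threshold - p) if original_side else (p - threshold)
                if best is None or gain > best[0]:
                    best = (gain, i, repl, p)
        if best is None:
            break
        _, i, repl, p = best
        edits.append((tokens[i], repl))
        tokens[i] = repl
        if (p >= threshold) != original_side:  # decision flipped: counterfactual found
            break

    return " ".join(tokens), edits
```

Restricting candidate substitutions to an in-domain lexicon, rather than allowing arbitrary token flips, is one simple way to keep generated counterfactuals plausible, which is the property the paper evaluates through human trials.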
Related papers
- Learning to Generate and Evaluate Fact-checking Explanations with Transformers [10.970249299147866]
This research contributes to the field of Explainable Artificial Intelligence (XAI).
We develop transformer-based fact-checking models that contextualise and justify their decisions by generating human-accessible explanations.
We emphasise the need for aligning Artificial Intelligence (AI)-generated explanations with human judgements.
arXiv Detail & Related papers (2024-10-21T06:22:51Z)
- A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods [0.0]
Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks.
These models often lack transparency and interpretability, making them challenging to use in sensitive domains like finance.
This paper explores good practices for deploying explainability in AI-based systems for finance.
arXiv Detail & Related papers (2023-11-13T17:56:45Z)
- VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods for producing local explanations of machine learning decisions.
We present VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem.
arXiv Detail & Related papers (2022-12-21T08:45:32Z)
- Towards Responsible AI for Financial Transactions [0.0]
We provide an explanation for a deep neural network that is trained on a mixture of numerical, categorical and textual inputs for financial transaction classification.
We then test the robustness of the model by exposing it to a targeted evasion attack.
arXiv Detail & Related papers (2022-06-06T08:29:47Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations for machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Explainable AI in Credit Risk Management [0.0]
We implement two advanced explainability techniques called Local Interpretable Model Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to machine learning (ML)-based credit scoring models.
Specifically, we use LIME to explain instances locally and SHAP to get both local and global explanations.
We discuss the results in detail and present multiple comparison scenarios using the various kernels available for explaining the graphs generated from SHAP values.
arXiv Detail & Related papers (2021-03-01T12:23:20Z)
- Explainable AI for Interpretable Credit Scoring [0.8379286663107844]
Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application.
Regulations have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent.
We present a credit scoring model that is both accurate and interpretable.
arXiv Detail & Related papers (2020-12-03T18:44:03Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
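Several of the entries above, like the abstract's use of adversarial training as a regulariser, revolve around small, budgeted perturbations of a model's inputs. The PyTorch snippet below is a generic, hypothetical FGSM-style sketch of such a perturbation; it is not taken from any of the listed papers, and `model`, `loss_fn`, and the `epsilon` budget are placeholders.

```python
# Hypothetical sketch of a size-constrained gradient perturbation (FGSM-style).
# Not reproduced from any listed paper; "model", "loss_fn", and "epsilon" stand
# in for whatever differentiable valuation or classification model is probed.
import torch

def perturb_inputs(model: torch.nn.Module,
                   x: torch.Tensor,
                   y: torch.Tensor,
                   loss_fn,
                   epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of `x` perturbed within an L-infinity budget of `epsilon`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, capped by the size budget.
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()
```

Used as an attack, such perturbations probe a financial model's robustness; folded back into training, by also minimising the loss on the perturbed inputs, the same mechanism acts as the kind of regulariser the abstract credits with accuracy gains.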
This list is automatically generated from the titles and abstracts of the papers on this site.