RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits
by enhancing SHapley Additive exPlanations
- URL: http://arxiv.org/abs/2209.09157v1
- Date: Mon, 19 Sep 2022 16:23:43 GMT
- Title: RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits
by enhancing SHapley Additive exPlanations
- Authors: Ricardo Müller, Marco Schreyer, Timur Sattarov, Damian Borth
- Abstract summary: We propose RESHAPE, which explains the model output at an aggregated attribute level.
Our results provide empirical evidence that RESHAPE yields more versatile explanations than state-of-the-art baselines.
We envision such attribute-level explanations as a necessary next step in the adoption of unsupervised DL techniques in financial auditing.
- Score: 1.3333957453318743
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Detecting accounting anomalies is a recurrent challenge in financial
statement audits. Recently, novel methods derived from Deep-Learning (DL) have
been proposed to audit the large volumes of a statement's underlying accounting
records. However, due to their vast number of parameters, such models exhibit
the drawback of being inherently opaque. At the same time, the concealing of a
model's inner workings often hinders its real-world application. This
observation holds particularly true in financial audits since auditors must
reasonably explain and justify their audit decisions. Nowadays, various
Explainable AI (XAI) techniques have been proposed to address this challenge,
e.g., SHapley Additive exPlanations (SHAP). However, in unsupervised DL as
often applied in financial audits, these methods explain the model output at
the level of encoded variables. As a result, the explanations of Autoencoder
Neural Networks (AENNs) are often hard to comprehend by human auditors. To
mitigate this drawback, we propose RESHAPE, which explains the model output
at an aggregated attribute level. In addition, we introduce an evaluation
framework to compare the versatility of XAI methods in auditing. Our
experimental results provide empirical evidence that RESHAPE yields more
versatile explanations than state-of-the-art baselines. We envision such
attribute-level explanations as a necessary next step in the adoption of
unsupervised DL techniques in financial auditing.
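To make the attribute-level idea concrete, here is a minimal, hypothetical sketch (the attribute names, SHAP values, and the absolute-sum aggregation rule are illustrative assumptions, not the paper's exact implementation): per-variable SHAP attributions computed on one-hot encoded autoencoder inputs are summed back to the original accounting attributes, so an auditor sees one score per attribute such as "account" or "currency" rather than one per encoded dimension.

```python
# Hypothetical example: two categorical journal-entry attributes,
# "account" (3 categories) and "currency" (2 categories), are
# one-hot encoded into 5 input variables of an autoencoder.
attribute_of_variable = ["account", "account", "account",
                         "currency", "currency"]

# Illustrative per-variable SHAP values for one journal entry's
# reconstruction-error (anomaly) score.
shap_values = [0.10, -0.05, 0.40, 0.02, -0.01]

def aggregate_to_attributes(shap_values, attribute_of_variable):
    """Sum absolute per-variable SHAP values per original attribute."""
    attribution = {}
    for value, attr in zip(shap_values, attribute_of_variable):
        attribution[attr] = attribution.get(attr, 0.0) + abs(value)
    return attribution

attribution = aggregate_to_attributes(shap_values, attribute_of_variable)
# attribution maps each original attribute to one aggregated score;
# here "account" dominates the explanation of the anomaly.
```

In practice the per-variable values would come from a SHAP explainer applied to the autoencoder's reconstruction error; the aggregation step is what lifts them to the attribute level.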
Related papers
- Advancing Anomaly Detection: Non-Semantic Financial Data Encoding with LLMs [49.57641083688934]
We introduce a novel approach to anomaly detection in financial data using Large Language Models (LLMs) embeddings.
Our experiments demonstrate that LLMs contribute valuable information to anomaly detection as our models outperform the baselines.
arXiv Detail & Related papers (2024-06-05T20:19:09Z)
- Language Model Cascades: Token-level uncertainty and beyond [65.38515344964647]
Recent advances in language models (LMs) have led to significant improvements in quality on complex NLP tasks.
Cascading offers a simple strategy to achieve more favorable cost-quality tradeoffs.
We show that incorporating token-level uncertainty through learned post-hoc deferral rules can significantly outperform simple aggregation strategies.
arXiv Detail & Related papers (2024-04-15T21:02:48Z)
- Trustless Audits without Revealing Data or Models [49.23322187919369]
We show that it is possible to allow model providers to keep their model weights (but not architecture) and data secret while allowing other parties to trustlessly audit model and data properties.
We do this by designing a protocol called ZkAudit in which model providers publish cryptographic commitments of datasets and model weights.
arXiv Detail & Related papers (2024-04-06T04:43:06Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Federated and Privacy-Preserving Learning of Accounting Data in Financial Statement Audits [1.4986031916712106]
We propose a Federated Learning framework to train DL models on auditing relevant accounting data of multiple clients.
We evaluate our approach to detect accounting anomalies in three real-world datasets of city payments.
arXiv Detail & Related papers (2022-08-26T15:09:18Z)
- Relational Action Bases: Formalization, Effective Safety Verification, and Invariants (Extended Version) [67.99023219822564]
We introduce the general framework of relational action bases (RABs).
RABs generalize existing models by lifting both restrictions.
We demonstrate the effectiveness of this approach on a benchmark of data-aware business processes.
arXiv Detail & Related papers (2022-08-12T17:03:50Z)
- XAudit: A Theoretical Look at Auditing with Explanations [29.55309950026882]
This work formalizes the role of explanations in auditing and investigates if and how model explanations can help audits.
Specifically, we propose explanation-based algorithms for auditing linear classifiers and decision trees for feature sensitivity.
Our results illustrate that counterfactual explanations are extremely helpful for auditing.
arXiv Detail & Related papers (2022-06-09T19:19:58Z)
- Continual Learning for Unsupervised Anomaly Detection in Continuous Auditing of Financial Accounting Data [1.9659095632676094]
International audit standards require the direct assessment of a financial statement's underlying accounting journal entries.
Deep-learning inspired audit techniques emerged to examine vast quantities of journal entry data.
This work proposes a continual anomaly detection framework that overcomes both challenges and is designed to learn from a stream of journal entry data.
arXiv Detail & Related papers (2021-12-25T09:21:14Z)
- Multi-view Contrastive Self-Supervised Learning of Accounting Data Representations for Downstream Audit Tasks [1.9659095632676094]
International audit standards require the direct assessment of a financial statement's underlying accounting transactions, referred to as journal entries.
Deep learning inspired audit techniques have emerged in the field of auditing vast quantities of journal entry data.
We propose a contrastive self-supervised learning framework designed to learn audit task invariant accounting data representations.
arXiv Detail & Related papers (2021-09-23T08:16:31Z)
- A new interpretable unsupervised anomaly detection method based on residual explanation [47.187609203210705]
We present RXP, a new interpretability method that addresses the limitations of autoencoder-based anomaly detection (AE-based AD) in large-scale systems.
It stands out for its implementation simplicity, low computational cost, and deterministic behavior.
In an experiment using data from a real heavy-haul railway line, the proposed method achieved superior performance compared to SHAP.
arXiv Detail & Related papers (2021-03-14T15:35:45Z)
- Learning Sampling in Financial Statement Audits using Vector Quantised Autoencoder Neural Networks [1.2205797997133396]
We propose the application of Vector Quantised-Variational Autoencoder (VQ-VAE) neural networks.
We demonstrate, based on two real-world city payment datasets, that such artificial neural networks are capable of learning a quantised representation of accounting data.
arXiv Detail & Related papers (2020-08-06T09:02:02Z)
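Several of the related entries above (RXP in particular) build on residual-based explanations for autoencoder anomaly detection. As a rough, hypothetical sketch of that idea (the function and values are illustrative, not RXP's exact formulation), the per-feature absolute reconstruction residual can itself serve as a feature attribution:

```python
def residual_explanation(x, x_hat):
    """Attribute an anomaly score to input features via absolute reconstruction residuals."""
    return [abs(a - b) for a, b in zip(x, x_hat)]

x = [1.0, 0.0, 5.0]        # original record (illustrative values)
x_hat = [0.9, 0.1, 1.0]    # autoencoder reconstruction of the record
scores = residual_explanation(x, x_hat)
# the third feature carries most of the reconstruction error,
# so it receives the largest attribution
```

This kind of explanation is deterministic and cheap to compute, which is the trade-off these methods make against sampling-based approaches such as SHAP.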
This list is automatically generated from the titles and abstracts of the papers on this site.