Transforming Triple-Entry Accounting with Machine Learning: A Path to Enhanced Transparency Through Analytics
- URL: http://arxiv.org/abs/2411.15190v1
- Date: Tue, 19 Nov 2024 08:58:44 GMT
- Title: Transforming Triple-Entry Accounting with Machine Learning: A Path to Enhanced Transparency Through Analytics
- Authors: Abraham Itzhak Weinberg, Alessio Faccia
- Abstract summary: Triple Entry (TE) accounting could help improve transparency in complex financial and supply chain transactions, such as those recorded on blockchains.
Machine learning (ML) presents a promising avenue to augment the transparency advantages of TE accounting.
By automating some of the data collection and analysis needed for TE bookkeeping, ML techniques have the potential to make this more transparent accounting method scalable.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Triple Entry (TE) is an accounting method that utilizes three accounts or 'entries' to record each transaction, rather than the conventional double-entry bookkeeping system. Existing studies have found that TE accounting, with its additional layer of verification and disclosure of inter-organizational relationships, could help improve transparency in complex financial and supply chain transactions, such as those recorded on blockchains. Machine learning (ML) presents a promising avenue to augment the transparency advantages of TE accounting. By automating some of the data collection and analysis needed for TE bookkeeping, ML techniques have the potential to make this more transparent accounting method scalable for large organizations with complex international supply chains, further enhancing the visibility and trustworthiness of financial reporting. By leveraging ML algorithms, anomalies within distributed ledger data can be swiftly identified, flagging potential instances of fraud or errors. Furthermore, by delving into transaction relationships over time, ML can untangle intricate webs of transactions, shedding light on obscured dealings and adding an investigative dimension. This paper aims to demonstrate the interaction between TE and ML and how, together, they can raise transparency levels.
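The anomaly-flagging idea described in the abstract can be illustrated with a minimal statistical sketch: a z-score filter over ledger transaction amounts. This is a hypothetical baseline, not the paper's method; the field names (`tx_id`, `amount`) are assumptions, and a real system would use a learned model (e.g., an isolation forest) over richer features.

```python
from statistics import mean, stdev

def flag_anomalies(entries, z_threshold=3.0):
    """Flag ledger entries whose amount deviates strongly from the population mean."""
    amounts = [e["amount"] for e in entries]
    mu, sigma = mean(amounts), stdev(amounts)
    flagged = []
    for e in entries:
        # z-score measures how many standard deviations this amount is from the mean
        z = (e["amount"] - mu) / sigma if sigma else 0.0
        if abs(z) > z_threshold:
            flagged.append({**e, "z_score": round(z, 2)})
    return flagged

# Synthetic ledger: 50 routine transactions plus one outsized transfer
ledger = [{"tx_id": f"tx{i}", "amount": 100.0 + i} for i in range(50)]
ledger.append({"tx_id": "tx_sus", "amount": 250_000.0})

print(flag_anomalies(ledger))  # only the outsized transfer is flagged
```

In practice the same interface would sit over the shared TE ledger, so all three parties to a transaction see the same flags.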
Related papers
- GARG-AML against Smurfing: A Scalable and Interpretable Graph-Based Framework for Anti-Money Laundering [1.9461779294968458]
Money laundering is estimated to account for 2%-5% of the global GDP. GARG-AML is a novel graph-based method that quantifies smurfing risk through a single interpretable metric.
arXiv Detail & Related papers (2025-06-04T11:30:37Z) - Invisible Tokens, Visible Bills: The Urgent Need to Audit Hidden Operations in Opaque LLM Services [22.700907666937177]
This position paper highlights emerging accountability challenges in commercial Opaque LLM Services (COLS). We formalize two key risks: quantity inflation, where token and call counts may be artificially inflated, and quality downgrade, where providers might quietly substitute lower-cost models or tools. We propose a modular three-layer auditing framework for COLS and users that enables trustworthy verification across execution, secure logging, and user-facing auditability without exposing proprietary internals.
arXiv Detail & Related papers (2025-05-24T02:26:49Z) - Learning on LLM Output Signatures for gray-box LLM Behavior Analysis [52.81120759532526]
Large Language Models (LLMs) have achieved widespread adoption, yet our understanding of their behavior remains limited.
We develop a transformer-based approach to process these output signatures that theoretically guarantees approximation of existing techniques.
Our approach achieves superior performance on hallucination and data contamination detection in gray-box settings.
arXiv Detail & Related papers (2025-03-18T09:04:37Z) - Deep Learning Approaches for Anti-Money Laundering on Mobile Transactions: Review, Framework, and Directions [51.43521977132062]
Money laundering is a financial crime that obscures the origin of illicit funds.
The proliferation of mobile payment platforms and smart IoT devices has significantly complicated anti-money laundering investigations.
This paper conducts a comprehensive review of deep learning solutions and the challenges associated with their use in AML.
arXiv Detail & Related papers (2025-03-13T05:19:44Z) - Balancing Truthfulness and Informativeness with Uncertainty-Aware Instruction Fine-Tuning [79.48839334040197]
Instruction fine-tuning (IFT) can increase the informativeness of large language models (LLMs), but may reduce their truthfulness. In this paper, we empirically demonstrate how unfamiliar knowledge in IFT datasets can negatively affect the truthfulness of LLMs. We introduce two new IFT paradigms, UNIT_cut and UNIT_ref, to address this issue.
arXiv Detail & Related papers (2025-02-17T16:10:30Z) - Federated Fine-Tuning of LLMs: Framework Comparison and Research Directions [59.5243730853157]
Federated learning (FL) provides a privacy-preserving solution for fine-tuning pre-trained large language models (LLMs) using distributed private datasets.
This article conducts a comparative analysis of three advanced federated LLM (FedLLM) frameworks that integrate knowledge distillation (KD) and split learning (SL) to mitigate these issues.
arXiv Detail & Related papers (2025-01-08T11:37:06Z) - Tabular Transfer Learning via Prompting LLMs [52.96022335067357]
We propose a novel framework, Prompt to Transfer (P2T), that utilizes unlabeled (or heterogeneous) source data with large language models (LLMs).
P2T identifies a column feature in a source dataset that is strongly correlated with a target task feature to create examples relevant to the target task, thus creating pseudo-demonstrations for prompts.
arXiv Detail & Related papers (2024-08-09T11:30:52Z) - Advancing Anomaly Detection: Non-Semantic Financial Data Encoding with LLMs [49.57641083688934]
We introduce a novel approach to anomaly detection in financial data using Large Language Models (LLMs) embeddings.
Our experiments demonstrate that LLMs contribute valuable information to anomaly detection as our models outperform the baselines.
arXiv Detail & Related papers (2024-06-05T20:19:09Z) - Transparency and Privacy: The Role of Explainable AI and Federated Learning in Financial Fraud Detection [0.9831489366502302]
This research introduces a novel approach using Federated Learning (FL) and Explainable AI (XAI) to address these challenges.
FL enables financial institutions to collaboratively train a model to detect fraudulent transactions without directly sharing customer data.
XAI ensures that the predictions made by the model can be understood and interpreted by human experts, adding a layer of transparency and trust to the system.
arXiv Detail & Related papers (2023-12-20T18:26:59Z) - Topology-Agnostic Detection of Temporal Money Laundering Flows in Billion-Scale Transactions [0.03626013617212666]
We propose a framework to efficiently construct a temporal graph of sequential transactions.
We evaluate the scalability and the effectiveness of our framework against two state-of-the-art solutions for detecting suspicious flows of transactions.
arXiv Detail & Related papers (2023-09-24T15:11:58Z) - Scalable and Weakly Supervised Bank Transaction Classification [0.0]
This paper aims to categorize bank transactions using weak supervision, natural language processing, and deep neural network training.
We present an effective and scalable end-to-end data pipeline, including data preprocessing, transaction text embedding, anchoring, label generation, and discriminative neural network training.
arXiv Detail & Related papers (2023-05-28T23:12:12Z) - Enabling and Analyzing How to Efficiently Extract Information from Hybrid Long Documents with LLMs [48.87627426640621]
This research focuses on harnessing the potential of Large Language Models to comprehend critical information from financial reports.
We propose an Automated Financial Information Extraction framework that enhances LLMs' ability to comprehend and extract information from financial reports.
Our framework is effectively validated on GPT-3.5 and GPT-4, yielding average accuracy increases of 53.94% and 33.77%, respectively.
arXiv Detail & Related papers (2023-05-24T10:35:58Z) - OverPrompt: Enhancing ChatGPT through Efficient In-Context Learning [49.38867353135258]
We propose OverPrompt, leveraging the in-context learning capability of LLMs to handle multiple task inputs.
Our experiments show that OverPrompt can achieve cost-efficient zero-shot classification without causing significant detriment to task performance.
arXiv Detail & Related papers (2023-05-24T10:08:04Z) - Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z) - Machine Learning in Transaction Monitoring: The Prospect of xAI [0.0]
This study explores how Machine Learning supported automation and augmentation affects the Transaction Monitoring process.
We find that xAI requirements depend on the liable party in the TM process which changes depending on augmentation or automation of TM.
Results suggest a use case-specific approach for xAI to adequately foster the adoption of ML in TM.
arXiv Detail & Related papers (2022-10-14T08:58:35Z) - Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.