Towards a Foundation Purchasing Model: Pretrained Generative
Autoregression on Transaction Sequences
- URL: http://arxiv.org/abs/2401.01641v2
- Date: Thu, 4 Jan 2024 16:52:11 GMT
- Title: Towards a Foundation Purchasing Model: Pretrained Generative
Autoregression on Transaction Sequences
- Authors: Piotr Skalski, David Sutton, Stuart Burrell, Iker Perez, Jason Wong
- Abstract summary: We present a generative pretraining method that can be used to obtain contextualised embeddings of financial transactions.
We additionally perform large-scale pretraining of an embedding model using a corpus of data from 180 issuing banks containing 5.1 billion transactions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning models underpin many modern financial systems for use cases
such as fraud detection and churn prediction. Most are based on supervised
learning with hand-engineered features, which relies heavily on the
availability of labelled data. Large self-supervised generative models have
shown tremendous success in natural language processing and computer vision,
yet so far they have not been adapted to multivariate time series of financial
transactions. In this paper, we present a generative pretraining method that
can be used to obtain contextualised embeddings of financial transactions.
Benchmarks on public datasets demonstrate that it outperforms state-of-the-art
self-supervised methods on a range of downstream tasks. We additionally perform
large-scale pretraining of an embedding model using a corpus of data from 180
issuing banks containing 5.1 billion transactions and apply it to the card
fraud detection problem on hold-out datasets. The embedding model significantly
improves value detection rate at high precision thresholds and transfers well
to out-of-domain distributions.
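
As an illustration of the generative-autoregression idea described above, here is a minimal PyTorch sketch; the field choices, the discretisation of transactions into integer codes, and all hyperparameters are assumptions for illustration, not the authors' implementation. A causal transformer predicts each transaction's fields from the preceding ones, and its hidden states serve as contextualised transaction embeddings.

```python
import torch
import torch.nn as nn

class TransactionGPT(nn.Module):
    """Minimal causal-transformer sketch. Each transaction is a tuple of
    discretised fields (e.g. merchant category, amount bucket, hour);
    the model learns to predict the next transaction's fields."""

    def __init__(self, vocab_sizes, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        # One embedding table per field; field embeddings are summed per transaction.
        self.field_embs = nn.ModuleList([nn.Embedding(v, d_model) for v in vocab_sizes])
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # One classification head per field for next-transaction prediction.
        self.heads = nn.ModuleList([nn.Linear(d_model, v) for v in vocab_sizes])

    def forward(self, x):  # x: (batch, seq_len, n_fields) integer codes
        h = sum(emb(x[..., i]) for i, emb in enumerate(self.field_embs))
        causal = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        h = self.encoder(h, mask=causal)  # position t attends only to positions <= t
        # h doubles as the contextualised embedding of each transaction.
        return h, [head(h) for head in self.heads]
```

Pretraining would shift the targets one step ahead and sum a cross-entropy loss over the per-field logits; the hidden states h can then be reused, frozen or fine-tuned, for downstream tasks such as fraud detection.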
Related papers
- Credit Card Fraud Detection Using Advanced Transformer Model [15.34892016767672]
This study focuses on innovative applications of the latest Transformer models for more robust and precise fraud detection.
We meticulously processed the data sources, balancing the dataset to significantly mitigate data sparsity.
We conducted performance comparisons with several widely adopted models, including Support Vector Machine (SVM), Random Forest, Neural Network, and Logistic Regression.
arXiv Detail & Related papers (2024-06-06T04:12:57Z)
- Advancing Anomaly Detection: Non-Semantic Financial Data Encoding with LLMs [49.57641083688934]
We introduce a novel approach to anomaly detection in financial data using Large Language Model (LLM) embeddings.
Our experiments demonstrate that LLM embeddings contribute valuable information to anomaly detection, as our models outperform the baselines (a toy sketch of this recipe appears after this list).
arXiv Detail & Related papers (2024-06-05T20:19:09Z)
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- Graph Feature Preprocessor: Real-time Subgraph-based Feature Extraction for Financial Crime Detection [2.1140460101878107]
"Graph Feature Preprocessor" is a software library for detecting typical money laundering patterns in financial transaction graphs in real time.
We show that our enriched transaction features dramatically improve the prediction accuracy of gradient-boosting-based machine learning models (a toy enrichment sketch appears after this list).
Our solution can detect illicit transactions with higher minority-class F1 scores than standard graph neural networks in anti-money laundering and phishing datasets.
arXiv Detail & Related papers (2024-02-13T16:53:48Z)
- Generative AI for End-to-End Limit Order Book Modelling: A Token-Level Autoregressive Generative Model of Message Flow Using a Deep State Space Network [7.54290390842336]
We propose an end-to-end autoregressive generative model that generates tokenized limit order book (LOB) messages.
Using NASDAQ equity LOBs, we develop a custom tokenizer for message data, converting groups of successive digits to tokens (a toy version appears after this list).
Results show promising performance in approximating the data distribution, as evidenced by low model perplexity.
arXiv Detail & Related papers (2023-08-23T09:37:22Z)
- Two-stage Modeling for Prediction with Confidence [0.0]
Neural networks generalize poorly under distribution shift.
We propose a novel two-stage model for the potential distribution shift problem.
We show that our model offers reliable predictions for the vast majority of datasets.
arXiv Detail & Related papers (2022-09-19T08:48:07Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence from labelled source data and predicts target accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold (a minimal sketch appears after this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- How Well Do Sparse ImageNet Models Transfer? [75.98123173154605]
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" datasets.
In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset.
We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities.
arXiv Detail & Related papers (2021-11-26T11:58:51Z) - Evaluating data augmentation for financial time series classification [85.38479579398525]
We evaluate several augmentation methods applied to stocks datasets using two state-of-the-art deep learning models (two typical augmentations are sketched after this list).
For a relatively small dataset, augmentation methods achieve up to 400% improvement in risk-adjusted return performance.
For a larger stock dataset, augmentation methods achieve up to 40% improvement.
arXiv Detail & Related papers (2020-10-28T17:53:57Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, assigning optimal weights to unlabeled queries (a sketch of this prototype update appears after this list).
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
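
For "Advancing Anomaly Detection" above, a toy sketch of the general LLM-embedding recipe; the serialisation format, the embedding model (`all-MiniLM-L6-v2` via sentence-transformers, a stand-in for the paper's LLM), and the IsolationForest detector are assumptions, not the paper's pipeline.

```python
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import IsolationForest

records = [
    {"amount": 12.50, "merchant": "coffee shop", "country": "GB"},
    {"amount": 9800.00, "merchant": "wire transfer", "country": "??"},
]
# Serialise each financial record to a short text string, then embed it.
texts = [", ".join(f"{k}: {v}" for k, v in r.items()) for r in records]
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)

# Any off-the-shelf outlier detector can score the embeddings.
flags = IsolationForest(random_state=0).fit_predict(embeddings)  # -1 = anomaly
```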
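
For "Graph Feature Preprocessor", a toy sketch of subgraph-style enrichment; the column names and the two fan-in/fan-out features are assumptions, and the real library mines far richer laundering motifs (cycles, scatter-gather) in real time.

```python
import networkx as nx
import pandas as pd

def enrich_with_graph_features(tx: pd.DataFrame) -> pd.DataFrame:
    """Build an account graph from transactions and append simple
    per-edge features for a gradient-boosting model to consume."""
    g = nx.from_pandas_edgelist(tx, "src", "dst", create_using=nx.DiGraph)
    tx = tx.copy()
    tx["src_fan_out"] = tx["src"].map(dict(g.out_degree()))  # sender spread
    tx["dst_fan_in"] = tx["dst"].map(dict(g.in_degree()))    # receiver gathering
    return tx
```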
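
For the limit order book paper, the abstract describes a tokenizer that converts groups of successive digits to tokens; a toy version (the field width and group size are assumptions):

```python
def tokenize_number(value: int, width: int = 8, group: int = 2) -> list[str]:
    """Zero-pad a numeric message field, then emit one token per
    `group` consecutive digits, so nearby values share tokens."""
    digits = str(value).zfill(width)
    return [digits[i:i + group] for i in range(0, width, group)]

print(tokenize_number(1234))  # ['00', '00', '12', '34']
```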
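
For ATC, the core computation fits in a few lines; this sketch assumes the `*_conf` arrays hold per-example confidence scores (e.g. maximum softmax probability) and `src_correct` holds 0/1 correctness on labelled source data.

```python
import numpy as np

def atc_predict_accuracy(src_conf, src_correct, tgt_conf):
    """Average Thresholded Confidence (sketch): pick the threshold so the
    fraction of source examples above it matches source accuracy, then
    predict target accuracy as the fraction of unlabeled target examples
    whose confidence clears the same threshold."""
    threshold = np.quantile(src_conf, 1.0 - src_correct.mean())
    return float((tgt_conf >= threshold).mean())
```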
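
For the data-augmentation paper, two standard time-series augmentations of the kind typically evaluated in this setting (the paper's exact set of methods may differ):

```python
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """Additive Gaussian noise on a price/returns series."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def window_slice(x: np.ndarray, ratio: float = 0.9) -> np.ndarray:
    """Return a random contiguous sub-window of the series."""
    n = max(1, int(len(x) * ratio))
    start = np.random.randint(0, len(x) - n + 1)
    return x[start:start + n]
```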
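
For "Meta-Learned Confidence", a sketch of the transductive prototype update it builds on; the fixed distance-based softmax confidence below is roughly what the paper proposes to replace with meta-learned weights.

```python
import torch

def refine_prototypes(protos: torch.Tensor, queries: torch.Tensor,
                      temperature: float = 10.0) -> torch.Tensor:
    """Fold unlabeled queries into class prototypes, weighted by a
    distance-based confidence. protos: (C, D); queries: (Q, D)."""
    conf = torch.softmax(-temperature * torch.cdist(queries, protos), dim=1)  # (Q, C)
    num = protos + conf.t() @ queries              # confidence-weighted query mass
    den = 1.0 + conf.sum(dim=0, keepdim=True).t()  # (C, 1) normaliser
    return num / den
```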
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.