KANFormer for Predicting Fill Probabilities via Survival Analysis in Limit Order Books
- URL: http://arxiv.org/abs/2512.05734v1
- Date: Fri, 05 Dec 2025 14:15:02 GMT
- Title: KANFormer for Predicting Fill Probabilities via Survival Analysis in Limit Order Books
- Authors: Jinfeng Zhong, Emmanuel Bacry, Agathe Guilloux, Jean-François Muzy,
- Abstract summary: KANFormer is a novel model for predicting the time-to-fill of limit orders. It combines a Dilated Causal Convolutional network with a Transformer encoder, enhanced by Kolmogorov-Arnold Networks (KANs). We evaluate the model using CAC 40 index futures data with labeled orders.
- Score: 5.144809478361604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces KANFormer, a novel deep-learning-based model for predicting the time-to-fill of limit orders by leveraging both market- and agent-level information. KANFormer combines a Dilated Causal Convolutional network with a Transformer encoder, enhanced by Kolmogorov-Arnold Networks (KANs), which improve nonlinear approximation. Unlike existing models that rely solely on a series of snapshots of the limit order book, KANFormer integrates the actions of agents related to LOB dynamics and the position of the order in the queue to more effectively capture patterns related to execution likelihood. We evaluate the model using CAC 40 index futures data with labeled orders. The results show that KANFormer outperforms existing models in both calibration (Right-Censored Log-Likelihood, Integrated Brier Score) and discrimination (C-index, time-dependent AUC). We further analyze feature importance over time using SHAP (SHapley Additive exPlanations). Our results highlight the benefits of combining rich market signals with expressive neural architectures to achieve accurate and interpretable predictions of fill probabilities.
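The discrimination metric cited in the abstract, the C-index, measures how well a model ranks orders by expected time-to-fill under right-censoring: for every comparable pair of orders, the one that filled sooner should have received the higher risk score. A minimal sketch of Harrell's C-index (not the authors' code; variable names are illustrative) is:

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored time-to-event data.

    times       -- observed time-to-fill (or censoring time) per order
    events      -- 1 if the order was filled, 0 if right-censored
    risk_scores -- model output; higher score = expected to fill sooner
    """
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # Order the pair so that i has the earlier observed time.
        if times[j] < times[i]:
            i, j = j, i
        # A pair is comparable only if the earlier time is an actual fill
        # (a censored earlier time tells us nothing about relative order).
        if times[i] == times[j] or events[i] == 0:
            continue
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1      # earlier fill got the higher risk score
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5    # ties count half
    return concordant / comparable

# Toy example: orders 1 and 3 filled, order 2 was censored.
times  = [2.0, 5.0, 3.0]
events = [1, 0, 1]
scores = [0.9, 0.1, 0.5]  # model ranks order 1 as fastest to fill
print(concordance_index(times, events, scores))  # → 1.0
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking; the time-dependent AUC mentioned alongside it generalizes this idea to a fixed horizon.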
Related papers
- Reinforced Context Order Recovery for Adaptive Reasoning and Planning [23.229513376337607]
Current causal and diffusion models encounter difficulties in problems that require adaptive token generation orders to solve tractably. Motivated by this, we propose Reinforced Context Order Recovery (ReCOR), a reinforcement-learning-based framework to extract adaptive, data-dependent token generation orders.
arXiv Detail & Related papers (2025-08-18T16:42:55Z) - Score-informed Neural Operator for Enhancing Ordering-based Causal Discovery [12.33811209316863]
We propose Score-informed Neural Operator (SciNO) to approximate the Hessian diagonal of the log-densities. SciNO reduces order divergence by 42.7% on synthetic graphs and by 31.5% on real-world datasets. We also propose a probabilistic control algorithm for causal reasoning with autoregressive models.
arXiv Detail & Related papers (2025-08-18T06:25:41Z) - COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement [80.18490952057125]
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks.
We propose Context-Wise Order-Agnostic Language Modeling (COrAL) to overcome these challenges.
Our approach models multiple token dependencies within manageable context windows, enabling the model to perform iterative refinement internally.
arXiv Detail & Related papers (2024-10-12T23:56:19Z) - Deep Autoregressive Models as Causal Inference Engines [38.26602521505842]
We propose an autoregressive (AR) causal inference framework capable of handling complex confounders and sequential actions. Our approach accomplishes this using sequencification, which transforms data from an underlying causal diagram into a sequence of tokens. We demonstrate that an AR model adapted for CI is efficient and effective in various complex applications such as navigating mazes, playing chess endgames, and evaluating the impact of certain keywords on paper acceptance rates.
arXiv Detail & Related papers (2024-09-27T09:37:09Z) - Scoreformer: A Surrogate Model For Large-Scale Prediction of Docking Scores [0.0]
We present ScoreFormer, a novel graph transformer model designed to accurately predict molecular docking scores.
ScoreFormer achieves competitive performance in docking score prediction and offers a substantial 1.65-fold reduction in inference time compared to existing models.
arXiv Detail & Related papers (2024-06-13T17:31:02Z) - Non-autoregressive Sequence-to-Sequence Vision-Language Models [59.445765313094434]
We propose a parallel decoding sequence-to-sequence vision-language model that marginalizes over multiple inference paths in the decoder. The model achieves performance on-par with its state-of-the-art autoregressive counterpart, but is faster at inference time.
arXiv Detail & Related papers (2024-03-04T17:34:59Z) - Enhancing Few-shot NER with Prompt Ordering based Data Augmentation [59.69108119752584]
We propose a Prompt Ordering based Data Augmentation (PODA) method to improve the training of unified autoregressive generation frameworks.
Experimental results on three public NER datasets and further analyses demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-05-19T16:25:43Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM) where we parameterize the joint distribution in terms of the derivatives of univariable log-conditionals (scores)
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z) - Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
arXiv Detail & Related papers (2020-10-18T16:55:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.