Competition analysis on the over-the-counter credit default swap market
- URL: http://arxiv.org/abs/2012.01883v1
- Date: Thu, 3 Dec 2020 13:02:53 GMT
- Title: Competition analysis on the over-the-counter credit default swap market
- Authors: Louis Abraham
- Abstract summary: We study the competition between central counterparties through collateral requirements.
We present models that successfully estimate the initial margin requirements.
Second, we model counterparty choice on the interdealer market using a novel semi-supervised predictive task.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study two questions related to competition on the OTC CDS market using
data collected as part of the EMIR regulation.
First, we study the competition between central counterparties through
collateral requirements. We present models that successfully estimate the
initial margin requirements. However, our estimates are not precise enough to
use as input to a predictive model for CCP choice by counterparties in the
OTC market.
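The abstract does not say which models are used to estimate initial margin, so the following is only a minimal sketch of the general setup under assumptions: a gradient-boosting regression of a margin amount on trade-level characteristics. The feature names and the synthetic margin formula are illustrative placeholders, not EMIR fields or the paper's specification.

```python
# Hypothetical sketch only: the paper does not disclose its model or features.
# Feature names and the synthetic margin formula below are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.lognormal(mean=15, sigma=1, size=n),  # notional (assumed feature)
    rng.uniform(0.5, 10.0, size=n),           # maturity in years (assumed)
    rng.uniform(50, 500, size=n),             # CDS spread in bps (assumed)
])
# Synthetic stand-in for a CCP's initial margin requirement.
y = 0.05 * X[:, 0] * np.sqrt(X[:, 1]) * (X[:, 2] / 100) * rng.lognormal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```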
Second, we model counterparty choice on the interdealer market using a novel
semi-supervised predictive task. We situate our methodology within the
literature on model interpretability before arguing for the use of conditional
entropy as the metric of interest for deriving knowledge from data through a
model-agnostic approach. In particular, we justify the use of deep neural
networks to measure conditional entropy on real-world datasets. We create the
$\textit{Razor entropy}$ using the framework of algorithmic information theory
and derive an explicit formula that is identical to our semi-supervised
training objective. Finally, we borrow concepts from game theory to define
$\textit{top-k Shapley values}$. This novel method of payoff distribution
satisfies most of the properties of Shapley values, and is of particular
interest when the value function is monotone submodular. Unlike classical
Shapley values, top-k Shapley values can be computed in time quadratic in the
number of features rather than exponential. We implement our methodology and
report the results on our particular task of counterparty choice.
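The abstract does not spell out the definition of top-k Shapley values, so the sketch below only illustrates the flavour of the complexity claim: a greedy, marginal-gain payoff distribution that needs a quadratic number of value-function evaluations rather than an exponential one. The function `greedy_attribution` and the toy coverage value function are assumptions for illustration, not the paper's construction.

```python
# Hypothetical illustration of a quadratic-time, greedy payoff distribution.
from typing import Callable, Dict, FrozenSet, List


def greedy_attribution(features: List[str],
                       value: Callable[[FrozenSet[str]], float]) -> Dict[str, float]:
    """Attribute value(all features) to features via greedy marginal gains."""
    coalition: FrozenSet[str] = frozenset()
    payoff: Dict[str, float] = {}
    remaining = list(features)
    while remaining:
        # One pass over the remaining features per step -> O(n^2) value calls,
        # versus O(2^n) coalition evaluations for exact Shapley values.
        gains = {f: value(coalition | {f}) - value(coalition) for f in remaining}
        best = max(gains, key=gains.get)
        payoff[best] = gains[best]
        coalition = coalition | {best}
        remaining.remove(best)
    return payoff


# Toy monotone submodular value function: set coverage.
cover = {"a": {1, 2}, "b": {2, 3}, "c": {3}}
value = lambda s: float(len(set().union(*(cover[f] for f in s))))
print(greedy_attribution(list(cover), value))  # {'a': 2.0, 'b': 1.0, 'c': 0.0}
```

For a monotone submodular value function the greedy gains are non-negative and telescope to the value of the full feature set, which is what makes a quadratic-time payoff distribution attractive next to the exponential cost of exact Shapley values.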
Finally, we present an improvement to the $\textit{node2vec}$ algorithm that
could, for example, be used to further study intermediation. We show that the
neighbor sampling used in the generation of biased walks can be performed in
logarithmic time with a quasilinear-time pre-computation, unlike current
implementations, which do not scale well.
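The abstract only states the complexity of the improved neighbor sampling, so the sketch below illustrates the underlying idea under assumptions: after a quasilinear pre-computation of per-node cumulative edge weights, a neighbor can be drawn proportionally to weight by binary search in time logarithmic in the degree. node2vec's second-order bias through the return and in-out parameters p and q (which depends on the previous node of the walk) is not reproduced here, and the toy graph is hypothetical.

```python
# Minimal sketch: logarithmic-time weighted neighbor sampling after a
# quasilinear pre-computation (prefix sums of edge weights + binary search).
import bisect
import itertools
import random
from typing import Dict, List, Tuple

# Assumed toy graph: node -> list of (neighbor, edge weight).
graph: Dict[str, List[Tuple[str, float]]] = {
    "u": [("a", 1.0), ("b", 2.0), ("c", 0.5)],
    "a": [("u", 1.0)], "b": [("u", 2.0)], "c": [("u", 0.5)],
}

# Pre-computation: cumulative weights per node, O(|E|) overall.
cumulative = {
    node: list(itertools.accumulate(w for _, w in nbrs))
    for node, nbrs in graph.items()
}

def sample_neighbor(node: str, rng: random.Random) -> str:
    """Draw a neighbor proportionally to edge weight in O(log degree)."""
    cum = cumulative[node]
    r = rng.random() * cum[-1]
    i = bisect.bisect_right(cum, r)  # binary search over the prefix sums
    return graph[node][i][0]

rng = random.Random(0)
print([sample_neighbor("u", rng) for _ in range(5)])
```

Presumably the pre-computation described in the paper makes a similar logarithmic-time lookup possible for the p/q-biased transition probabilities without tabulating a full distribution per edge, which is where current implementations stop scaling.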
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Improved Convergence of Score-Based Diffusion Models via Prediction-Correction [15.772322871598085]
Score-based generative models (SGMs) are powerful tools to sample from complex data distributions.
This paper addresses the issue by considering a version of the popular predictor-corrector scheme.
We first estimate the final distribution via an inexact Langevin dynamics and then revert the process.
arXiv Detail & Related papers (2023-05-23T15:29:09Z)
- Exploring validation metrics for offline model-based optimisation with diffusion models [50.404829846182764]
In model-based optimisation (MBO) we are interested in using machine learning to design candidates that maximise some measure of reward with respect to a black box function called the (ground truth) oracle.
While an approximation to the ground truth oracle can be trained and used in place of it during model validation to measure the mean reward over generated candidates, the evaluation is approximate and vulnerable to adversarial examples.
This is encapsulated under our proposed evaluation framework which is also designed to measure extrapolation.
arXiv Detail & Related papers (2022-11-19T16:57:37Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Latent Time Neural Ordinary Differential Equations [0.2538209532048866]
We propose a novel approach to model uncertainty in NODE by considering a distribution over the end-time $T$ of the ODE solver.
We also propose adaptive latent time NODE (ALT-NODE), which allows each data point to have a distinct posterior distribution over end-times.
We demonstrate the effectiveness of the proposed approaches in modelling uncertainty and robustness through experiments on synthetic and several real-world image classification datasets.
arXiv Detail & Related papers (2021-12-23T17:31:47Z)
- Improved Prediction and Network Estimation Using the Monotone Single Index Multi-variate Autoregressive Model [34.529641317832024]
We develop a semi-parametric approach based on the monotone single-index multi-variate autoregressive model (SIMAM).
We provide theoretical guarantees for dependent data and an alternating projected gradient descent algorithm.
We demonstrate superior performance on both simulated data and two real-data examples.
arXiv Detail & Related papers (2021-06-28T12:32:29Z)
- Markdowns in E-Commerce Fresh Retail: A Counterfactual Prediction and Multi-Period Optimization Approach [29.11201102550876]
We build a semi-parametric structural model to learn individual price elasticity and predict counterfactual demand.
We propose a multi-period dynamic pricing algorithm to maximize the overall profit of a perishable product over its finite selling horizon.
The proposed framework has been successfully deployed to the well-known e-commerce fresh retail scenario, Freshippo.
arXiv Detail & Related papers (2021-05-18T07:01:37Z)
- A bandit-learning approach to multifidelity approximation [7.960229223744695]
Multifidelity approximation is an important technique in scientific computation and simulation.
We introduce a bandit-learning approach for leveraging data of varying fidelities to achieve precise estimates.
arXiv Detail & Related papers (2021-03-29T05:29:35Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
- Nonparametric Estimation in the Dynamic Bradley-Terry Model [69.70604365861121]
We develop a novel estimator that relies on kernel smoothing to pre-process the pairwise comparisons over time.
We derive time-varying oracle bounds for both the estimation error and the excess risk in the model-agnostic setting.
arXiv Detail & Related papers (2020-02-28T21:52:49Z)