Neural Networks and Value at Risk
- URL: http://arxiv.org/abs/2005.01686v2
- Date: Wed, 6 May 2020 10:22:13 GMT
- Title: Neural Networks and Value at Risk
- Authors: Alexander Arimond, Damian Borth, Andreas Hoepner, Michael Klawunn and
Stefan Weisheit
- Abstract summary: We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks perform significantly worse when fed substantially less data.
- Score: 59.85784504799224
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Utilizing a generative regime switching framework, we perform Monte-Carlo
simulations of asset returns for Value at Risk threshold estimation. Using
equity markets and long term bonds as test assets in the global, US, Euro area
and UK setting over an up to 1,250 weeks sample horizon ending in August 2018,
we investigate neural networks along three design steps relating (i) to the
initialization of the neural network, (ii) its incentive function according to
which it has been trained and (iii) the amount of data we feed. First, we
compare neural networks with random seeding with networks that are initialized
via estimations from the best-established model (i.e. the Hidden Markov). We
find the latter to outperform in terms of the frequency of VaR breaches (i.e. the
realized return falling short of the estimated VaR threshold). Second, we
balance the incentive structure of the loss function of our networks by adding
a second objective to the training instructions so that the neural networks
optimize for accuracy while also aiming to stay in empirically realistic regime
distributions (i.e. bull vs. bear market frequencies). In particular, this
design feature enables the balanced incentive recurrent neural network (RNN) to
outperform the single incentive RNN as well as any other neural network or
established approach by statistically and economically significant levels.
Third, we halve our training data set of 2,000 days. We find that our networks,
when fed substantially less data (i.e. 1,000 days), perform significantly
worse, which highlights a crucial weakness of neural networks: their
dependence on very large data sets ...
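The abstract's pipeline can be illustrated with a minimal sketch: simulate asset returns from a two-regime (bull/bear) switching model, read the VaR threshold off the simulated distribution, and score calibration by the breach frequency (how often the realized return falls short of the threshold). The regime parameters, transition matrix, and 95% confidence level below are illustrative assumptions, not the paper's fitted values.

```python
# Hedged sketch (not the authors' code): Monte-Carlo VaR under an assumed
# two-regime switching model, plus the breach-frequency check from the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative weekly-return regimes: (mean, std) for bull and bear markets.
REGIME_PARAMS = [(0.002, 0.01), (-0.003, 0.03)]
# Illustrative transition matrix: P[i, j] = P(next regime = j | current = i).
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])

def simulate_returns(n_paths, horizon, start_regime=0):
    """Simulate return paths from the regime-switching model."""
    returns = np.empty((n_paths, horizon))
    for path in range(n_paths):
        state = start_regime
        for t in range(horizon):
            mu, sigma = REGIME_PARAMS[state]
            returns[path, t] = rng.normal(mu, sigma)
            state = rng.choice(2, p=P[state])  # regime transition
    return returns

def var_threshold(simulated, alpha=0.95):
    """One-step VaR threshold: the (1 - alpha) quantile of simulated returns."""
    return float(np.quantile(simulated[:, 0], 1.0 - alpha))

def breach_frequency(realized, thresholds):
    """Fraction of periods in which the realized return breaches its VaR."""
    return float(np.mean(np.asarray(realized) < np.asarray(thresholds)))

sims = simulate_returns(n_paths=10_000, horizon=1)
var_95 = var_threshold(sims, alpha=0.95)
```

A well-calibrated 95% VaR should be breached in roughly 5% of out-of-sample periods; the paper's comparison of initialization schemes and loss designs is in terms of how close each model's empirical breach frequency comes to that target.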
Related papers
- Hard-Label Cryptanalytic Extraction of Neural Network Models [10.568722566232127]
We propose the first attack that theoretically achieves functionally equivalent extraction under the hard-label setting.
The effectiveness of our attack is validated through practical experiments on a wide range of ReLU neural networks.
arXiv Detail & Related papers (2024-09-18T02:17:10Z)
- Bayesian Inference Accelerator for Spiking Neural Networks [3.145754107337963]
Spiking neural networks (SNNs) have the potential to reduce computational area and power.
In this work, we demonstrate an optimization framework for developing and implementing efficient Bayesian SNNs in hardware.
We demonstrate accuracies comparable to Bayesian binary networks with full-precision Bernoulli parameters, while requiring up to $25\times$ fewer spikes.
arXiv Detail & Related papers (2024-01-27T16:27:19Z)
- SynA-ResNet: Spike-driven ResNet Achieved through OR Residual Connection [10.702093960098104]
Spiking Neural Networks (SNNs) have garnered substantial attention in brain-like computing for their biological fidelity and the capacity to execute energy-efficient spike-driven operations.
We propose a novel training paradigm that first accumulates a large amount of redundant information through OR Residual Connection (ORRC).
We then filter out the redundant information using the Synergistic Attention (SynA) module, which promotes feature extraction in the backbone while suppressing the influence of noise and useless features in the shortcuts.
arXiv Detail & Related papers (2023-11-11T13:36:27Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Global quantitative robustness of regression feed-forward neural networks [0.0]
We adapt the notion of the regression breakdown point to regression neural networks.
We compare the performance, measured by the out-of-sample loss, with a proxy of the breakdown rate.
The results indeed motivate the use of robust loss functions for neural network training.
arXiv Detail & Related papers (2022-11-18T09:57:53Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice requires expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- The Compact Support Neural Network [6.47243430672461]
We present a neuron generalization that has the standard dot-product-based neuron and the RBF neuron as two extreme cases of a shape parameter.
We show how to avoid difficulties in training a neural network with such neurons, by starting with a trained standard neural network and gradually increasing the shape parameter to the desired value.
arXiv Detail & Related papers (2021-04-01T06:08:09Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills from real-valued networks to binary networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions [76.05981545084738]
We propose several ideas for enhancing a binary network to close its accuracy gap from real-valued networks without incurring any additional computational cost.
We first construct a baseline network by modifying and binarizing a compact real-valued network with parameter-free shortcuts.
We show that the proposed ReActNet outperforms all the state-of-the-arts by a large margin.
arXiv Detail & Related papers (2020-03-07T02:12:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.