Alpha Discovery Neural Network based on Prior Knowledge
- URL: http://arxiv.org/abs/1912.11761v8
- Date: Thu, 26 Nov 2020 06:22:03 GMT
- Title: Alpha Discovery Neural Network based on Prior Knowledge
- Authors: Jie Fang, Shutao Xia, Jianwu Lin, Zhikang Xia, Xiang Liu, and Yong
Jiang
- Abstract summary: Genetic programming (GP) is the state of the art in the financial automated feature construction task.
This paper proposes the Alpha Discovery Neural Network (ADNN), a tailored neural network structure that can automatically construct diversified financial technical indicators.
- Score: 55.65102700986668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Genetic programming (GP) is the state of the art in the financial automated
feature construction task. It employs reverse Polish notation to represent
features and then conducts an evolution process. However, with the development
of deep learning, more powerful feature extraction tools have become available. This
paper proposes the Alpha Discovery Neural Network (ADNN), a tailored neural network
structure that can automatically construct diversified financial technical
indicators based on prior knowledge. We make three main contributions. First,
we use domain knowledge in quantitative trading to design the sampling rules
and the objective function. Second, pre-training and model pruning are used to
replace genetic programming, because they conduct a more efficient evolution
process. Third, the feature extractor in ADNN can be swapped for a different
one, producing different functions. Experimental results show
that ADNN constructs more informative and diversified features than GP,
which effectively enriches the current factor pool. The fully-connected
network and the recurrent network are better at extracting information from
financial time series than the convolutional neural network. In practice,
features constructed by ADNN consistently improve a multi-factor strategy's
return, Sharpe ratio, and maximum drawdown, compared with the same
strategy without these factors.
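The paper ships no reference implementation, so the following is only a minimal PyTorch sketch of the idea as the abstract describes it: one ADNN-style branch that maps a raw price/volume window to a scalar factor value, with a swappable fully-connected, recurrent, or convolutional feature extractor, trained against a plain correlation (information coefficient) objective. All names (AlphaExtractor, ic_loss), layer sizes, and the objective itself are assumptions for illustration, not the authors' design; their sampling rules, pre-training, and pruning procedure are more involved.

```python
import torch
import torch.nn as nn

class AlphaExtractor(nn.Module):
    """One ADNN-style branch: (batch, window, n_fields) -> scalar factor.

    The feature extractor is swappable ("fc", "rnn", or "cnn"), mirroring
    the abstract's claim that different extractors produce different
    constructed functions. Hypothetical sketch, not the authors' code.
    """

    def __init__(self, n_fields: int, window: int, hidden: int = 32,
                 backbone: str = "fc"):
        super().__init__()
        self.backbone_type = backbone
        if backbone == "fc":
            self.backbone = nn.Sequential(
                nn.Flatten(),  # (B, window * n_fields)
                nn.Linear(window * n_fields, hidden), nn.Tanh())
        elif backbone == "rnn":
            self.backbone = nn.GRU(n_fields, hidden, batch_first=True)
        elif backbone == "cnn":
            self.backbone = nn.Sequential(
                nn.Conv1d(n_fields, hidden, kernel_size=3, padding=1),
                nn.ReLU(), nn.AdaptiveAvgPool1d(1), nn.Flatten())
        else:
            raise ValueError(f"unknown backbone: {backbone}")
        self.head = nn.Linear(hidden, 1)  # scalar technical indicator

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, n_fields), e.g. OHLCV over a lookback window.
        if self.backbone_type == "rnn":
            _, h = self.backbone(x)               # h: (1, batch, hidden)
            z = h.squeeze(0)
        elif self.backbone_type == "cnn":
            z = self.backbone(x.transpose(1, 2))  # Conv1d expects (B, C, T)
        else:
            z = self.backbone(x)
        return self.head(z).squeeze(-1)           # (batch,) factor values

def ic_loss(factor: torch.Tensor, fwd_ret: torch.Tensor) -> torch.Tensor:
    """Negative Pearson correlation between the factor and forward returns
    across the cross-section; minimizing it maximizes the IC."""
    f = factor - factor.mean()
    r = fwd_ret - fwd_ret.mean()
    return -(f * r).sum() / (f.norm() * r.norm() + 1e-8)

# Example: 10-day OHLCV windows for a cross-section of 256 stocks.
x, fwd_ret = torch.randn(256, 10, 5), torch.randn(256)
model = AlphaExtractor(n_fields=5, window=10, backbone="rnn")
ic_loss(model(x), fwd_ret).backward()
```

Under this sketch, the abstract's second contribution would correspond to pre-training many such branches and pruning the redundant ones, in place of GP's mutation-and-selection loop.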
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Graph Neural Networks Provably Benefit from Structural Information: A
Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - Supervised Feature Selection with Neuron Evolution in Sparse Neural
Networks [17.12834153477201]
We propose a novel resource-efficient supervised feature selection method using sparse neural networks.
By gradually pruning the uninformative features from the input layer of a sparse neural network trained from scratch, NeuroFS derives an informative subset of features efficiently.
NeuroFS achieves the highest ranking-based score among the considered state-of-the-art supervised feature selection models.
arXiv Detail & Related papers (2023-03-10T17:09:55Z) - Evolution of Activation Functions for Deep Learning-Based Image
Classification [0.0]
Activation functions (AFs) play a pivotal role in the performance of neural networks.
We propose a novel, three-population, coevolutionary algorithm to evolve AFs.
Tested on four datasets -- MNIST, FashionMNIST, KMNIST, and USPS -- coevolution proves to be a performant algorithm for finding good AFs and AF architectures.
arXiv Detail & Related papers (2022-06-24T05:58:23Z) - OGGN: A Novel Generalized Oracle Guided Generative Architecture for
Modelling Inverse Function of Artificial Neural Networks [0.6091702876917279]
This paper presents a novel Generative Neural Network Architecture for modelling the inverse function of an Artificial Neural Network (ANN) either completely or partially.
The proposed Oracle Guided Generative Neural Network, dubbed OGGN, is flexible enough to handle a variety of feature generation problems.
The constraint functions enable a neural network to investigate a given local space for a longer period of time.
arXiv Detail & Related papers (2021-04-08T17:28:52Z) - How Neural Networks Extrapolate: From Feedforward to Graph Neural
Networks [80.55378250013496]
We study how neural networks trained by gradient descent extrapolate what they learn outside the support of the training distribution.
Graph Neural Networks (GNNs) have shown some success in more complex tasks.
arXiv Detail & Related papers (2020-09-24T17:48:59Z) - Neural Network-based Automatic Factor Construction [58.96870869237197]
This paper proposes Neural Network-based Automatic Factor Construction (NNAFC).
NNAFC can automatically construct diversified financial factors based on financial domain knowledge.
New factors constructed by NNAFC consistently improve the return, Sharpe ratio, and maximum drawdown of a multi-factor quantitative investment strategy.
arXiv Detail & Related papers (2020-08-14T07:44:49Z) - Deep Polynomial Neural Networks [77.70761658507507]
$\Pi$-Nets are a new class of function approximators based on polynomial expansions.
$\Pi$-Nets produce state-of-the-art results in three challenging tasks, i.e., image generation, face verification, and 3D mesh representation learning.
arXiv Detail & Related papers (2020-06-20T16:23:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.