Neural Network-based Automatic Factor Construction
- URL: http://arxiv.org/abs/2008.06225v3
- Date: Tue, 13 Oct 2020 16:20:18 GMT
- Title: Neural Network-based Automatic Factor Construction
- Authors: Jie Fang, Jianwu Lin, Shutao Xia, Yong Jiang, Zhikang Xia, Xiang Liu
- Abstract summary: This paper proposes Neural Network-based Automatic Factor Construction (NNAFC).
NNAFC can automatically construct diversified financial factors based on financial domain knowledge.
New factors constructed by NNAFC consistently improve the return, Sharpe ratio, and maximum drawdown of a multi-factor quantitative investment strategy.
- Score: 58.96870869237197
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instead of conducting manual factor construction based on traditional and
behavioural finance analysis, academic researchers and quantitative investment
managers have leveraged Genetic Programming (GP) as an automatic feature
construction tool in recent years, which builds reverse Polish mathematical
expressions from trading data into new factors. However, with the development
of deep learning, more powerful feature extraction tools are available. This
paper proposes Neural Network-based Automatic Factor Construction (NNAFC), a
tailored neural network framework that can automatically construct diversified
financial factors based on financial domain knowledge and a variety of neural
network structures. The experimental results show that NNAFC can construct more
informative and diversified factors than GP, to effectively enrich the current
factor pool. For the current market, both fully connected and recurrent neural
network structures are better at extracting information from financial time
series than convolutional neural network structures. Moreover, new factors
constructed by NNAFC consistently improve the return, Sharpe ratio, and
maximum drawdown of a multi-factor quantitative investment strategy because
they introduce more information and diversification into the existing factor pool.
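To make the GP baseline concrete, the following is a minimal sketch of how a factor written as a reverse Polish expression can be evaluated over daily trading data. This is not code from the paper: the operator set, the helper names (`evaluate_rpn_factor`, `ts_rank`), and the example expression `(close - open) / log(volume)` are all illustrative assumptions.

```python
import numpy as np

def ts_rank(x: np.ndarray, window: int = 10) -> np.ndarray:
    """Rolling rank of the latest value within a trailing window."""
    out = np.full(len(x), np.nan)
    for t in range(window - 1, len(x)):
        w = x[t - window + 1 : t + 1]
        out[t] = (w < w[-1]).mean()
    return out

# Illustrative operator set; a real GP system would use a richer one.
BINARY = {
    "add": np.add,
    "sub": np.subtract,
    "mul": np.multiply,
    "div": lambda a, b: np.divide(a, b, out=np.zeros_like(a), where=b != 0),
}
UNARY = {"log": lambda a: np.log(np.abs(a) + 1e-9), "rank": ts_rank}

def evaluate_rpn_factor(tokens, data):
    """Evaluate a reverse Polish factor expression over trading data.

    tokens: e.g. ["close", "open", "sub", "volume", "log", "div"]
            encodes (close - open) / log(volume).
    data:   dict mapping field names to aligned numpy arrays.
    """
    stack = []
    for tok in tokens:
        if tok in data:
            stack.append(data[tok].astype(float))
        elif tok in UNARY:
            stack.append(UNARY[tok](stack.pop()))
        elif tok in BINARY:
            b, a = stack.pop(), stack.pop()
            stack.append(BINARY[tok](a, b))
        else:
            raise ValueError(f"unknown token: {tok}")
    (factor,) = stack  # a well-formed expression leaves exactly one array
    return factor

# Toy daily data; GP searches over token sequences like the one below.
rng = np.random.default_rng(0)
data = {
    "open": 100 + rng.normal(0, 1, 250).cumsum(),
    "close": 100 + rng.normal(0, 1, 250).cumsum(),
    "volume": rng.integers(1_000, 5_000, 250).astype(float),
}
factor = evaluate_rpn_factor(["close", "open", "sub", "volume", "log", "div"], data)
```

A GP system typically searches over such token sequences with crossover and mutation, scoring each candidate factor by a fitness measure such as its information coefficient; NNAFC instead replaces the formula search with trainable neural networks.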
Related papers
- Parallel Proportional Fusion of Spiking Quantum Neural Network for Optimizing Image Classification [10.069224006497162]
We introduce a novel architecture termed Parallel Proportional Fusion of Quantum and Spiking Neural Networks (PPF-QSNN).
The proposed PPF-QSNN outperforms both the existing spiking neural network and the serial quantum neural network across metrics such as accuracy, loss, and robustness.
This study lays the groundwork for the advancement and application of quantum advantage in artificial intelligence computations.
arXiv Detail & Related papers (2024-04-01T10:35:35Z)
- Structured Neural Networks for Density Estimation and Causal Inference [15.63518195860946]
Injecting structure into neural networks enables learning functions that satisfy invariances with respect to subsets of inputs.
We propose the Structured Neural Network (StrNN), which injects structure through masking pathways in a neural network.
arXiv Detail & Related papers (2023-11-03T20:15:05Z)
- Noncommutative $C^*$-algebra Net: Learning Neural Networks with Powerful Product Structure in $C^*$-algebra [5.359060261460183]
We show that this noncommutative structure induces powerful effects in learning neural networks.
Our framework has a wide range of applications, such as learning multiple related neural networks simultaneously with interactions and learning equivariant features.
arXiv Detail & Related papers (2023-01-26T14:35:37Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the NTK has been devoted to typical neural network architectures but is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures information flow across layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- Random Graph-Based Neuromorphic Learning with a Layer-Weaken Structure [4.477401614534202]
We transform random graph theory into a practically meaningful NN model by clarifying the input-output relationship of each neuron.
Using this low-operation-cost approach, neurons are assigned to several groups whose connection relationships can be regarded as uniform representations of the random graphs they belong to.
We develop a joint classification mechanism involving information interaction between multiple RGNNs and realize significant performance improvements in supervised learning for three benchmark tasks.
arXiv Detail & Related papers (2021-11-17T03:37:06Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks perform significantly worse when fed substantially less data.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
- Alpha Discovery Neural Network based on Prior Knowledge [55.65102700986668]
Genetic programming (GP) is the state of the art in the automated financial feature construction task.
This paper proposes Alpha Discovery Neural Network (ADNN), a tailored neural network structure which can automatically construct diversified financial technical indicators (a minimal sketch of this idea follows the list below).
arXiv Detail & Related papers (2019-12-26T03:10:17Z)
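To illustrate the idea shared by ADNN and NNAFC, that a neural network replaces the GP formula as the factor-generating function, here is a minimal sketch. It is not the papers' architecture: the window length, the single-hidden-layer design, and the random (untrained) weights are illustrative assumptions; in the papers the network would be trained against an investment objective such as the information coefficient or strategy return.

```python
import numpy as np

rng = np.random.default_rng(1)
window, hidden = 20, 8

def fc_factor(returns: np.ndarray, W1, b1, W2, b2) -> np.ndarray:
    """Apply a one-hidden-layer net to each trailing window of returns,
    producing one factor value per day (NaN until a full window exists)."""
    scores = np.full(len(returns), np.nan)
    for t in range(window - 1, len(returns)):
        x = returns[t - window + 1 : t + 1]
        x = (x - x.mean()) / (x.std() + 1e-9)   # standardize the window
        h = np.tanh(W1 @ x + b1)                # hidden layer
        scores[t] = float(W2 @ h + b2)          # scalar factor value
    return scores

# Randomly initialized parameters stand in for trained ones.
W1, b1 = rng.normal(0, 0.1, (hidden, window)), np.zeros(hidden)
W2, b2 = rng.normal(0, 0.1, hidden), 0.0

returns = rng.normal(0, 0.01, 250)              # toy daily return series
factor = fc_factor(returns, W1, b1, W2, b2)
```

Swapping the fully connected body for a recurrent layer yields the recurrent variant that the NNAFC abstract reports as similarly effective on financial time series.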