DeConFuse : A Deep Convolutional Transform based Unsupervised Fusion
Framework
- URL: http://arxiv.org/abs/2011.04337v1
- Date: Mon, 9 Nov 2020 11:04:09 GMT
- Title: DeConFuse : A Deep Convolutional Transform based Unsupervised Fusion
Framework
- Authors: Pooja Gupta, Jyoti Maggu, Angshul Majumdar, Emilie Chouzenoux,
Giovanni Chierchia
- Abstract summary: This work proposes an unsupervised fusion framework based on deep convolutional transform learning.
We apply the proposed technique, named DeConFuse, on the problem of stock forecasting and trading.
- Score: 29.58965424136611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work proposes an unsupervised fusion framework based on deep
convolutional transform learning. The great learning ability of convolutional
filters for data analysis is well acknowledged. The success of convolutional
features owes much to the convolutional neural network (CNN). However, CNNs
cannot perform learning tasks in an unsupervised fashion. In a recent work, we
showed that this shortcoming can be addressed by adopting a convolutional
transform learning (CTL) approach, in which convolutional filters are learnt in
an unsupervised fashion. The present paper aims at (i) proposing a deep version
of CTL; (ii) proposing an unsupervised fusion formulation taking advantage of
the proposed deep CTL representation; and (iii) developing a mathematically
sound optimization strategy for performing the learning task. We apply the proposed
technique, named DeConFuse, on the problem of stock forecasting and trading.
Comparison with state-of-the-art methods (based on CNN and long short-term
memory network) shows the superiority of our method for performing a reliable
feature extraction.
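The CTL building block described above can be sketched in a few lines. This is a minimal, hypothetical 1-D illustration, not the authors' implementation: it alternates a closed-form soft-thresholding update of the features with a gradient step on the filters, and it omits the log-det/Frobenius filter regularizer that CTL uses to prevent trivial solutions. The filter count, kernel size, learning rate, and toy signal are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ctl_layer(x, num_filters=4, ksize=5, lam=0.1, lr=1e-3, iters=300):
    """Minimal convolutional-transform layer: alternately minimise
    sum_m ||t_m (*) x - z_m||^2 + lam * ||z_m||_1 over the features z_m
    (closed-form soft-threshold) and the filters t_m (gradient step).
    The paper's log-det / Frobenius filter regulariser is omitted."""
    T = 0.1 * rng.standard_normal((num_filters, ksize))
    for _ in range(iters):
        # feature update: apply filters, then soft-threshold (prox of the l1 term)
        Y = np.array([np.correlate(x, t, mode="valid") for t in T])
        Z = np.sign(Y) * np.maximum(np.abs(Y) - lam, 0.0)
        # filter update: gradient of the quadratic fidelity term w.r.t. each filter
        for m in range(num_filters):
            r = np.correlate(x, T[m], mode="valid") - Z[m]
            T[m] -= lr * np.correlate(x, r, mode="valid")
    return T, Z

x = np.sin(np.linspace(0, 8 * np.pi, 256))  # toy 1-D "channel"
T, Z = ctl_layer(x)
print(T.shape, Z.shape)  # (4, 5) (4, 252)
```

A deep version would feed the features `Z` (after a nonlinearity) into another such layer, and a fusion stage would concatenate the deepest features from several channels; both are beyond this sketch.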
Related papers
- Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm to learn online the optimal source placement in large-scale networks.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
arXiv Detail & Related papers (2023-07-07T15:03:42Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- The Power of Linear Combinations: Learning with Random Convolutions [2.0305676256390934]
Modern CNNs can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters.
These combinations of random filters can implicitly regularize the resulting operations.
Although we only observe relatively small gains from learning $3\times 3$ convolutions, the learning gains increase proportionally with kernel size.
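The core claim above, that learned linear (1x1) combinations of frozen random filters can stand in for learned spatial filters, can be checked with a hedged back-of-the-envelope sketch (the Sobel target and the bank size of 16 are our illustrative choices, not the paper's setup): sixteen random 3x3 filters generically span the full 9-dimensional filter space, so least-squares combination weights reproduce any fixed 3x3 kernel.

```python
import numpy as np

rng = np.random.default_rng(1)

# a frozen bank of 16 random 3x3 filters, flattened to vectors in R^9
bank = rng.standard_normal((16, 9))

# a "learned" spatial filter we would like to express: a Sobel edge kernel
target = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]]).ravel()

# learn only the linear (1x1) combination weights, here by least squares
w, *_ = np.linalg.lstsq(bank.T, target, rcond=None)
residual = np.linalg.norm(bank.T @ w - target)
print(residual < 1e-9)  # True: 16 random filters span R^9 almost surely
```

With fewer than nine random filters the residual would generally be nonzero, which is one plausible reading of why the gains from actually learning filters grow with kernel size: larger kernels live in a higher-dimensional space that a fixed random bank covers less well.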
arXiv Detail & Related papers (2023-01-26T19:17:10Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model uses only 37% of the baseline's network parameters, and its average solution gap to the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Vibration-Based Condition Monitoring By Ensemble Deep Learning [0.0]
This study proposes a framework based on ensemble deep learning methodology.
The proposed framework is applied to real test data collected from Equiax Polycrystalline Nickel alloy first-stage turbine blades.
arXiv Detail & Related papers (2021-10-13T09:51:40Z)
- FILTRA: Rethinking Steerable CNN by Filter Transform [59.412570807426135]
The problem of steerable CNNs has been studied from the perspective of group representation theory.
We show that the kernels constructed by filter transform can also be interpreted in group representation theory.
This interpretation helps complete the puzzle of steerable CNN theory and provides a novel and simple approach to implementing steerable convolution operators.
arXiv Detail & Related papers (2021-05-25T03:32:34Z)
- Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z)
- Deep Convolutional Transform Learning -- Extended version [31.034188573071898]
This work introduces a new unsupervised representation learning technique called Deep Convolutional Transform Learning (DCTL).
By stacking convolutional transforms, our approach is able to learn a set of independent kernels at different layers.
The features extracted in an unsupervised manner can then be used to perform machine learning tasks, such as classification and clustering.
arXiv Detail & Related papers (2020-09-29T22:44:38Z)
- Deep-3DAligner: Unsupervised 3D Point Set Registration Network With Optimizable Latent Vector [15.900382629390297]
We propose a novel model that integrates optimization into learning, aiming to address the technical challenges in 3D registration.
In addition to the deep transformation decoding network, our framework introduces an optimizable deep Spatial Correlation Representation.
arXiv Detail & Related papers (2020-04-20T18:43:13Z)
- Sparse aNETT for Solving Inverse Problems with Deep Learning [2.5234156040689237]
We propose a sparse reconstruction framework (aNETT) for solving inverse problems.
We train an autoencoder network $D \circ E$ with $E$ acting as a nonlinear sparsifying transform.
Numerical results are presented for sparse view CT.
arXiv Detail & Related papers (2020-04-20T18:43:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.