Gradient-Boosted Based Structured and Unstructured Learning
- URL: http://arxiv.org/abs/2302.14299v1
- Date: Tue, 28 Feb 2023 04:16:42 GMT
- Title: Gradient-Boosted Based Structured and Unstructured Learning
- Authors: Andrea Treviño Gavito, Diego Klabjan, Jean Utke
- Abstract summary: We propose two frameworks to deal with problem settings in which both structured and unstructured data are available.
Our proposed frameworks allow joint learning on both kinds of data by integrating the paradigms of boosting models and deep neural networks.
- Score: 18.76745359031975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose two frameworks to deal with problem settings in which both
structured and unstructured data are available. Structured data problems are
best solved by traditional machine learning models such as boosting and
tree-based algorithms, whereas deep learning has been widely applied to
problems dealing with images, text, audio, and other unstructured data sources.
However, for the setting in which both structured and unstructured data are
accessible, it is not obvious what the best modeling approach is to enhance
performance on both data sources simultaneously. Our proposed frameworks allow
joint learning on both kinds of data by integrating the paradigms of boosting
models and deep neural networks. The first framework, the
boosted-feature-vector deep learning network, learns features from the
structured data using gradient boosting and combines them with embeddings from
unstructured data via a two-branch deep neural network. The second framework,
the two-weak-learner boosting framework, extends the boosting paradigm to the
setting with two input data sources. We present and compare first- and
second-order methods of this framework. Our experimental results on both public
and real-world datasets show performance gains achieved by the frameworks over
selected baselines by margins of 0.1% to 4.7%.
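To make the first framework concrete, here is a minimal sketch of a boosted-feature-vector two-branch network. It assumes the boosted feature vector is built from one-hot-encoded per-tree leaf assignments of a fitted gradient-boosting model, and uses random matrices plus a small MLP branch as stand-ins for real unstructured-data embeddings; the paper's exact feature construction and fusion may differ, so every name and choice below is illustrative rather than the authors' implementation.

```python
# Hedged sketch: boosted features from structured data fused with
# unstructured-data embeddings in a two-branch network (assumptions above).
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X_struct = rng.normal(size=(256, 10))   # structured (tabular) inputs
X_embed = rng.normal(size=(256, 64))    # stand-in for text/image embeddings
y = (X_struct[:, 0] + X_embed[:, 0] > 0).astype(int)

# Branch 1: gradient boosting on the structured data; each example's
# per-tree leaf indices become its boosted feature vector.
gbm = GradientBoostingClassifier(n_estimators=20, max_depth=3).fit(X_struct, y)
leaves = gbm.apply(X_struct)[:, :, 0].astype(int)   # (n_samples, n_trees)
n_leaf_ids = int(leaves.max()) + 1
boosted_fv = np.eye(n_leaf_ids)[leaves].reshape(len(leaves), -1)  # one-hot per tree

class TwoBranchNet(nn.Module):
    """Fuses boosted features (structured) with embeddings (unstructured)."""
    def __init__(self, d_boost, d_embed, d_hidden=32):
        super().__init__()
        self.boost_branch = nn.Sequential(nn.Linear(d_boost, d_hidden), nn.ReLU())
        self.embed_branch = nn.Sequential(nn.Linear(d_embed, d_hidden), nn.ReLU())
        self.head = nn.Linear(2 * d_hidden, 2)  # concatenate, then classify

    def forward(self, xb, xe):
        z = torch.cat([self.boost_branch(xb), self.embed_branch(xe)], dim=1)
        return self.head(z)

net = TwoBranchNet(boosted_fv.shape[1], X_embed.shape[1])
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xb = torch.tensor(boosted_fv, dtype=torch.float32)
xe = torch.tensor(X_embed, dtype=torch.float32)
yt = torch.tensor(y)
for _ in range(50):  # brief joint training of both branches
    opt.zero_grad()
    loss = nn.functional.cross_entropy(net(xb, xe), yt)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```

The design point mirrors the abstract: gradient boosting supplies the structured-data features, and the two branches are fused (here by simple concatenation) before a shared classification head.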
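The two-weak-learner framework can likewise be sketched as a first-order (gradient) boosting loop that fits one weak learner per data source in each round. Everything here is an assumption for illustration: squared loss, shallow regression trees as the weak learners for both sources, and a simple average as the combination rule; the paper's actual loss, learners, combination rule, and its second-order variant may all differ.

```python
# Hedged sketch: first-order boosting with two weak learners per round,
# one fit on structured features and one on unstructured-data embeddings.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X_struct = rng.normal(size=(300, 8))   # structured (tabular) source
X_embed = rng.normal(size=(300, 16))   # stand-in embeddings of unstructured source
y = X_struct[:, 0] + 0.5 * X_embed[:, 0] + 0.1 * rng.normal(size=300)

lr, n_rounds = 0.1, 50
pred = np.full_like(y, y.mean())       # initialize with the mean prediction
for _ in range(n_rounds):
    residual = y - pred                # negative gradient of squared loss
    # One weak learner per data source, both fit to the same residual.
    h_s = DecisionTreeRegressor(max_depth=2).fit(X_struct, residual)
    h_u = DecisionTreeRegressor(max_depth=2).fit(X_embed, residual)
    # First-order update: step along the averaged weak-learner outputs.
    pred += lr * 0.5 * (h_s.predict(X_struct) + h_u.predict(X_embed))

print(f"train MSE: {np.mean((y - pred) ** 2):.4f}")
```

A second-order variant would additionally use the second derivative of the loss to weight each fit, in the style of Newton boosting.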
Related papers
- Optimizing Federated Graph Learning with Inherent Structural Knowledge and Dual-Densely Connected GNNs [6.185201353691423]
Federated Graph Learning (FGL) enables clients to collaboratively train powerful Graph Neural Networks (GNNs) in a distributed manner without exposing their private data.
Existing methods either overlook the inherent structural knowledge in graph data or capture it at the cost of significantly increased resource demands.
We propose FedDense, a novel FGL framework that optimizes the utilization of inherent structural knowledge.
arXiv Detail & Related papers (2024-08-21T14:37:50Z) - Homological Convolutional Neural Networks [4.615338063719135]
We propose a novel deep learning architecture that exploits the data structural organization through topologically constrained network representations.
We test our model on 18 benchmark datasets against 5 classic machine learning and 3 deep learning models.
arXiv Detail & Related papers (2023-08-26T08:48:51Z) - Bilevel Fast Scene Adaptation for Low-Light Image Enhancement [50.639332885989255]
Enhancing images captured in low-light scenes is a challenging but widely studied task in computer vision.
The main obstacle lies in modeling the distribution discrepancy across different scenes.
We introduce the bilevel paradigm to model the above latent correspondence.
A bilevel learning framework is constructed to endow the scene-irrelevant generality of the encoder towards diverse scenes.
arXiv Detail & Related papers (2023-06-02T08:16:21Z) - Principled and Efficient Motif Finding for Structure Learning of Lifted Graphical Models [5.317624228510748]
Structure learning is a core problem in AI central to the fields of neuro-symbolic AI and statistical relational learning.
We present the first principled approach for mining structural motifs in lifted graphical models.
We show that we outperform state-of-the-art structure learning approaches by up to 6% in terms of accuracy and up to 80% in terms of runtime.
arXiv Detail & Related papers (2023-02-09T12:21:55Z) - Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z) - Texture Aware Autoencoder Pre-training And Pairwise Learning Refinement For Improved Iris Recognition [16.383084641568693]
This paper presents an end-to-end trainable iris recognition system for datasets with limited training data.
We build upon our previous stagewise learning framework with certain key optimization and architectural innovations.
We validate our model across three publicly available iris datasets and the proposed model consistently outperforms both traditional and deep learning baselines.
arXiv Detail & Related papers (2022-02-15T15:12:31Z) - Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z) - AdaXpert: Adapting Neural Architecture for Growing Data [63.30393509048505]
In real-world applications, data often arrive incrementally, with both the data volume and the number of classes increasing dynamically.
Given this growth, one has to adjust the neural model capacity on the fly to obtain promising performance.
Existing methods either ignore the growing nature of data or seek to independently search an optimal architecture for a given dataset.
arXiv Detail & Related papers (2021-07-01T07:22:05Z) - Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched-prior-based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical-structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z) - Unsupervised Deep Cross-modality Spectral Hashing [65.3842441716661]
The framework is a two-step hashing approach which decouples the optimization into binary optimization and hashing function learning.
We propose a novel spectral embedding-based algorithm to simultaneously learn single-modality and binary cross-modality representations.
We leverage powerful CNNs for images and propose a CNN-based deep architecture to learn the text modality.
arXiv Detail & Related papers (2020-08-01T09:20:11Z) - Improving Learning Effectiveness For Object Detection and Classification in Cluttered Backgrounds [6.729108277517129]
This paper develops a framework that autonomously generates a training dataset in heterogeneous cluttered backgrounds.
The learning effectiveness of the proposed framework is shown to improve in complex and heterogeneous environments.
The performance of the proposed framework is investigated through empirical tests and compared with that of the model trained with the COCO dataset.
arXiv Detail & Related papers (2020-02-27T22:28:48Z)