XBNet : An Extremely Boosted Neural Network
- URL: http://arxiv.org/abs/2106.05239v1
- Date: Wed, 9 Jun 2021 17:31:50 GMT
- Title: XBNet : An Extremely Boosted Neural Network
- Authors: Tushar Sarkar
- Abstract summary: XBNet combines tree-based models with neural networks to create a robust architecture trained with a novel optimization technique, Boosted Gradient Descent for Tabular Data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural networks have proved to be very robust at processing unstructured data
like images, text, videos, and audio. However, it has been observed that their
performance is not up to the mark on tabular data; hence tree-based models are
preferred in such scenarios. A popular model for tabular data is boosted trees,
a highly efficacious and extensively used machine learning method, and it also
provides good interpretability compared to neural networks. In this paper, we
describe a novel architecture, XBNet, which combines tree-based models with neural networks to create a robust architecture trained with a novel optimization technique, Boosted Gradient Descent for Tabular Data, improving both its interpretability and performance.
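As a rough illustration of how Boosted Gradient Descent might interleave tree-derived feature importances with ordinary gradient updates, here is a minimal PyTorch/XGBoost sketch; the layer sizes, the (1 + importance) scaling rule, and fitting the trees on the raw inputs are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn
from xgboost import XGBClassifier

X = torch.randn(256, 10)                 # toy tabular batch
y = torch.randint(0, 2, (256,))          # binary labels

layer1 = nn.Linear(10, 16)
model = nn.Sequential(layer1, nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()                           # ordinary gradient-descent step

    # Boosted step (assumed form): fit a small tree ensemble on the inputs
    # and rescale the first layer's weight columns by per-feature
    # importance, boosting features the trees find informative.
    trees = XGBClassifier(n_estimators=10, max_depth=3).fit(X.numpy(), y.numpy())
    imp = torch.tensor(trees.feature_importances_, dtype=torch.float32)
    with torch.no_grad():
        layer1.weight *= 1.0 + imp       # (16, 10) * (10,) broadcasts per column
```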
Related papers
- Escaping the Forest: Sparse Interpretable Neural Networks for Tabular Data [0.0]
We show that our models, Sparse TABular NET or sTAB-Net with attention mechanisms, are more effective than tree-based models.
They achieve better performance than post-hoc methods like SHAP.
arXiv Detail & Related papers (2024-10-23T10:50:07Z)
- Interpretable Graph Neural Networks for Tabular Data [18.30325076881234]
IGNNet constrains the learning algorithm to produce an interpretable model.
A large-scale empirical investigation is presented, showing that IGNNet performs on par with state-of-the-art machine-learning algorithms.
arXiv Detail & Related papers (2023-08-17T12:35:02Z)
- NCART: Neural Classification and Regression Tree for Tabular Data [0.5439020425819]
NCART is a modified version of Residual Networks that replaces fully-connected layers with multiple differentiable oblivious decision trees.
It maintains its interpretability while benefiting from the end-to-end capabilities of neural networks.
The simplicity of the NCART architecture makes it well-suited for datasets of varying sizes.
arXiv Detail & Related papers (2023-07-23T01:27:26Z)
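A differentiable oblivious decision tree of the kind the NCART entry above describes can be sketched in a few lines: every level of an oblivious tree applies one shared (soft) split to all paths, so a depth-d tree has d learned splits and 2^d leaf values. The sigmoid gating and the names below are illustrative assumptions, not NCART's actual implementation.

```python
import torch
import torch.nn as nn

class SoftObliviousTree(nn.Module):
    def __init__(self, in_features: int, depth: int = 3):
        super().__init__()
        self.depth = depth
        self.split_weights = nn.Parameter(torch.randn(depth, in_features))
        self.split_bias = nn.Parameter(torch.zeros(depth))
        self.leaf_values = nn.Parameter(torch.randn(2 ** depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One sigmoid gate per level: (batch, depth).
        gates = torch.sigmoid(x @ self.split_weights.t() + self.split_bias)
        # Probability of reaching each of the 2^depth leaves is the product
        # of the left/right branch probabilities at every level.
        probs = torch.ones(x.size(0), 1, device=x.device)
        for level in range(self.depth):
            g = gates[:, level : level + 1]
            probs = torch.cat([probs * g, probs * (1 - g)], dim=1)
        return probs @ self.leaf_values   # (batch,)

tree = SoftObliviousTree(in_features=8, depth=3)
out = tree(torch.randn(32, 8))            # differentiable, trainable end-to-end
```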
- Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks in a model zoo of mixed architectures.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
arXiv Detail & Related papers (2023-05-26T04:34:28Z)
- Interpretable Mesomorphic Networks for Tabular Data [25.76214343259399]
We propose a new class of interpretable neural networks that are both deep and linear at the same time.
We optimize deep hypernetworks to generate explainable linear models on a per-instance basis.
arXiv Detail & Related papers (2023-05-22T14:41:17Z)
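To make the "deep and linear at the same time" idea from the Interpretable Mesomorphic Networks entry above concrete, here is a hypothetical sketch of a hypernetwork that emits a separate linear model for each input; the architecture sizes and names are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class PerInstanceLinear(nn.Module):
    def __init__(self, in_features: int, hidden: int = 64):
        super().__init__()
        # Hypernetwork: maps an input to one weight per feature plus a bias.
        self.hyper = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, in_features + 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        params = self.hyper(x)                    # (batch, in_features + 1)
        w, b = params[:, :-1], params[:, -1]
        # Per-instance linear prediction; w doubles as a local explanation.
        return (w * x).sum(dim=1) + b

model = PerInstanceLinear(in_features=12)
pred = model(torch.randn(4, 12))                  # deep to fit, linear to read
```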
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
However, the best models for such features in standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
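The fusion described in the stacking entry above can be approximated in a few lines: an IID tabular model predicts per-node class probabilities from features alone, and those predictions are then smoothed over the graph. The propagation rule, the mixing weight, and the toy data below are illustrative assumptions, not the paper's framework.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 5))                # node features (the IID view)
y = rng.integers(0, 2, size=n)             # node labels
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.maximum(A, A.T)                     # symmetric toy adjacency

base = GradientBoostingClassifier().fit(X, y)
P = base.predict_proba(X)                  # (n, 2) feature-only predictions

# Row-normalized adjacency: each node averages its neighbors' predictions.
deg = A.sum(axis=1, keepdims=True).clip(min=1)
A_hat = A / deg

alpha = 0.5                                # assumed mixing weight
for _ in range(3):                         # a few propagation steps
    P = alpha * (A_hat @ P) + (1 - alpha) * base.predict_proba(X)

pred = P.argmax(axis=1)                    # graph-smoothed node predictions
```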
- Creating Powerful and Interpretable Models with Regression Networks [2.2049183478692584]
We propose a novel architecture, Regression Networks, which combines the power of neural networks with the understandability of regression analysis.
We demonstrate that the models exceed the state-of-the-art performance of interpretable models on several benchmark datasets.
arXiv Detail & Related papers (2021-07-30T03:37:00Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
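As a simplified illustration of the "growing wider" half of firefly descent described above, the sketch below adds hidden units to a linear layer with zero outgoing weights, so the network's function is preserved at the moment of growth; the growth criterion (which firefly descent derives from the loss) is omitted, and all names are assumptions.

```python
import torch
import torch.nn as nn

def grow_wider(fc_in: nn.Linear, fc_out: nn.Linear, new_units: int):
    """Return copies of (fc_in, fc_out) with `new_units` extra hidden units."""
    wider_in = nn.Linear(fc_in.in_features, fc_in.out_features + new_units)
    wider_out = nn.Linear(fc_out.in_features + new_units, fc_out.out_features)
    with torch.no_grad():
        # Copy old weights; new units get tiny random input weights and
        # zero output weights, so predictions are unchanged at growth time.
        wider_in.weight[: fc_in.out_features] = fc_in.weight
        wider_in.bias[: fc_in.out_features] = fc_in.bias
        wider_in.weight[fc_in.out_features :].normal_(std=1e-3)
        wider_in.bias[fc_in.out_features :].zero_()
        wider_out.weight[:, : fc_out.in_features] = fc_out.weight
        wider_out.weight[:, fc_out.in_features :].zero_()
        wider_out.bias.copy_(fc_out.bias)
    return wider_in, wider_out

fc1, fc2 = nn.Linear(4, 8), nn.Linear(8, 2)
fc1, fc2 = grow_wider(fc1, fc2, new_units=4)   # 8 -> 12 hidden units
```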
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose using evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we adapt the idea of group convolution to design efficient 1-bit Convolutional Neural Networks (CNNs).
Our objective is to arrive at a tiny yet efficient binary neural architecture by exploring the best candidates for group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.