Crossbreeding in Random Forest
- URL: http://arxiv.org/abs/2101.08585v1
- Date: Thu, 21 Jan 2021 12:58:54 GMT
- Title: Crossbreeding in Random Forest
- Authors: Abolfazl Nadi, Hadi Moradi, Khalil Taheri
- Abstract summary: Ensemble learning methods are designed to benefit from multiple learning algorithms for better predictive performance.
The tradeoff of this improved performance is slower speed and larger size of ensemble learning systems compared to single learning systems.
We present a novel approach to deal with this problem in Random Forest (RF), one of the most powerful ensemble methods.
- Score: 5.8010446129208155
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Ensemble learning methods are designed to benefit from multiple learning
algorithms for better predictive performance. The tradeoff of this improved
performance is slower speed and larger size of ensemble learning systems
compared to single learning systems. In this paper, we present a novel approach
to this problem in Random Forest (RF), one of the most powerful
ensemble methods. The method is based on crossbreeding the best tree
branches to improve the space and speed performance of RF while
preserving its classification performance. The proposed approach has been
tested on a group of synthetic and real datasets and compared to the standard
RF approach. Several evaluations have been conducted to determine the effects
of the Crossbred RF (CRF) on the accuracy and the number of trees in a forest.
The results show better performance of CRF compared to RF.
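The abstract does not detail the branch-level crossbreeding procedure, but the underlying goal (a smaller, faster forest that keeps the full forest's accuracy) can be illustrated with a simplified, hypothetical sketch. The snippet below builds a toy "forest" of random threshold stumps, scores each member on a held-out split, and keeps only the top performers. All names, the stump learners, and the top-k selection rule are illustrative assumptions, not the authors' method, which recombines branches rather than whole trees.

```python
import random

random.seed(0)

# Toy 1-D dataset: label is 1 when x > 0.5, with 5% label noise.
data = []
for _ in range(400):
    x = random.random()
    label = int(x > 0.5)
    if random.random() < 0.05:
        label = 1 - label
    data.append((x, label))
sel, val = data[:300], data[300:]  # selection split and evaluation split

# A "tree" here is just a random threshold stump -- a stand-in for
# the full decision trees of a real Random Forest.
stumps = [random.random() for _ in range(50)]

def stump_acc(t, split):
    """Accuracy of a single stump `x > t` on a data split."""
    return sum(int(x > t) == y for x, y in split) / len(split)

# Keep only the k best members: the same shrink-while-keeping-accuracy
# goal the paper pursues at the branch level.
k = 5
best = sorted(stumps, key=lambda t: stump_acc(t, sel), reverse=True)[:k]

def forest_predict(trees, x):
    """Majority vote over the ensemble."""
    votes = sum(int(x > t) for t in trees)
    return int(2 * votes >= len(trees))

full_acc = sum(forest_predict(stumps, x) == y for x, y in val) / len(val)
small_acc = sum(forest_predict(best, x) == y for x, y in val) / len(val)
print(f"full forest (50 members): {full_acc:.2f}")
print(f"pruned forest ({k} members): {small_acc:.2f}")
```

The pruned ensemble is a tenth of the original size, which is the space/speed tradeoff the paper targets; the selection is done on a split separate from the final evaluation so the smaller forest's accuracy is not an artifact of the selection itself.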
Related papers
- Heterogeneous Random Forest [2.0646127669654835]
Heterogeneous Random Forest (HRF) is designed to enhance tree diversity in a meaningful way.
HRF consistently outperformed other ensemble methods in terms of accuracy across the majority of datasets.
arXiv Detail & Related papers (2024-10-24T09:18:55Z)
- LeRF: Learning Resampling Function for Adaptive and Efficient Image Interpolation [64.34935748707673]
Recent deep neural networks (DNNs) have made impressive progress in performance by introducing learned data priors.
We propose a novel method of Learning Resampling (termed LeRF) which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption.
LeRF assigns spatially varying resampling functions to input image pixels and learns to predict the shapes of these resampling functions with a neural network.
arXiv Detail & Related papers (2024-07-13T16:09:45Z)
- Spatial Annealing Smoothing for Efficient Few-shot Neural Rendering [106.0057551634008]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing smoothing regularized NeRF (SANeRF)
By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot NeRF methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z)
- Enhancing Fast Feed Forward Networks with Load Balancing and a Master Leaf Node [49.08777822540483]
Fast feedforward networks (FFFs) exploit the observation that different regions of the input space activate distinct subsets of neurons in wide networks.
We propose the incorporation of load balancing and Master Leaf techniques into the FFF architecture to improve performance and simplify the training process.
arXiv Detail & Related papers (2024-05-27T05:06:24Z)
- Forest-ORE: Mining Optimal Rule Ensemble to interpret Random Forest models [0.0]
We present Forest-ORE, a method that makes Random Forest (RF) interpretable via an optimized rule ensemble (ORE) for local and global interpretation.
A comparative analysis of well-known methods shows that Forest-ORE provides an excellent trade-off between predictive performance, interpretability coverage, and model size.
arXiv Detail & Related papers (2024-03-26T10:54:07Z)
- Data-driven multinomial random forest [2.1828601975620257]
We propose a data-driven multinomial random forest (DMRF) algorithm, which has lower complexity than MRF and higher complexity than BRF.
To the best of our knowledge, DMRF currently offers the best performance among strongly consistent RF variants with low algorithmic complexity.
arXiv Detail & Related papers (2023-04-09T14:04:56Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) for optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
- Residual Likelihood Forests [19.97069303172077]
This paper presents a novel ensemble learning approach called Residual Likelihood Forests (RLF)
Our weak learners produce conditional likelihoods that are sequentially optimized using global loss in the context of previous learners.
When compared against several ensemble approaches including Random Forests and Gradient Boosted Trees, RLFs offer a significant improvement in performance.
arXiv Detail & Related papers (2020-11-04T00:59:41Z) - AIN: Fast and Accurate Sequence Labeling with Approximate Inference
Network [75.44925576268052]
The linear-chain Conditional Random Field (CRF) model is one of the most widely-used neural sequence labeling approaches.
Exact probabilistic inference algorithms are typically applied in training and prediction stages of the CRF model.
We propose to employ a parallelizable approximate variational inference algorithm for the CRF model.
arXiv Detail & Related papers (2020-09-17T12:18:43Z) - Random Partitioning Forest for Point-Wise and Collective Anomaly
Detection -- Application to Intrusion Detection [9.74672460306765]
DiFF-RF is an ensemble approach composed of random partitioning binary trees to detect anomalies.
Our experiments show that DiFF-RF almost systematically outperforms the isolation forest (IF) algorithm.
Our experiments also show that DiFF-RF can work well in the presence of small-scale training data.
arXiv Detail & Related papers (2020-06-29T10:44:08Z)
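DiFF-RF's distance- and frequency-based scoring is not described in the abstract above, but the random partitioning binary trees it shares with the isolation forest (IF) baseline can be sketched. The snippet below is a hypothetical, stdlib-only illustration: each tree recursively splits 1-D data at a random threshold, and a point's average isolation depth serves as an (inverse) anomaly score. All names and parameters are illustrative assumptions, not the DiFF-RF algorithm itself.

```python
import random

random.seed(1)

def build_tree(points, depth=0, max_depth=8):
    """Random partitioning binary tree: split at a random threshold
    until points are isolated (or a depth cap is hit). A leaf stores
    the depth at which it was created."""
    if len(points) <= 1 or depth >= max_depth:
        return depth
    lo, hi = min(points), max(points)
    if lo == hi:
        return depth
    t = random.uniform(lo, hi)
    left = [p for p in points if p < t]
    right = [p for p in points if p >= t]
    return (t, build_tree(left, depth + 1, max_depth),
               build_tree(right, depth + 1, max_depth))

def path_length(tree, x):
    """Depth of the leaf that x falls into."""
    if not isinstance(tree, tuple):
        return tree  # leaf: its stored creation depth
    t, left, right = tree
    return path_length(left if x < t else right, x)

# Normal data clustered around 0; a far-away outlier is isolated in
# only a few splits, giving a short average path.
normal = [random.gauss(0, 1) for _ in range(200)]
forest = [build_tree(random.sample(normal, 64)) for _ in range(25)]

def avg_path(x):
    return sum(path_length(t, x) for t in forest) / len(forest)

print("typical point avg depth:", avg_path(0.1))
print("outlier avg depth:", avg_path(8.0))
```

Shorter average paths mean a point is easier to isolate and therefore more anomalous; DiFF-RF augments this kind of partitioning with additional distance and visit-frequency information to handle collective anomalies.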
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.