Coevolutionary Algorithm for Building Robust Decision Trees under
Minimax Regret
- URL: http://arxiv.org/abs/2312.09078v1
- Date: Thu, 14 Dec 2023 16:12:22 GMT
- Title: Coevolutionary Algorithm for Building Robust Decision Trees under
Minimax Regret
- Authors: Adam Żychowski, Andrew Perrault, Jacek Mańdziuk
- Abstract summary: This paper proposes a novel coevolutionary algorithm (CoEvoRDT) designed to create robust decision trees (DTs).
Motivated by the limitations of traditional DT algorithms, we leverage adaptive coevolution to allow DTs to evolve and learn from interactions with perturbed input data.
CoEvoRDT is tested on 20 popular datasets and shows superior performance compared to 4 state-of-the-art algorithms.
- Score: 12.72963164625931
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been growing interest in developing robust machine
learning (ML) models that can withstand adversarial attacks, including one of
the most widely adopted, efficient, and interpretable ML algorithms: decision
trees (DTs). This paper proposes a novel coevolutionary algorithm (CoEvoRDT)
designed to create robust DTs capable of handling noisy high-dimensional data
in adversarial contexts. Motivated by the limitations of traditional DT
algorithms, we leverage adaptive coevolution to allow DTs to evolve and learn
from interactions with perturbed input data. CoEvoRDT alternately evolves
competing populations of DTs and perturbed features, enabling construction of
DTs with desired properties. CoEvoRDT is easily adaptable to various target
metrics, allowing the use of tailored robustness criteria such as minimax
regret. Furthermore, CoEvoRDT has potential to improve the results of other
state-of-the-art methods by incorporating their outcomes (DTs they produce)
into the initial population and optimize them in the process of coevolution.
Inspired by the game theory, CoEvoRDT utilizes mixed Nash equilibrium to
enhance convergence. The method is tested on 20 popular datasets and shows
superior performance compared to 4 state-of-the-art algorithms. It outperformed
all competing methods on 13 datasets with adversarial accuracy metrics, and on
all 20 considered datasets with minimax regret. Strong experimental results and
flexibility in choosing the error measure make CoEvoRDT a promising approach
for constructing robust DTs in real-world applications.
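The alternating scheme described in the abstract can be illustrated with a minimal sketch: a population of decision trees is scored by its worst-case regret against a population of bounded input perturbations, and the two populations are evolved in turn. The perturbation model, population sizes, selection rules, and the regret estimate below (a tree retrained on the perturbed data stands in for the best achievable DT) are illustrative assumptions, not the CoEvoRDT implementation, and the mixed Nash equilibrium bookkeeping is omitted.

```python
# Minimal sketch of an alternating coevolutionary loop for robust decision trees
# under a minimax regret criterion. All choices below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
EPS = 0.3                      # assumed L_inf bound on feature perturbations
POP_DT, POP_ADV, GENERATIONS = 10, 10, 5

def random_perturbation():
    # One adversary individual: a bounded additive noise matrix.
    return rng.uniform(-EPS, EPS, size=X.shape)

def train_tree(X_train):
    return DecisionTreeClassifier(
        max_depth=4, random_state=int(rng.integers(1_000_000))
    ).fit(X_train, y)

def regret(tree, delta):
    # Regret = best accuracy attainable on the perturbed data (approximated here
    # by a tree trained directly on it) minus the candidate tree's accuracy.
    Xp = X + delta
    best = DecisionTreeClassifier(max_depth=4).fit(Xp, y).score(Xp, y)
    return best - tree.score(Xp, y)

dts = [train_tree(X) for _ in range(POP_DT)]            # DT population
advs = [random_perturbation() for _ in range(POP_ADV)]  # perturbation population

for _ in range(GENERATIONS):
    # Evolve DTs against the current adversaries: keep trees with low worst-case
    # regret, refill with trees trained on the hardest perturbation (mutation proxy).
    dts.sort(key=lambda t: max(regret(t, d) for d in advs))
    hardest = max(advs, key=lambda d: regret(dts[0], d))
    dts = dts[:POP_DT // 2] + [train_tree(X + hardest) for _ in range(POP_DT - POP_DT // 2)]

    # Evolve adversaries against the current best DT: keep the perturbations that
    # induce the largest regret, refill with fresh random ones.
    advs.sort(key=lambda d: -regret(dts[0], d))
    advs = advs[:POP_ADV // 2] + [random_perturbation() for _ in range(POP_ADV - POP_ADV // 2)]

best_dt = min(dts, key=lambda t: max(regret(t, d) for d in advs))
print("estimated minimax regret of best DT:", max(regret(best_dt, d) for d in advs))
```

The returned tree is simply the population member with the lowest worst-case regret against the final adversary population, which mirrors the minimax regret criterion the paper optimizes.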
Related papers
- RGMDT: Return-Gap-Minimizing Decision Tree Extraction in Non-Euclidean Metric Space [28.273737052758907]
We introduce an upper bound on the return gap between the oracle expert policy and an optimal decision tree policy.
This enables us to recast the DT extraction problem into a novel non-euclidean clustering problem over the local observation and action values space of each agent.
We also propose the Return-Gap-Minimization Decision Tree (RGMDT) algorithm, which has a surprisingly simple design and integrates with reinforcement learning.
arXiv Detail & Related papers (2024-10-21T21:19:49Z) - Making Large Language Models Better Planners with Reasoning-Decision Alignment [70.5381163219608]
We motivate an end-to-end decision-making model based on a multimodality-augmented LLM.
We propose a reasoning-decision alignment constraint between the paired CoTs and planning results.
We dub our proposed large language planners with reasoning-decision alignment as RDA-Driver.
arXiv Detail & Related papers (2024-08-25T16:43:47Z) - Team up GBDTs and DNNs: Advancing Efficient and Effective Tabular Prediction with Tree-hybrid MLPs [20.67800392863432]
Tabular datasets play a crucial role in various applications.
Two prominent model types, Gradient Boosted Decision Trees (GBDTs) and Deep Neural Networks (DNNs), have demonstrated performance advantages on distinct prediction tasks.
This paper proposes a new framework that amalgamates the advantages of both GBDTs and DNNs, resulting in a DNN algorithm that is as efficient as GBDTs and is competitively effective regardless of dataset preferences.
arXiv Detail & Related papers (2024-07-13T07:13:32Z) - Heterogeneous Learning Rate Scheduling for Neural Architecture Search on Long-Tailed Datasets [0.0]
We propose a novel adaptive learning rate scheduling strategy tailored for the architecture parameters of DARTS.
Our approach dynamically adjusts the learning rate of the architecture parameters based on the training epoch, preventing the disruption of well-trained representations.
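A minimal sketch of an epoch-dependent learning rate for the DARTS architecture parameters is shown below; the warmup-then-linear-ramp schedule, the constants, and the placeholder alpha tensor are assumptions for illustration rather than the schedule proposed in this paper.

```python
# Hedged sketch: a separate optimizer for architecture parameters whose learning
# rate is a function of the training epoch (assumed schedule, not the paper's).
import torch

arch_params = [torch.nn.Parameter(torch.zeros(14, 8))]   # placeholder DARTS alphas
arch_opt = torch.optim.Adam(arch_params, lr=0.0)

def arch_lr(epoch, total_epochs=50, base_lr=3e-4, warmup=15):
    # Keep architecture updates negligible while the network weights are still
    # forming, then ramp the learning rate up linearly.
    if epoch < warmup:
        return 0.0
    return base_lr * (epoch - warmup + 1) / max(1, total_epochs - warmup)

for epoch in range(50):
    for group in arch_opt.param_groups:
        group["lr"] = arch_lr(epoch)
    # ... the usual bilevel DARTS updates of network weights and alphas go here ...
```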
arXiv Detail & Related papers (2024-06-11T07:32:25Z) - Generating Minimalist Adversarial Perturbations to Test Object-Detection Models: An Adaptive Multi-Metric Evolutionary Search Approach [1.7396341474676855]
TM-EVO is an efficient algorithm for evaluating the robustness of object-detection DL models against adversarial attacks.
We evaluate TM-EVO on widely-used object-detection DL models, DETR and Faster R-CNN, and open-source datasets, COCO and KITTI.
arXiv Detail & Related papers (2024-04-25T20:25:40Z) - The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease
detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z) - Towards Robust Dataset Learning [90.2590325441068]
We propose a principled, tri-level optimization to formulate the robust dataset learning problem.
Under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset.
arXiv Detail & Related papers (2022-11-19T17:06:10Z) - Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge
Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC)
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
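A rough sketch of that selection loop is given below; the context features, reward shaping, and network sizes are chosen purely for illustration and are not taken from the paper.

```python
# Hedged sketch of contextual-bandit model selection over hierarchical edge layers,
# trained with a REINFORCE-style update. All constants are illustrative assumptions.
import torch
import torch.nn as nn

K = 3                                    # number of detection models / HEC layers
policy = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, K))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward(correct, layer):
    # Trade detection quality against the latency of escalating to deeper layers.
    return float(correct) - 0.1 * layer

for step in range(1000):
    context = torch.randn(8)                         # summary features of the sample
    dist = torch.distributions.Categorical(logits=policy(context))
    layer = dist.sample()                            # pick a model / HEC layer
    correct = torch.rand(()) < 0.5 + 0.15 * layer    # stand-in for running that detector
    loss = -dist.log_prob(layer) * reward(correct, layer.item())   # REINFORCE loss
    opt.zero_grad(); loss.backward(); opt.step()
```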
arXiv Detail & Related papers (2021-08-09T08:45:47Z) - Fast Distributionally Robust Learning with Variance Reduced Min-Max
Optimization [85.84019017587477]
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
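The generic extra-gradient pattern these algorithms build on can be shown on a toy bilinear saddle problem min_x max_y xy; this is only the two-step update rule, not the Wasserstein DRSL algorithm itself, and the step size is an arbitrary choice.

```python
# Hedged sketch: extra-gradient updates on min_x max_y x*y.
x, y, eta = 1.0, 1.0, 0.1
for _ in range(2000):
    # Extrapolation step: gradients at the current point.
    gx, gy = y, x                                   # d/dx (x*y) = y,  d/dy (x*y) = x
    x_half, y_half = x - eta * gx, y + eta * gy
    # Update step: re-evaluate gradients at the extrapolated point.
    gx, gy = y_half, x_half
    x, y = x - eta * gx, y + eta * gy
print(x, y)   # converges toward the saddle point (0, 0), unlike plain simultaneous GDA
```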
arXiv Detail & Related papers (2021-04-27T16:56:09Z) - Maximum Mutation Reinforcement Learning for Scalable Control [25.935468948833073]
Reinforcement Learning (RL) has demonstrated data efficiency and optimal control over large state spaces, but at the cost of scalability.
We present the Evolution-based Soft Actor-Critic (ESAC), a scalable RL algorithm.
arXiv Detail & Related papers (2020-07-24T16:29:19Z) - Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
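As a rough illustration of the distributional idea behind ADT (not the paper's objective or parameterization), the sketch below maintains a learned Gaussian over bounded perturbations, pushes it toward high-loss regions, and trains a toy model on samples drawn from it; all names and constants are assumptions.

```python
# Hedged sketch: train against samples from a learned perturbation distribution
# instead of a single crafted attack. Toy model and parameterization are assumed.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
eps = 0.1

x = torch.randn(128, 20)                 # toy batch
y = torch.randint(0, 2, (128,))

# Learned perturbation distribution: per-example mean and log-std of additive noise.
mu = torch.zeros(128, 20, requires_grad=True)
log_sigma = torch.full((128, 20), -2.0, requires_grad=True)
adv_opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(100):
    # Inner step: move the distribution toward perturbations that raise the loss.
    delta = torch.tanh(mu + log_sigma.exp() * torch.randn_like(mu)) * eps
    adv_loss = -loss_fn(model(x + delta), y)
    adv_opt.zero_grad(); adv_loss.backward(); adv_opt.step()

    # Outer step: train the model on perturbations sampled from that distribution.
    delta = (torch.tanh(mu + log_sigma.exp() * torch.randn_like(mu)) * eps).detach()
    loss = loss_fn(model(x + delta), y)
    opt.zero_grad(); loss.backward(); opt.step()
```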