Enhancing Tabular Data Optimization with a Flexible Graph-based Reinforced Exploration Strategy
- URL: http://arxiv.org/abs/2406.07404v1
- Date: Tue, 11 Jun 2024 16:10:37 GMT
- Title: Enhancing Tabular Data Optimization with a Flexible Graph-based Reinforced Exploration Strategy
- Authors: Xiaohan Huang, Dongjie Wang, Zhiyuan Ning, Ziyue Qiao, Qingqing Long, Haowei Zhu, Min Wu, Yuanchun Zhou, Meng Xiao
- Abstract summary: Current frameworks for automated feature transformation rely on iterative sequence generation tasks.
Three cascading agents iteratively select nodes and ideal mathematical operations to generate new transformation states.
This strategy leverages the inherent properties of the graph structure, allowing for the preservation and reuse of valuable transformations.
- Score: 16.782884097690882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tabular data optimization methods aim to automatically find an optimal feature transformation process that generates high-value features and improves the performance of downstream machine learning tasks. Current frameworks for automated feature transformation rely on iterative sequence generation tasks, optimizing decision strategies through performance feedback from downstream tasks. However, these approaches fail to effectively utilize historical decision-making experiences and overlook potential relationships among generated features, thus limiting the depth of knowledge extraction. Moreover, the granularity of the decision-making process lacks dynamic backtracking capabilities for individual features, leading to insufficient adaptability when encountering inefficient pathways, adversely affecting overall robustness and exploration efficiency. To address the limitations observed in current automatic feature engineering frameworks, we introduce a novel method that utilizes a feature-state transformation graph to effectively preserve the entire feature transformation journey, where each node represents a specific transformation state. During exploration, three cascading agents iteratively select nodes and ideal mathematical operations to generate new transformation states. This strategy leverages the inherent properties of the graph structure, allowing for the preservation and reuse of valuable transformations. It also enables backtracking capabilities through graph pruning techniques, which can rectify inefficient transformation paths. To validate the efficacy and flexibility of our approach, we conducted comprehensive experiments and detailed case studies, demonstrating superior performance in diverse scenarios.
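To make the graph-based exploration concrete, the sketch below illustrates, under assumed interfaces, how a feature-state transformation graph might be maintained: each node stores a feature table and its downstream score, a new state is created by applying a mathematical operation to features of an existing state, and pruning discards low-value leaves to provide the backtracking described above. The class and function names (StateNode, TransformationGraph, explore_step), the toy operator set, and the random stand-in policies are illustrative assumptions, not the authors' code.

```python
# A minimal illustrative sketch (not the authors' implementation) of a
# feature-state transformation graph: nodes are transformation states
# (feature tables), edges record the operation that produced them, and
# pruning removes low-value branches so exploration can backtrack.
from dataclasses import dataclass
from typing import Callable, Dict, Optional
import random

import pandas as pd


@dataclass
class StateNode:
    node_id: int
    features: pd.DataFrame          # feature table at this transformation state
    score: float = 0.0              # downstream-task performance feedback
    parent: Optional[int] = None    # the state this one was derived from
    operation: str = ""             # operation applied on the incoming edge


class TransformationGraph:
    """Preserves the whole transformation journey so valuable states can be reused."""

    def __init__(self, root_features: pd.DataFrame,
                 evaluate: Callable[[pd.DataFrame], float]):
        self.evaluate = evaluate
        self.nodes: Dict[int, StateNode] = {}
        self._next_id = 0
        self.root = self._add_node(root_features, parent=None, operation="root")

    def _add_node(self, features: pd.DataFrame, parent: Optional[int], operation: str) -> int:
        nid = self._next_id
        self._next_id += 1
        self.nodes[nid] = StateNode(nid, features, self.evaluate(features), parent, operation)
        return nid

    def expand(self, node_id: int, feat_a: str, op_name: str, feat_b: str) -> int:
        """Apply a binary operation to two features of a state, creating a new state node."""
        ops = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}
        state = self.nodes[node_id].features
        new_features = state.copy()
        new_features[f"{feat_a}_{op_name}_{feat_b}"] = ops[op_name](state[feat_a], state[feat_b])
        return self._add_node(new_features, parent=node_id, operation=op_name)

    def prune(self, keep_top_k: int = 5) -> None:
        """Backtracking via pruning: drop the lowest-scoring leaf states so future
        exploration restarts from more promising parts of the graph."""
        children = {n.parent for n in self.nodes.values() if n.parent is not None}
        leaves = [n for n in self.nodes.values()
                  if n.node_id not in children and n.node_id != self.root]
        for leaf in sorted(leaves, key=lambda n: n.score)[:-keep_top_k]:
            del self.nodes[leaf.node_id]


def explore_step(graph: TransformationGraph) -> int:
    """One iteration of the three cascading choices: a state/feature, an operation,
    and a second feature (random policies stand in for the RL agents)."""
    node = max(graph.nodes.values(), key=lambda n: n.score)   # pick a promising state
    feat_a = random.choice(list(node.features.columns))       # agent 1: head feature
    op_name = random.choice(["add", "mul"])                   # agent 2: operation
    feat_b = random.choice(list(node.features.columns))       # agent 3: tail feature
    return graph.expand(node.node_id, feat_a, op_name, feat_b)
```

Replacing the random choices with the paper's three cascading reinforcement-learning agents, and the toy operator set with a fuller operator library, would recover the exploration loop the abstract describes.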
Related papers
- Q-value Regularized Transformer for Offline Reinforcement Learning [70.13643741130899]
We propose a Q-value regularized Transformer (QT) to enhance the state-of-the-art in offline reinforcement learning (RL).
QT learns an action-value function and integrates a term maximizing action-values into the training loss of Conditional Sequence Modeling (CSM).
Empirical evaluations on D4RL benchmark datasets demonstrate the superiority of QT over traditional DP and CSM methods.
arXiv Detail & Related papers (2024-05-27T12:12:39Z) - Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision to accelerate inference by dynamically assigning resources for each data instance.
Our method benefits from less cost during inference while keeping the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z) - Feature Interaction Aware Automated Data Representation Transformation [27.26916497306978]
We develop a hierarchical reinforcement learning structure with cascading Markov Decision Processes to automate feature and operation selection.
We reward agents based on the interaction strength between selected features, resulting in intelligent and efficient exploration of the feature space that emulates human decision-making.
arXiv Detail & Related papers (2023-09-29T06:48:16Z) - Leveraging the Power of Data Augmentation for Transformer-based Tracking [64.46371987827312]
We propose two data augmentation methods customized for tracking.
First, we optimize existing random cropping via a dynamic search radius mechanism and simulation for boundary samples.
Second, we propose a token-level feature mixing augmentation strategy, which improves the model's robustness against challenges like background interference.
arXiv Detail & Related papers (2023-09-15T09:18:54Z) - Traceable Group-Wise Self-Optimizing Feature Transformation Learning: A Dual Optimization Perspective [33.45878576396101]
Feature transformation aims to reconstruct an effective representation space by mathematically refining the existing features.
Existing research predominantly focuses on domain knowledge-based feature engineering or learning latent representations.
Our initial work took a pioneering step towards this challenge by introducing a novel self-optimizing framework.
arXiv Detail & Related papers (2023-06-29T12:29:21Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Group-wise Reinforcement Feature Generation for Optimal and Explainable Representation Space Reconstruction [25.604176830832586]
We reformulate representation space reconstruction into an interactive process of nested feature generation and selection.
We design a group-wise generation strategy that crosses a feature group, an operation, and another feature group to generate new features (a minimal sketch of this crossing appears after this list).
We present extensive experiments to demonstrate the effectiveness, efficiency, traceability, and explicitness of our system.
arXiv Detail & Related papers (2022-05-28T21:34:14Z) - Resource-Efficient Invariant Networks: Exponential Gains by Unrolled Optimization [8.37077056358265]
We propose a new computational primitive for building invariant networks based instead on optimization.
We provide empirical and theoretical corroboration of the efficiency gains and soundness of our proposed method.
We demonstrate its utility in constructing an efficient invariant network for a simple hierarchical object detection task.
arXiv Detail & Related papers (2022-03-09T19:04:08Z) - Phase Transition Adaptation [14.034816857287044]
We propose an extension of the original approach, a local unsupervised learning mechanism we call Phase Transition Adaptation.
We show experimentally that our approach consistently achieves its purpose over several datasets.
arXiv Detail & Related papers (2021-04-20T17:18:34Z) - Efficient Continual Adaptation for Generative Adversarial Networks [97.20244383723853]
We present a continual learning approach for generative adversarial networks (GANs).
Our approach is based on learning a set of global and task-specific parameters.
We show that the feature-map transformation based approach outperforms state-of-the-art continual GAN methods.
arXiv Detail & Related papers (2021-03-06T05:09:37Z)
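For the group-wise reinforcement feature generation entry above, the crossing of a feature group, an operation, and another feature group can be sketched as follows. The function name and operator set are illustrative assumptions rather than that paper's code, and the reinforcement-learning selection of groups and operations is omitted.

```python
# Illustrative sketch (not the cited paper's implementation) of group-wise
# feature crossing: each feature in one group is combined with each feature
# in another group through a chosen binary operation.
from itertools import product
from typing import List

import pandas as pd


def cross_feature_groups(df: pd.DataFrame, group_a: List[str], group_b: List[str],
                         op_name: str = "mul") -> pd.DataFrame:
    """Return df extended with one new column per (a, b) pair of crossed features."""
    ops = {"add": lambda x, y: x + y, "sub": lambda x, y: x - y,
           "mul": lambda x, y: x * y, "div": lambda x, y: x / (y + 1e-8)}
    out = df.copy()
    for a, b in product(group_a, group_b):
        out[f"{a}_{op_name}_{b}"] = ops[op_name](df[a], df[b])
    return out
```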