Traceable Group-Wise Self-Optimizing Feature Transformation Learning: A
Dual Optimization Perspective
- URL: http://arxiv.org/abs/2306.16893v1
- Date: Thu, 29 Jun 2023 12:29:21 GMT
- Authors: Meng Xiao, Dongjie Wang, Min Wu, Kunpeng Liu, Hui Xiong, Yuanchun
Zhou, Yanjie Fu
- Score: 33.45878576396101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature transformation aims to reconstruct an effective representation space
by mathematically refining the existing features. It serves as a pivotal
approach to combat the curse of dimensionality, enhance model generalization,
mitigate data sparsity, and extend the applicability of classical models.
Existing research predominantly focuses on domain knowledge-based feature
engineering or learning latent representations. However, these methods, while
insightful, lack full automation and fail to yield a traceable and optimal
representation space. An indispensable question arises: Can we concurrently
address these limitations when reconstructing a feature space for a
machine-learning task? Our initial work took a pioneering step towards this
challenge by introducing a novel self-optimizing framework. This framework
leverages the power of three cascading reinforced agents to automatically
select candidate features and operations for generating improved feature
transformation combinations. Despite the impressive strides made, there was
room for enhancing its effectiveness and generalization capability. In this
extended journal version, we advance our initial work from two distinct yet
interconnected perspectives: 1) We refine the original framework by
integrating a graph-based state representation method that captures
feature interactions more effectively, and by developing different
Q-learning strategies to further alleviate Q-value overestimation. 2) We
utilize a new optimization technique (actor-critic) to train the entire
self-optimizing framework in order to accelerate the model convergence and
improve the feature transformation performance. Finally, to validate the
improved effectiveness and generalization capability of our framework, we
perform extensive experiments and conduct comprehensive analyses.
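The Q-learning refinement mentioned in the abstract can be illustrated with a minimal sketch of a double-Q-style tabular update, one standard strategy for reducing Q-value overestimation: one table selects the greedy action while the other evaluates it. The table layout, state/action indices, and hyperparameters below are illustrative assumptions, not the paper's exact design.

```python
def double_q_update(q1, q2, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular double-Q update: q1 selects the greedy next action,
    q2 evaluates it. Decoupling selection from evaluation reduces the
    maximization bias behind Q-value overestimation in vanilla Q-learning."""
    greedy = max(range(len(q1[s_next])), key=lambda i: q1[s_next][i])
    target = r + gamma * q2[s_next][greedy]   # q2 scores q1's choice
    q1[s][a] += alpha * (target - q1[s][a])

# toy tables: 3 states x 2 actions, all values initialized to zero
q1 = [[0.0, 0.0] for _ in range(3)]
q2 = [[0.0, 0.0] for _ in range(3)]
double_q_update(q1, q2, s=0, a=1, r=1.0, s_next=2)
```

In practice the two tables are updated in alternation (swapping the roles of q1 and q2 at random), so neither estimator evaluates its own greedy choices.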
Related papers
- Reinforcement Feature Transformation for Polymer Property Performance Prediction [22.87577374767465]
Existing machine learning models face challenges in effectively learning polymer representations due to low-quality polymer datasets.
This study focuses on improving polymer property performance prediction tasks by reconstructing an optimal and explainable descriptor representation space.
arXiv Detail & Related papers (2024-09-23T23:42:18Z)
- Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method for efficiently fine-tuning pretrained weights and enabling enhanced robustness and generalization.
A self-regularization strategy, dubbed OrthSR, is further exploited to maintain the stability of the VLMs' zero-shot generalization.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the model in few-shot image classification scenarios.
arXiv Detail & Related papers (2024-07-11T10:35:53Z)
- Enhancing Tabular Data Optimization with a Flexible Graph-based Reinforced Exploration Strategy [16.782884097690882]
Current frameworks for automated feature transformation rely on iterative sequence generation tasks.
Three cascading agents iteratively select nodes and ideal mathematical operations to generate new transformation states.
This strategy leverages the inherent properties of the graph structure, allowing for the preservation and reuse of valuable transformations.
arXiv Detail & Related papers (2024-06-11T16:10:37Z)
- Feature Interaction Aware Automated Data Representation Transformation [27.26916497306978]
We develop a hierarchical reinforcement learning structure with cascading Markov Decision Processes to automate feature and operation selection.
We reward agents based on the interaction strength between selected features, resulting in intelligent and efficient exploration of the feature space that emulates human decision-making.
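A minimal sketch of such an interaction-strength reward, assuming absolute Pearson correlation between two candidate feature columns as the proxy; the paper's exact measure may differ, and all names here are illustrative.

```python
def interaction_reward(f1, f2):
    """Hypothetical reward proxy: absolute Pearson correlation between
    two feature columns, so agents are rewarded for selecting strongly
    interacting features."""
    n = len(f1)
    mx, my = sum(f1) / n, sum(f2) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(f1, f2))
    sx = sum((x - mx) ** 2 for x in f1) ** 0.5
    sy = sum((y - my) ** 2 for y in f2) ** 0.5
    return abs(cov / (sx * sy))

# perfectly linearly related columns yield the maximum reward of 1.0
r = interaction_reward([1, 2, 3, 4], [2, 4, 6, 8])
```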
arXiv Detail & Related papers (2023-09-29T06:48:16Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Self-Optimizing Feature Transformation [33.458785763961004]
Feature transformation aims to extract a good representation (feature) space by mathematically transforming existing features.
Current research focuses on domain knowledge-based feature engineering or learning latent representations.
We present a self-optimizing framework for feature transformation.
arXiv Detail & Related papers (2022-09-16T16:50:41Z)
- Group-wise Reinforcement Feature Generation for Optimal and Explainable Representation Space Reconstruction [25.604176830832586]
We reformulate representation space reconstruction into an interactive process of nested feature generation and selection.
We design a group-wise generation strategy to cross a feature group, an operation, and another feature group to generate new features.
We present extensive experiments to demonstrate the effectiveness, efficiency, traceability, and explicitness of our system.
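The group-wise crossing step can be sketched roughly as below: a binary operation is applied to every pair of columns drawn from two feature groups, and each new feature keeps a name that records how it was generated (supporting traceability). The table layout, naming scheme, and operation are assumptions for illustration, not the authors' implementation.

```python
from operator import add

def group_wise_generate(table, group_a, group_b, op):
    """Cross two feature groups with a binary operation: every pair
    (a, b) with a in group_a and b in group_b yields one new column,
    named so the transformation remains traceable."""
    new_cols = {}
    for a in group_a:
        for b in group_b:
            name = f"({a} {op.__name__} {b})"
            new_cols[name] = [op(x, y) for x, y in zip(table[a], table[b])]
    return new_cols

# toy table with three original features of two rows each
table = {"f1": [1, 2], "f2": [3, 4], "f3": [5, 6]}
new = group_wise_generate(table, ["f1", "f2"], ["f3"], add)
```

Crossing groups rather than single features lets one agent decision generate a batch of candidate features, which is what makes the group-wise strategy sample-efficient.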
arXiv Detail & Related papers (2022-05-28T21:34:14Z)
- Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z)
- Learning Rich Nearest Neighbor Representations from Self-supervised Ensembles [60.97922557957857]
We provide a framework to perform self-supervised model ensembling via a novel method of learning representations directly through gradient descent at inference time.
This technique improves representation quality, as measured by k-nearest neighbors, both on the in-domain dataset and in the transfer setting.
arXiv Detail & Related papers (2021-10-19T22:24:57Z)
- Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)