Unsupervised Generative Feature Transformation via Graph Contrastive Pre-training and Multi-objective Fine-tuning
- URL: http://arxiv.org/abs/2405.16879v1
- Date: Mon, 27 May 2024 06:50:00 GMT
- Title: Unsupervised Generative Feature Transformation via Graph Contrastive Pre-training and Multi-objective Fine-tuning
- Authors: Wangyang Ying, Dongjie Wang, Xuanming Hu, Yuanchun Zhou, Charu C. Aggarwal, Yanjie Fu
- Abstract summary: We develop a measurement-pretrain-finetune paradigm for Unsupervised Feature Transformation Learning.
For unsupervised feature set utility measurement, we propose a feature value consistency preservation perspective.
For generative transformation finetuning, we regard a feature set as a feature cross sequence and feature transformation as sequential generation.
- Score: 28.673952870674146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature transformation derives a new feature set from original features to augment the AI power of data. In many science domains, such as material performance screening, feature transformation can model material formula interactions and compositions and discover performance drivers, but supervised labels must be collected from expensive and lengthy experiments. This issue motivates the Unsupervised Feature Transformation Learning (UFTL) problem. Prior approaches, such as manual transformation, supervised-feedback-guided search, and PCA, either rely on domain knowledge or expensive supervised feedback, suffer from a large search space, or overlook non-linear feature-feature interactions. UFTL thus poses a major challenge to existing methods: how can we design a new unsupervised paradigm that captures complex feature interactions while avoiding a large search space? To fill this gap, we connect graph, contrastive, and generative learning to develop a measurement-pretrain-finetune paradigm for UFTL. For unsupervised feature set utility measurement, we propose a feature value consistency preservation perspective and develop a mean-discounted-cumulative-gain-like unsupervised metric to evaluate feature set utility. For unsupervised feature set representation pretraining, we regard a feature set as a feature-feature interaction graph and develop an unsupervised graph contrastive learning encoder to embed feature sets into vectors. For generative transformation finetuning, we regard a feature set as a feature cross sequence and feature transformation as sequential generation. We develop a deep generative feature transformation model that coordinates the pretrained feature set encoder and the gradient information extracted from a feature set utility evaluator to optimize a transformed feature generator.
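As a rough illustration of the measurement step, the sketch below computes a mean-DCG-like utility score from a feature-value-consistency view: each feature ranks the samples, adjacent samples in the ranking are scored by how similar they are under the full feature set, and the scores are aggregated with a log-discounted sum. The scoring and discounting choices here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mean_dcg_like_utility(X: np.ndarray) -> float:
    """Hypothetical mean-DCG-like feature set utility (illustrative only).

    For each feature, samples are ranked by that feature's value; the
    relevance of each rank position is the similarity between adjacent
    samples under the full feature set, aggregated with a DCG-style
    log discount and averaged over features.
    """
    n, d = X.shape
    Xn = (X - X.mean(0)) / (X.std(0) + 1e-8)        # standardize features
    sim = Xn @ Xn.T / d                              # pairwise sample similarity
    discounts = 1.0 / np.log2(np.arange(2, n + 1))   # DCG-style discount
    gains = []
    for j in range(d):
        order = np.argsort(-X[:, j])                 # rank samples by feature j
        rel = np.array([sim[order[i], order[i - 1]] for i in range(1, n)])
        gains.append(float(rel @ discounts))
    return float(np.mean(gains))
```

A metric of this shape needs no labels, which is what lets it stand in for supervised feedback when scoring candidate transformed feature sets.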
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
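The context-sampling step is easy to picture; below is a minimal sketch of uniform random walks producing node sequences that a Transformer could consume as token contexts (walk length and walk count are arbitrary choices, not GSPT's settings).

```python
import random

def random_walk_contexts(adj: dict[int, list[int]],
                         walk_len: int = 8,
                         walks_per_node: int = 4) -> list[list[int]]:
    """Sample node-context sequences via uniform random walks.

    adj maps each node id to its neighbor list. Each returned walk is a
    sequence of node ids that can be embedded and fed to a Transformer.
    """
    contexts = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break                  # dead end: stop the walk early
                walk.append(random.choice(neighbors))
            contexts.append(walk)
    return contexts
```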
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- CFPFormer: Feature-pyramid like Transformer Decoder for Segmentation and Detection [1.837431956557716]
Feature pyramids have been widely adopted in convolutional neural networks (CNNs) and transformers for tasks like medical image segmentation and object detection.
We propose a novel decoder block that integrates feature pyramids and transformers.
Our model achieves superior performance in detecting small objects compared to existing methods.
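The summary leaves the block design open; as one plausible reading, such a decoder block might upsample a coarse pyramid level, fuse it with a same-resolution skip feature, and refine the result with attention. The sketch below is an assumption about the general shape, not CFPFormer's actual architecture.

```python
import torch
import torch.nn as nn

class PyramidDecoderBlock(nn.Module):
    """Illustrative decoder block mixing feature pyramids and attention."""
    def __init__(self, dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, coarse: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        # coarse: (b, dim, h, w); skip: (b, dim, 2h, 2w)
        x = self.fuse(torch.cat([self.up(coarse), skip], dim=1))
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)       # (b, h*w, c) token sequence
        out, _ = self.attn(seq, seq, seq)        # refine fused features
        return (seq + out).transpose(1, 2).reshape(b, c, h, w)
```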
arXiv Detail & Related papers (2024-04-23T18:46:07Z)
- In-Context Convergence of Transformers [63.04956160537308]
We study the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent.
For data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process.
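The analyzed setting can be reproduced in a few lines: a single softmax-attention layer trained end-to-end by gradient descent. The dimensions, data, and regression loss below are illustrative stand-ins for the paper's theoretical construction.

```python
import torch

d, n_tokens, n_samples = 16, 8, 256
W = torch.randn(d, d, requires_grad=True)        # query-key weight matrix
v = torch.randn(d, requires_grad=True)           # value/readout vector
X = torch.randn(n_samples, n_tokens, d)          # token sequences
y = torch.randn(n_samples)                       # targets

opt = torch.optim.SGD([W, v], lr=1e-2)
for step in range(200):
    q = X[:, -1, :]                              # last token acts as the query
    scores = torch.einsum('bd,de,bte->bt', q, W, X)
    attn = torch.softmax(scores, dim=-1)         # softmax attention weights
    ctx = torch.einsum('bt,btd->bd', attn, X)    # attended context vector
    loss = ((ctx @ v - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Tracking `attn` over training is how one would observe the stage-wise behavior the paper proves for imbalanced features.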
arXiv Detail & Related papers (2023-10-08T17:55:33Z)
- Feature Interaction Aware Automated Data Representation Transformation [27.26916497306978]
We develop a hierarchical reinforcement learning structure with cascading Markov Decision Processes to automate feature and operation selection.
We reward agents based on the interaction strength between selected features, resulting in intelligent and efficient exploration of the feature space that emulates human decision-making.
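As one concrete (and deliberately simple) stand-in for such a reward, absolute correlation between two selected features can serve as an interaction-strength proxy; the paper's actual reward design may be information-theoretic and richer.

```python
import numpy as np

def interaction_reward(f1: np.ndarray, f2: np.ndarray) -> float:
    """Illustrative interaction-strength reward for two selected features.

    Returns the absolute Pearson correlation, a cheap proxy for how
    strongly the two features interact.
    """
    f1 = (f1 - f1.mean()) / (f1.std() + 1e-8)
    f2 = (f2 - f2.mean()) / (f2.std() + 1e-8)
    return float(abs(np.mean(f1 * f2)))
```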
arXiv Detail & Related papers (2023-09-29T06:48:16Z)
- Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding [54.49214267905562]
We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
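Mechanically, an IMQ-style module can be pictured as a small set of learnable query vectors that cross-attend over a modality's tokens; the sketch below shows that general pattern, with all sizes and the module name being assumptions rather than the paper's specification.

```python
import torch
import torch.nn as nn

class ImplicitQueryAggregator(nn.Module):
    """Learnable queries that pool global context from one modality."""
    def __init__(self, dim: int = 256, n_queries: int = 4, n_heads: int = 4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim) modality-specific features
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens)  # queries attend over tokens
        return pooled                              # (batch, n_queries, dim)
```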
arXiv Detail & Related papers (2023-09-22T06:55:41Z)
- Learning Diverse Features in Vision Transformers for Improved Generalization [15.905065768434403]
We show that vision transformers (ViTs) tend to extract robust and spurious features with distinct attention heads.
As a result of this modularity, their performance under distribution shifts can be significantly improved at test time.
We propose a method to further enhance the diversity and complementarity of the learned features by encouraging orthogonality of the attention heads' input gradients.
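A regularizer of this kind can be sketched directly: compute each head's input gradient and penalize pairwise alignment. The exact head summaries and penalty form below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def head_gradient_alignment_penalty(head_outputs: list,
                                    x: torch.Tensor) -> torch.Tensor:
    """Penalize alignment between attention heads' input gradients.

    head_outputs: per-head differentiable summaries of input x (which
    must have requires_grad=True). Squared cosine similarity between
    gradients pushes them toward orthogonality.
    """
    grads = [torch.autograd.grad(h.sum(), x, create_graph=True,
                                 retain_graph=True)[0].flatten()
             for h in head_outputs]
    penalty = x.new_zeros(())
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty = penalty + F.cosine_similarity(grads[i], grads[j], dim=0) ** 2
    return penalty  # add to the task loss with a small weight
```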
arXiv Detail & Related papers (2023-08-30T19:04:34Z)
- Traceable Automatic Feature Transformation via Cascading Actor-Critic Agents [25.139229855367088]
Feature transformation is an essential task to boost the effectiveness and interpretability of machine learning (ML).
We formulate the feature transformation task as an iterative, nested process of feature generation and selection.
We show 24.7% improvements in F1 scores compared with SOTAs and robustness in high-dimensional data.
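The nested loop itself is simple to sketch; below, both the generation and selection decisions are random or greedy heuristics standing in for the paper's cascading actor-critic agents, and `utility` is any column-wise scoring function you supply.

```python
import numpy as np

def generate_and_select(X: np.ndarray, utility, n_iters: int = 10,
                        keep: int = 20) -> np.ndarray:
    """Iterative nested feature generation and selection (heuristic sketch)."""
    for _ in range(n_iters):
        i, j = np.random.choice(X.shape[1], 2, replace=False)
        X = np.column_stack([X, X[:, i] * X[:, j]])      # generation step
        scores = [utility(X[:, k]) for k in range(X.shape[1])]
        X = X[:, np.argsort(scores)[-keep:]]             # selection step
    return X
```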
arXiv Detail & Related papers (2022-12-27T08:20:19Z)
- Group-wise Reinforcement Feature Generation for Optimal and Explainable Representation Space Reconstruction [25.604176830832586]
We reformulate representation space reconstruction into an interactive process of nested feature generation and selection.
We design a group-wise generation strategy to cross a feature group, an operation, and another feature group to generate new features.
We present extensive experiments to demonstrate the effectiveness, efficiency, traceability, and explicitness of our system.
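The crossing step itself is mechanical; as a hedged sketch, take every pair across two feature groups and apply one operation to each pair, yielding a batch of candidate features (the function names and operation table are illustrative).

```python
import numpy as np

OPS = {'add': np.add, 'sub': np.subtract, 'mul': np.multiply}

def cross_feature_groups(X: np.ndarray, group_a: list, group_b: list,
                         op_name: str) -> np.ndarray:
    """Cross group_a x op x group_b to generate candidate features."""
    op = OPS[op_name]
    new_cols = [op(X[:, i], X[:, j]) for i in group_a for j in group_b]
    return np.stack(new_cols, axis=1)  # append to X after scoring/selection

# Usage: candidates = cross_feature_groups(X, [0, 1], [2, 3], 'mul')
```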
arXiv Detail & Related papers (2022-05-28T21:34:14Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
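The extrapolation step reduces to message passing on the feature-data graph; the one-step mean aggregation below is a deliberately minimal stand-in for the paper's full GNN.

```python
import torch

def extrapolate_feature_embedding(data_emb: torch.Tensor,
                                  observed_mask: torch.Tensor) -> torch.Tensor:
    """One message-passing step on a feature-data graph.

    data_emb: (n_samples, d) embeddings of data nodes.
    observed_mask: (n_samples,) bool mask of samples in which the new
    feature is observed. The new feature node's embedding is the mean
    of its neighboring data nodes' embeddings.
    """
    return data_emb[observed_mask].mean(dim=0)
```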
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Progressive Self-Guided Loss for Salient Object Detection [102.35488902433896]
We present a progressive self-guided loss function to facilitate deep learning-based salient object detection in images.
Our framework takes advantage of adaptively aggregated multi-scale features to locate and detect salient objects effectively.
arXiv Detail & Related papers (2021-01-07T07:33:38Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
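The aggregation step can be sketched with entropic optimal transport: compute a Sinkhorn plan between the input set and the reference, then pool the set through that plan. Here the reference is a plain tensor; in the paper it is trained end-to-end, and the `eps` and iteration counts below are arbitrary.

```python
import torch

def ot_aggregate(x: torch.Tensor, ref: torch.Tensor,
                 eps: float = 0.1, n_iters: int = 50) -> torch.Tensor:
    """Pool a set x (n, d) against a reference ref (m, d) via an
    entropic OT plan computed with Sinkhorn iterations (sketch)."""
    cost = torch.cdist(x, ref) ** 2
    K = torch.exp(-cost / eps)                       # Gibbs kernel
    u = torch.full((x.size(0),), 1.0 / x.size(0))    # uniform source weights
    v = torch.full((ref.size(0),), 1.0 / ref.size(0))
    a, b = u.clone(), v.clone()
    for _ in range(n_iters):                         # Sinkhorn scaling
        a = u / (K @ b)
        b = v / (K.t() @ a)
    P = a.unsqueeze(1) * K * b.unsqueeze(0)          # transport plan (n, m)
    return P.t() @ x                                 # fixed-size output (m, d)
```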
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.