Unsupervised feature selection via self-paced learning and low-redundant regularization
- URL: http://arxiv.org/abs/2112.07227v1
- Date: Tue, 14 Dec 2021 08:28:19 GMT
- Title: Unsupervised feature selection via self-paced learning and low-redundant regularization
- Authors: Weiyi Li, Hongmei Chen, Tianrui Li, Jihong Wan, Binbin Sang
- Abstract summary: An unsupervised feature selection method is proposed by integrating the frameworks of self-paced learning and subspace learning.
The convergence of the method is proved theoretically and experimentally.
The experimental results show that the proposed method can improve the performance of clustering methods and outperform other compared algorithms.
- Score: 6.083524716031565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised feature selection has received growing attention due to the emergence of massive unlabeled data. To improve the robustness of such methods, both the distribution of the samples and the latent benefit of training on samples in a more effective order need to be considered. Self-paced learning is an effective paradigm for controlling the order in which samples are used for training. In this study, an unsupervised feature selection method is proposed by integrating the frameworks of self-paced learning and subspace learning. Two regularization terms preserve the local manifold structure of the data and constrain the redundancy among features. The $L_{2,1/2}$-norm is applied to the projection matrix to retain discriminative features and further alleviate the effect of noise in the data. An iterative method is then presented to solve the resulting optimization problem, and its convergence is proved theoretically and verified experimentally. The proposed method is compared with state-of-the-art algorithms on nine real-world datasets. The experimental results show that it improves the performance of clustering methods and outperforms the compared algorithms.
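As a rough illustration of the two ingredients the abstract names, the sketch below computes a row-wise $L_{2,1/2}$ regularizer and the standard hard self-paced weighting. The function names, the hard weighting scheme, and the particular quasi-norm definition are assumptions; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def l2_half_regularizer(W):
    """L_{2,1/2} quasi-norm regularizer of a projection matrix W (d x k),
    taken here as the sum of square roots of the row-wise l2 norms (one
    common definition; the paper may normalize differently). Small rows are
    penalized sharply, pushing non-discriminative feature rows to zero."""
    row_norms = np.linalg.norm(W, axis=1)
    return float(np.sum(np.sqrt(row_norms)))

def self_paced_weights(losses, pace):
    """Hard self-paced weighting: admit only samples whose current loss is
    below the pace parameter; raising the pace over iterations lets harder
    samples enter training, from easy to hard."""
    return (losses < pace).astype(float)

# toy usage
rng = np.random.default_rng(0)
W = rng.normal(size=(100, 10))             # projection over 100 features
losses = rng.uniform(size=500)             # per-sample reconstruction losses
v = self_paced_weights(losses, pace=0.3)   # binary sample weights
reg = l2_half_regularizer(W)
```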
Related papers
- Symmetry Nonnegative Matrix Factorization Algorithm Based on Self-paced Learning [10.6600050775306]
A symmetric nonnegative matrix factorization algorithm is proposed to improve the clustering performance of the model.
A weight variable that measures the degree of difficulty of each sample is assigned in this method.
The experimental results show the effectiveness of the proposed algorithm.
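A minimal sketch of the symmetric NMF building block this paper extends, using the standard damped multiplicative update for min ||A - HH^T||_F^2; the per-sample self-paced weighting the paper adds is omitted, so this is the generic update, not the paper's algorithm.

```python
import numpy as np

def symnmf_step(A, H, beta=0.5):
    """One damped multiplicative update for symmetric NMF,
    min_{H >= 0} ||A - H H^T||_F^2. The eps term avoids division by zero;
    the paper's self-paced difficulty weights are not included here."""
    AH = A @ H
    HHtH = H @ (H.T @ H)
    return H * ((1.0 - beta) + beta * AH / (HHtH + 1e-12))

# toy usage on a random symmetric nonnegative similarity matrix
rng = np.random.default_rng(0)
X = rng.random((50, 5))
A = X @ X.T                       # symmetric, nonnegative
H = rng.random((50, 3))           # nonnegative factor, 3 clusters
for _ in range(200):
    H = symnmf_step(A, H)
labels = H.argmax(axis=1)         # cluster assignment per sample
```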
arXiv Detail & Related papers (2024-10-20T06:33:02Z)
- Learning A Disentangling Representation For PU Learning [18.94726971543125]
We propose to learn a neural network-based data representation using a loss function that projects the unlabeled data into two clusters.
We conduct experiments on simulated PU data that demonstrate the improved performance of our proposed method compared to the current state-of-the-art approaches.
arXiv Detail & Related papers (2023-10-05T18:33:32Z)
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches that learn from data of a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in the cross-silo FL setting.
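For context, the generic momentum-based variance-reduced (STORM-style) gradient estimator this family of methods builds on is sketched below; this is a textbook recursion, not FAFED itself, and the function name is illustrative.

```python
import numpy as np

def storm_estimator(grad_new, grad_prev_point, d_prev, a):
    """Momentum-based variance-reduced gradient estimator (STORM-style):
        d_t = grad f(x_t; xi_t) + (1 - a) * (d_{t-1} - grad f(x_{t-1}; xi_t))
    where both gradients are evaluated on the same fresh batch xi_t.
    A generic recursion of this family, not the paper's full algorithm."""
    return grad_new + (1.0 - a) * (d_prev - grad_prev_point)
```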
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for the approach's limitations is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z)
- Interpolation-based Contrastive Learning for Few-Label Semi-Supervised Learning [43.51182049644767]
Semi-supervised learning (SSL) has long been proven to be an effective technique for constructing powerful models with limited labels.
Regularization-based methods, which force perturbed samples to have predictions similar to those of the original ones, have attracted much attention.
We propose a novel contrastive loss to guide the embedding of the learned network to change linearly between samples.
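A simplified stand-in for the idea of embeddings changing linearly between samples: mix two inputs and penalize the gap between the embedding of the mixture and the mixture of the embeddings. The loss below is an interpolation-consistency surrogate, not the paper's actual contrastive loss.

```python
import numpy as np

def interpolation_consistency_loss(embed, x1, x2, lam):
    """Penalize deviation from linearity in the embedding space: the
    embedding of the mixed input should match the same mixture of the two
    embeddings (a simplified surrogate for the paper's contrastive loss)."""
    z_mix = embed(lam * x1 + (1.0 - lam) * x2)
    z_lin = lam * embed(x1) + (1.0 - lam) * embed(x2)
    return float(np.mean((z_mix - z_lin) ** 2))

# toy usage with a small nonlinear encoder; minimizing this loss during
# training would push the encoder toward local linearity between samples
rng = np.random.default_rng(0)
P = rng.normal(size=(16, 8))
embed = lambda x: np.tanh(x @ P)
x1, x2 = rng.normal(size=(2, 16))
print(interpolation_consistency_loss(embed, x1, x2, lam=0.3))
```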
arXiv Detail & Related papers (2022-02-24T06:00:05Z)
- Low-rank Dictionary Learning for Unsupervised Feature Selection [11.634317251468968]
We introduce a novel unsupervised feature selection approach by applying dictionary learning ideas in a low-rank representation.
A unified sparse objective function for unsupervised feature selection is proposed via $\ell_{2,1}$-norm regularization.
Our experimental findings reveal that the proposed method outperforms state-of-the-art algorithms.
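For reference, a minimal sketch of the $\ell_{2,1}$-norm row-sparsity penalty and the usual post-hoc feature ranking used in sparse unsupervised feature selection; both helpers are illustrative and not taken from the paper.

```python
import numpy as np

def l21_norm(W):
    """l_{2,1} norm: the sum of row-wise l2 norms. Penalizing it drives
    whole rows of W to zero, so surviving rows mark selected features."""
    return float(np.linalg.norm(W, axis=1).sum())

def top_k_features(W, k):
    """Rank features by the l2 norm of their rows in the learned matrix and
    keep the top k (the common post-processing step in sparse methods)."""
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]
```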
arXiv Detail & Related papers (2021-06-21T13:39:10Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
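As a generic illustration of training against adversarially manipulated inputs, the sketch below takes one FGSM-style ascent step on the input; it is a common surrogate for the inner maximization of a robust objective, not the paper's algorithm.

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps):
    """One loss-ascent step on the input within an l_inf eps-ball: a common
    FGSM-style surrogate for the inner maximization of a (distributionally)
    robust training objective. Illustrative; the paper's exact formulation
    may differ."""
    return x + eps * np.sign(grad_x)
```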
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.