Federated Representation Learning via Maximal Coding Rate Reduction
- URL: http://arxiv.org/abs/2210.00299v1
- Date: Sat, 1 Oct 2022 15:43:51 GMT
- Title: Federated Representation Learning via Maximal Coding Rate Reduction
- Authors: Juan Cervino, Navid NaderiAlizadeh, and Alejandro Ribeiro
- Abstract summary: We propose a methodology to learn low-dimensional representations from a dataset that is distributed among several clients.
Our proposed method, which we refer to as FLOW, utilizes MCR2 as the objective of choice, hence resulting in representations that are both between-class discriminative and within-class compressible.
- Score: 109.26332878050374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a federated methodology to learn low-dimensional representations
from a dataset that is distributed among several clients. In particular, we
move away from the commonly-used cross-entropy loss in federated learning, and
seek to learn shared low-dimensional representations of the data in a
decentralized manner via the principle of maximal coding rate reduction (MCR2).
Our proposed method, which we refer to as FLOW, utilizes MCR2 as the objective
of choice, hence resulting in representations that are both between-class
discriminative and within-class compressible. We theoretically show that our
distributed algorithm achieves a first-order stationary point. Moreover, we
demonstrate, via numerical experiments, the utility of the learned
low-dimensional representations.
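For context, MCR2 trades a global coding rate off against per-class coding rates. The NumPy sketch below is a minimal, centralized rendering of that objective following Yu et al.'s MCR2 formulation; the function names and toy data are ours, and none of FLOW's client updates or server aggregation is modeled here.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Rate estimate R(Z) = 1/2 logdet(I + d/(m eps^2) Z Z^T) for
    features Z of shape (d, m), one column per sample."""
    d, m = Z.shape
    gram = np.eye(d) + (d / (m * eps ** 2)) * (Z @ Z.T)
    return 0.5 * np.linalg.slogdet(gram)[1]

def mcr2_objective(Z, labels, eps=0.5):
    """Maximal coding rate reduction: expand the coding rate of the whole
    feature set while compressing each class, yielding representations that
    are between-class discriminative and within-class compressible."""
    _, m = Z.shape
    rate_all = coding_rate(Z, eps)
    rate_classes = sum(
        (np.sum(labels == c) / m) * coding_rate(Z[:, labels == c], eps)
        for c in np.unique(labels)
    )
    return rate_all - rate_classes  # maximized during training

# Toy usage with random unit-norm features for 3 classes (hypothetical data).
rng = np.random.default_rng(0)
Z = rng.standard_normal((32, 300))
Z /= np.linalg.norm(Z, axis=0, keepdims=True)
labels = rng.integers(0, 3, size=300)
print(mcr2_objective(Z, labels))
```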
Related papers
- Learning A Disentangling Representation For PU Learning [18.94726971543125]
We propose to learn a neural network-based data representation using a loss function that can be used to project the unlabeled data into two clusters.
We conduct experiments on simulated PU data that demonstrate the improved performance of our proposed method compared to the current state-of-the-art approaches.
arXiv Detail & Related papers (2023-10-05T18:33:32Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
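The summary does not describe SURF's unrolled architecture in detail, so the following is only a generic sketch of algorithm unrolling under our own assumptions: a fixed number of gradient-descent iterations on a least-squares problem are treated as layers whose step sizes would be the trainable parameters.

```python
import numpy as np

def unrolled_gd(A, y, step_sizes):
    """Unroll a fixed number of gradient steps on ||Ax - y||^2 into a
    feed-forward computation; in unrolled learning, each step size (and
    possibly richer per-layer parameters) is trained end to end."""
    x = np.zeros(A.shape[1])
    for alpha in step_sizes:  # one loop iteration plays the role of one layer
        x = x - alpha * (A.T @ (A @ x - y))
    return x

# Toy usage: 5 "layers" with hand-picked step sizes (hypothetical problem).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
y = rng.standard_normal(20)
print(np.linalg.norm(A @ unrolled_gd(A, y, [0.02] * 5) - y))
```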
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
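As a toy illustration only (the paper's bi-level procedure is not reproduced here), the snippet below shows the kind of sparsity-promoting multi-task objective the identifiability result concerns: shared features feed per-task linear heads whose weights carry an L1 penalty.

```python
import numpy as np

def multitask_sparse_objective(W, X, Y, lam=0.1):
    """Toy multi-task objective: shared features X (n x d), per-task linear
    heads W (d x tasks), targets Y (n x tasks). The L1 penalty promotes
    maximally sparse base-predictors; the paper wraps this idea in a
    bi-level optimization, which is not shown."""
    residual = X @ W - Y
    return 0.5 * np.mean(residual ** 2) + lam * np.abs(W).sum()

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 8))
W = rng.standard_normal((8, 3))
Y = rng.standard_normal((50, 3))
print(multitask_sparse_objective(W, X, Y))
```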
arXiv Detail & Related papers (2022-11-26T21:02:09Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Deep Graph Clustering via Dual Correlation Reduction [37.973072977988494]
We propose a novel self-supervised deep graph clustering method termed Dual Correlation Reduction Network (DCRN).
In our method, we first design a siamese network to encode samples. Then, by forcing the cross-view sample correlation matrix and the cross-view feature correlation matrix to approximate two identity matrices, respectively, we reduce the information correlation at both the sample and feature levels.
In order to alleviate representation collapse caused by over-smoothing in GCN, we introduce a propagation regularization term to enable the network to gain long-distance information.
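A minimal sketch of the feature-level half of this idea, with our own names and toy data: standardize two views, form their cross-view correlation matrix, and penalize its distance from the identity. The sample-level term is analogous, with samples and features exchanging roles.

```python
import numpy as np

def feature_correlation_loss(Z1, Z2):
    """Push the cross-view feature correlation matrix toward the identity:
    matched feature dimensions should correlate, distinct ones should not."""
    Z1 = (Z1 - Z1.mean(axis=0)) / (Z1.std(axis=0) + 1e-8)
    Z2 = (Z2 - Z2.mean(axis=0)) / (Z2.std(axis=0) + 1e-8)
    C = (Z1.T @ Z2) / Z1.shape[0]  # (features x features) correlation
    return np.sum((C - np.eye(C.shape[0])) ** 2)

# Toy usage: two noisy views of the same hypothetical embeddings.
rng = np.random.default_rng(3)
view1 = rng.standard_normal((128, 16))
view2 = view1 + 0.1 * rng.standard_normal((128, 16))
print(feature_correlation_loss(view1, view2))
```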
arXiv Detail & Related papers (2021-12-29T04:05:38Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach that learns discriminative shrinkage functions to implicitly model the regularization and data terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
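For reference, the fixed soft-threshold shrinkage that such learned functions generalize looks as follows; this classical operator is shown for illustration only and is not the paper's learned model.

```python
import numpy as np

def soft_shrink(x, tau):
    """Classical soft-threshold shrinkage used in ISTA-style solvers: values
    within [-tau, tau] are zeroed, larger values are pulled toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

print(soft_shrink(np.array([-1.5, -0.2, 0.0, 0.3, 2.0]), 0.5))
# values inside the threshold vanish; the rest shrink by tau = 0.5
```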
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Asymptotics of Network Embeddings Learned via Subsampling [4.23373349945751]
We bring representation methods that use a subsampling approach, such as node2vec, into a single unifying framework.
This provides a theoretical foundation to understand what the embedding vectors represent and how well these methods perform on downstream tasks.
Notably, we observe that typically used loss functions may lead to shortcomings, such as a lack of Fisher consistency.
arXiv Detail & Related papers (2021-07-06T02:54:53Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer while fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
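A heavily simplified sketch of this recipe, under our own assumptions: fine-tune against the task loss plus a penalty keeping the current features near a reference representation, which stands in for the disentangled, target-relevant source knowledge (TRED's actual disentanglement step is not shown).

```python
import numpy as np

def regularized_finetune_loss(task_loss, feats, ref_feats, lam=0.1):
    """Fine-tuning objective with a representation regularizer: the penalty
    anchors current features to target-relevant reference features."""
    return task_loss + lam * np.mean((feats - ref_feats) ** 2)

# Toy usage with hypothetical feature batches.
rng = np.random.default_rng(4)
feats = rng.standard_normal((8, 64))
ref = rng.standard_normal((8, 64))
print(regularized_finetune_loss(0.7, feats, ref))
```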
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
- Robust Locality-Aware Regression for Labeled Data Classification [5.432221650286726]
We propose a new discriminant feature extraction framework, namely Robust Locality-Aware Regression (RLAR).
In our model, we introduce a retargeted regression to perform the marginal representation learning adaptively instead of using the general average inter-class margin.
To alleviate the disturbance of outliers and prevent overfitting, we measure the regression term and locality-aware term together with the regularization term by the L2,1 norm.
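Concretely, the L2,1 norm sums the Euclidean norms of a matrix's rows, so an outlying sample (row) is penalized linearly rather than quadratically; a small worked example:

```python
import numpy as np

def l21_norm(M):
    """L2,1 norm: sum of row-wise Euclidean norms. Each row contributes
    linearly in its magnitude, so a single outlying sample dominates the
    objective far less than under a squared Frobenius norm."""
    return np.sum(np.linalg.norm(M, axis=1))

M = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]])
print(l21_norm(M))  # rows contribute 5 + 0 + 1 = 6
```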
arXiv Detail & Related papers (2020-06-15T11:36:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.