Distributed Semi-supervised Fuzzy Regression with Interpolation Consistency Regularization
- URL: http://arxiv.org/abs/2209.09240v1
- Date: Sun, 18 Sep 2022 04:46:51 GMT
- Title: Distributed Semi-supervised Fuzzy Regression with Interpolation Consistency Regularization
- Authors: Ye Shi, Leijie Zhang, Zehong Cao, M. Tanveer, and Chin-Teng Lin
- Abstract summary: We propose a distributed semi-supervised fuzzy regression (DSFR) model with fuzzy if-then rules and interpolation consistency regularization (ICR).
Experimental results on both artificial and real-world datasets show that the proposed DSFR model can achieve much better performance than the state-of-the-art DSSL algorithm.
- Score: 38.16335448831723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, distributed semi-supervised learning (DSSL) algorithms have shown
their effectiveness in leveraging unlabeled samples over interconnected
networks, where agents cannot share their original data with each other and can
only communicate non-sensitive information with their neighbors. However,
existing DSSL algorithms cannot cope with data uncertainties and may suffer
from high computation and communication overhead problems. To handle these
issues, we propose a distributed semi-supervised fuzzy regression (DSFR) model
with fuzzy if-then rules and interpolation consistency regularization (ICR).
ICR, which was recently proposed for semi-supervised problems, forces
decision boundaries to pass through sparse data areas, thus increasing model
robustness. However, its application in distributed scenarios has not yet been
considered. In this work, we propose a distributed Fuzzy C-means (DFCM)
method and a distributed interpolation consistency regularization (DICR), both
built on the well-known alternating direction method of multipliers (ADMM), to
locate the parameters of the antecedent and consequent components of DSFR,
respectively. Notably, the DSFR model converges very quickly since it involves
no back-propagation procedure, and it scales to large datasets thanks to the
use of DFCM and DICR. Experimental results on both artificial and
real-world datasets show that the proposed DSFR model can achieve much better
performance than the state-of-the-art DSSL algorithm in terms of both loss
value and computational cost.
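The abstract describes the ICR idea only in words. As a concrete illustration, here is a minimal single-machine sketch of interpolation consistency regularization for regression, assuming a PyTorch model; the helper name `icr_loss` and the Beta-mixup parameter `alpha` are illustrative, not from the paper, which distributes this computation across agents via ADMM rather than evaluating it centrally.

```python
# A minimal sketch of interpolation consistency regularization (ICR):
# predictions on mixed unlabeled inputs should match mixed predictions.
import torch


def icr_loss(model, xu1, xu2, alpha=1.0):
    """Consistency penalty on two batches of unlabeled inputs."""
    # Sample a mixup coefficient lambda ~ Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * xu1 + (1.0 - lam) * xu2
    # Interpolated predictions act as fixed pseudo-targets (no gradient).
    with torch.no_grad():
        y_mix = lam * model(xu1) + (1.0 - lam) * model(xu2)
    # Penalize deviation of the prediction at the interpolated input.
    return torch.mean((model(x_mix) - y_mix) ** 2)
```

In a full objective, this penalty would be added, weighted by a regularization coefficient, to the supervised loss on labeled samples; the sparser the data region the interpolated point falls in, the more this term shapes the decision surface there.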
Related papers
- Decentralized Smoothing ADMM for Quantile Regression with Non-Convex Sparse Penalties [3.269165283595478]
In the rapidly evolving internet-of-things (IoT) ecosystem, effective data analysis techniques are crucial for handling distributed data generated by sensors.
The proposed approach addresses the limitations of existing methods, such as the sub-gradient consensus approach, which fails to distinguish between active and non-active coefficients.
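As background, here is a minimal sketch of the standard check (pinball) loss that quantile regression minimizes; the paper's smoothing and non-convex sparse penalties are not reproduced here, and the function name `pinball_loss` is illustrative.

```python
import numpy as np


def pinball_loss(y, y_hat, tau=0.5):
    """Average check (pinball) loss at quantile level tau in (0, 1)."""
    r = y - y_hat
    # tau * r when r >= 0, (1 - tau) * |r| when r < 0.
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))
```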
arXiv Detail & Related papers (2024-08-02T15:00:04Z)
- Scalable and reliable deep transfer learning for intelligent fault detection via multi-scale neural processes embedded with knowledge [7.730457774728478]
This paper proposes a novel deep transfer learning (DTL) method, Neural Processes-based deep transfer learning with graph convolution network (GTNP).
The method is validated on three intelligent fault detection (IFD) tasks, consistently showing the superior detection performance of GTNP compared to other DTL-based methods.
arXiv Detail & Related papers (2024-02-20T05:39:32Z)
- Uncertainty-Aware Source-Free Adaptive Image Super-Resolution with Wavelet Augmentation Transformer [60.31021888394358]
Unsupervised Domain Adaptation (UDA) can effectively address domain gap issues in real-world image Super-Resolution (SR).
We propose a SOurce-free Domain Adaptation framework for image SR (SODA-SR) to address this issue, i.e., adapt a source-trained model to a target domain with only unlabeled target data.
arXiv Detail & Related papers (2023-03-31T03:14:44Z)
- FedAgg: Adaptive Federated Learning with Aggregated Gradients [1.5653612447564105]
We propose an adaptive FEDerated learning algorithm called FedAgg to alleviate the divergence between the local and average model parameters and obtain a fast model convergence rate.
We show that our framework is superior to existing state-of-the-art FL strategies in enhancing model performance and accelerating convergence under IID and non-IID datasets.
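FedAgg's adaptive weighting rule is specific to that paper; below is a minimal sketch of the plain data-size-weighted parameter averaging that such federated schemes refine, with the function name `aggregate` chosen for illustration.

```python
import numpy as np


def aggregate(client_params, client_sizes):
    """FedAvg-style weighted average of client parameter vectors."""
    w = np.asarray(client_sizes, dtype=float)
    w = w / w.sum()  # weight each client by its share of the data
    return sum(wi * p for wi, p in zip(w, client_params))
```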
arXiv Detail & Related papers (2023-03-28T08:07:28Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
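The extra-gradient method referenced here is a standard tool for min-max problems; the following is a minimal sketch of one extra-gradient step for a generic objective, not the paper's variance-reduced variant. The callables `grad_x` and `grad_y` and the step size `eta` are illustrative.

```python
import numpy as np


def extragradient_step(x, y, grad_x, grad_y, eta=0.1):
    """One extra-gradient step for min_x max_y f(x, y).

    grad_x, grad_y: callables returning the partial gradients of f at (x, y).
    """
    # Extrapolation ("look-ahead") point.
    x_half = x - eta * grad_x(x, y)
    y_half = y + eta * grad_y(x, y)
    # Update using gradients evaluated at the look-ahead point.
    x_new = x - eta * grad_x(x_half, y_half)
    y_new = y + eta * grad_y(x_half, y_half)
    return x_new, y_new
```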
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Unsupervised Domain Adaptation in the Dissimilarity Space for Person Re-identification [11.045405206338486]
We propose a novel Dissimilarity-based Maximum Mean Discrepancy (D-MMD) loss for aligning pair-wise distances.
Empirical results with three challenging benchmark datasets show that the proposed D-MMD loss decreases as the source and target distributions become more similar.
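D-MMD applies the discrepancy in the dissimilarity (pairwise-distance) space; as background, here is a minimal sketch of the standard biased MMD^2 estimator with an RBF kernel that it builds on, with `sigma` as an assumed bandwidth.

```python
import numpy as np


def mmd_rbf(X, Y, sigma=1.0):
    """Biased MMD^2 estimate between samples X (n, d) and Y (m, d)."""
    def k(A, B):
        # Squared Euclidean distances between all pairs of rows.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```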
arXiv Detail & Related papers (2020-07-27T22:10:46Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Distributionally Robust Chance Constrained Programming with Generative Adversarial Networks (GANs) [0.0]
A novel data-driven distributionally robust chance constrained programming framework based on generative adversarial networks (GANs) is proposed.
GAN is applied to fully extract distributional information from historical data in a nonparametric and unsupervised way.
The proposed framework is then applied to supply chain optimization under demand uncertainty.
arXiv Detail & Related papers (2020-02-28T00:05:22Z)