Hierarchical fuzzy neural networks with privacy preservation for
heterogeneous big data
- URL: http://arxiv.org/abs/2209.08467v1
- Date: Sun, 18 Sep 2022 03:53:02 GMT
- Title: Hierarchical fuzzy neural networks with privacy preservation for
heterogeneous big data
- Authors: Leijie Zhang, Ye Shi, Yu-Cheng Chang, and Chin-Teng Lin
- Abstract summary: Heterogeneous big data poses many challenges in machine learning.
We propose a privacy-preserving hierarchical fuzzy neural network (PP-HFNN) to address these technical challenges while also alleviating privacy concerns.
The entire training procedure is scalable and fast, and it does not suffer from the vanishing-gradient problems that affect back-propagation-based methods.
- Score: 29.65840169552303
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Heterogeneous big data poses many challenges in machine learning. Its
enormous scale, high dimensionality, and inherent uncertainty make almost every
aspect of machine learning difficult, from providing enough processing power to
maintaining model accuracy to protecting privacy. However, perhaps the most
imposing problem is that big data is often interspersed with sensitive personal
data. Hence, we propose a privacy-preserving hierarchical fuzzy neural network
(PP-HFNN) to address these technical challenges while also alleviating privacy
concerns. The network is trained with a two-stage optimization algorithm, and
the parameters at low levels of the hierarchy are learned with a scheme based
on the well-known alternating direction method of multipliers, which does not
reveal local data to other agents. Coordination at high levels of the hierarchy
is handled by the alternating optimization method, which converges very
quickly. The entire training procedure is scalable and fast, and it does not
suffer from the vanishing-gradient problems that affect back-propagation-based
methods.
Comprehensive simulations conducted on both regression and classification tasks
demonstrate the effectiveness of the proposed model.
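To make the low-level training stage concrete, the following is a minimal sketch of the consensus-ADMM pattern the abstract describes, using a simple ridge-regression local loss as a hypothetical stand-in for the paper's fuzzy sub-models; the function name, the quadratic local loss, and all hyperparameters are illustrative assumptions, not the authors' actual PP-HFNN implementation. The point it demonstrates is the privacy property: each agent solves its local subproblem on private data and exchanges only parameter vectors with the coordinator, never raw samples.

```python
# Hedged sketch of consensus ADMM for the low-level stage: ridge regression
# stands in for the paper's fuzzy sub-models (an assumption for illustration).
import numpy as np

def consensus_admm(local_X, local_y, rho=1.0, lam=0.1, iters=50):
    """local_X[i], local_y[i] are agent i's private data; only the parameter
    vectors w[i], u[i] and the consensus vector z are ever exchanged."""
    m, d = len(local_X), local_X[0].shape[1]
    w = [np.zeros(d) for _ in range(m)]   # per-agent primal variables
    u = [np.zeros(d) for _ in range(m)]   # per-agent scaled dual variables
    z = np.zeros(d)                       # global consensus variable
    for _ in range(iters):
        # 1) Local step: each agent solves its regularized least-squares
        #    subproblem using only data that never leaves the agent.
        for i in range(m):
            X, y = local_X[i], local_y[i]
            A = X.T @ X + rho * np.eye(d)
            b = X.T @ y + rho * (z - u[i])
            w[i] = np.linalg.solve(A, b)
        # 2) Global step: the coordinator sees only the shared vectors.
        #    Closed-form prox of (lam/2)||z||^2 against the averaged iterates.
        w_bar, u_bar = np.mean(w, axis=0), np.mean(u, axis=0)
        z = rho * m * (w_bar + u_bar) / (lam + rho * m)
        # 3) Dual update, kept locally by each agent.
        for i in range(m):
            u[i] += w[i] - z
    return z

# Quick check on synthetic data: four agents, shared ground-truth weights.
rng = np.random.default_rng(0)
true_w = rng.normal(size=5)
local_X = [rng.normal(size=(100, 5)) for _ in range(4)]
local_y = [X @ true_w + 0.01 * rng.normal(size=100) for X in local_X]
print(np.allclose(consensus_admm(local_X, local_y), true_w, atol=0.1))
```

In the actual PP-HFNN, the high levels of the hierarchy are then coordinated by alternating optimization rather than by this coordinator's closed-form step; the sketch only illustrates why no raw data needs to leave an agent during the ADMM-based low-level stage.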
Related papers
- Selective Embedding for Deep Learning [0.4499833362998489]
Deep learning algorithms are sensitive to input data, and performance often deteriorates under nonstationary conditions. This study introduces selective embedding, a novel data loading strategy which alternates short segments of data from multiple sources within a single input channel.
arXiv Detail & Related papers (2025-07-16T15:45:01Z)
- Private Training & Data Generation by Clustering Embeddings [74.00687214400021]
Differential privacy (DP) provides a robust framework for protecting individual data. We introduce a novel principled method for DP synthetic image embedding generation. Empirically, a simple two-layer neural network trained on synthetically generated embeddings achieves state-of-the-art (SOTA) classification accuracy.
arXiv Detail & Related papers (2025-06-20T00:17:14Z)
- GDSG: Graph Diffusion-based Solution Generator for Optimization Problems in MEC Networks [109.17835015018532]
We present a Graph Diffusion-based Solution Generation (GDSG) method.
This approach is designed to work with suboptimal datasets while converging to the optimal solution with high probability.
We build GDSG as a multi-task diffusion model utilizing a Graph Neural Network (GNN) to acquire the distribution of high-quality solutions.
arXiv Detail & Related papers (2024-12-11T11:13:43Z)
- Privacy-Preserving Federated Unsupervised Domain Adaptation for Regression on Small-Scale and High-Dimensional Biological Data [2.699900017799093]
freda is a privacy-preserving federated method for unsupervised domain adaptation in regression tasks.
We evaluate freda on the challenging task of age prediction from DNA methylation data, demonstrating that it achieves performance comparable to the centralized state-of-the-art method.
arXiv Detail & Related papers (2024-11-26T10:19:16Z)
- Large-scale Fully-Unsupervised Re-Identification [78.47108158030213]
We propose two strategies to learn from large-scale unlabeled data.
The first strategy performs local neighborhood sampling to reduce the dataset size in each iteration without violating neighborhood relationships.
The second strategy leverages a novel Re-Ranking technique, which has a lower upper bound on time complexity and reduces the memory complexity from O(n^2) to O(kn) with k ≪ n (a generic sketch of this memory-saving pattern appears after this list).
arXiv Detail & Related papers (2023-07-26T16:19:19Z)
- Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution to defend the network against these malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained between robustness and standard accuracy.
arXiv Detail & Related papers (2023-01-07T12:09:50Z)
- Precision Machine Learning [5.15188009671301]
We compare various function approximation methods and study how they scale with increasing parameters and data.
We find that neural networks can often outperform classical approximation methods on high-dimensional examples.
We develop training tricks which enable us to train neural networks to extremely low loss, close to the limits allowed by numerical precision.
arXiv Detail & Related papers (2022-10-24T17:58:30Z)
- Efficient Logistic Regression with Local Differential Privacy [0.0]
Internet of Things devices are expanding rapidly and generating huge amounts of data.
There is an increasing need to explore the data collected from these devices.
Collaborative learning provides a strategic solution for Internet of Things settings, but it also raises public concern over data privacy.
arXiv Detail & Related papers (2022-02-05T22:44:03Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Sample-based and Feature-based Federated Learning via Mini-batch SSCA [18.11773963976481]
This paper investigates sample-based and feature-based federated optimization.
We show that the proposed algorithms can preserve data privacy through the model aggregation mechanism.
We also show that the proposed algorithms converge to Karush-Kuhn-Tucker points of the respective federated optimization problems.
arXiv Detail & Related papers (2021-04-13T08:23:46Z)
- Wide Network Learning with Differential Privacy [7.453881927237143]
The current generation of neural networks suffers a significant loss of accuracy under most practically relevant privacy training regimes.
We develop a general approach to training these models that takes advantage of the sparsity of the gradients of private Empirical Risk Minimization (ERM).
For the same number of parameters, we propose a novel algorithm for privately training neural networks.
arXiv Detail & Related papers (2021-03-01T20:31:50Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Real-time Federated Evolutionary Neural Architecture Search [14.099753950531456]
Federated learning is a distributed machine learning approach to privacy preservation.
We propose an evolutionary approach to real-time federated neural architecture search that not only optimizes model performance but also reduces the local payload.
This way, we effectively reduce computational and communication costs required for evolutionary optimization and avoid big performance fluctuations of the local models.
arXiv Detail & Related papers (2020-03-04T17:03:28Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
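One recurring engineering idea above, from the entry on large-scale fully-unsupervised re-identification, is replacing the full pairwise-distance matrix with per-sample top-k neighbor lists, cutting memory from O(n^2) to O(kn) with k ≪ n. The block below is a minimal, generic sketch of that pattern (not the paper's actual Re-Ranking algorithm; the function name and block size are assumptions), computing distances block by block so the full n x n matrix never materializes.

```python
# Hedged sketch: keep only each sample's k nearest neighbors (O(kn) memory)
# instead of the full pairwise matrix (O(n^2)). Generic illustration only.
import numpy as np

def topk_neighbors(features, k=10, block=1024):
    """Return (indices, distances) of the k nearest neighbors per row of
    `features`, processing one block of rows at a time."""
    n = features.shape[0]
    idx = np.empty((n, k), dtype=np.int64)
    dist = np.empty((n, k), dtype=features.dtype)
    sq = (features ** 2).sum(axis=1)
    for start in range(0, n, block):
        stop = min(start + block, n)
        # Squared Euclidean distances for one block only: shape (block, n),
        # freed at the end of each loop iteration.
        d_blk = (sq[start:stop, None]
                 - 2.0 * features[start:stop] @ features.T
                 + sq[None, :])
        part = np.argpartition(d_blk, k, axis=1)[:, :k]  # unsorted top-k
        order = np.argsort(np.take_along_axis(d_blk, part, axis=1), axis=1)
        idx[start:stop] = np.take_along_axis(part, order, axis=1)
        dist[start:stop] = np.take_along_axis(
            np.take_along_axis(d_blk, part, axis=1), order, axis=1)
    return idx, dist

# Note: each row's own index appears as its nearest neighbor (distance 0);
# drop the first column if self-matches are unwanted.
```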
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.