Fast Ensemble Learning Using Adversarially-Generated Restricted
Boltzmann Machines
- URL: http://arxiv.org/abs/2101.01042v1
- Date: Mon, 4 Jan 2021 16:00:47 GMT
- Title: Fast Ensemble Learning Using Adversarially-Generated Restricted
Boltzmann Machines
- Authors: Gustavo H. de Rosa, Mateus Roder, João P. Papa
- Abstract summary: Restricted Boltzmann Machine (RBM) has received recent attention and relies on an energy-based structure to model data probability distributions.
This work proposes to artificially generate RBMs using Adversarial Learning, where pre-trained weight matrices serve as the GAN inputs.
Experimental results demonstrate the suitability of the proposed approach under image reconstruction and image classification tasks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning has been applied to a wide range of tasks in recent
years, ranging from image classification to autonomous driving and natural
language processing. Restricted Boltzmann Machine (RBM) has received recent
attention and relies on an energy-based structure to model data probability
distributions. Notwithstanding, such a technique is susceptible to adversarial
manipulation, i.e., slightly or profoundly modified data. An alternative to
overcome the adversarial problem lies in the Generative Adversarial Networks
(GAN), capable of modeling data distributions and generating adversarial data
that resemble the original ones. Therefore, this work proposes to artificially
generate RBMs using Adversarial Learning, where pre-trained weight matrices
serve as the GAN inputs. Furthermore, it proposes to sample copious amounts of
matrices and combine them into ensembles, alleviating the burden of training
new models. Experimental results demonstrate the suitability of the proposed
approach under image reconstruction and image classification tasks, and
describe how artificial-based ensembles are alternatives to pre-training vast
amounts of RBMs.
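The abstract describes a concrete pipeline: train a GAN whose real samples are pre-trained RBM weight matrices, then sample new matrices from the generator and average their reconstructions. A minimal PyTorch sketch of that idea follows; the layer sizes and names such as `WeightGenerator` are illustrative assumptions, not the authors' implementation, and RBM biases are omitted for brevity.

```python
# Minimal PyTorch sketch (illustrative, not the authors' code): a GAN is
# trained on flattened, pre-trained RBM weight matrices, and sampled
# matrices are combined into a reconstruction ensemble.
import torch
import torch.nn as nn

N_VIS, N_HID, LATENT = 784, 128, 100  # assumed sizes (e.g., MNIST-scale RBMs)
W_DIM = N_VIS * N_HID

class WeightGenerator(nn.Module):      # hypothetical name
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 512), nn.ReLU(),
            nn.Linear(512, W_DIM), nn.Tanh())

    def forward(self, z):
        return self.net(z)

class WeightDiscriminator(nn.Module):  # hypothetical name
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(W_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1))

    def forward(self, w):
        return self.net(w)

def rbm_reconstruct(v, W):
    """One Gibbs half-step v -> h -> v' (RBM biases omitted for brevity)."""
    h = torch.sigmoid(v @ W)
    return torch.sigmoid(h @ W.t())

def ensemble_reconstruct(v, generator, n_models=10):
    """Sample n_models weight matrices from the GAN, average reconstructions."""
    with torch.no_grad():
        z = torch.randn(n_models, LATENT)
        Ws = generator(z).view(n_models, N_VIS, N_HID)
        recons = torch.stack([rbm_reconstruct(v, W) for W in Ws])
    return recons.mean(dim=0)
```

In this sketch the GAN would be trained with a standard adversarial loss, with real batches drawn from a pool of flattened pre-trained weight matrices; once trained, `ensemble_reconstruct` amortizes away per-model contrastive-divergence training, which is the speed-up the abstract claims.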
Related papers
- Model Integrity when Unlearning with T2I Diffusion Models [11.321968363411145]
We propose approximate Machine Unlearning algorithms to reduce the generation of specific types of images, characterized by samples from a "forget distribution".
We then propose unlearning algorithms that demonstrate superior effectiveness in preserving model integrity compared to existing baselines.
arXiv Detail & Related papers (2024-11-04T13:15:28Z)
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- EnsIR: An Ensemble Algorithm for Image Restoration via Gaussian Mixture Models [70.60381055741391]
Image restoration faces challenges related to ill-posed problems, resulting in deviations between single-model predictions and ground truths.
Ensemble learning aims to address these deviations by combining the predictions of multiple base models.
We employ an expectation-maximization (EM)-based algorithm to estimate ensemble weights for prediction candidates.
Our algorithm is model-agnostic and training-free, allowing seamless integration and enhancement of various pre-trained image restoration models.
arXiv Detail & Related papers (2024-10-30T12:16:35Z)
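As a hedged illustration of the EM idea (a simplification, not EnsIR's actual algorithm): given base-model predictions and references on a held-out set, mixture weights can be estimated by treating each model's residuals as Gaussian.

```python
# Illustrative numpy sketch (not the EnsIR algorithm itself): estimate
# ensemble weights for K base restoration models with an EM loop, assuming
# Gaussian residuals around each model's prediction on a reference set.
import numpy as np

def em_ensemble_weights(preds, target, n_iters=50):
    """preds: (K, N) base-model predictions; target: (N,) references."""
    K, N = preds.shape
    w = np.full(K, 1.0 / K)          # mixture weights
    var = np.full(K, 1.0)            # per-model residual variances
    for _ in range(n_iters):
        # E-step: responsibility of model k for each sample.
        log_lik = (-0.5 * (target - preds) ** 2 / var[:, None]
                   - 0.5 * np.log(2 * np.pi * var)[:, None])
        log_r = np.log(w)[:, None] + log_lik
        log_r -= log_r.max(axis=0, keepdims=True)   # numerical stability
        r = np.exp(log_r)
        r /= r.sum(axis=0, keepdims=True)
        # M-step: update weights and variances.
        w = r.mean(axis=1)
        var = (r * (target - preds) ** 2).sum(axis=1) / (r.sum(axis=1) + 1e-12)
    return w

# Usage: blend test-time predictions with the learned weights, e.g.
# ensemble = np.tensordot(w, test_preds, axes=1)
```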
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization.
arXiv Detail & Related papers (2024-10-06T15:25:39Z)
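A minimal sketch of the prediction-dropping regularizer (module name and sizes are assumptions, not the authors' code):

```python
# Illustrative PyTorch sketch: a neural ensembler that randomly drops whole
# base-model predictions during training as a diversity-preserving regularizer.
import torch
import torch.nn as nn

class DropoutEnsembler(nn.Module):   # hypothetical name
    def __init__(self, n_models, n_classes, p_drop=0.3):
        super().__init__()
        self.p_drop = p_drop
        self.mixer = nn.Linear(n_models * n_classes, n_classes)

    def forward(self, base_preds):
        # base_preds: (batch, n_models, n_classes) per-model probabilities.
        if self.training:
            # Zero out entire base models at random, independently per example.
            keep = (torch.rand(base_preds.shape[:2], device=base_preds.device)
                    > self.p_drop).float().unsqueeze(-1)
            base_preds = base_preds * keep
        return self.mixer(base_preds.flatten(start_dim=1))
```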
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been considered a challenging property to encode for neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Federated Learning for Misbehaviour Detection with Variational Autoencoders and Gaussian Mixture Models [0.2999888908665658]
Federated Learning (FL) has become an attractive approach to collaboratively train Machine Learning (ML) models.
This work proposes a novel unsupervised FL approach for the identification of potential misbehavior in vehicular environments.
We leverage the computing capabilities of public cloud services for model aggregation purposes.
arXiv Detail & Related papers (2024-05-16T08:49:50Z)
- Generative Adversarial Reduced Order Modelling [0.0]
We present GAROM, a new approach for reduced order modelling (ROM) based on generative adversarial networks (GANs).
GANs have the potential to learn data distribution and generate more realistic data.
In this work, we combine the GAN and ROM framework, by introducing a data-driven generative adversarial model able to learn solutions to parametric differential equations.
arXiv Detail & Related papers (2023-05-25T09:23:33Z)
- Fitting a Directional Microstructure Model to Diffusion-Relaxation MRI Data with Self-Supervised Machine Learning [2.8167227950959206]
Self-supervised machine learning is emerging as an attractive alternative to supervised learning.
In this paper, we demonstrate self-supervised machine learning model fitting for a directional microstructural model.
Our approach shows clear improvements in parameter estimation and computational time, compared to standard non-linear least squares fitting.
arXiv Detail & Related papers (2022-10-05T15:51:39Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts than other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
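A toy sketch of the reweighting mechanic, with the adversary as a parametric network producing self-normalized example weights; this compresses the paper's ideas into their simplest form and is not the exact procedure:

```python
# Toy PyTorch sketch of DRO with a parametric likelihood-ratio adversary
# (a simplification, not the paper's exact training recipe).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 2)             # assumed task model
adversary = nn.Linear(10, 1)         # parametric log-ratio network
opt_m = torch.optim.SGD(model.parameters(), lr=0.1)
opt_a = torch.optim.SGD(adversary.parameters(), lr=0.1)

def dro_step(x, y):
    losses = F.cross_entropy(model(x), y, reduction="none")
    # Self-normalized example weights from the adversary's log-ratios.
    weights = F.softmax(adversary(x).squeeze(-1), dim=0) * len(x)
    # The model minimizes the reweighted loss ...
    opt_m.zero_grad()
    (weights.detach() * losses).mean().backward()
    opt_m.step()
    # ... while the adversary maximizes it (ascent via negated loss).
    opt_a.zero_grad()
    (-(weights * losses.detach()).mean()).backward()
    opt_a.step()
```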
- Reconstructing Training Data from Diverse ML Models by Ensemble Inversion [8.414622657659168]
Model Inversion (MI), in which an adversary abuses access to a trained Machine Learning (ML) model, has attracted increasing research attention.
We propose an ensemble inversion technique that estimates the distribution of original training data by training a generator constrained by an ensemble of trained models.
We achieve high-quality results without any dataset and show how an auxiliary dataset similar to the presumed training data improves the results.
arXiv Detail & Related papers (2021-11-05T18:59:01Z)
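The core loop is easy to sketch (illustrative only; `generator` and `ensemble` stand in for the paper's components): optimize a generator so every trained model in the ensemble confidently assigns its samples a chosen class.

```python
# Minimal PyTorch sketch of ensemble inversion (not the authors' code):
# a generator is optimized so that every classifier in an ensemble assigns
# its samples a chosen target label.
import torch
import torch.nn.functional as F

def inversion_step(generator, ensemble, optimizer, target_class,
                   batch=64, latent=100):
    z = torch.randn(batch, latent)
    fake = generator(z)
    labels = torch.full((batch,), target_class, dtype=torch.long)
    # Average the classification loss over all ensemble members so the
    # generator matches what the models jointly "remember" about the class.
    loss = sum(F.cross_entropy(m(fake), labels) for m in ensemble) / len(ensemble)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```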
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that GMR achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
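A minimal scikit-learn sketch of GMM-based pseudo-rehearsal (an illustration of the general idea, not the paper's GMR implementation):

```python
# Illustrative scikit-learn sketch: a GMM fitted on the previous task serves
# as a generator for pseudo-rehearsal samples when training on a new task.
import numpy as np
from sklearn.mixture import GaussianMixture

def rehearsal_dataset(old_X, old_y, new_X, new_y, n_pseudo=1000):
    pseudo_X, pseudo_y = [], []
    old_classes = np.unique(old_y)
    for c in old_classes:                          # one GMM per old class
        gmm = GaussianMixture(n_components=5).fit(old_X[old_y == c])
        samples, _ = gmm.sample(n_pseudo // len(old_classes))
        pseudo_X.append(samples)
        pseudo_y.append(np.full(len(samples), c))
    # Mix generated "memories" of old classes with real new-task data.
    X = np.concatenate([new_X, *pseudo_X])
    y = np.concatenate([new_y, *pseudo_y])
    return X, y
```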
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.