A Framework for Verifiable and Auditable Federated Anomaly Detection
- URL: http://arxiv.org/abs/2203.07802v1
- Date: Tue, 15 Mar 2022 11:34:02 GMT
- Title: A Framework for Verifiable and Auditable Federated Anomaly Detection
- Authors: Gabriele Santin and Inna Skarbovsky and Fabiana Fournier and Bruno
Lepri
- Abstract summary: Federated Learning is an emerging approach to managing cooperation among a group of agents for the solution of Machine Learning tasks.
We present a novel algorithmic architecture that tackles this problem in the particular case of Anomaly Detection.
- Score: 3.639790324866155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning is an emerging approach to managing cooperation
among a group of agents for the solution of Machine Learning tasks, with the goal of
improving each agent's performance without disclosing any data. In this paper
we present a novel algorithmic architecture that tackles this problem in the
particular case of Anomaly Detection (or classification of rare events), a
setting where typical applications often involve data with sensitive
information, but where the scarcity of anomalous examples encourages
collaboration. We show how Random Forests can be used as a tool for the
development of accurate classifiers with an effective insight-sharing mechanism
that does not compromise data integrity. Moreover, we explain how the new
architecture can be readily integrated in a blockchain infrastructure to ensure
the verifiable and auditable execution of the algorithm. Furthermore, we
discuss how this work may set the basis for a more general approach for the
design of federated ensemble-learning methods beyond the specific task and
architecture discussed in this paper.
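The abstract's core idea, that Random Forests permit insight sharing without data disclosure, can be illustrated with a minimal sketch: each agent fits a forest on its own data and shares only the fitted trees, and the federated model is a majority vote over all shared trees. This is a hypothetical illustration of the general tree-sharing idea, not the paper's actual protocol; the agent split, forest sizes, and voting rule here are assumptions for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic, class-imbalanced data standing in for a rare-event task.
X, y = make_classification(n_samples=600, n_features=8,
                           weights=[0.9, 0.1], random_state=0)

# Partition the samples across 3 agents; raw data never leaves an agent.
agent_indices = np.array_split(np.arange(600), 3)

shared_trees = []
for idx in agent_indices:
    local_forest = RandomForestClassifier(n_estimators=10, random_state=0)
    local_forest.fit(X[idx], y[idx])
    # Only the fitted trees (model parameters) are shared, not the data.
    shared_trees.extend(local_forest.estimators_)

def federated_predict(trees, X):
    # Majority vote over every tree contributed by every agent.
    votes = np.stack([t.predict(X) for t in trees])
    return (votes.mean(axis=0) >= 0.5).astype(int)

preds = federated_predict(shared_trees, X)
```

In this toy version the "insight" exchanged is the tree structure itself; the paper's architecture additionally records the exchange on a blockchain so that the execution is verifiable and auditable.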
Related papers
- An Empirical Evaluation of Federated Contextual Bandit Algorithms [27.275089644378376]
Federated learning can be done using implicit signals generated as users interact with applications of interest.
We develop variants of prominent contextual bandit algorithms from the centralized setting for the federated setting.
Our experiments reveal the surprising effectiveness of the simple and commonly used softmax in balancing the well-known exploration-exploitation tradeoff.
arXiv Detail & Related papers (2023-03-17T19:22:30Z) - Hierarchically Structured Task-Agnostic Continual Learning [0.0]
We take a task-agnostic view of continual learning and develop a hierarchical information-theoretic optimality principle.
We propose a neural network layer, called the Mixture-of-Variational-Experts layer, that alleviates forgetting by creating a set of information processing paths.
Our approach can operate in a task-agnostic way, i.e., it does not require task-specific knowledge, as is the case with many existing continual learning algorithms.
arXiv Detail & Related papers (2022-11-14T19:53:15Z) - RACA: Relation-Aware Credit Assignment for Ad-Hoc Cooperation in
Multi-Agent Deep Reinforcement Learning [55.55009081609396]
We propose a novel method, called Relation-Aware Credit Assignment (RACA), which achieves zero-shot generalization in ad-hoc cooperation scenarios.
RACA uses a graph-based relation encoder to encode the topological structure between agents.
Our method outperforms baseline methods on the StarCraft II micromanagement benchmark and in ad-hoc cooperation scenarios.
arXiv Detail & Related papers (2022-06-02T03:39:27Z) - A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z) - Unsupervised collaborative learning using privileged information [0.0]
This article is dedicated to collaborative clustering based on the Learning Using Privileged Information paradigm.
A comparison between our algorithm and state of the art implementations shows improvement of the collaboration process using the proposed approach.
arXiv Detail & Related papers (2021-03-24T12:43:49Z) - Neural Architecture Search From Task Similarity Measure [28.5184196829547]
We propose a neural architecture search framework based on a similarity measure between various tasks defined in terms of Fisher information.
By utilizing the relation between a target and a set of existing tasks, the search space of architectures can be significantly reduced.
arXiv Detail & Related papers (2021-02-27T15:26:14Z) - A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z) - Heterogeneous Federated Learning [41.04946606973614]
Federated learning learns from scattered data by fusing collaborative models from local nodes.
Due to chaotic information distribution, the model fusion may suffer from structural misalignment with regard to unmatched parameters.
We propose a novel federated learning framework to establish a firm structure-information alignment across collaborative models.
arXiv Detail & Related papers (2020-08-15T19:06:59Z) - Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z) - Sequential Transfer in Reinforcement Learning with a Generative Model [48.40219742217783]
We show how to reduce the sample complexity for learning new tasks by transferring knowledge from previously-solved ones.
We derive PAC bounds on its sample complexity which clearly demonstrate the benefits of using this kind of prior knowledge.
We empirically verify our theoretical findings in simple simulated domains.
arXiv Detail & Related papers (2020-07-01T19:53:35Z) - A Trainable Optimal Transport Embedding for Feature Aggregation and its
Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.