Knowledge Federation: A Unified and Hierarchical Privacy-Preserving AI Framework
- URL: http://arxiv.org/abs/2002.01647v3
- Date: Fri, 22 May 2020 07:34:14 GMT
- Title: Knowledge Federation: A Unified and Hierarchical Privacy-Preserving AI Framework
- Authors: Hongyu Li, Dan Meng, Hong Wang and Xiaolin Li
- Abstract summary: We propose a comprehensive framework (called Knowledge Federation - KF) to address challenges by enabling AI while preserving data privacy and ownership.
KF consists of four levels of federation: (1) information level, low-level statistics and computation of data, meeting the requirements of simple queries, searching, and simple operators; (2) model level, supporting training, learning, and inference; (3) cognition level, enabling abstract feature representation at various levels of abstraction and context; (4) knowledge level, fusing knowledge discovery, representation, and reasoning.
We have developed a reference implementation of KF, called iBond Platform, to offer a production-quality KF platform that enables industrial applications in finance, insurance, and other industries.
- Score: 25.950286526030645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With strict protections and regulations of data privacy and security,
conventional machine learning based on centralized datasets is confronted with
significant challenges, making artificial intelligence (AI) impractical in many
mission-critical and data-sensitive scenarios, such as finance, government, and
health. In the meantime, tremendous datasets are scattered in isolated silos in
various industries, organizations, different units of an organization, or
different branches of an international organization. These valuable data
resources remain largely underused. To advance AI theories and applications, we
propose a comprehensive framework (called Knowledge Federation - KF) to address
these challenges by enabling AI while preserving data privacy and ownership.
Beyond the concepts of federated learning and secure multi-party computation,
KF consists of four levels of federation: (1) information level, low-level
statistics and computation of data, meeting the requirements of simple queries,
searching, and simple operators; (2) model level, supporting training,
learning, and inference; (3) cognition level, enabling abstract feature
representation at various levels of abstraction and context; (4) knowledge
level, fusing knowledge discovery, representation, and reasoning. We further
clarify the relationship and differentiation between knowledge federation and
other related research areas. We have developed a reference implementation of
KF, called iBond Platform, to offer a production-quality KF platform to enable
industrial applications in finance, insurance, and other industries. The iBond Platform will
also help establish the KF community and a comprehensive ecosystem, ushering in
a paradigm shift towards secure, privacy-preserving, and responsible AI.
As far as we know, knowledge federation is the first hierarchical and unified
framework for secure multi-party computing and learning.
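To make the four-level hierarchy concrete, the sketch below models the levels as a small enum and shows a toy information-level query: a cross-silo sum computed with pairwise additive masking so the coordinator only ever sees masked contributions. All names (KFLevel, secure_sum) and the masking scheme are illustrative assumptions rather than the iBond Platform's actual interfaces; a production system would use proper secure multi-party computation or secure-aggregation protocols with key agreement and dropout handling.

```python
# Illustrative sketch only: these names and the masking scheme are assumptions,
# not the iBond Platform API described in the paper.
import random
from enum import Enum


class KFLevel(Enum):
    """The four Knowledge Federation levels named in the abstract."""
    INFORMATION = "low-level statistics, simple queries, searching, simple operators"
    MODEL = "federated training, learning, and inference"
    COGNITION = "abstract feature representation across levels of abstraction and context"
    KNOWLEDGE = "knowledge discovery, representation, and reasoning"


def secure_sum(local_values, modulus=2**31 - 1, seed=42):
    """Toy information-level federation: a cross-silo sum via pairwise additive
    masking. Parties i < j share a random mask r_ij; party i adds it, party j
    subtracts it, so the masks cancel in the aggregate and the coordinator sees
    only masked contributions (key agreement and dropouts are ignored here)."""
    n = len(local_values)
    rng = random.Random(seed)  # stand-in for pairwise key agreement
    pair_masks = {(i, j): rng.randrange(modulus)
                  for i in range(n) for j in range(i + 1, n)}

    masked_contributions = []
    for i, value in enumerate(local_values):
        masked = value
        for j in range(n):
            if i < j:
                masked = (masked + pair_masks[(i, j)]) % modulus
            elif j < i:
                masked = (masked - pair_masks[(j, i)]) % modulus
        masked_contributions.append(masked)  # all the coordinator receives

    return sum(masked_contributions) % modulus  # masks cancel; only the total remains


if __name__ == "__main__":
    silo_record_counts = [120, 75, 310]  # hypothetical per-silo record counts
    print(KFLevel.INFORMATION.value)
    print(secure_sum(silo_record_counts))  # 505, without exposing any single count
```

The cancellation argument is the standard secure-aggregation one: each pairwise mask is added exactly once and subtracted exactly once, so only the aggregate statistic ever leaves the silos.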
Related papers
- Private Knowledge Sharing in Distributed Learning: A Survey [50.51431815732716]
The rise of Artificial Intelligence has revolutionized numerous industries and transformed the way society operates.
It is crucial to utilize information in learning processes that are either distributed or owned by different entities.
Modern data-driven services have been developed to integrate distributed knowledge entities into their outcomes.
(arXiv 2024-02-08)
- Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance [14.941040909919327]
Distributed AI systems are revolutionizing big data computing and data processing capabilities with growing economic and societal impact.
Recent studies have identified new attack surfaces and risks caused by security, privacy, and fairness issues in AI systems.
We review representative techniques, algorithms, and theoretical foundations for trustworthy distributed AI.
(arXiv 2024-02-02)
- Federated Learning: Organizational Opportunities, Challenges, and Adoption Strategies [39.58317527488534]
Federated learning allows distributed clients to train models collaboratively without the need to share their respective training data with others.
We argue that federated learning presents organizational challenges with ample interdisciplinary opportunities for information systems researchers.
(arXiv 2023-08-04)
- A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy [47.89042524852868]
Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
(arXiv 2023-02-21)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
(arXiv 2023-02-13)
- Practical Vertical Federated Learning with Unsupervised Representation Learning [47.77625754666018]
Federated learning enables multiple parties to collaboratively train a machine learning model without sharing their raw data.
We propose a novel communication-efficient vertical federated learning algorithm named FedOnce, which requires only one-shot communication among parties.
Our privacy-preserving technique significantly outperforms the state-of-the-art approaches under the same privacy budget.
(arXiv 2022-08-13)
- APPFLChain: A Privacy Protection Distributed Artificial-Intelligence Architecture Based on Federated Learning and Consortium Blockchain [6.054775780656853]
We propose a new system architecture called APPFLChain.
It is an integrated architecture of a Hyperledger Fabric-based blockchain and a federated-learning paradigm.
Our new system can maintain a high degree of security and privacy as users do not need to share sensitive personal information with the server.
(arXiv 2022-06-26)
- Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy [0.0]
Federated learning holds great promise in learning from fragmented sensitive data.
This article provides a systematic overview and detailed taxonomy of federated learning.
We investigate the existing security challenges in federated learning and provide an overview of established defense techniques for data poisoning, inference attacks, and model poisoning attacks.
(arXiv 2022-04-22)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
(arXiv 2020-12-07)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
(arXiv 2020-11-02)
- Technologies for Trustworthy Machine Learning: A Survey in a Socio-Technical Context [4.866589122417036]
We argue that four categories of system properties are instrumental in achieving the policy objectives, namely fairness, explainability, auditability, and safety & security (FEAS).
We discuss how these properties need to be considered across all stages of the machine learning life cycle, from data collection through run-time model inference.
We conclude with an identification of open research problems, with a particular focus on the connection between trustworthy machine learning technologies and their implications for individuals and society.
(arXiv 2020-07-17)
This list is automatically generated from the titles and abstracts of the papers on this site.